What are Hyperscale Networks?
To understand the challenges hyperscale networks face, it helps to start with a definition. A hyperscale network is a computing architecture that can scale up or down to match the demands of a workload. It is this type of computing power and network scale that drives video streaming, social media, cloud computing, software platforms, and big-data storage. Data centers are the physical fleet of buildings in which this computing takes place.
Continuity and Reliability
Two of the biggest challenges hyperscale networks face, continuity and reliability, go hand in hand. Both are typically measured as service availability, or "uptime": the higher the uptime, the greater the continuity of service and the better the reliability. Why are these two factors so integral to the challenges hyperscalers face? Because downtime, particularly unexpected downtime, gets expensive.
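To see why fractions of a percentage point of uptime matter so much, it helps to translate an availability figure into the downtime it actually permits. The sketch below (illustrative only, using a 365-day year) converts availability percentages into maximum minutes of downtime per year:

```python
# Illustrative only: convert an availability percentage ("uptime")
# into the maximum downtime it permits over a 365-day year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Maximum yearly downtime allowed by a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for availability in (99.0, 99.9, 99.99, 99.999):
    minutes = downtime_minutes_per_year(availability)
    print(f"{availability}% uptime -> at most {minutes:,.1f} min of downtime/year")
```

At "four nines" (99.99%) a provider may be down for less than an hour per year; at "five nines" it is barely five minutes, which is why hyperscalers invest so heavily in resilience.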
Even a single minute of downtime can cost a data center thousands of dollars. In October 2021, Meta (aka Facebook) experienced an unprecedented outage, affecting billions of users across its social network and instant messaging applications, as well as all its internal company systems.
Fortune estimated that the outage likely cost Meta around $100 million in lost revenue.
With so many businesses relying on hyperscale data centers to provide the IT backbone to their operations, any downtime can have a substantial impact and sometimes catastrophic ramifications. So how do hyperscalers ensure the uptime of millions of servers? By building a resilient network and utilizing redundancy as a safety net.
What exactly is redundancy in this context? In data centers, redundancy refers to a system design in which critical components are duplicated so that data center operations can continue even if a component fails, effectively ensuring a back-up plan in the event of outages and preventing a disruption in service.
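The failover idea behind redundancy can be sketched in a few lines. This is a minimal, hypothetical illustration of N+1 redundancy, not any vendor's actual system: a duplicated standby component takes over transparently when the primary fails, and service is lost only if every duplicate is down.

```python
# A minimal sketch of N+1 redundancy: a duplicated standby takes over
# when the primary component fails. Component names and the health flag
# are illustrative, not a real data center API.

class Component:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def serve(self, request: str) -> str:
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} handled {request}"

def serve_with_failover(components: list[Component], request: str) -> str:
    """Try each duplicated component in turn; fail only if all are down."""
    for component in components:
        if component.healthy:
            return component.serve(request)
    raise RuntimeError("total outage: no healthy component remains")

primary = Component("psu-primary")
standby = Component("psu-standby")
primary.healthy = False  # simulate a failure of the primary unit
print(serve_with_failover([primary, standby], "power-draw"))
```

Because the standby is already provisioned, the failure of the primary never becomes visible as downtime; that is the "back-up plan" the paragraph above describes.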
Energy Costs and Sustainability
Two particular pain points that have come to the forefront this year for many hyperscalers are rising energy costs and sustainability initiatives. Many data centers, hyperscalers included, are continuously searching for ways to utilize renewable energy sources and minimize their carbon footprint. With the cost of energy soaring, and the sheer amount required to power vast hyperscale networks, providers are having to redouble their efforts.
The International Energy Agency estimates that data centers account for around 1% of global electricity use, and some widely cited projections suggest the broader ICT sector could consume as much as one-fifth of the world's power supply by 2025.
Much of the energy demand in data centers and hyperscale networks comes from powering servers, which not only draw electricity to run but also need constant cooling. The servers generate large amounts of waste heat, removing it consumes yet more energy, and in most cases the heat is simply released into the surrounding environment. Many innovative solutions are being tested to harness and reuse this excess heat, either within the data center or in secondary applications.
Security for Hyperscale Networks
Another challenge hyperscale networks face is security and, by extension, customer confidence. The potential for data breaches and other cyberattacks makes security one of the main considerations when selecting a data center provider. A data center's geographical location, together with measures such as biometric identification, physical security, and regulatory compliance, provides the first line of defense against potentially costly threats.
In the data hall, virtual security measures such as strong data encryption, log auditing, and clearance-based access control protect against both internal and external attacks. In hyperscale data centers, all activity is monitored, and any anomalies or attempts to compromise communications are reported. Servers are virtualized, and workloads are managed without reference to specific hardware, being mapped to physical machines only at the last moment.
According to IBM Security's Cost of a Data Breach Report 2022, the average total cost of a data breach is $4.35 million.
Customer confidence is of huge importance to hyperscalers as they aim to convince customers that their confidential data, and the data of their customers, is in safe hands. Data breaches have more than just a financial impact: a security incident erodes consumer trust, damaging the hyperscale provider's image, integrity, and reputation for reliability.
Global Availability
Long-term growth often requires physical network expansion, which can be achieved by either "building up" or "building out". Building out means acquiring or leasing land for additional buildings, and is typical where suitable development sites are available. Building up, on the other hand, extends a building's capacity by adding additional floors; it suits circumstances where available land is extremely expensive or hard to come by, or where low latency must be ensured in densely populated areas.
Hyperscalers need suppliers with a global presence as they need to be serviced everywhere, consistently.
Decisions may be made regionally or even globally, but installation takes place worldwide, meaning expert local support is needed in these areas. Hyperscalers also require suppliers who can build specific variants for them, often adapted to meet local requirements or regulations. Individual country requirements, such as those concerning the Construction Products Regulation (CPR), can cause roadblocks for hyperscale operators, meaning suppliers need to be versatile and equipped to ease these pain points.
Technological Requirements for Hyperscale Networks
Constant technological advances also rank among the challenges hyperscale networks face. Operating out-of-date technology consumes more space, power, and time, so equipment must be continually replenished.
According to Moore's Law, the number of transistors on a chip doubles roughly every two years (often quoted as every 18 months), bringing a corresponding increase in processing capacity. While most observers believe Moore's Law in its original form is approaching physical limits, innovation in chip design and software methods continues to drive the rapid evolution of computing.
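The compounding implied by a fixed doubling period is worth making concrete, since it explains the short refresh cycles discussed below. A back-of-the-envelope sketch (the doubling periods and the four-year refresh interval are illustrative assumptions, not vendor figures):

```python
# Back-of-the-envelope growth implied by a fixed doubling period.
# Doubling periods and the refresh interval are illustrative assumptions.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Capacity multiple after `years`, doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

for period in (1.5, 2.0):  # 18-month and 2-year doubling assumptions
    factor = growth_factor(4, period)
    print(f"doubling every {period} yr -> {factor:.1f}x capacity after a 4-year refresh cycle")
```

Under either assumption, hardware bought four years ago delivers only a fraction of the capacity, per watt and per rack, of current equipment, which is what makes a three-to-four-year renewal cycle economically rational.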
This means hyperscalers need to renew their technology every three to four years. It also makes entering the hyperscale market as a newcomer notoriously difficult, since it requires sizeable capital expenditure and continual investment in new technology. The entrance fee to compete with the big players in the market is astronomical and the pace of change too much for many. No rest for the weary; the race continues.
Data Consumption
As part of its Global Entertainment and Media Outlook 2022-2026, PwC predicts the continued growth of online content will push global data consumption to 8.1 million petabytes by 2026, compared to 2.6 million petabytes in 2021.
According to Statista, internet users generate around 2.5 quintillion bytes of data each day, with predictions estimating the world will generate 181 zettabytes of data by 2025.
Shortly after it was founded in 1998, Google processed only 10,000 internet searches per day. One year later, that number had risen to 3.5 million search queries per day. Today, approximately 2.4 million searches are made on Google every minute.
The statistics show that data consumption shows no sign of slowing down and will continue to increase exponentially. This means that hyperscalers will continue to provide services in high demand, which in turn will require constant scaling and updating to cope with the ever-increasing workload.
Supply and Demand
The rapid growth of hyperscale data centers is dependent on the strength of their supply chain. From intermittent demand to the need for rapid technological innovation, many suppliers have serious difficulty addressing the needs of hyperscalers, meaning suppliers that want to be involved must rethink their approach to manufacturing, sales, and research and development.
Hyperscalers require rapid innovation and often act in advance of industry standards, meaning suppliers need to act in accordance with best practice and thought leadership, while also working within an efficient framework that means innovation remains economically viable.
For suppliers to consistently support the growth of hyperscale operators from a manufacturing perspective, they must be equipped to accommodate inaccurate demand forecasts that can grow or disappear at a moment's notice. This, in turn, affects manufacturing, which must still fulfill orders to deadlines that are often tight and fixed.
However, issues in supply can be difficult to predict. Recent examples of unforeseen challenges that hyperscale networks face in terms of supply include delays brought about by the Russia-Ukraine conflict and the global microchip shortage.
Despite the plethora of challenges hyperscale networks face today, they continue to adapt and innovate, leading the way to a better-connected world.
About AFL Hyperscale
AFL Hyperscale is the first cabling and connectivity solution provider focused on the ever-evolving needs of data centers. Hyperscale, colocation, and enterprise data centers are united in their pursuit to connect the unconnected, yet their infrastructure, performance, and operational challenges are distinct.
We work collaboratively with our customers to create connectivity solutions tailored to their current needs and to the requirements of future networks. We then use our responsive, global operational capabilities and distribution network for fast delivery.
This approach has transformed how many data centers grow worldwide and is built on 70 years’ combined experience in the design and manufacture of high-performance optical fiber networks, a global presence, and the backing and innovation sharing of our parent and grandparent companies, AFL and Fujikura, the pioneer in optical technology. AFL Hyperscale is your dependable partner to build a more connected world.
AFL Hyperscale – The World, Connected.