Tech Trend

Why It’s Time To Rethink Your Data Center Networking Design

Driving business velocity is a critical requirement in today’s digital world. With business applications increasingly hosted in the cloud, velocity is a function of what cloud and edge data centers are designed to handle. Nothing made us more aware of this than the dramatic shift in enterprise IT spending on cloud computing once it became clear the pandemic would have long-term implications for how we work, live and consume goods and services.

While many organizations have embraced the shift to digital transformation, making it a reality is far more complex. Legacy data centers were built around the static enterprise HQ — but modern data centers must be dynamic and intelligent, built for data and users located anywhere.

It’s no secret that traditional data center network designs and processes are slow, cumbersome and expensive. Those designs tend toward highly siloed, rigid networks with frequent outages, significant software bloat and lock-in to a handful of proprietary vendors. As a result, network performance is often suboptimal and inefficient, and any network overhaul or “forklift upgrade” can easily take years of highly manual effort to implement, at significant expense. This is not an ideal situation for data center operators hoping to embrace the new wave of digital innovation.

A New Blueprint For The Modern Data Center

Data center networks are moving to a simpler, more agile design. This means, in more technical terms, embracing spine-leaf designs for physical networking, along with a routed infrastructure in the underlays. Service velocity is also improving through overlay-based architectures with technologies like data-processing units (DPUs, aka SmartNICs), which enable networks to be agile and efficient through software-defined processes.
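To make the spine-leaf idea concrete, here is a minimal sketch of how such a two-tier fabric scales. The function, port counts and oversubscription figures are all hypothetical illustrations, not a reference for any particular switch:

```python
def spine_leaf_capacity(leaf_ports: int, spine_ports: int, uplinks_per_leaf: int) -> dict:
    """Estimate the size of a two-tier spine-leaf fabric.

    Assumes every leaf connects one uplink to every spine, so the
    number of spines equals uplinks_per_leaf, and the number of
    leaves is capped by the spine port count.
    """
    spines = uplinks_per_leaf
    leaves = spine_ports  # each spine port attaches one leaf
    servers_per_leaf = leaf_ports - uplinks_per_leaf
    return {
        "spines": spines,
        "leaves": leaves,
        "servers": leaves * servers_per_leaf,
        "oversubscription": servers_per_leaf / uplinks_per_leaf,
    }

# Hypothetical example: 48-port leaves with 6 uplinks into 32-port spines
fabric = spine_leaf_capacity(leaf_ports=48, spine_ports=32, uplinks_per_leaf=6)
```

The appeal of the design is visible even in this toy model: capacity grows by adding leaves, and cross-fabric traffic is always exactly two hops away.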

Indeed, the biggest winners have been hyperscale companies like Amazon, Microsoft and Google that have validated the open and disaggregated networking model, which affords the most flexibility to embrace new technologies that drive growth. Hyperscalers have forged a new blueprint predicated on agile, intelligent and massively scalable designs that offer higher performance with increased efficiency — forcing the rest of us to rethink how data centers should be designed and operated.

Similarly, there is a growing distaste for proprietary network operating systems (NOSs), which represent vendor lock-in, monolithic design and slow development cycles. This is the primary driver behind the growing interest in SONiC, an open-source NOS based on Linux. Started at Microsoft, SONiC receives code contributions from a community of thousands of engineers — enabling faster innovation — and has been adopted by hyperscale companies like Alibaba and Tencent. Moreover, an open-source NOS works with switches and hardware from multiple vendors, enabling “white box” solutions that combine best-of-breed functionality. The inevitable result is disaggregation in the networking market — a trend that has been well-established in computing and storage markets for some time.

Central to this shift, of course, is unprecedented data growth. The amount of data created, copied, captured or consumed in 2020 was estimated to total 59 zettabytes, and it is expected to grow to 149 zettabytes by 2024, according to Statista (paywall). For data center networks, this means more bandwidth consumption and the need to easily move data between data centers, so it can be accessed from anywhere.

The Hallmarks Of Next-Generation Data Center Networking

To meet these networking requirements, organizations must avoid designs that throttle the speed of innovation. As the scale and complexity of networks grow, organizations can no longer view visibility, telemetry, performance and availability — or the ability to innovate — within silos of responsibility; there must be a holistic evaluation of both hardware and software.

Moreover, with network speeds now in the terabits, innovation at the silicon level is essential to enable more efficient and agile networks, without sacrificing security or performance. Here are the design considerations data center operators should embrace to meet today’s modern requirements and future-proof their businesses for tomorrow’s needs:

Telemetry, analytics and automation. Because enterprises lack unlimited resources and talent, they must do more with less. As a result, telemetry (and accompanying analytics) has become critical in providing better visibility into network performance and security, offering actionable insights derived from that visibility and enabling automation that prevents performance degradations and ensures network health.
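As a toy illustration of the analytics layer described above, the sketch below flags telemetry samples that spike well above their trailing average. The function name, window size and threshold are hypothetical, standing in for whatever a real telemetry pipeline would use:

```python
from statistics import mean

def detect_anomalies(samples, window=5, threshold=2.0):
    """Flag samples exceeding `threshold` times the trailing-window
    average -- a stand-in for telemetry-driven anomaly detection."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = mean(samples[i - window:i])
        if baseline and samples[i] > threshold * baseline:
            alerts.append((i, samples[i]))
    return alerts

# Interface utilization samples in percent (hypothetical)
util = [10, 12, 11, 13, 12, 55, 12, 11]
alerts = detect_anomalies(util)  # flags the spike at index 5
```

In practice the same pattern feeds automation: an alert like this would trigger a remediation workflow rather than just a report.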

Latency. Data centers must also lower their latency, given the higher level of East-West communication in flatter, modern networks. Modern data centers run distributed, cloud-native, microservice-based applications and artificial intelligence/machine learning (AI/ML) workloads. The distributed nature of these microservice components means that latency is a key factor affecting performance.
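A quick back-of-the-envelope model shows why per-hop latency matters so much for these workloads. All figures below are hypothetical; the point is that East-West latency multiplies across every hop a distributed request crosses:

```python
def request_latency_us(hops: int, per_switch_us: float, serialization_us: float) -> float:
    """One-way network latency for a request crossing `hops`
    East-West switch hops (all figures hypothetical)."""
    return hops * (per_switch_us + serialization_us)

# A microservice call crossing 4 leaf/spine hops, with ~1.0 us of
# switching delay and ~0.5 us of serialization per hop (hypothetical)
latency = request_latency_us(hops=4, per_switch_us=1.0, serialization_us=0.5)
```

Because a single user transaction may fan out into dozens of such internal calls, even microsecond-level per-hop savings compound into visible application speedups.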

Programmability. With the need to become more agile, programmable data planes for network switches have also become essential. Whereas it may take one to two years just to “respin” switch silicon for a new function, programmable switch ASICs eliminate the need to respin entirely. This allows operators to future-proof the data center and rapidly adapt to any changes, while gaining peace of mind that they can remain competitive in any market conditions.

Improved power and efficiency. Of course, data centers need to support greater bandwidth, while reducing their carbon footprint and environmental impact with greater power efficiency. Accordingly, the network switches that data center operators have progressively adopted — from 25G SerDes switches in 2014 to 100G SerDes switches in the near future — have shown dramatic power efficiency improvements in terms of watt per terabit within the same size envelope.
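The watt-per-terabit metric mentioned above is easy to compute. The sketch below uses made-up wattage and port figures purely for illustration; the takeaway is that aggregate bandwidth per generation grows far faster than power draw:

```python
def watts_per_terabit(total_watts: float, ports: int, port_gbps: int) -> float:
    """Power efficiency as watts per terabit/s of aggregate port
    capacity (all numbers in the example below are hypothetical)."""
    terabits = ports * port_gbps / 1000
    return total_watts / terabits

# Two hypothetical switch generations in the same 1RU envelope:
old_gen = watts_per_terabit(total_watts=400, ports=32, port_gbps=100)  # 3.2 Tbps
new_gen = watts_per_terabit(total_watts=550, ports=32, port_gbps=400)  # 12.8 Tbps
```

Even though the newer box draws more absolute power in this example, its watts per terabit fall sharply — which is the efficiency trend the SerDes transitions deliver.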

Multi-sourcing strategy for networking. We have all witnessed the tremendous recent shortages in semiconductor components, and we know that multiple supply chains lead to faster innovation, better products and, ultimately, lower cost. While networking has not historically embraced multi-sourcing (it’s common in computing, storage and optics), it’s time to rethink that paradigm.

Already, digital transformation and big data applications are reshaping business in all industries. From enabling remote workforces to driving AI applications, ML and IoT, data centers are maximizing business velocity and powering new use cases. But the data center needs to change. As is always the case in disruptive sectors, the organizations that embrace modern designs and work to future-proof their businesses will be rewarded.

