I/O Consolidation in the Data Center

This chapter explains the benefits and challenges of designing physical infrastructure to simultaneously carry multiple types of traffic.
Introduction

Today Ethernet is by far the dominant interconnection network in the Data Center. Born as a shared-media technology, Ethernet has evolved over the years into a network based on point-to-point full-duplex links. In today's Data Centers it is deployed at speeds of 100 Mbit/s and 1 Gbit/s, a reasonable match for the current I/O performance of PCI-based servers.

Storage traffic is a notable exception, because it is typically carried over a separate network built according to the Fibre Channel (FC) suite of standards. Most large Data Centers have an installed base of Fibre Channel. These FC networks (also called fabrics) are typically not large, and many separate fabrics are deployed for different groups of servers. Most Data Centers also duplicate each FC fabric for high availability.

In the High Performance Computing (HPC) sector and for applications that require cluster infrastructures, dedicated and proprietary networks such as Myrinet and Quadrics have been deployed. InfiniBand (IB) has achieved a certain penetration, both in the HPC sector and, for specific applications, in the Data Center. InfiniBand provides good support for clusters that require low latency and high throughput from user memory to user memory.

Figure 1-1 illustrates a common Data Center configuration with one Ethernet core and two independent SAN fabrics for availability reasons (labeled SAN A and SAN B).
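To make the cost of this architecture concrete, the attachment scheme of Figure 1-1 can be sketched as a toy model. The per-server adapter counts below are an illustrative assumption (a common redundant configuration: two Ethernet NICs plus one FC HBA toward each SAN fabric), not figures stated in the text.

```python
# Toy model of the "current" Data Center architecture of Figure 1-1:
# one Ethernet core plus two independent FC fabrics (SAN A and SAN B).
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    # Assumed redundant attachment: two Ethernet NICs for the LAN core,
    # plus one FC HBA per SAN fabric for high availability.
    ethernet_nics: int = 2
    fc_hbas: int = 2  # one toward SAN A, one toward SAN B

    def adapters(self) -> int:
        """Total adapters (and cables) this server needs."""
        return self.ethernet_nics + self.fc_hbas


servers = [Server(f"srv{i}") for i in range(4)]
total_adapters = sum(s.adapters() for s in servers)
print(total_adapters)  # -> 16: four adapters per server in this model
```

Even in this small example, every server carries four adapters and four cables across three separate physical networks; eliminating that duplication is the motivation for I/O consolidation explored in this chapter.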

Figure 1-1 Current Data Center Architecture
