- Bandwidth: The amount of data an interconnect can move per second.
- Latency: The amount of time from the moment you request data to its delivery.
- Connectivity: Whether the interconnect links just a pair of subsystems or many of them.
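The difference between bandwidth and latency can be made concrete with a simple model: the time to move a block is roughly the startup latency plus the block size divided by bandwidth. The figures below are illustrative assumptions, not drawn from any particular interconnect.

```python
def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s):
    """Approximate time to move one block: startup latency plus streaming time."""
    return latency_s + size_bytes / bandwidth_bytes_per_s

# Assumed illustrative figures: 1 microsecond latency, 1 GB/s bandwidth.
latency = 1e-6
bandwidth = 1e9

# A 64-byte cache line is dominated by the latency term...
small = transfer_time(64, latency, bandwidth)
# ...while a 1 MB block is dominated by the bandwidth term.
large = transfer_time(1 << 20, latency, bandwidth)

print(f"64 B: {small * 1e6:.2f} us")
print(f"1 MB: {large * 1e6:.2f} us")
```

The point of the sketch: small transfers are governed almost entirely by latency, large ones almost entirely by bandwidth, which is why different interconnect classes optimize for different metrics.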
In meeting the needs of each interconnect class, you must balance cost, physical size, power and bandwidth. Because subsystems are implemented as single chunks of silicon, the physics of VLSI scaling shapes how interconnects can be built.
VLSI and interconnects
A key benefit of the physics behind scaling silicon technology to ever-smaller dimensions is that, along with the reduction in size, one generally obtains higher performance while lowering power demands. Smaller transistors take less energy to switch state.
However, problems occur when you have to connect these ever-shrinking transistors to actual wires in order to transfer data between chips, which are some distance apart. The energy needed to drive signals out of a chip package, across a board and into a distant chip does not scale with transistor technology. This means that larger transistors must be used for the inter-chip connections than for internal logic.
Furthermore, the transistors needed to handle the logical activities in the interface represent a silicon area that shrinks with every generation of chip. However, the big driver devices do not shrink and thus occupy an increasing percentage of physical space and power.
A further challenge is keeping signals on many wires in sync with each other. When you are signaling at very high frequency, minor differences in wire length can produce significant differences in time of arrival. Connections that hook up multiple chips, such as classical buses, require even bigger transistors.
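To see why skew matters at high frequency, compare the arrival-time difference caused by a small wire-length mismatch against one bit period. The propagation speed and mismatch below are assumed figures for illustration (signals on a circuit board travel at very roughly half the speed of light).

```python
C = 3.0e8             # speed of light, m/s
PROP_SPEED = 0.5 * C  # assumed on-board signal propagation speed

def skew_fraction(length_mismatch_m, signal_hz):
    """Fraction of one bit period consumed by the arrival-time skew
    caused by a wire-length mismatch between two traces."""
    skew_s = length_mismatch_m / PROP_SPEED
    bit_period_s = 1.0 / signal_hz
    return skew_s / bit_period_s

# A 15 mm mismatch is negligible at 100 MHz but severe at 2 GHz.
print(f"100 MHz: {skew_fraction(0.015, 100e6):.1%} of a bit period")
print(f"2 GHz:   {skew_fraction(0.015, 2e9):.1%} of a bit period")
```

The same physical mismatch that consumes about 1% of a bit period at 100 MHz consumes about 20% at 2 GHz, which is why wide parallel buses become harder to keep aligned as frequencies rise.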
The effect of all this is that systems are moving away from traditional buses and moving to interconnects that are unidirectional and point-to-point. This means the system of the future is going to look more and more like a network, with familiar network components such as switches embedded inside the computer.
Modern server system interconnects are depicted in the above diagram. In a modern server, buyers can choose from a variety of interconnects as follows:
- Back-side and front-side buses: The front-side bus connects one processor to other processors and often a high-performance North Bridge component, which connects the processors to the rest of the system. The back-side bus connects level two or level three cache to the processor. In some machines the back-side bus is shared among a number of processor cores or there is one bus per core. These buses are specific to the processor design. Most processors integrate two levels of cache; some designs also integrate a substantial level three cache or provide on-chip controllers to manage external SRAM (static RAM) as an L3.
- Memory interface: Standard interfaces slowly change as new generations of DRAM (dynamic RAM) are introduced. Some systems may favor RAMBus variants, while most will choose a more mainstream technology, such as "Double Data Rate" DRAMs (DDR, DDR2, etc.). In the diagram, the memory is shown connected to the North Bridge, which integrates the memory controller. However, newer designs are beginning to take advantage of VLSI to integrate the memory controller into the processor. This can substantially reduce latency to the memory and increase performance.
- System I/O: Most deploy some variant of PCI, generally a 64-bit-wide parallel version like PCI-X, though a serial form like PCI Express is being used increasingly.
- Storage: Serial ATA (SATA) or the eSATA variant is being used increasingly to replace the earlier preferred SCSI and commodity ATA interfaces. Similarly, SATA is displacing the Fibre Channel (FC) interconnect in some roles, although SATA does not have the same reach as FC, which still has its place in large configurations. A storage controller will be connected to the system using the system I/O bus (usually PCI). However, some storage vendors offer storage area networks (SANs), a form of specialized LAN tuned for storage, and network-attached storage (NAS) to connect the system to a storage subsystem using Gigabit Ethernet.
- LAN: An Ethernet variant, wired using Gigabit Ethernet.
- Cluster interconnects: When systems are connected into a cluster, a specialized interconnect is usually employed, although Gigabit Ethernet can be used. To connect computer systems in separate cabinets, Infiniband is slowly making some inroads. To connect boards within a single cabinet, versions of Hypertransport and RapidIO could be used, though the latter's adoption outside networking equipment is sparse.
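The "double data rate" name in the memory-interface entry above reflects that data is transferred on both edges of the clock, so peak bandwidth is twice the bus clock times the bus width. A minimal sketch of that arithmetic, using DDR-400 as a worked example:

```python
def ddr_peak_bandwidth(bus_clock_hz, bus_width_bits):
    """Peak bandwidth of a DDR memory interface in bytes per second:
    two transfers per clock cycle, bus_width_bits per transfer."""
    return 2 * bus_clock_hz * (bus_width_bits // 8)

# DDR-400: 200 MHz bus clock, 64-bit data bus -> 3.2 GB/s peak.
print(ddr_peak_bandwidth(200e6, 64) / 1e9)  # 3.2
```

This is a peak figure; sustained bandwidth is lower once refresh, row activation and controller overheads are accounted for, which is part of why integrating the memory controller into the processor pays off in latency rather than raw bandwidth.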
Interconnects that have faded from existence include VME (generally used in industrial systems), MCA and EISA (used in PCs), NuBus (used in early modular Apple Macintosh computers) and the VESA Local Bus.

This tip was excerpted from Chapter 1 of the e-book, Windows Servers and Storage, which touches on aspects of server hardware, including server architectures: system components and interconnects, also known as buses.
About the authors:
René J Chevance is an independent consultant. He formerly worked as Chief Scientist of Bull, a European-based global IT supplier.
Pete Wilson is Chief Scientist of Kiva Design, a small consultancy and research company specializing in issues surrounding the move to multi-core computing platforms, with special emphasis on the embedded space. Prior to that, he spent seven years at Motorola/Freescale.
This was first published in September 2006