Understanding basic server architecture

The following tip is excerpted from Chapter 1, "Choosing your server," of our expert e-book, "Windows servers and storage." This chapter touches on aspects of server hardware, beginning with architectures (system components and interconnects) and ways to build servers from these components.

In the computing world, architecture is a fuzzy word used in several contexts. It generally refers to structural principles. For example, the instruction set architecture of a computer defines how the computer interprets memory contents when executing them as a program. Any given system may have its architecture described at many different levels – including the instruction set architecture (ISA), I/O, virtual memory and interconnects.

A server exists to provide services to its clients. Workloads vary, and an architecture designed to efficiently support one class of applications could be very different from that needed by another class. For economic reasons, a manufacturer wishes to minimize the number of system components it needs to create, maintain and support. The manufacturer builds many different configurations from a reasonably small number of components or subsystems, which gives it the freedom to update one subsystem class without having to update all of them. A key capability of a useful server architecture is an appropriate degree of composability – the ability to structure or configure servers by simply connecting various mixes of subsystems.
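The composability idea can be sketched in a few lines of code: a small catalog of interchangeable subsystems yields a much larger space of server configurations. The subsystem classes and model names below are invented for illustration, not drawn from any vendor's catalog.

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Subsystem:
    kind: str   # subsystem class, e.g. "cpu", "memory", "storage"
    model: str  # a specific component within that class

# Hypothetical catalog: two options in each of three subsystem classes.
CATALOG = {
    "cpu":     [Subsystem("cpu", "2-socket"), Subsystem("cpu", "4-socket")],
    "memory":  [Subsystem("memory", "16GB"), Subsystem("memory", "64GB")],
    "storage": [Subsystem("storage", "SAS"), Subsystem("storage", "SATA")],
}

def configurations(catalog):
    """Yield every server buildable by picking one option per subsystem class."""
    kinds = sorted(catalog)
    for combo in product(*(catalog[k] for k in kinds)):
        yield combo

configs = list(configurations(CATALOG))
print(len(configs))  # 2 * 2 * 2 = 8 configurations from only 6 components
```

The point of the sketch is the economics the paragraph describes: the manufacturer maintains six components but can ship eight distinct configurations, and updating one subsystem class (say, adding a third memory option) enlarges the product line without touching the others.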

For this to be possible, a server must be constructed from a number of reasonably independent subsystems connected using a relatively small number of interconnects. Interconnects used to couple very high-performance subsystems – such as the processor to its cache, or multiple processors to each other – have many more constraints and demands than those needed to hook up slower devices, such as disks. Generally, the intimate, high-performance interconnects are proprietary (private to the vendor) and tuned for a specific purpose, while those further out toward the edge of the system are more likely to adhere to standards. We will discuss key interconnects and the principles behind them in the next section.

Given an appropriate collection of interconnects and subsystems, it becomes possible to build systems of many different shapes and sizes, varying the mix of processors, memory, storage and connectivity. We will discuss the major system arrangements in subsequent sections, covering SMP systems, clusters and grids.

About the authors:
Pete Wilson is Chief Scientist of Kiva Design, a small consultancy and research company specializing in issues surrounding the move to multi-core computing platforms, with special emphasis on the embedded space. Prior to that, he spent seven years at Motorola/Freescale.
René J Chevance is an independent consultant. He formerly worked as Chief Scientist of Bull, a European-based global IT supplier.

This was first published in September 2006
