In the first part of this series, we looked at how to move flat files, commonly referred to as unstructured data, from Windows file servers to a NAS or SAN. In this second part, we will look at why and how to move structured data sources to networked storage.
Migrating structured data
Regardless of whether corporate information is centralized in a data center environment, it is highly likely that ever-expanding data requirements will exceed the capabilities of your servers: they simply cannot host enough disks to meet the physical capacity requirements, or cannot provide sufficient I/O to deliver the necessary performance.
While it is perfectly possible to install a handful of 500 GB, 750 GB or even larger disks in a server, the capacity would be huge but the performance would be terrible as soon as more than a few people tried to access whichever database or service runs on those disks, because a handful of large disks offers very few spindles to share the I/O load. For these reasons, migrating to a SAN is the best choice for administrators today.
Unlike flat file data, structured data involves file sizes and permissions models that make it impossible to simply copy the files from one place to another. Also, in Windows environments, hosting database-type information on NAS infrastructure is almost never supported -- only SAN technologies are supported.
Connecting a Windows server to a SAN
Connecting a Windows-based server to a SAN requires one of three solutions. First, if the SAN is Fibre Channel-based, a host bus adapter, or HBA, is a necessity. Alternatively, if the SAN is based on iSCSI technology, administrators have two choices.
The first option is to use the same kind of solution you'd use with Fibre Channel SANs. Many vendors have HBAs for iSCSI that connect the server to the networked storage over standard networking equipment, but with the added benefit that encryption, compression and other intensive work can be done on the card. That reduces the burden on the server CPU.
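To make the iSCSI picture concrete, the initiator first discovers which targets a storage portal offers and then logs in to one of them. Here is a minimal Python sketch that parses discovery records of the common "<ip>:<port>,<portal-group-tag> <iqn>" line format used by typical initiator tools; the format, addresses and IQNs below are illustrative assumptions, not details from the article:

```python
# Minimal sketch: parse iSCSI SendTargets discovery records of the form
# "<ip>:<port>,<portal-group-tag> <iqn>". The line format, addresses and
# IQNs are illustrative assumptions for this example.
from typing import List, Tuple


def parse_discovery(output: str) -> List[Tuple[str, int, str]]:
    """Return (portal_ip, port, target_iqn) tuples from discovery output."""
    targets = []
    for line in output.strip().splitlines():
        portal_part, iqn = line.split()          # "ip:port,tag" and the IQN
        addr, _, _tag = portal_part.partition(",")
        ip, _, port = addr.rpartition(":")       # split off the TCP port
        targets.append((ip, int(port), iqn))
    return targets


sample = """\
192.168.10.5:3260,1 iqn.1991-05.com.example:storage.lun1
192.168.10.6:3260,1 iqn.1991-05.com.example:storage.lun2
"""
for ip, port, iqn in parse_discovery(sample):
    print(ip, port, iqn)
```

Note that 3260 is the standard iSCSI TCP port; whether discovery and login happen on the HBA or in host software is exactly the trade-off described above.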
If a server is unlikely to be busy enough to require an HBA to offload processing, you can use standard Ethernet networking to connect the server to the storage. While it is of course possible to put the servers, users and storage on a single network, it is not recommended. Instead, use multiple network cards in the server to separate user traffic from storage traffic.
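The separation advice above boils down to keeping the storage NIC on its own subnet so iSCSI I/O never competes with client load. A small Python sketch of that sanity check, using the standard library's ipaddress module; the subnet values are made-up placeholders:

```python
# Sketch: confirm the NIC dedicated to storage traffic sits on a different
# subnet from user-facing traffic. The subnets below are illustrative
# assumptions, not values from the article.
import ipaddress

user_net = ipaddress.ip_network("10.1.0.0/24")     # user-facing LAN
storage_net = ipaddress.ip_network("10.2.0.0/24")  # dedicated iSCSI segment


def is_separated(net_a: ipaddress.IPv4Network,
                 net_b: ipaddress.IPv4Network) -> bool:
    """True when the two segments share no addresses at all."""
    return not net_a.overlaps(net_b)


print(is_separated(user_net, storage_net))  # distinct subnets -> True
```

In practice this is just the planning step: give the second network card an address on the storage subnet and leave the first card serving users.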
There are many benefits to booting the server from networked storage. Having the operating system stored on the SAN allows the boot LUNs to be replicated to other volumes on the same SAN or to other controllers over the network. Booting a server from an HBA requires no extra software, but booting from a standard network card is a little different: software such as winBoot/i from emBoot Inc. is resilient and easy to set up redundantly, so the failure of one boot service does not prevent the simple act of rebooting or starting a server.
However the server is booted, the structured data -- Exchange, Oracle or SQL databases -- will always reside on the networked storage. Fortunately, moving to that storage is simple. However the server is connected, the LUNs the SAN presents to it will look like physical disks -- very large physical disks.
Administrators don't need to worry about configuring RAID arrays on the server or about software configurations such as converting to dynamic disks or creating volumes in disk management. Once the LUNs are formatted, administrators can follow Microsoft's guidance -- for instance, KB224071 (in the case of Microsoft SQL Server) and KB821915 (for Microsoft Exchange) -- to move the databases and other important files to the SAN.
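For SQL Server, KB224071 describes the classic detach/attach pattern: detach the database, copy its files to the new volume, then attach it from the new location. As a hedged sketch, the following Python builds the corresponding T-SQL for a move onto a SAN-backed drive letter; the database name, file paths and drive letter are made-up placeholders, and the sp_detach_db/sp_attach_db calls follow the KB pattern rather than anything specific from this article:

```python
# Sketch: generate the T-SQL detach/attach statements (the KB224071
# pattern) for relocating a SQL Server database's files onto a
# SAN-backed volume. All names and paths are illustrative placeholders.
from typing import List


def move_db_script(db: str, files: List[str], new_drive: str) -> str:
    """Build detach + attach T-SQL for moving `files` under `new_drive`."""
    detach = f"EXEC sp_detach_db @dbname = N'{db}';"
    # Keep each file's name, swap the directory for the SAN volume.
    new_paths = [new_drive + path.split("\\")[-1] for path in files]
    args = ",\n    ".join(
        f"@filename{i} = N'{p}'" for i, p in enumerate(new_paths, start=1)
    )
    attach = f"EXEC sp_attach_db @dbname = N'{db}',\n    {args};"
    return detach + "\n-- (copy the files to the new location, then)\n" + attach


print(move_db_script(
    "Sales",
    ["C:\\Data\\Sales.mdf", "C:\\Data\\Sales_log.ldf"],
    "S:\\SQLData\\",  # hypothetical SAN-backed drive letter
))
```

The file copy between detach and attach is the only step the SAN changes: the destination is simply a LUN-backed drive letter instead of a local disk.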
Hosting structured data on networked storage has great benefits over local disks because a SAN can host many more disks than direct-attached server storage. The improved disk I/O lets administrators host more information on the same server, reducing the overall server count in the data center. Furthermore, the SAN is far better equipped to respond to changing space requirements. Whereas a server with local disks invariably must be shut down to add or reconfigure storage, networked storage is specifically designed so administrators can add disks and disk shelves without shutting anything down.
In part three, we'll take a look at the more advanced backup and recovery options that networked storage offers over local disks. We'll also see how they can be leveraged to reduce the number of tapes or backup disk space an administrator needs to maintain for recovery purposes.
ABOUT THE AUTHOR
Mark Arnold, MCSE+M, Microsoft MVP, is principal consultant with LMA Consulting LLC, a private messaging and storage consultancy based in Philadelphia. Mark assists customers in the design of SAN-based Exchange Server implementations. He has been a Microsoft MVP in the Exchange discipline since 2001, contributes to various Microsoft-focused technology Web sites and can be found in the Exchange newsgroups and other Exchange forums.