The promise of lower hardware costs and simpler management has breathed new life into SAN technology, and leveraging an existing Ethernet network to carry storage traffic sounds like a good idea.
But while iSCSI has matured since its introduction -- and has acquitted itself well in many different environments -- shifting to the protocol isn't always the right move. There are several factors to consider, and these small details are often the difference between success and failure with an iSCSI deployment.
The biggest difficulty with any SAN is the network architecture that ties things together. For iSCSI, this challenge is Ethernet technology itself.
While Gigabit Ethernet (GigE) sounds impressive, its 1 Gbps line rate is largely theoretical: After factoring in network stack and operating system (OS) overhead, the usable bandwidth is closer to 100 MBps to 120 MBps.
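The gap between the line rate and usable throughput is easy to see with back-of-envelope arithmetic. The sketch below uses illustrative overhead fractions (not measured figures) to show how a 1 Gbps link lands in the 100 MBps to 120 MBps range:

```python
# Back-of-envelope GigE throughput estimate; overhead fractions are illustrative.
LINK_RATE_MBPS = 1000  # nominal Gigabit Ethernet line rate, in megabits per second

def usable_mbytes_per_sec(overhead_fraction):
    """Convert the line rate to usable MBps after a given stack/OS overhead."""
    usable_mbits = LINK_RATE_MBPS * (1 - overhead_fraction)
    return usable_mbits / 8  # 8 bits per byte

# Theoretical ceiling with zero overhead: 1000 Mbps / 8 = 125 MBps
print(round(usable_mbytes_per_sec(0.0), 1))   # 125.0
# With roughly 4% to 20% protocol and OS overhead, usable bandwidth
# lands near the commonly quoted 100 MBps to 120 MBps range:
print(round(usable_mbytes_per_sec(0.04), 1))  # 120.0
print(round(usable_mbytes_per_sec(0.20), 1))  # 100.0
```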
Another problem is the inefficiency of the underlying network behavior. Because iSCSI rides on TCP/IP, the delivery of every packet is acknowledged, and dropped or damaged packets are routinely retransmitted. When you consider this in the context of a busy company network, with many users accessing applications on a variety of servers, it's easy to see how active network traffic can consume bandwidth and drive up latency. Combine this with the traffic involved in database queries, streaming media, file saves and loads, and other storage access, and contention is a common result. Careful architectural decisions, such as segmenting the network, can limit the impact of storage traffic, but all that storage data still needs to get from users to storage and back again.
Potential bandwidth constraints are magnified on virtual servers, where a single physical server hosts multiple virtual machines (VMs). Even when the server provides many Ethernet ports, each port is shared among two or more VMs, which places greater bandwidth demands on each server I/O port and can even push the backplane limits of the switch. When you add the storage traffic for each VM on the port, network performance can slow to a crawl. Deploying NICs with a TCP offload engine (TOE) may help, but it's often easier (and cheaper) to add more conventional NICs than to switch to TOE versions. Migrating to higher-performance switches may also ease bottlenecks.
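A quick budget calculation makes the contention concrete. The sketch below (with an assumed, hypothetical storage-traffic share) shows how little bandwidth is left per VM once a GigE port is split among several guests:

```python
# Rough per-VM bandwidth budget on a shared NIC port.
# The storage_share figure is an assumption for illustration, not a measurement.
def per_vm_mbps(port_mbps, vms_on_port, storage_share=0.5):
    """Non-storage bandwidth each VM can count on, assuming iSCSI
    traffic consumes a fixed share of the port."""
    app_mbps = port_mbps * (1 - storage_share)
    return app_mbps / vms_on_port

# One GigE port shared by 4 VMs, with half the port consumed by iSCSI:
print(per_vm_mbps(1000, 4))  # 125.0 Mbps of application bandwidth per VM
```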
As a result of these limitations, it's important to evaluate bandwidth and latency at key points along the network before moving to iSCSI. Network changes may be required to sustain the added load of iSCSI storage traffic. For example, configuring jumbo frames, segmenting the GigE network and confining iSCSI traffic to dedicated server and switch ports could be the ticket. An alternative is deploying 10 Gigabit Ethernet (10 GigE) with multipathing software, which can ease these complaints significantly (although the cost of redundant 10 GigE connections is still prohibitive).
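The jumbo frame suggestion comes down to per-frame overhead. Every Ethernet frame pays a fixed cost in headers, so larger frames carry proportionally more payload. A simplified sketch (ignoring preamble and inter-frame gap, and counting only Ethernet framing plus IP and TCP headers):

```python
# Payload efficiency per Ethernet frame for a given IP MTU (simplified model:
# 18 bytes Ethernet header+FCS, 20 bytes IP header, 20 bytes TCP header;
# preamble and inter-frame gap are ignored for clarity).
def payload_efficiency(mtu):
    """Fraction of each frame on the wire that carries actual payload."""
    payload = mtu - 40     # MTU minus IP and TCP headers
    wire_bytes = mtu + 18  # MTU plus Ethernet framing
    return payload / wire_bytes

print(round(payload_efficiency(1500), 3))  # standard frames: 0.962
print(round(payload_efficiency(9000), 3))  # jumbo frames:    0.994
```

Jumbo frames also mean fewer frames (and fewer per-frame interrupts) for the same amount of data, which is where much of the practical benefit comes from.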
Running iSCSI performance tests in a lab environment before making any deployment decisions is absolutely worthwhile. By using disk performance tools that measure storage behavior, such as Bonnie++ 1.96, IT can compare iSCSI storage performance against current storage performance and see how performance varies as storage parameters between the iSCSI initiator and target (cache size, frame size and so on) are tuned. Windows administrators can monitor many physical disk counters with tools like Performance Monitor (PerfMon).
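For a quick sanity check before reaching for a full benchmark suite, even a minimal sequential-write probe can flag gross differences between an iSCSI LUN and existing storage. The sketch below is not a Bonnie++ replacement; the path and sizes are placeholders:

```python
# Minimal sequential-write throughput probe (a sketch, not a benchmark suite).
import os
import time

def write_throughput_mbps(path, total_mb=64, block_kb=256):
    """Time sequential writes of total_mb megabytes to path; return MBps."""
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the page cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

# Run once against a file on the iSCSI LUN and once against current storage,
# then compare the two numbers (mount point below is hypothetical):
# print(write_throughput_mbps("/mnt/iscsi/probe.bin"))
```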
Drivers and firmware considerations
Drivers normally enable both iSCSI initiators (client systems) and iSCSI targets (storage devices). Since they can profoundly impact the performance and interoperability of devices, it's often best to use the latest driver version available for the OS you're running.
While Windows iSCSI drivers are generally well refined, driver support for Linux and other operating systems may be challenging. Once again, lab testing before deployment can reveal potential problems and help IT determine the optimal combination of drivers. Remember to check and update the firmware on NICs, iSCSI controllers and iSCSI storage systems.
iSCSI typically operates as a clear-text protocol, with no encryption for data in flight. As a result, an attacker listening to a network carrying iSCSI traffic can reconstruct files in flight, alter file contents by injecting fake iSCSI frames, and corrupt file systems in ways that could expose servers to further attacks -- weaknesses similar to those of any IP-based SAN protocol.
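To see why clear text is dangerous, consider how little work reconstruction takes. The toy sketch below stands in for the process: the `captured` list is a hypothetical set of sniffed TCP payloads (a real capture would come from a tool such as tcpdump, and real iSCSI PDUs carry their own headers), yet concatenating them recovers the file content directly:

```python
# Toy illustration of clear-text reconstruction. The 'captured' payloads are
# hypothetical stand-ins for TCP segments sniffed off the wire; with no
# encryption, reassembly is just concatenation in sequence order.
captured = [b"confidential ", b"payroll ", b"data"]
reconstructed = b"".join(captured)
print(reconstructed.decode())  # confidential payroll data
```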
To prevent these potential security problems, Windows administrators should consider several tactics before deploying iSCSI. One option is implementing a degree of network segmentation that limits the amount of storage traffic appearing on the general network. Another popular approach is to use virtual private network (VPN) connections for storage data passed between clients and servers, which makes the storage data difficult (if not impossible) for attackers to "sniff." In addition, consider encryption techniques to secure data in transit, which prevent an attacker from using content even if it's intercepted.
ABOUT THE AUTHOR:
Stephen J. Bigelow, senior features writer, has more than 15 years of technical writing experience in the PC/technology industry. He holds a BSEE as well as CompTIA A+, Network+, Security+ and Server+ certifications, and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow's PC Hardware Desk Reference and Bigelow's PC Hardware Annoyances. Contact him at firstname.lastname@example.org.