Microsoft introduced Storage Spaces Direct in Windows Server 2016 Technical Preview 2. This feature expands the use of local storage when configuring high availability (HA) storage systems.
For example, Storage Spaces Direct supports low-cost, high-capacity SATA disks as well as solid-state drives (SSDs) attached directly to the PCI Express bus through the Non-Volatile Memory Express (NVMe) interface -- formally, the Non-Volatile Memory Host Controller Interface Specification. Neither SATA nor NVMe-based devices were supported in earlier Storage Spaces clusters, which relied on shared disks.
From a practical standpoint, administrators can use Storage Spaces Direct to configure HA storage from the disks available in each clustered server, along with JBOD arrays connected to individual servers. The idea is to remove the expense and complexity involved in building a serial-attached SCSI (SAS) disk fabric, while allowing much higher storage performance with nonvolatile memory devices and far cheaper bulk capacity with SATA disks. Storage Spaces Direct opens up the potential for HA tiered storage and the application of software-defined storage in the enterprise.
Administrators who plan to experiment with Storage Spaces Direct should keep some current caveats and limitations in mind. For example, Storage Spaces Direct must be deployed and managed through Windows PowerShell rather than through Server Manager or Failover Cluster Manager, though the final release and future versions may add management through other tools.
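As an illustration, a minimal PowerShell deployment might look like the following sketch. The node and cluster names are placeholders, the commands assume an elevated session on a machine with the failover clustering tools installed, and cmdlet behavior reflects the preview release and may change:

```powershell
# Sketch only: S2D-Node1..S2D-Node4 and S2D-Cluster are hypothetical names.
# Assumes each node has eligible, unclaimed local disks.

# Validate the candidate nodes before building the cluster
Test-Cluster -Node S2D-Node1, S2D-Node2, S2D-Node3, S2D-Node4

# Create the failover cluster without assigning any shared storage
New-Cluster -Name S2D-Cluster -Node S2D-Node1, S2D-Node2, S2D-Node3, S2D-Node4 -NoStorage

# Enable Storage Spaces Direct, which claims the local disks on each node
Enable-ClusterStorageSpacesDirect
```

After Storage Spaces Direct is enabled, the claimed disks can be pooled and carved into virtual disks and volumes, again through PowerShell.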
In addition, Storage Spaces Direct does not currently support disks connected through multiple paths (disk fabrics), nor does it support the multipath I/O (MPIO) software stack. Lab environments that use such multipath configurations may need to be simplified before deploying Storage Spaces Direct.
There is no finalized hardware compatibility list, so Storage Spaces Direct is not guaranteed to work with any specific combination of SAS, SATA or SSD disks, host bus adapters, or remote direct memory access (RDMA)-based network adapters. Compatibility and performance issues can arise with some combinations of hardware, firmware and software.