IT professionals are doing everything they can to improve application performance. Read caching has long played an important role in boosting application performance, and write caching is finally catching up.
By incorporating write-back caching into Windows Server 2012 R2, IT administrators can now use top-tier storage media to protect writes before committing them to disk -- and without holding up application responsiveness. This results in measurable improvements for busy, mission-critical workloads. Let's review the requirements and issues surrounding write-back cache and Windows Server.
Caching is a technique for improving application performance. Consider read caching first. Most caches are intended to speed read operations by storing frequently read data in a small portion of memory rather than forcing the application to access the same data from magnetic media each time. Since memory access is orders of magnitude faster than magnetic media, data is read from a cache much faster and the application can continue sooner. If the expected data is not in the cache (a cache miss), the data can still be accessed from storage.
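The read-cache behavior described above -- serve hits from memory, fall back to the backing store on a miss -- can be sketched in a few lines. This is an illustrative model, not a real storage API; the `ReadCache` class and its names are assumptions for the example.

```python
# Minimal sketch of a read cache. On a hit, data is served from fast
# memory; on a miss, it is fetched from the slower backing store and
# kept in the cache for subsequent reads.

class ReadCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # stands in for magnetic media
        self.cache = {}                     # stands in for fast memory
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:               # cache hit: fast path
            self.hits += 1
            return self.cache[key]
        self.misses += 1                    # cache miss: go to storage
        value = self.backing_store[key]
        self.cache[key] = value             # remember it for next time
        return value

disk = {"block0": b"data0", "block1": b"data1"}
cache = ReadCache(disk)
cache.read("block0")   # miss: fetched from the backing store
cache.read("block0")   # hit: served from memory
print(cache.hits, cache.misses)  # 1 1
```

Note that a crash costs nothing here: the cache starts empty again, but every block is still safely on disk -- which is why read caching carries none of the risk discussed for write caching below.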
Write caching has been around as long as read caching, but works a bit differently. With write caching, data is written to a small portion of memory space instead of to disk. This completes the write much faster and allows the application to continue sooner -- the data in cache is then written to disk in the background. This is often referred to as write-back cache.
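The write-back pattern -- acknowledge the write as soon as it lands in memory, commit it to disk later -- can be sketched the same way. Again, this is an illustrative model with assumed names, not a real storage API; a real implementation would flush asynchronously in the background rather than on an explicit call.

```python
# Minimal sketch of write-back caching. Writes complete as soon as they
# land in the in-memory cache; the slower disk write is deferred until
# the cache is flushed.

class WriteBackCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # stands in for the disk
        self.dirty = {}                     # cached writes not yet on disk

    def write(self, key, value):
        self.dirty[key] = value             # fast: application continues now

    def flush(self):
        # A real system flushes in the background; here it is explicit.
        for key, value in self.dirty.items():
            self.backing_store[key] = value  # slow disk write, deferred
        self.dirty.clear()

disk = {}
cache = WriteBackCache(disk)
cache.write("block0", b"data0")
assert "block0" not in disk       # acknowledged, but not yet on disk
cache.flush()
assert disk["block0"] == b"data0"  # now committed
```

The window between `write()` returning and `flush()` completing is exactly the exposure discussed next: data acknowledged to the application but lost if power fails before the flush.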
Write caching has not been used as extensively as read caching, because writing poses a special problem. If power fails or the system crashes before cached data is committed to disk, catastrophic and unrecoverable data loss can occur. This isn't a problem for read cache because the data is already on disk and the cache can easily be rebuilt from disk contents once the system and application are restored after a crash. Write cache designed for modern storage systems is typically implemented as specially designed memory modules with backup batteries and other protective features.
The broad introduction of solid-state drives (SSDs) is reducing the vulnerability of write caching by using a small portion of SSD space as the write cache. Windows Server 2012 R2 can establish a write cache on SSD when a new storage volume is created. The result is a fast write cache that is nonvolatile but does not require a battery backup. Write-back cache is best employed for applications such as databases, which can overwhelm traditional disk subsystems with write cycles. The cache accepts data quickly and allows the application to resume much faster than waiting for disks (such as SAS or SATA disks) to complete writes in the queue.