How data recovery for NTFS differs from FAT

So far, the articles in this step-by-step data recovery guide have addressed recovery techniques for FAT and FAT-32 partitions. Now I'll turn my attention to data stored on NTFS volumes.

Since the NTFS file system works completely differently from FAT and FAT-32, data recovery must be approached in a very different way. There is one exception: the final article in this guide will discuss a last-ditch recovery technique that works regardless of file system type. But right now, we're going to talk about how the NTFS file system works from a data recovery standpoint.

If you Google the phrase NTFS data recovery techniques, you'll mostly get links to Web sites selling data recovery products. The reason for this is that NTFS is designed to perform data recovery on its own, without the need for third-party data recovery utilities or administrative actions. The two main technologies at work that make this possible are cluster remapping and transaction logging.

How cluster remapping prevents data loss

Cluster remapping is a technique that attempts to prevent data loss by automatically moving data from clusters containing bad sectors on the hard disk to good clusters. The mechanics of cluster remapping vary, depending on whether or not the volume containing the bad sectors is fault-tolerant, and whether the bad sectors were detected during a read or a write operation.

Let's talk about write operations on volumes that are not fault-tolerant. When data is written to an NTFS volume, the operating system detects bad sectors as part of a verification process during the write operation. If the OS determines that a sector is bad, Windows marks the entire cluster as bad so that it will not attempt to save data to that cluster in the future. (This happens because clusters cannot be subdivided.) The data is then written to a good cluster, and no data loss occurs.
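To make that sequence concrete, here's a minimal Python sketch that models the write-verify-remap logic just described. To be clear, this is purely illustrative -- the class and method names are my own inventions for modeling the behavior, not NTFS internals or Windows APIs.

class Volume:
    """Toy model of a non-fault-tolerant NTFS volume (illustrative only)."""

    def __init__(self, cluster_count, failing_clusters=()):
        self.cluster_count = cluster_count
        self.bad_clusters = set()             # clusters marked unusable
        self.failing = set(failing_clusters)  # clusters that fail verification
        self.data = {}                        # cluster number -> payload

    def _verify_write(self, cluster):
        # Stand-in for the verification pass performed during the write.
        return cluster not in self.failing

    def _next_good_cluster(self):
        for c in range(self.cluster_count):
            if c not in self.bad_clusters and c not in self.data:
                return c
        return None

    def write(self, cluster, payload):
        """Write payload, remapping to a good cluster if verification fails."""
        while cluster is not None:
            if self._verify_write(cluster):
                self.data[cluster] = payload
                return cluster                # write succeeded; no data loss
            # A sector failed verification: mark the whole cluster as bad
            # (clusters cannot be subdivided) and retry elsewhere.
            self.bad_clusters.add(cluster)
            cluster = self._next_good_cluster()
        raise IOError("no good clusters remain")

vol = Volume(cluster_count=8, failing_clusters={2})
used = vol.write(2, b"report contents")
print(f"data landed in cluster {used}; bad clusters: {vol.bad_clusters}")

In this model, a write aimed at a failing cluster quietly lands in the next good cluster instead, which mirrors the key point above: the application never sees the bad sector, and no data is lost.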

But if the bad sector is detected during a read operation, things work differently. The operating system returns a read error to the application that requested the data. There are different theories as to what happens next. Some sources indicate that once the read error occurs, Windows marks the sectors within the cluster as bad, and the data within that cluster is permanently lost. Other sources say that if a portion of the data is readable, Windows moves it to another cluster prior to marking the affected sectors as bad. If you know of a Microsoft document with a definitive description of what happens, let me know about it.

When bad sectors are detected during a read operation on a fault-tolerant volume, the data is definitely read from an alternate source (another disk in the fault-tolerant volume). This read operation is performed much as it would be if the disk containing the bad sectors had failed. After the data has been read, Windows checks the disk containing the bad sectors for available good sectors, and when it locates them, it writes the recovered data to those sectors. Windows then remaps the file so that its data is read from the newly written clusters, and finally marks the clusters containing the bad sectors as bad.
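Here's a similar illustrative sketch of the read-time recovery sequence on a mirrored volume. As before, the MirroredVolume class and its methods are hypothetical stand-ins for the logic, not actual Windows code.

class MirroredVolume:
    """Toy model of a fault-tolerant (mirrored) volume (illustrative only)."""

    def __init__(self, primary, mirror):
        self.primary = primary    # cluster -> data; None models a failed read
        self.mirror = mirror      # redundant copy of the same data
        self.bad_clusters = set()
        self.remap = {}           # logical cluster -> remapped physical cluster

    def _find_good_cluster(self):
        c = 0
        while c in self.primary or c in self.bad_clusters:
            c += 1
        return c

    def read(self, cluster):
        physical = self.remap.get(cluster, cluster)
        data = self.primary.get(physical)
        if data is not None:
            return data
        # Read failed: fetch the data from the mirror, much as if the disk
        # containing the bad sectors had failed outright.
        recovered = self.mirror[cluster]
        # Locate good sectors on the disk and write the recovered data there.
        target = self._find_good_cluster()
        self.primary[target] = recovered
        # Remap the file so future reads use the newly written cluster,
        # then mark the original cluster as bad.
        self.remap[cluster] = target
        self.bad_clusters.add(physical)
        return recovered

vol = MirroredVolume(primary={3: None}, mirror={3: b"payroll records"})
print(vol.read(3))                  # first read recovers from the mirror
print(vol.remap, vol.bad_clusters)  # {3: 0} {3} -- future reads are remapped

The important detail the model captures is the remapping step: after recovery, the file's cluster reference points to the newly written cluster, so future reads never touch the bad one.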

The idea of marking a cluster as bad is not new. The FAT and FAT-32 file systems also support marking bad clusters. The difference is that cluster marking and remapping in NTFS are automatic; marking bad clusters on a FAT or FAT-32 file system requires a disk utility, such as CHKDSK.
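For example, on a FAT or FAT-32 volume you would run something like the following from a command prompt (the D: drive letter here is just an example standing in for whatever volume you're checking):

chkdsk D: /R

The /R switch tells CHKDSK to locate bad sectors and recover whatever information is still readable -- a manual version of what NTFS does on its own.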

How transaction logging prevents data corruption

The transaction logging feature of the NTFS file system is designed to prevent data corruption. Why is this important? Think about the way the FAT and FAT-32 file systems work. Suppose the power goes out while you're updating a file that resides on a FAT file system. The entire file will likely become corrupted, since interrupted write operations almost always result in at least some level of corruption on FAT and FAT-32 file systems.

On the other hand, NTFS takes a transactional approach to file I/O. Any write operation that creates or modifies a file is treated by the operating system as a transaction, so the OS takes an all-or-nothing approach to the operation -- the transaction either completes or it doesn't. There are no partial writes, because incomplete transactions are not allowed.

Although the mechanics behind transaction logging are complicated, the basic idea is that when a write operation occurs, the file system records the operation to a log file. Once the write operation is logged, NTFS updates the volume cache and then makes a log entry indicating that the transaction is complete.

After a transaction is committed, the NTFS file system is able to ensure that the entire transaction is applied to the volume, even if a failure occurs. If a recovery is necessary, NTFS can redo each committed transaction that is found in the log file.
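The following Python sketch models this log-then-commit idea and the redo pass during recovery. It's strictly a toy model of the concept; the real NTFS log ($LogFile) and recovery process are far more sophisticated, and none of the names below correspond to actual NTFS structures.

class LoggedVolume:
    """Toy model of transactional, logged file writes (illustrative only)."""

    def __init__(self):
        self.log = []    # append-only records: ("update"|"commit", txn, ...)
        self.files = {}  # on-disk state

    def write_file(self, txn_id, name, contents, crash_before_commit=False):
        # 1. Record the operation in the log before touching the volume.
        self.log.append(("update", txn_id, name, contents))
        if crash_before_commit:
            return       # simulate power loss mid-transaction
        # 2. Apply the update, then log that the transaction is complete.
        self.files[name] = contents
        self.log.append(("commit", txn_id))

    def recover(self):
        """Redo every committed transaction in the log; discard the rest."""
        committed = {rec[1] for rec in self.log if rec[0] == "commit"}
        for rec in self.log:
            if rec[0] == "update" and rec[1] in committed:
                _, _, name, contents = rec
                self.files[name] = contents  # redoing is harmless (idempotent)

vol = LoggedVolume()
vol.write_file(1, "a.txt", b"complete write")
vol.write_file(2, "b.txt", b"interrupted", crash_before_commit=True)
vol.recover()
print(vol.files)  # {'a.txt': b'complete write'} -- no partial b.txt

Notice that the interrupted transaction simply never appears in the recovered state. That's the all-or-nothing behavior described above: a committed transaction can always be redone from the log, and an uncommitted one leaves no partial write behind.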

Technologies such as cluster remapping and transaction logging make the NTFS file system much less prone to failure and corruption than FAT or FAT-32. But this doesn't mean the NTFS file system is perfect. Despite its protective features, several factors can still result in data loss. Certain types of hardware failures can corrupt data. Likewise, if the volume's boot sector or the disk's master boot record becomes corrupt, the data on the volume can become inaccessible.

The next article in this series will discuss recovery techniques you can use if data loss does occur on an NTFS volume.


Data Recovery Techniques for Windows
- Introduction
- How to recover data
- How to create a boot disk to run Norton Disk Editor
- How disk cluster size affects data recovery
- How long file names complicate data recovery
- How to recover deleted files on FAT via Disk Editor
- How data recovery for NTFS differs from FAT
- How to recover corrupt NTFS boot sectors
- Signature-based data recovery: A last-ditch technique

About the author: Brien M. Posey, MCSE, is a Microsoft Most Valuable Professional for his work with Windows 2000 Server, Exchange Server and IIS. He has served as CIO for a nationwide chain of hospitals and was once in charge of IT security for Fort Knox. He writes regularly for SearchWinSystems.com and other TechTarget sites.

This was first published in June 2006
