Watch out! Don't be a victim, or a purveyor, of shoddy data backup processes.
Backup is not the be-all and end-all. Completing daily, weekly or monthly backups is great. Forgetting to create a corresponding data restore process is not. With no restore capability, backups are meaningless.
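One way to act on that advice is to make restore testing routine: periodically pull a file back out of the backup and confirm it matches the original. A minimal sketch in Python, using a tar archive as a stand-in for whatever backup format is actually in use (the function names and archive layout here are illustrative assumptions, not any particular product's API):

```python
import hashlib
import os
import tarfile
import tempfile

def sha256_file(path: str) -> str:
    """Checksum a file so original and restored copies can be compared."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_drill(archive_path: str, member: str, original_path: str) -> bool:
    """Extract one file from a backup archive into a scratch directory
    and verify it is byte-identical to the live original."""
    with tempfile.TemporaryDirectory() as tmp:
        with tarfile.open(archive_path) as tar:
            tar.extract(member, path=tmp)
        restored = os.path.join(tmp, member)
        return sha256_file(restored) == sha256_file(original_path)
```

The point is not the checksum itself but the habit: a backup that has never been restored, even to a scratch directory, is an untested backup.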
So, let this Top 10 list of true data loss stories serve as fair warning. The names of the organizations where these no-nos have taken place have been omitted. Besides, "what happened to them could happen to anyone," said Bill Margeson.
This Top 10 list comes compliments of Margeson, president of CBL Data Recovery Technologies of Armonk, NY. CBL provides recovery for data lost due to hardware failure, software corruption or natural disaster.
1. It's the simple things that matter
A successful business organization purchased a "killer" Unix network system and put 300 workers in place to manage it. Backups were done daily. Unfortunately, no one thought to put in place a system to restore the data, too.
2. In a crisis, people do silly things
When the prime server in a large hospital's system crashed, minor errors started occurring. System operators tried everything to regain control. They repeatedly invoked a controller function. Instead of gathering data about the errors, however, they erased the data on the entire RAID array.
3. When the crisis deepens, people do sillier things
When the office of a civil engineering firm was devastated by floods, its owners sent 17 soaked disks from three RAID arrays to a data recovery lab in plastic bags. Someone had frozen the bags before shipping them. As the disks thawed, even more damage was done.
4. Buy cheap, pay dearly
This organization bought an IBM system, but not from IBM. Then, the system manager decided to configure the system uniquely, rather than follow set procedures. When things went wrong with the new system, it was next to impossible to recreate the configuration.
5. An almost perfect plan
This company purchased and configured an expensive, high-end and full-featured library for system backups. Unfortunately, the backup library was placed right beside the primary system. When the primary system got fried, the backup library did, too.
6. The truth and nothing but the truth
After a data loss crisis, this company's CEO and an IT staff member met with the data recovery team. No progress was made until the CEO was persuaded to leave the room. Then the staff member opened up, and solutions were developed.
7. Lights are on, but no one's home
A region-wide ambulance monitoring system suffered a serious disk failure. Only then did its operators discover that the automated backup hadn't run for 14 months. A tape had jammed in the drive, but no one had noticed.
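The failure here is silence: the backup job died and nothing complained. A freshness check that runs independently of the backup job itself closes that gap. A minimal sketch in Python (the directory layout and one-day threshold are assumptions for illustration):

```python
import os
import time

# Illustrative threshold: alert if the newest backup file is over a day old.
MAX_BACKUP_AGE_SECONDS = 24 * 60 * 60

def newest_backup_age(backup_dir: str) -> float:
    """Return the age in seconds of the most recently modified file in backup_dir."""
    files = [os.path.join(backup_dir, f) for f in os.listdir(backup_dir)]
    files = [f for f in files if os.path.isfile(f)]
    if not files:
        return float("inf")  # no backups at all is the worst case
    newest = max(os.path.getmtime(f) for f in files)
    return time.time() - newest

def backup_is_stale(backup_dir: str, max_age: float = MAX_BACKUP_AGE_SECONDS) -> bool:
    """True if no file in backup_dir is newer than max_age seconds."""
    return newest_backup_age(backup_dir) > max_age
```

Scheduled separately from the backup itself and wired to an alert, a check like this turns a 14-month silent failure into a next-morning page.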
8. When worlds collide
This company's high-level IT executives purchased a "Cadillac" system without knowing much about it. System implementation was left to a young and inexperienced IT team. When the crisis came, neither group could talk to the other about the system.
9. Hit Restore and all will be well
On September 11, this company's IT staff went across town to their backup system. They invoked Restore and overwrote the backups with data from the destroyed main system. Of course, all previous backups were lost.
10. People are the problem, not technology
Disk drives today are typically reliable. Human beings aren't. A recent study found that approximately 15 percent of all unplanned downtime occurs because of human error.
FOR MORE INFORMATION
What are your data loss stories and headaches? E-mail Assistant News Editor Meredith Derby