When I worked in networking, I took as many training classes as possible during my first few years. One idea constantly drilled into my head was that, since you have a short backup window and a limited amount of tape to back the data up to, you should back up only the data -- not the operating system or the applications.
I pretty much accepted this concept. Every night I did a full system backup on my domain controllers, but I usually only backed up the data on my other servers. Then about a year ago, I had a massive crash on one of my servers and had to do a bare metal restore.
Since I didn't have backups of anything but the data, I had to manually install Windows Server, and then try to remember which applications the server was running so that I could restore them as well.
There were many other details to cope with, such as checking my network documentation (I had thankfully taken the time to document my network) to see which IP addresses were bound to which NICs and what hardware was installed in the server so I would know which drivers to download.
Although my restore method got the job done, it made me realize that backing up only the data is fine as long as you never have to restore anything other than the data. This type of backup is a pain, however, if you end up having to do a bare metal restore.
Backup window doesn't fit tape capacity
After that incident, I wanted to start doing full backups of each of my servers. The problem was that my backup window and my tape capacity simply wouldn't allow it. I thought about using a rotation scheme in which I would perform a full backup of one server each night and do data-only backups of the other servers. But doing that was still beyond the capabilities of my backup hardware.
I looked for a way to remedy this problem, but like many other small companies, money was tight and I couldn't afford a high-end backup system. I also began to look into disk imaging and found some disk imaging products that would image my servers and then keep the image up to date as changes were made to them. Even so, most of these products were beyond my budget, and I still had to figure out where I was going to store the images that I created.
In the end, I adopted a poor man's imaging solution. The method that I am using is low cost and it has worked extremely well for me. I think it would work well for many other small and medium-sized companies.
Getting the technology set up
Rather than invest in a high-end imaging solution, I went online and ordered a medium-performance PC. I then upgraded the PC with a gigabit NIC, some extra RAM and three of the largest hard drives I could find. Collectively, these three hard drives would give me plenty of space to store server images and would even give me some room for growth.
I had an extra Windows Server 2003 license that I wasn't using, so I didn't have to worry about purchasing an operating system for this server. Of course, Windows Server 2003 tends to be a bit pricey. As an alternative, you could always check to see if you have any old Windows 2000 licenses that are no longer being used, or you could run an open source operating system.
Once I had Windows Server up and running with a bare bones configuration, I started making some server images. Keep in mind that I still had to work within some constraints. I was still backing up data every night in the same way that I always had, and I couldn't do the imaging in a way that would interfere with the normal backup or disrupt productivity on my network.
As such, I looked at my backup logs to get an idea of what time each server usually gets backed up. I then devised a schedule in which one server would get imaged each night. The time that the imaging process would run depended on what time the server was normally backed up. If the server was one of the first to be backed up at night, then I would image the server later in the night. If the server was one of the last to be backed up, then I would make the image earlier.
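The nightly staggering described above can be automated with the Task Scheduler. As a rough sketch (the task name, script path, day, and start time below are illustrative assumptions, not details from the article), a weekly job for one server might be created with the schtasks command that ships with Windows Server 2003, which on that platform expects the start time in HH:MM:SS format:

```shell
:: Hypothetical example: run this server's imaging script once a week,
:: timed to start after its normal tape backup has finished.
:: "Image-Server1" and the script path are made-up placeholder names.
schtasks /create /tn "Image-Server1" ^
    /tr "C:\Scripts\image-server1.cmd" ^
    /sc weekly /d MON /st 23:30:00
```

Each server gets its own task on a different night, which is what keeps the imaging jobs from colliding with one another or with the regular tape backup.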
I created the actual images by running NTBACKUP locally on each server. I simply scheduled NTBACKUP to back up the full contents of the server to a file that would reside on my backup server. I used a two-week rotation process, which means that I would image one server a night, and each server's image would be overwritten by an updated image every two weeks. Technically, I could have used a week and a half for the rotation cycle and kept the images a little more current, but using a two-week rotation cycle made scheduling less complicated and it kept a few days open each week. That way if I buy more servers later on, I already have a spot in the schedule set aside for them.
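The per-server imaging job itself is a single NTBACKUP command line. The sketch below shows the general shape under some assumptions: the backup server is reachable as a UNC share (the "BACKUPSRV" name and share path are invented for illustration), and the server's system lives on drive C:. The System State is included so a bare metal restore recovers the registry and other critical system files along with the file system:

```shell
:: Hypothetical sketch of one nightly imaging job: back up drive C:
:: plus the System State to a .bkf file on the backup server.
:: Server, share, and job names here are illustrative placeholders.
ntbackup backup C:\ systemstate /j "Server1 full image" ^
    /f "\\BACKUPSRV\Images\server1.bkf" /v:yes /l:s
```

Because the job always writes to the same .bkf file name, each run naturally overwrites the previous image, which is what produces the two-week rotation described above.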
Right now, you may be thinking that this method sounds good, but that at any given time the server image could be as much as two weeks out of date. That isn't as much of a problem as you might think. Remember that prior to implementing this system, there was no server image. If a server failed catastrophically, it had to be reloaded by hand. Now, if a server fails, I can restore a relatively new server image to bring the server back to a semi-current state, and then restore the data from tape backup.
Often the only thing other than the data that will have changed in the last two weeks is the installation of a few patches, and my WSUS server will put those back automatically. I am in the habit of manually making a server image after I make a major change to a server, but if I have just installed a new application or something and don't have an image that includes it, it is usually no big deal to re-install a single application.
One last issue to address is the recovery process. If the server images are sitting on a backup server, those images are completely inaccessible during a bare metal restore. I could technically install Windows onto the failed server, join the domain and then access the images, but I really don't want to do anything that could change references to the server within the Active Directory (such as deleting the Active Directory computer object just so I can re-join the domain).
Instead, I configured the backup server to also act as an FTP server. If one of my servers fails, I install Windows and then connect to the FTP server. I then place the appropriate image file into a location where it can be downloaded via FTP and I download the server's image to the server's local hard drive. I then disconnect from the FTP server, put the backup server's copy of the image file back into a safe place and then use NTBACKUP to restore the image file using the local copy.
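The download step can be scripted with the ftp client built into Windows, which avoids joining the freshly installed server to the domain. As a rough sketch (the host address, account, and file names below are assumptions made up for illustration), the commands would sit in a small text file that the ftp client replays:

```shell
:: Hypothetical recovery sketch: fetch the failed server's image file
:: from the backup/FTP server onto the fresh install's local disk.
:: Contents of getimage.txt (host, credentials, and paths are placeholders):
::   open 192.168.1.50
::   user restore <password>
::   binary
::   get server1.bkf D:\Restore\server1.bkf
::   bye
ftp -n -s:getimage.txt
```

The -n switch suppresses the automatic login so the script's own user command takes over, and binary mode matters because a .bkf file transferred in ASCII mode would be corrupted. Once the file is local, NTBACKUP restores from it as described above.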
Setting up this type of imaging solution can be time-consuming, but for small companies on a budget, it is a low-cost and effective alternative.
Brien M. Posey, MCSE, is a Microsoft Most Valuable Professional for his work with Windows 2000 Server and IIS. Brien has served as CIO for a nationwide chain of hospitals and was once in charge of IT security for Fort Knox. As a freelance technical writer he has written for Microsoft, CNET, ZDNet, TechTarget, MSD2D, Relevant Technologies and other technology companies. You can visit Brien's personal Web site at www.brienposey.com.