Is Microsoft's idea of trustworthy computing an oxymoron? Bill Gates doesn't think so. According to a companywide e-mail back in January, Gates wants Microsoft to strive to make computers and computing as trustworthy as the electrical power and phone service we have today. Of course, Windows operating systems are a big part of this, but Bill's even got Microsoft working on a project to build a trustworthy computer, Palladium, which is supposed to help us all realize this goal.
Some of you had a good laugh at this. Hearing "trustworthy" and Microsoft in the same breath was just too much for you. You were quick to scoff, rant, rave and even redirect www.trustworthycomputing.com to a Google search page on Microsoft software flaws.
Then, in March, Microsoft announced a moratorium on new development while programmers (8,500 of them, some say) were given training in identifying security flaws in software and in writing secure code, then turned loose to review products. The work stoppage lasted more than two months.
One official white paper written by Craig Mundie, Microsoft's senior vice president and CTO, explains the company's "Trustworthy Computing" campaign in detail. Gates, Mundie and others have repeatedly emphasized that their project is not something that can be accomplished in a few months; it's something that may take several years.
But trustworthy computing really isn't a new topic. Have you reviewed the Common Criteria? This international standard for judging the security levels of computer devices and software had its origins in the Rainbow series of guidelines, which was developed by the federal government during the '80s. Do a search on "trustworthy systems" to read about research and comments on goals postulated by others. Certainly the model for reliable computing, a world where computers control everything, or at least a lot of things, is a common theme in books, television and movies: "The Jetsons," "Demolition Man," "Star Trek," "Star Wars" or "The Matrix," anyone?
So where are we now? If you track Microsoft security bulletins -- and you should -- you might have noticed that a number of them do not attribute notification of the vulnerability to a specific, um, security researcher. That's because some of these notices, and the accompanying fixes, are the results of the code reviews that even now are turning up flaws and instigating corrections.
Microsoft has also come out with a number of new free security tools and some revisions to others. These include Software Update Services, which allows you to centrally update Windows systems from a server on your network; an updated URLScan, which protects IIS and ISA Server from malformed URLs; and the upcoming SMS feature pack, which incorporates new centralized patching and reporting capabilities for users of SMS.
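To give a flavor of the kind of lockdown URLScan performs, here is a minimal sketch of a urlscan.ini configuration. The option and section names below exist in URLScan, but defaults and behavior vary by version, so treat these values as assumptions to verify against the documentation for the release you actually deploy.

```ini
; Sketch of a urlscan.ini fragment -- verify option names and
; defaults against your installed version of URLScan.
[options]
UseAllowVerbs=1            ; reject any HTTP verb not listed under [AllowVerbs]
NormalizeUrlBeforeScan=1   ; decode the URL before applying the rules
VerifyNormalization=1      ; reject URLs that decode twice (double-encoding attacks)

[AllowVerbs]
GET
HEAD
POST

[DenyExtensions]
.exe                       ; refuse requests for executables
.bat
```

The general idea is a default-deny posture: only the verbs and extensions you explicitly allow ever reach the Web server.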
We're going to see numerous benefits in the next version of Windows as well. If you've taken the time to download RC1 of Windows .NET Server 2003, you'll find some 30 items come up more secure by default. Some examples: IIS is not loaded, auditing is turned on, and, should you choose to install IIS, you'll find it a static Web server. You have to add in all the fancy-shmantzy stuff on your own.
Microsoft's own experience with secure computing is also instructive. In reviewing the security of its own networks, and in doing the risk analysis and threat modeling that we all should be doing, Microsoft made internal changes that it has shared, or will share, with us in the form of best practices, security operations guides, secure networking white papers and new security products.
Practically speaking, though, the question remains: Are these new security tools, coding changes and defaults worthwhile? I still remember systems that appeared to be more secure only because, without lengthy configuration and access to documentation, you couldn't use them. Microsoft products went the other way: Ease of use was important; security had to be configured after the fact.
Unfortunately, good security is not any easier when it's accomplished by pushing a button, adopting a template or automating patching. Blind application of patches can break applications and reduce productivity. The reality is that there are still many tradeoffs and decisions to be made. For example, to use the Microsoft Baseline Security Analyzer, another new security tool, you must leave the Remote Registry service enabled. Leaving it enabled allows possible remote access directly to the registry, something other security experts advise against. So which should you do?
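One way to split the difference is to enable the service only for the scan window and lock it back down afterward. The commands below are a sketch of that idea from a Windows command prompt; the service name (RemoteRegistry) and the `sc` syntax are correct for Windows 2000/XP-era systems as I understand them, but verify them against your own environment before scripting anything.

```bat
:: Sketch only -- confirm service name and syntax on your systems.
:: Allow the Remote Registry service to be started manually, then start it:
sc config RemoteRegistry start= demand
net start RemoteRegistry

:: ...run the Baseline Security Analyzer scan here...

:: Close the window again once the scan is done:
net stop RemoteRegistry
sc config RemoteRegistry start= disabled
```

You still accept the exposure during the scan itself, but not around the clock.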
ABOUT THE AUTHOR:
Roberta Bragg, MCSE, CISSP, MCT, MCP, is a well-known Windows security consultant, columnist and speaker. Her publishing credits include "ISA Training Guide," "MCSE Windows 2000 Network Security Design" and "Windows 2000 Security."