Who's to blame when Microsoft security updates go bad?

A bad Microsoft patch is not an uncommon event, but without the resources to test it adequately, what's an overworked IT staff supposed to do?

This past June, the latest batch of piping hot Microsoft security updates came rolling out on Patch Tuesday, as usual. But it wasn't long before the hue and cry was heard far and wide when many administrators applied the patch associated with security bulletin MS16-072 -- only to find it had uncorked a host of problems.

This update was meant to stop a man-in-the-middle vulnerability in Windows, but many administrators who applied the patch then had to deal with numerous complaints about missing printers and application shortcuts, to name just a few problems.

Administrators who got hit by this flood of help desk tickets found the easiest solution was to remove the patch. Once the chaos had dissipated, an angry mob of affected IT staffers gathered their pitchforks and torches and launched a fusillade of tweets at Microsoft.

Technically, the patch was fine, but it changed how Group Policy worked once installed -- a fact that was not disclosed in the initial MS16-072 security bulletin. Two days after Patch Tuesday, Microsoft updated the security bulletin to explain that certain Group Policy read permissions required changing to avoid problems, adding "[b]efore MS16-072 is installed, user group policies were retrieved by using the user's security context. After MS16-072 is installed, user group policies are retrieved by using the machine's security context."

It also took a couple of days for Microsoft to publish a PowerShell script to help administrators find the Group Policy objects that needed modification.
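Microsoft's script itself is PowerShell, but the core condition it looks for is simple enough to model. The sketch below is a hypothetical, simplified illustration -- the function name and the flat ACL structure are invented for this example, not taken from Microsoft's script: after MS16-072, a GPO can misbehave if neither Authenticated Users nor Domain Computers holds read access, because user policy is now retrieved under the machine's security context.

```python
# Hypothetical, simplified model of the post-MS16-072 check: flag any
# GPO whose ACL grants neither "Authenticated Users" nor "Domain
# Computers" read access. Real GPO ACLs are richer than the flat
# (trustee, permission) tuples used here for illustration.

READ_TRUSTEES = {"Authenticated Users", "Domain Computers"}

def gpos_needing_read_permission(gpos):
    """Return names of GPOs whose ACLs lack the required read entry.

    `gpos` maps GPO name -> list of (trustee, permission) tuples,
    where permission is "Read" or "Apply" (Apply implies Read).
    """
    flagged = []
    for name, acl in gpos.items():
        readers = {trustee for trustee, perm in acl
                   if perm in ("Read", "Apply")}
        if not readers & READ_TRUSTEES:
            flagged.append(name)
    return flagged

example = {
    "Default Domain Policy": [("Authenticated Users", "Apply")],
    "Drive Maps": [("Sales Group", "Apply")],
}
print(gpos_needing_read_permission(example))  # ['Drive Maps']
```

In practice the remediation was to grant Read on the flagged GPOs to Authenticated Users (or Domain Computers), per the updated bulletin.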

We asked the SearchWindowsServer advisory board members for their thoughts on the matter. Is there anything IT can do to avoid making more work when the next Patch Tuesday rolls around?

Brien Posey

This past June, Microsoft released a Windows Server patch that broke group policies. Many organizations found that the only way to stop the resulting chaos was to uninstall the patch. Needless to say, there was a considerable backlash against Microsoft, especially since this was not the first time the company had released a buggy patch. Tech bloggers predictably chastised Microsoft for failing to adequately test its code prior to release.

While I cannot defend Microsoft's apparent lack of quality control in this particular situation, it is ultimately up to the administrator to determine whether or not a patch is fit for installation.

General IT best practices have long stated that organizations should thoroughly test patches before installing them onto production systems. Even so, in this day and age of ever-shrinking IT budgets, every systems administrator I know is so overworked that taking the time to test patches is somewhat unrealistic. Besides, comprehensive patch testing is difficult. Nobody has the time to perform granular checks of every system in search of a bug that might not even exist.

Unfortunately, there aren't any easy answers to this dilemma. Personally, I like to wait for a week or two before applying patches. That gives me time to see if others are reporting problems. Even this approach is risky, however.

Patches are usually designed to address known security vulnerabilities. If you leave a system unpatched, then your system remains unprotected against a vulnerability that is well documented. In doing so, you greatly increase your chances of suffering a security breach.

Ultimately, organizations must accept the idea that Microsoft security patches are not perfect, and that patches will occasionally cause problems. Given this knowledge, each organization must establish its own patching policy based on the perceived risks of patching or not patching systems.

For more from Brien Posey, please visit his contributor page.

Trevor Pott

I don't buy the idea that the fallout from a bad patch should rest on the systems administrators rather than the vendors. This is nothing more than victim blaming and is absolutely no different from blaming poor people for being poor.

There aren't always options available to a systems administrator. Their priorities are set for them. Ask any sys admin if they would prefer having a complete duplicate of their production environment to test on and they'll say yes. We all would. I have never met anyone outside of NASA who gets that.

Ask any sys admin if they would like to be allocated adequate time for testing patches and they'll say yes. We all would. Nobody ever gets enough time for this. Not even NASA.

The majority of systems administrators aren't specialists. We don't work in big teams. We're generalists -- fire fighters trying to solve 50 problems at once. Many don't understand this. They think testing patches is as easy as riding a bicycle; once you've done it once, it should be just as easy forever. To a certain extent, I guess this is true. Learning to do patch testing is sort of like riding a bicycle, except your hair is on fire, the bike is on fire, everything is on fire and you're in hell.

We pay vendors money for a reason: for them to provide quality assurance for their products and to provide actual support. And Microsoft doesn't demand a mere pittance. They charge exorbitant amounts for software that is, at best, mediocre.

Microsoft constantly changes these products such that it makes the lives of administrators, developers, partners and end users more difficult. Microsoft absolutely, categorically and institutionally refuses to listen to its own customer base when those customers explain what needs to be changed and what changes have gone horribly wrong.

Let's say you buy a car. You bring it in for routine maintenance that involves the car vendor patching a software flaw in its security system. You drive home, and it starts raining. You turn on the windshield wipers. Unfortunately, the new patch had a bug whereby if the windshield wipers are on while the radio is set to 93.2 FM and your left blinker is on, then the brakes don't work.

Is it your fault for not having tested every possible combination of control and interface arrangement before driving home? Could you have even tested all combinations without driving the car? Don't some of those combinations involve the car being in motion?

Even in small businesses, systems administrators deal with dozens, sometimes hundreds of programs. In enterprises, sys admins are dealing with thousands of programs. Not only do we have to deal with each individual hypervisor, microvisor, operating system and application -- any of which can have thousands to potentially millions of settings and configurations -- we need to deal with the interaction between these applications.

Microsoft is a Fortune 50 company. It has billions upon billions of dollars. Dollars it took from our pockets and those of our employers. Dollars it gets not because it makes the best software or offers the best services, but because it has created an ecosystem of extreme lock-in built on the back of a monopoly.

There is no universe in which it makes sense to demand that each and every company -- and there are well over one billion companies in the world -- individually and independently test Microsoft's patches. The overwhelming majority of those organizations simply don't have the resources to do so, whereas Microsoft does.

And if we aren't paying Microsoft its punitively high software licensing rates for it to test the applications and operating systems forced upon us, then for what exactly are we paying? Isn't it the job of a software vendor to perform QA on its patches before releasing them?

For more from Trevor Pott, please visit his contributor page.

Michael Stump

First things first: installing patches on servers is just a tiny piece of patch management. In fact, installing a patch, especially on a production server, is a reasonable action to take only after the patch has been thoroughly tested on dedicated test environments. Otherwise, you are betting your server on Microsoft's QA processes and patch documentation.

As part of an organization's security program, patching walks a fine line between risk and reward. One benefit of quickly deploying new patches to your systems is the reduced risk of a compromise. The risk, as we were reminded with MS16-072, is that some patches aren't ready for production. Whether the patch itself is flawed, or the supporting documentation is insufficient, ultimately the responsibility for the systems we manage falls to the systems administrators.

The best bet to prevent patch-related problems is to develop a patch management program that allows you to properly evaluate patches prior to deployment. Part of this evaluation must include an understanding of the problem each patch solves, and the ability to determine whether your systems have other controls in place that mitigate the vulnerability without the need to apply a patch. If so, perhaps rolling out the patch is not necessary.

When you determine that a patch is needed, put it through your development cycle. You wouldn't push an update to your corporate applications without testing, would you? Treat Microsoft patches the same way. Deploy, evaluate and roll back in your nonproduction environment before you even think of deploying to production. Catch that bit about rollback? You really should test that process, too. Overzealous change managers often ask, "When was the last time you rolled back a patch?"
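That workflow -- defer when mitigated, deploy to test, evaluate, verify rollback, then promote -- can be sketched as a simple decision gate. This is an illustrative model only; the `PatchStatus` fields, action names and the seven-day soak default are invented for this example, not part of any real tool:

```python
# Illustrative sketch of a staged patch workflow: a patch advances to
# production only after it has soaked in a nonproduction ring without
# incident and its rollback path has actually been exercised.

from dataclasses import dataclass

@dataclass
class PatchStatus:
    mitigated_elsewhere: bool   # another control already covers the flaw
    days_in_test: int           # soak time in the nonproduction ring
    test_failures: int          # problems observed during evaluation
    rollback_verified: bool     # the uninstall path was actually tested

def next_action(p: PatchStatus, min_soak_days: int = 7) -> str:
    if p.mitigated_elsewhere:
        return "defer"                  # no need to deploy at all
    if p.test_failures > 0:
        return "rollback"               # evaluation failed; back it out
    if not p.rollback_verified:
        return "test-rollback"          # prove you can undo it first
    if p.days_in_test < min_soak_days:
        return "keep-soaking"
    return "promote-to-production"
```

For example, `next_action(PatchStatus(False, 10, 0, True))` returns `"promote-to-production"`, while a single test failure forces `"rollback"` regardless of soak time.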

And let's not forget that social media is there to help in these cases. Pay close attention to what the IT pros are reporting on Twitter. Chances are that you will not be the first to run into a bad patch; learn from someone else's mistakes.

In this recent case, sys admins the world over had been lulled into lazy patch management practices thanks to Microsoft doing its job pretty well. We had just started to forget the old days, when we ran Windows Update with our fingers crossed and a pot of coffee percolating in the data center break room in expectation of a long night. Now, many of us roll out patches via System Center Configuration Manager as nonchalantly as we throw Pokéballs at yet another Spearow or Rattata. But shame on us for pushing out patches without understanding the impact on our systems.

For more from Michael Stump, please visit his contributor page.

Next Steps

Third-party patch management products are not perfect

Are automated patch management tools the answer for overworked IT?

Using software products and expertise to help mitigate patching problems

Dig Deeper on Windows Server troubleshooting