Getting Off the Patch

Monday, January 10, 2011

Pete Herzog

We are told patching is good. It feels good. We see it as fixing something which we are told is broken: there's a hole, bad people can take advantage of it, and now they can't. It fixes a definite, specified problem. And it feels good and safe and comfy to take care of problems. All good things.

But when patches fail to help us, we are reminded that it was we who failed, not the patches. That's because patching is just one small part of a solution that also includes anti-virus, firewalls, intrusion detection systems, strong authentication, encryption, physical locks, disabled scripting languages, reduced personal information on social networks, and not clicking suspicious links or opening suspicious attachments: the healthy lifestyle solution we are told is needed to keep us safer.

I don't know about the rest of the world, but I've seen this before. As a kid, my breakfast cereal of frosted, chocolaty Super Corn Crispies was also healthy for me as long as it was PART of a nutritious breakfast that they said ALSO included fresh-squeezed orange juice, bran toast, turkey bacon, egg whites, and a multi-vitamin pill. At some point I realized I wasn't eating the Super Corn Crispies as part of a nutritious breakfast; I was eating them because I liked them, and if I stopped eating them, it didn't really take away from my nutritious breakfast. Growing up and being healthier actually meant realizing the difference between what feels fulfilling, like two bowls of that crap cereal, and what really is fulfilling, like an actual nutritious breakfast that is significantly lower in high-fructose corn syrup and has a shelf life of less than a century.

So what is patching really? In the security industry we are told it is part of defense in depth. We are told that it is a specific deterrent or end to a specific threat. We are reminded it's part of the security process. We are educated that it is one tactic in a strategy to minimize risk. We are cajoled into thinking it's a measure to maintain operations. And we are informed that it's one of the many, many controls which security appears to have. Although that last one, more than the others, is due to the vocabulary problem of the security industry.

Now the OSSTMM gets occasional complaints for seemingly making up our own definitions. But that's not really true. ISECOM uses the definitions as they were intended, just not necessarily as the security industry uses them. Sure, some things we put up with because it's how security users know them, like "firewall". But it's the security industry that has altered many of the definitions of these words, which explains why it's so hard to find two security professionals who agree on some standard definitions. Although I understand other industries of similar scientific maturity, like the ghost-hunting industry, have the same problem. So yes, the OSSTMM doesn't always use the terminology the security marketers have appropriated; however, we do use the terms as they have been established and used in nearly all other industries. Why? Because there's a big enough communication gap already between information security and the rest of the industries. Truly, in what other industry besides ghost-hunting does a firewall not protect against fires? (FYI: in ghost-hunting, a firewall protects people against the icy, unwanted touch of a specter.)

Outside the security industry, controls are defined as either something that maintains a baseline, sometimes by collecting metrics (which requires interactions with operations), or something that eliminates or reduces harm. That means a control assures that things stay the same, and that it can deter interactions which act to change that or cause harm. This means that controls are operational by definition. The security industry has decided to expand upon that definition, as it does with many other definitions it appropriates from other industries, by further dividing controls into Administrative (or Procedural), Logical (or Technical), and Physical. What's odd is that both Physical and Logical controls fit that universal definition, but Administrative controls clearly do not. The most common definition of Administrative controls states that they are the basis for selecting the Logical and Physical controls. Wouldn't that make them a strategy? Which means they're not operational. Which means they're not real controls as universally defined.

The security industry commonly states that the patching process is one of those Administrative controls. We know it is part of a business strategy for software companies. However, for security professionals and end-users, I'm not so sure it is. In the security trenches we know patching doesn't help maintain a baseline because it changes it. Yes, patching changes code, which changes operations. That's why patching is part of Change Control and is tested on non-critical servers first. This is a very important point, because you design your operations to be a certain way, and if you're changing them constantly, how can you be sure of what you have and what it's capable of at any given moment? You can't. So no wonder we have such a hard time securing our operations if we are always changing them at the core!

We also know that patching doesn't eliminate or reduce harm on its own. At best it either closes an interactive point or improves a flaw in an existing operational control. Rarely does patching introduce new controls, though it's possible that a particular patch adds controls to an existing service, like packet filtering, SSL encryption, or a CAPTCHA. But it's not the patching or the patch which is the control, because the patch itself doesn't interact with the threat. The patch is only a way to add or take away code. It makes a change to how things currently run, just as policies and security awareness training do (also Administrative controls), to help achieve a security strategy. Wait, aren't things that help achieve a strategy called tactics? Somebody better call the security industry, because I think something's wrong with the definition of Administrative controls.

So if patching is a tactic towards a particular security strategy, how can that be bad? I never said it was all bad. There are reasons where patching makes sense just like there are reasons to get a kick from a cup of coffee, get kicked by a shot of tequila, or spray stuff up your nose to breathe easier for 1.5 seconds. Yes, for the record, I am comparing patching to nasal spray.

For example, one overall business strategy is to have perfectly working operations to optimize returns. But optimized returns rely on freedom from costly efforts or unexpected losses (security), and freedom from unpleasant surprises (trust) that force you to drop what you're doing to deal with them. To achieve this, you can pick many tactics, and just one of them is patching. So consider this:

Patching may seem to be one of the cheaper tactics towards security, since most patches are free and are no-brainers to install. But in what scenarios is it still cheaper after you factor in the time spent patching, testing, and possibly fixing other software that breaks? Perhaps we can argue that the cost and ease of installation make it most suitable for home computers in this trade-off.

Patching may seem to be more secure because you are effectively interfering with a known threat. But is it the most secure way, when it isn't timely: patches arrive well after the fact and don't address 0-days? We can argue that the modicum of protection provided by patching is better for systems that apply no controls or poor controls, which is the case for many home users.

Patching may also seem to be a way of increasing trust in your service, because it addresses an uncovered flaw. That means you can have more confidence in it. But does adding unknown code and untested changes to your operations give you more reason to trust it, especially considering it comes from the source that got it wrong to begin with? Perhaps, if your network is nearly perfectly homogeneous with only that company's software, so that their products are guaranteed to have been tested together and therefore not to break. Although ANY 3rd-party drivers or applications, your personally configured environment, or the unique processes in how you use the patched application or service may still change things. Ask yourself: has your experience shown you that by installing the patches you're free from troublesome surprises? I don't think we can argue this even for home users, because many of them are sick and tired of the undesirable side-effects patching may bring. They just don't know how to get off the patch.

By the way, if you measure reasons to trust with the OSSTMM 3 Trust Metrics, most major software companies will probably only get medium to high percentages in 2 or 3 of the 10 trust rules and bomb the rest. So you may feel you can trust them, but logically you have little reason to trust them. If you follow them, it's because you want to and because you like it.

Now compare the previous considerations to taking the time and money to install the right balance of controls so that you never have to patch again unless you want some new feature. If you do this, then you know which reasons you have to trust your operations. Which means no surprises, because you can prepare in advance for the problems you know you could have. For example, if you didn't install any continuity controls due to cost, you instead create emergency response procedures for handling a DoS attack, to get you back up and running as quickly as possible should it happen.

Most people would love to have this! They would love to feel this safe because they really are safer. They would love to have fewer scares. The OSSTMM 3 even shows how to do this. Unfortunately, many security professionals are doing them a disservice by not helping them get there. How many of the last 1000 vulnerability notices included information on which controls prevented or mitigated the exploit? When's the last time you saw one say that the tests showed the bug in Application X led to remote root except when Application X was run in a sandbox? They don't. If they did, the common people hearing about it would think, "Damn, I should run that app in a sandbox. Maybe I should run all my Internet apps in a sandbox!" Instead, they think, "Damn, root access? Well, let me sit on my hands here while I wait for the patch and hope nothing happens."
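The sandbox idea is just the least-privilege principle: let the application keep its allowed interactions and take away everything else, so an unpatched bug has less to abuse. As a rough illustration only (not a real sandbox), this sketch assumes a POSIX system and uses Python's resource limits to strip a child process of the ability to write file data:

```python
import resource
import subprocess
import sys

def run_restricted(cmd):
    """Run cmd in a child process that cannot create or grow files.

    A crude stand-in for a sandbox: an RLIMIT_FSIZE of 0 means any
    attempt to write file data kills the child with SIGXFSZ. Real
    sandboxes (seccomp, containers, app sandboxes) restrict far more.
    """
    def drop_privileges():
        # Applied in the child just before exec, so the parent is unaffected.
        resource.setrlimit(resource.RLIMIT_FSIZE, (0, 0))

    return subprocess.run(cmd, preexec_fn=drop_privileges)

# The "exploited" child tries to drop a file on disk and is stopped.
result = run_restricted([
    sys.executable, "-c",
    "import os; fd = os.open('/tmp/owned.txt', os.O_WRONLY | os.O_CREAT); "
    "os.write(fd, b'x')",
])
print(result.returncode)  # nonzero: the child was killed before writing
```

The point is the shape of the control, not this particular limit: the child runs with its interactions constrained in advance, whether or not the bug inside it has been patched yet.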

Unfortunately, without help from security professionals, what it comes down to is how well people know their own operations, so that they can secure them. Most people have no idea even of their own processes, let alone entire operations, so their security is just the seat-of-the-pants variety. They patch because it seems crazy not to. They think every little bit counts. So they're really just hoping that the patches will make up for their personal inadequacies by making certain bad things not happen. Since it's easy to install them, why not? They also know that if something breaks, they can blame the patch.

So my sincerest advice is that if you have no idea what your operations are, don't know how to put the right balance of controls on the operations or the environment, just don't care about security, or have money to burn, then by all means continue on the patch, because I can honestly say that it is right for you. Another reason you might want to stay on the patch is that you trust the software companies to know more about your operations than you do. Patch because you're so used to nasty surprises that you don't think another one will do anything to change your current self-medicated dosage of antacid and beer. Patch because you know clearly that you are taking a short-cut now to save time for something more important, which you want to come back and do right as soon as possible. Those are all valid reasons to patch. You can add your own excuse here now too, or you could just start working towards getting yourself off the patch. Reading through the OSSTMM 3 is your best place to start, especially the parts about operational controls and Chapter 14, "What You Get".
Milan Pikula This is SO not true. It's all about the ratio of goodness and badness it brings, just as with creating a sandbox (it may disrupt operations, costs money and time, and must be tested), re-configuring the application to be safer (the same objections), or introducing an IPS (which totally ruins operations if misconfigured).

Of course patching doesn't save the day on its own. But neither does sandboxing: think of an information leak from the very same application, which sandboxing can't prevent. Where do you still have that 15-year-old sendmail, and how much have you invested in protecting it from the exploitation of a few simple buffer overflows? I hope you disregarded "non-executable stack", "address space randomization", and "stack canaries" as patching as well.

Perhaps there should be a distinction between "good things to do" and "good security (as a concept)". Patching a known vulnerability is almost always a good thing to do, because the real risk of exploitation is quite high: as soon as the vulnerability is published, it goes into dozens of scanner tools for anyone to find and abuse. Patching thus does what security is expected to do: it decreases the costs from hacks. If I make a bug in my code, I fix it. How can I sleep well if there are users who won't patch because of your recommendation?

On the other hand, good security should begin with a concept, be enforced on all levels, etc. But those "all levels" also include patching. Of course it's just a drop in the ocean. But the ocean IS MADE from drops.
Michael Barbere "So my sincerest advice is that if you have no idea what your operations are, don't know how to put the right balance of controls on the operations or the environment, just don't care about security, or have money to burn then by all means continue on the patch because I can honestly say that it is right for you"

These controls must be steam powered mechanical controls that are immune to past, current, and future exploits.
hamza karmani Patch is the right path neo ...
Cor Rosielle @Milan Pikula
You may think creating a sandbox or re-configuring the application may disrupt operations, cost money and time, and require testing, but the same is true for patches. The bad thing about patches is that once one is installed, you only have to wait until the next patch is released and everything starts all over again: disrupting operations, costing money and time, and requiring testing.

Now this is one approach you may want to choose. Patching is not a bad thing to do (Pete wrote that too); it's just not always the best thing to do. If you don't care or don't want to think about better solutions, then just stick with patching. This article was probably not meant for you, or you're not ready yet for better solutions. This article tells you that you can consider different approaches to achieve better security, and that is what the OSSTMM is about. It is about how to measure strong and weak points in your operational security: identifying missing controls, overly redundant controls, flawed controls, etc. This gives you an idea how well your environment is protected, even if you did not apply an existing patch, could not apply an existing patch because it breaks the application, could not apply a patch because it does not exist yet, or could not apply a patch because the bug is not reported at all and no patch is being created.

Now, I like to be in control. That's why I use the operational tests as described in the OSSTMM. In my experience it delivers a better-protected environment for less money, less time invested, less testing, and no ruined operations.
Rod MacPherson Milan Pikula,

I'm sure Pete will correct me if I'm wrong, but I don't think he ever meant to say you should never patch, just that patching shouldn't be the thing that takes up most of your time (as it does for many people in this profession). Rather, you should try to make your systems more secure by understanding what types of interactions they are supposed to have with whom, and putting controls into place to make sure that that's what happens.

You are right, a sandbox isn't the answer to all of life's problems, but I don't think that is what Pete meant. Running Adobe Reader and your web browser in a sandbox will eliminate most of the problems that unpatched bugs in them open up, so you don't have to patch them almost weekly and can focus on more important things. Running most server products in a sandbox won't have quite the same effect, because the interactions they need to make are different, and so the controls needed are different.

That is the key thing Pete wants people to think about. What interactions are your programs supposed to make, and how can you best control those interactions? Is patching the thing you should be spending time on, or is it something else? And maybe patching is something you work into a more relaxed schedule?

Milan Pikula sorry for the long message, I wrote it offline before reading the latest replies.

@Rod MacPherson: well said, I agree with all you wrote. I just don't feel this article conveys the same message. It dismisses patching as crap cereal ;).

You always have to use some kind of abstraction when implementing a security model. Otherwise you'll end up re-implementing the applications and network just for the purpose of verifying the correctness of each interaction. And if you have bugs within the "allowed" interactions, you don't have any other means of protecting the app than a patch. Patching isn't redundant.

@Cor Rosielle: we agree on some points: patching costs something, and sure, you have to look at other options as well. I've been a proponent of securing wisely (not too much, not too little, use the right tools) for years. I think this belief is reflected in my previous comment too.

What I don't like about this article (and your reply as well) is the thinking that there are different, mutually exclusive approaches: either you patch, or you do something else. Even the way you disregard my opinion with "you are not ready for better solutions" directly compares the two and implies patching is evil.

In reality, you don't have to choose between A and B. You can (and should) consider the pros and cons of both. Patching is NOT equivalent to "other solutions". In many cases (my exact words were "almost always") the pros outweigh the cons for patching. There is no simpler way to fix a known SQL injection than to change a single line in the source code. The change is trivially verifiable, and it removes the vulnerability forever. On the other hand, an application firewall / IPS helps only while it's there, may not prevent local attack vectors, and may not catch obfuscated exploits.
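The "change a single line" point is easy to picture. A minimal sketch using Python's sqlite3 (the table and payload are made up for illustration): the vulnerable version splices attacker input into the SQL text, and the one-line fix switches to a parameterized query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # Flawed line: attacker-controlled input becomes part of the SQL text.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_fixed(name):
    # The one-line fix: a parameterized query treats input as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # [('alice',)] - every row leaks
print(find_user_fixed(payload))       # [] - the injection is inert
```

The fix is verifiable in one look and removes that vulnerability for good, which is exactly what a generic filter in front of the application cannot promise.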

I don't say IPS or other measures or security by design are bad. An IPS fights 0-days and may alert you about the attack. If you develop from scratch, you can start with a framework that prevents SQL injections altogether. And there is nothing better than thinking about your security at the conceptual level, identifying assets and risks and so on. A few years ago I co-authored a Linux kernel patch aimed at increasing security, which allowed users to implement any MAC/DAC security model they wanted. Back then I advocated against using projects allowing ad-hoc "non-writeable files until reboot", because they don't add to real security.

But all of this said, unlike such projects, patching is NOT a lesser kind of security, nor is it interchangeable with other measures, because it works better for that particular vulnerability than any generic solution could.

While using, let's say, SELinux or VMware to partition the system helps prevent hacking the web server by exploiting mail, or putting a rootkit on the filesystem, it doesn't prevent exploiting most vulnerabilities or altering the services which the application itself provides (!).

If bugs in the software don't matter, then how many bugs? How severe? What if the code is so buggy it crashes every minute? What if it corrupts data because of a race condition whenever there are two concurrent users? Is it better to limit the number of concurrent sessions on the firewall, or to fix a simple problem in the app?

Patching is not evil. Patching does not replace other security measures, and other security measures can't replace patching. A well-written patch completely prevents a single known problem from ever happening again, while other measures just decrease the damage done by bugs not yet discovered.
♠ StyleWar ♠ Nice article. A bit misguided here though: "Now compare the previous considerations to you taking the time/money to install the right balance of controls so that you never have to patch again unless you want some new feature"

It presupposes that the nature of the code requiring patching is relevant to a control that we (we meaning ANYONE) can influence to a better outcome. Many patches are for sections of code that are rarely used. Spending a lot of time on my end buttoning up my security doesn't do anything for me if my COTS vendor didn't spend enough time in QA.
Pete Herzog Thanks everyone for sharing your insights and opinions. Now please allow me to explain, focusing on the facts, which seem to be something we will agree on.

1. A patch when applied changes the source code.
2. Patches are released AFTER a flaw is reported.
3. A patch will fix one or more reported flaws in the code.
4. The means to absolutely verify the true source of the patch requires that your security has not already been compromised.
5. Evidence shows that patches, under the guise of security, have been used in the past as a means for a company to change the function of their product, remove content, or enforce licensing terms after it has been purchased and installed on the computer.
6. Patching alone, without operational controls, has not been shown to protect systems or services consistently.

Therefore, using the facts, we can logically conclude the following: for every piece of software, there is an unknown quantity of flaws. You protect the software with multiple, varied controls to guard against flaws both reported and not. When you fix a flaw, you are only fixing a known, reported flaw. This does not protect you against the unknown, unreported flaws still existing, which is why you still need operational controls. So to say that you need to patch to fix a flaw ignores all the flaws you don't know about. Fixing each flaw, in addition to adding controls, adds new uncertainties both to the software and to the operational controls, and requires further verification testing to avoid surprise problems. A small change does not mean a small test. To skip the functional testing after patching is to trust that the software maker knows your operations better than you do and has your best interest in mind above their own profits, and that is if you can even be sure of where the patch came from. Patch only because you can't control the interactions, can't stop the interactions, don't do any quality control or functionality testing anyway, or don't know if you've already been compromised anyway.
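The verification testing called for above doesn't have to be heavyweight. A hedged sketch of the bare minimum: a post-patch smoke test that replays known-good requests against the patched service and flags anything that no longer behaves as expected. The health-check URL and the throwaway local server here are stand-ins for your own operations, not a real deployment.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def smoke_test(checks):
    """Run each (url, expected_status) check; return the failures."""
    failures = []
    for url, expected in checks:
        try:
            status = urllib.request.urlopen(url, timeout=5).status
        except Exception:
            status = None  # unreachable or erroring counts as a failure
        if status != expected:
            failures.append((url, status, expected))
    return failures

# A throwaway local server stands in for the freshly patched service.
class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Health)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/health"

result = smoke_test([(url, 200)])
print(result)  # []: the patched service still answers as expected
server.shutdown()
```

If the list comes back non-empty on the test box, the patch gets rolled back there instead of the breakage being discovered in production.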

Additionally, flaws in operational controls (security software) are a serious shame on the security industry, and as some of you suggested, if you have that many flaws in a piece of software, replace the vendor. I couldn't agree more.
♠ StyleWar ♠ The implied trust that allows us to put our backs to a wall WITH our vendors comes from two ongoing areas of concern:

1) Not being burned by that trust, in either the quality or the security space, by that vendor tends to enhance customer loyalty.

2) Not being able to afford to review every line of code in every app and every patch for every app tends to imply and even *require* trust.

Generally speaking, I also believe that vendors are VERY interested in their profits (as you stated) but that this is not mutually exclusive to my security in the long term. They WANT me to remain loyal to continue to line their pockets, so they must produce high quality, secure products and patches....

Generally speaking, I would be challenged to believe that there is a single operational example out there of the type of scenario where the philosophy you describe resulted in a strong security posture as measured by any standard we currently hold dear.

But maybe I'm misunderstanding your real points here.
Pete Herzog Greg, perhaps you should take the time to read the OSSTMM 3 then. For one, trust should not come from just the two reasons you mention. Certainly not in the security space. Secondly, your taking the argument to the extreme ("review every line of code") shows both a frustration with my method and perhaps a suggestion that you don't test after patching.

Of course most vendors are interested in their profits, but your security from their tight software is not the priority. Your loyalty stems from a reliance on patches, the constant reminder that they are there, a false sense that the patches mean they are improving constantly, and their means to consistently upgrade so as to keep you licensing newer versions. This isn't a conspiracy theory; it's business acumen. Since they know you have a hard time proving liability against them and their product should it fail, they have little motivation to make it the best possible and keep it that way indefinitely.

If one measures a strong security posture by the number of patches and layers of security software then yes, our method may not meet that standard. Then again, that particular method has been failing spectacularly over the years so why not try something else?
♠ StyleWar ♠ feel free to email me for additional discussion. I don't want to distract from your column. Incidentally - I thought the "Making Security Suck Less" article was a good one, and appreciate (even if I disagree) with the premise of this article.
Pete Herzog Greg, thanks for your interest in the article and what we are doing. I do appreciate your comments, because they give me a chance to explain to you and all those people who have doubts like yours but don't post them. I want only to convince people to start trying a better way towards security, not to start some holy war. I'd be more than happy to discuss this further with you and even help you learn the new methods. Honestly, when I, and others who switched, put it into practice the first time, it was scary. Change can be scary. But now it's like breathing fresh air! Personally, the greatest benefit for me is the "family house calls" that we IT people often dread: visit a family member and get asked to fix their computer problems. Now I just set them right in a few hours and tell them that if they want new software installed, to ask me. Then I can test it and turn it into a thin app or portable app, bring it with me, and let them run it without worry. No more patch hassles, broken printers, AV updates, or other changes. It just works consistently. :)
The views expressed in this post are the opinions of the Infosec Island member that posted this content. Infosec Island is not responsible for the content or messaging of this post.
