How to Pen Test Crazy

Monday, June 20, 2011

Pete Herzog


The current security model is crazy. And the current crazy testing methods actually make it look like it's not.

I think that's why so many people fail to see how broken the current consumer-ready security model is. Look at the current attacks and how security companies, even HUGE ones, with their security measures and countermeasures built on this model, are leaving people out to hang.

But I'm jumping too far in already. Let me step back a moment and thinkcast. Who tests operational security better than the hacker? And by hacking, I mean the concept of knowing intimately and deeply how something acts in its environment in order to influence it the way you want to.

(You might like this definition of a hacker, but nowadays most of the world sees "hacker" as a synonym for "Internet criminal". Of course that's wrong, and probably induced and perpetuated by the media, but it's a reality we have to take into account. So in the following explanation of what a hacker is and does, perhaps I need to first make explicit that I'm not referring to just any "Internet criminal", although there are many who are hackers according to this concept. But sadly, usually, an Internet criminal doesn't need to intimately understand the operations of its target. For many such Internet criminals it's often sufficient to find a working exploit that was discovered and created by someone else.)

"Hackers" are the ones who interrogate RFCs and dwell on source code looking for even the smallest logic errors or the most abstract combination of timing and happenstance to make the seemingly impossible (you can imagine the developers saying, "Yeah, but all those things will never happen at once") possible. Hackers do this not by being sneakier or eviler. They don't particularly know the security best practices better than the policy writers. They aren't even always crack programmers, or even secure programmers. What they are is more deeply knowledgeable about how certain things really interact and co-depend across a variety of environments. They teach themselves the clockwork of something very complex, like interacting protocols and operating systems, and they keep teaching themselves as these things change. Over time they learn that no matter what changes in OS, protocols, programming language, etc., all these complex systems have certain elements or certain conditions that, if designed a certain way or just missing, will lead to a security problem. So hackers learn to look for these things and dig in where they find them.

Now the investigative role in hacking has mostly been taken over by vulnerability researchers today. That's great because it creates a huge body of knowledge to extend the capabilities of the professional hackers like penetration testers and ethical hackers. This means better test coverage of a larger scope including even more products and services. This leaves the professionals to focus on getting deep into the cogs and springs of operations to find those places to dig in and then apply the vulnerability research to verify if the problems really exist.

Except now the application of vulnerabilities has been taken over by scanners. I'm not saying they do it better. I'm saying the market has sadly, unfortunately accepted scanners to find and report vulnerabilities in their standing infrastructure as a cheaper, good-enough alternative. Find a hole and throw a patch in its face. We've got gilded industries around it too like vulnerability management and patch management.

So all that's left of the hacking for the professional tester is the big-picture part: analyzing the interactions of the whole of operations. They can investigate how things work in their various environments, how they interact, the resources they trust, share, or squander, and how this combines with the vulnerability reports, the regional and company culture, the chains of trust, and the assets. Except sadly, usually, nobody hires hackers to do this. They hire MBAs and risk managers to look at the vulnerability reports and compare them to an industry baseline and the various required compliance objectives to make threat trees, risk scenarios, and the assorted matrices.

(When these MBAs analyze results they sadly, usually spend their time focusing on something from the past, and something kind of general at that. Sometimes the threat they analyze and try to protect against has no chance anyway in your controlled operational environment, precisely because it's a controlled operational environment. In plain speak, they match a banner to a CVE and apply a CVSS score without thinking about the controls which affect the operation of the threat within a controlled environment, simply because they don't have that intimate knowledge of how their own systems operate in their environment. But a hacker does. This difference is what distinguishes hackers from risk managers, leaders from followers, specifics from generalities, future from past, success from failure, and us from them.)
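The banner-to-CVE habit can be sketched in a few lines. This is a hypothetical, deliberately simplified discount model, not the official CVSS environmental-metric formula: the control names and weights are illustrative assumptions, chosen only to show why a score applied "in a vacuum" differs from one that accounts for the operational environment.

```python
# Illustrative weights for compensating controls (assumed, not standard).
CONTROL_DISCOUNT = {
    "network_segmentation": 0.30,  # the threat can't reach the service
    "authentication_gate":  0.20,  # exploitation needs a valid session first
    "egress_filtering":     0.15,  # the payload can't phone home
    "integrity_monitoring": 0.10,  # tampering gets detected quickly
}

def contextual_score(base_score, controls_in_place):
    """Scale a CVSS-style base score (0-10) by the controls that actually
    operate on this system, instead of scoring the banner in a vacuum."""
    remaining = 1.0
    for control in controls_in_place:
        remaining *= 1.0 - CONTROL_DISCOUNT.get(control, 0.0)
    return round(base_score * remaining, 1)

# A "critical" from a banner match looks different in a controlled environment:
print(contextual_score(9.8, []))                       # 9.8
print(contextual_score(9.8, ["network_segmentation",
                             "authentication_gate"]))  # 5.5
```

The point isn't the particular numbers; it's that the second calculation requires knowing which controls are actually in place and working, and that knowledge is exactly what the banner-matcher doesn't have.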

So who verifies security operations? Who tells you the big picture of what you have, how it interacts, and what it needs, based on how it works and how it should work? Not the penetration tester. Not the ethical hacker. Not anymore. Sadly, unfortunately they've been marginalized to running scanners and eliminating false positives and negatives. They have had their scopes restricted to test only specific infrastructure components only in certain ways as required by compliance objectives. They are used to shock, scare, or leverage upper management or the Board to make a bigger security budget to pay for more vendor licenses. They have been made the spokesmodels for looking hardcore while proving a negative to protect corporate interests. And that's really the one trick expected of them. It's the only one anyone's really buying. Yes, in the modern, commercial and corporate security world, professional hackers have been reduced to being a one-trick pony.

The professional hackers who do what they do best, in ways that are absolutely critical to organizations, have been marginalized into near extinction, with small pockets surviving in niche work like crime and espionage. The professional hacker somebody could have hired would have told them to balance their trust in their vendors with specific operational controls, and to beware contracts that use phrases like "best efforts" and "timely notification".

So I wonder: how many pen testers with clients using RSA authentication caught this problem in advance, and how many advised their clients of the compensating controls missing from that trust? How many calculated and quantified it to show there was a serious imbalance? How many pen testers were actively testing for cross-channel trusts?

We wrote the OSSTMM 3 to address these things. We knew that penetration testing, the way it continued to be marginalized, would eventually hurt security. Yes, the OSSTMM isn't practical for some because it doesn't match the commercial security industry of today. But that's because the security model today is crazy! And you don't test crazy with tests designed to prove crazy. So any penetration testing standard, baseline, framework, or methodology that focuses on finding and exploiting vulnerabilities is only perpetuating the one-trick pony problem. Furthermore, it's also perpetuating security through patchity, a process so labor-intensive to keep in homeostasis that nobody could maintain it indefinitely, which is the exact definition of the loser in the cat-and-mouse game. So you can be sure it also doesn't scale at all with complexity or size.

You see we at ISECOM knew that many penetration testers were still those same hackers who could work with operations and bring real security value. They were also the people in the best position to bring change to the security industry because they could consistently poke holes in the vendor-driven security model of authentication and encryption and still build better operational security despite it.

So we realized that if we could show penetration testers there's more to this birthday party than the one trick expected of them, they'd get it. So we did the research. We got great minds together and continuously ran real-world tests. We made the OSSTMM 3 because we don't want penetration testers and ethical hackers to be just the face of compliant corporate security. We want them to really be hackers again. We need operational security done right, for the sake of security in this interconnected world, where the butterfly effect is much more than a theoretical description of how we are all required to keep some of our eggs in someone else's basket. We NEED them to be hackers again.

We NEED them to be the authority for how operations and security work together, from the CPUs to the personnel. We NEED them to add a scientific method to their testing to assure validity for their efforts. We NEED them to be able to categorize and quantify their results to provide an unbiased foundation that risk managers can use as real data now, rather than relying so much on historical data or hypothetical baseline data. We NEED them to quantify specifically how much trust is not enough and exactly which security is too much, so they can create and manage controls based on valid trust scenarios. We NEED them to hack through all this crazy security BULLfrak that too many security people are pretending not to notice. Because if we can't make penetration testers into hackers again, we'll see a lot more companies and governments getting their asses handed to them.
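Categorizing and quantifying results, in the OSSTMM sense, starts with counting the attack surface: OSSTMM 3 defines porosity as the sum of the visibility, access, and trust points found on a scope. A minimal sketch of that counting step follows; the example targets are invented, and the full rav calculation in the manual also weighs controls and limitations, which this sketch deliberately omits.

```python
from dataclasses import dataclass, field

@dataclass
class ScopeResults:
    """Operational test findings for one channel (e.g. data networks)."""
    visibilities: list = field(default_factory=list)  # targets that respond
    accesses: list = field(default_factory=list)      # interactive entry points
    trusts: list = field(default_factory=list)        # unauthenticated
                                                      # target-to-target interactions

    def porosity(self):
        # OSSTMM 3: porosity (the holes in OpSec) = visibility + access + trust.
        return len(self.visibilities) + len(self.accesses) + len(self.trusts)

# Hypothetical findings from a small network scope:
results = ScopeResults(
    visibilities=["10.0.0.5", "10.0.0.9"],
    accesses=["10.0.0.5:22/tcp", "10.0.0.5:443/tcp", "10.0.0.9:80/tcp"],
    trusts=["db replication from 10.0.0.9 to 10.0.0.5"],
)
print(results.porosity())  # 6
```

Even this crude count is already real, current data about this environment, which is the kind of unbiased foundation the paragraph above is asking testers to hand to risk managers.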

How to do this is in the OSSTMM 3. It breaks down all those certain elements and certain conditions which hackers learn to look for that point to flaws. It tells you how to see when something designed a certain way or missing certain things will leave a door open to a specific type of threat. So if you haven't read it, do so. If you have read it and didn't "get it" then take another look and think about it in the terms outlined here. It's designed to make you a better hacker. It's designed to remove you from the crazy and help you get a whole new sense of security.

Ian Tibble: Great article, and I especially loved the second-from-last paragraph. I comment to some extent in a similar way in my upcoming book, Security De-engineering.
I completely agree with the comments such as making penetration testers into hackers again, and also with those related to the activities of MBAs in security (in my book I refer to them as CASEs - Checklist and Standards Evangelists).
I have some criticisms of the way testing is done, and of the folk who do the testing. We need to remove testing restrictions. If testing is not a simulated attack, then it's not worth doing. And then the skills problem is one that anyone who has been in security since the late 90s will be acutely aware of. But I take a step back from these aspects and ask: if we have perfect testing conditions, with hacker-like skills and no restrictions, what are we actually trying to achieve in penetration testing? From my way of thinking, what we are really trying to achieve is just finding stuff that internal staff may have missed.
In a more evolved security world (which is some way off from our broken world today), internal security staff have all the qualifications you mention in your article: they know the company's systems inside out, along with their interdependencies, and they can empathize with IT and network operations staff. In this case, testing is done only to catch misconfigurations that internal staff may have overlooked. In some cases, there won't be much chance of any returned value from a pen test, and in these cases the security department won't advise engaging an independent testing team at all. In some other cases, such as testing complex custom web apps, there will always be some chance of returned value.
In the case that is very common today (as you also mentioned), security staff won't be at all familiar with their own IT environment. In this case, you can deploy a highly skilled testing team and all of their findings will relate to stuff that security staff were not aware of. There will be a perception of value, but this is deceptive. In these cases, there isn't a huge value returned in the testing. Testing should not be used to compensate for a total lack of knowledge on behalf of security staff, unless the company is prepared to engage a highly skilled penetration testing team for at least 6 months per year!

Overall, the old hacker element didn't work because nobody was managing them. You had cases where hackers with green hair were talking directly to CIOs and other C-levels, and they were swearing at the C-levels because they used Windows laptops instead of FreeBSD. What was needed was an agent for the Hackers, just as an artist has an agent. But not only did such people not exist, there was also no identified need for such managers.

Anyway, introducing some order into pen testing methodology is always a good initiative, and I'll be looking more closely at OSSTMM 3.
Pete Herzog: Hi Ian. Thanks! But I'm not saying companies need to hire the image of a TV actor playing a hacker ;) I think professional security people need to step up and start knowing their operations in depth like hackers do. Then there are plenty of hackers who can wear a suit and a tie too, if you need them to. So hire who you think is appropriate for your company, but make sure they are ready to get deeply into all the interactions of your systems. Of course, the reason many evangelize new or different OSes is because they see what's there isn't working. But security or IT management needs to work with them to be enablers and to bring security to what you have, rather than change what you have, which creates all-new operations, new costs, and its own new problems in other channels (like human and physical security).