The Perils Of Automation In Vulnerability Assessment

Monday, June 25, 2012

Ian Tibble


Those who have read the book Security De-engineering will be familiar with this topic, but even if literally everyone had read the book already, I would still be covering this matter: the magnitude of the problem demands coverage, and more coverage.

Even when we’re at the point where “we the 99% understand that we really shouldn’t be doing this stuff any more”, the severity of the issue demands that even if there is still a lingering one per cent, yet further coverage is warranted.

The specific area of information security in which automation fails completely (yet we persist in engaging with the technology) is vulnerability scanning – in particular unauthenticated vulnerability scanning: black box scanning of web applications and networks.

“Run a scanner by it” still appears in so many articles and sound bites in security – it’s still very much part of the furniture. Very expensive software suites are built on the use of automated unauthenticated scanning – in some cases taking an open source scanning engine, wrapping a nice GUI with pie charts around it, and slapping a 25K USD price tag on it.

As of 2012 there are still numerous supporters of vulnerability scanning. The majority still seem to really believe the premise that it is possible (or worse, “best practice”), by use of unauthenticated vulnerability scanning, to automatically deduce a picture of vulnerability on a target – a picture that does not come with a bucket load of condiments in the way of significant false negatives.

False positives are a drain on resources – and yes, there’s a bucket load of those too – but false negatives, in critical situations, are not what the doctor ordered.

Even some of the more senior folk around (note: I did not use the word “Evangelist”) support the use of these tools. While none of them would ever advocate substituting an auto-scan for manual penetration testing, there does seem to be a great deal of “positivity” around the scanning scene.

I think this is all just the zen talking, to be honest, but when we engage with zen, we often disengage from reality and objectivity. It’s OK to say bad stuff occasionally – who knows, it might even be in line with the direction given to one’s life by one’s higher consciousness.

Way back in the day, when we started off on our path of self-destruction, I ran a pressie on auto-scanning and false expectations, and I duly suffered the ignominy of being accused of Luddite tendencies. But here’s the thing: we had already outsourced our penetration testing to some other firm – so what was it that I was afraid of losing?

Yes, I was a manual tester, but it was more than 12 months since we had outsourced all that jazz – and I wasn’t about to start fighting to get it back. Furthermore, no actual logical objections were put forward. The feedback was little more than primordial groans and remote virtual eye rolling – especially when I displayed a chart showing unauthenticated scanning carrying similar value to port scanning. Yes – it is almost that bad.

It could be because of my exposure to automated scanners that I was able to see the picture as clearly as I did. In my first few runs of a scanning tool (it was the now-retired Cybercop Scanner, which actually displayed a 3D rotating map of a network – well, one subnet anyway) I wasn’t myself aware of how little use these tools were. I also used other tools to check results, but most of the time they all returned similar findings.

Over the course of two years I conducted more than one hundred scans of client perimeters and internal subnets, all with similar results. During this time I was sifting through the endless detritus of false positives, realizing that in some cases I was spending literally hours dissecting findings. In many cases it was first necessary to figure out what the tool was actually doing to deduce its findings, and for this I used a test Linux box and Ethereal (now Wireshark).

I’m not sure that “testing”, as a verb, is appropriate, because it was clear the tool wasn’t actually doing any testing. In most cases, especially with listening services such as Apache and other web servers, the tool just grabs a banner, finds a version string, and does a correlation look-up in its database of publicly declared vulnerabilities. What is produced is a list of publicly declared vulnerabilities for the detected version. No actual probing, or testing as such, is conducted.
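The banner-grab-and-look-up behaviour described above can be sketched in a few lines. This is a hedged illustration, not any vendor’s actual code; the version strings and entries in `VULN_DB` are made up for the example.

```python
import re

# Illustrative stand-in for a scanner's database of publicly declared
# vulnerabilities, keyed by version string (entries are invented).
VULN_DB = {
    "2.2.3": ["example advisory A for Apache 2.2.3"],
    "2.2.8": ["example advisory B for Apache 2.2.8"],
}

def findings_from_banner(banner):
    """Extract a version string from a banner and do a pure look-up.

    Note what is absent: no request is crafted, no behaviour is probed.
    A back-ported fix or an edited banner breaks the result entirely.
    """
    match = re.search(r"Apache/(\d+\.\d+\.\d+)", banner)
    if not match:
        return []
    return VULN_DB.get(match.group(1), [])

# A "finding" produced without ever testing the service:
print(findings_from_banner("Server: Apache/2.2.3 (Unix)"))
```

The point of the sketch is the absence: nothing in it touches the target beyond reading the banner, which is why back-ported patches produce false positives and changed banners produce false negatives.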

The few tests that produce reasonably reliable returns are those such as SNMP community string tests (or as reliable as UDP allows), or another Blast From The Past – the finger service “intelligence” vulnerability (no comment). The tools now have four-figure numbers of testing patterns, fewer than 10% of which constitute acceptably accurate tests.

These tools should be able to conduct some FTP configuration tests, because it can all be done with politically correct “I talk to you, you talk to me, I ask some questions, you give me answers” testing. But no. Take a test for anonymous FTP being enabled: it works for a few FTP servers, but not for some of the other more popular FTP packages. They all return different responses to the same probe, you see… yeah, tricky.
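As a sketch of the conversational kind of test in question, here is what an anonymous-FTP check can look like using Python’s standard ftplib – a minimal illustration, not a production probe. Leaning on ftplib’s reply-code parsing sidesteps some of the response-wording differences between FTP packages, which is exactly where the scanners in question stumble.

```python
from ftplib import FTP, error_perm

def anonymous_ftp_enabled(host, port=21, timeout=5):
    """Return True if the server accepts an anonymous FTP login.

    The check is purely conversational: connect, attempt the login,
    and read the server's verdict. A server that words its replies
    non-standardly can still defeat simpler pattern-matching probes.
    """
    try:
        ftp = FTP()
        ftp.connect(host, port, timeout=timeout)
        ftp.login("anonymous", "probe@example.com")
        ftp.quit()
        return True
    except (error_perm, OSError):
        return False

# With nothing listening on the port, the probe simply reports False:
print(anonymous_ftp_enabled("127.0.0.1", port=2121, timeout=2))
```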

I mentioned Cybercop Scanner, but it’s important not to get hung up on product names. The key is the nature of the scanning itself and its practical limitations. Many of our beloved security softwares are not coded by devs with any inkling of anything to do with security, but really, even a tool designed and produced with all the miracles that human ingenuity affords will at some point hit a very low and very hard ceiling in terms of what unauthenticated vulnerability assessment can achieve.

With automated vulnerability assessment we’re not doing anything that could obviously destabilize a service (there are some DoS tests and “potentially disruptive tests”, but these are fairly useless). We don’t do anything like running an exploit and making shell connection attempts. So what we can really achieve will always be extremely limited.

Anyway, why would we want to do any of this when we have a perfectly fine root account to use? Or is that not something we really do in security (get on boxes and poke around as uid=0)? Is that ops ninja territory specifically (See my earlier article on OS Security, and as was said recently by a famous commentator in our field: “Platforms [censored]es!”)?

The possibility exists to check everything we ever needed to check with authenticated scanning, but here, as of 2012, we are still some way short – and that is largely because of a lack of client demand (crikey)!

Some spend a cajillion on a software package that does authenticated testing of most popular OSs, plus unauthenticated false positive generation, and _only_ use the sophisticated, resource-intensive false positive generation engine – “that fixes APTs”.

The masses seem more aware of the shortcomings of automated web application vulnerability scanners, but the picture here is similarly harsh on the eye. Spend a few thousand dollars on these tools?

I can’t see why anyone would do that. Perhaps because the tool was given 5 star ratings by unbiased infosec publications? Meanwhile many firms continue to bet their crown jewels on the use of automated vulnerability assessment.

The automobile industry gradually phased in automation over a few decades but even today there are still plenty of actual homo sapiens working in car factories. We should only ever be automating processes when we can get results that are accurate within the bounds of acceptable risks.

Is it acceptable to use unauthenticated automated scanning as the sole means of vulnerability assessment for the top 20% of our most critical devices? It is true that we can never detect every problem, and what is safe today may not be safe tomorrow.

But we also don’t want to miss the most glaring critical vulnerabilities – and yet this is exactly the current practice of the majority of businesses.

Cross-posted from Security Macromorphosis

Beau Woods Automated vulnerability scanning definitely has its place, but that shouldn't be confused with a true vulnerability assessment. I agree that there are lots of false negatives when running in unauthenticated mode, which can lead to a false sense of security. A lot of times what I've seen is that by expanding the default password list on the tools by 5-10 entries, you can find 1-2 boxes that are vulnerable! That tells me that most organizations aren't doing true vulnerability assessments or Pentests alongside the scans.

But what's shocking to me is how many of the scans I used to run would find issues that are real and public facing! Things like old Apache installs, invalid SSL certs and other issues. Many of which should be caught every time with a scan, and some of which should automatically fail PCI or have negative consequences in other compliance regimes. Sorting through false positives tended to be easy with a lot of experience - the same ones always come up because of a poorly written scan rule.

So I think there is value in unauthenticated scanning, but not nearly as much as has been placed on it. It's good for really large organizations to see what's on their network (shocking how many don't know) and to verify no really obvious holes are there. But I also agree that there are much better ways to do that, like authenticated scans performed inside the DMZ, which provide a lot better value. And regularly running true VAs and Pentests.
Ian Tibble Well, security teams don’t get root access generally, and that has to change. I totally understand why we don’t get root access. There has to be a skills revolution. Sometime soon.
We put too much emphasis on remote stuff – VA, pen tests, etc. – which will always be like running blind. Authenticated scanning _should_ be better, but current products in this space fall some way short.

Scanners tell us our Apaches are down-level and our SSL certs need improving. I would just hope we could get to a point where we don’t need scanners to tell us these things. Or if scanners are our way of learning about down-level software, then they should only tell us about our down-level software, not the other terabytes of false positives based on guesswork.
But you’re right, if scanners are finding down-level Apaches and it’s public-facing, that’s either as bad as you say it is, or worse :)

>"Sorting through false positives tended to be easy with a lot of experience - the same ones always come up because of a poorly written scan rule"

yes. true.

>but I also agree that there are much better ways to do that,

Yes, I mean when you think about it, scanners tell us we have old software. If we can’t check these things from root shells, then we can improvise. We don’t need 1000-page reports full of false +ves from a commercial scanner. I wrote a 500-line Ruby script to grab banners and correlate against OSVDB (before the auto-updates were hosed).

Beau Woods Now you're starting to get more into the realm of security audit than vulnerability assessment. It's something the big 4 do a lot of, largely by running recon scripts on the box with admin access, comparing against a list of known issues and known good, then pronouncing a verdict. But that's an area that's not done very well by anyone right now, IMHO, because there's not enough technical security understanding among auditors (I'm happy to be wrong if there are groups who do it well, please leave a comment). Those who do it decently were former security guys turned auditor, and those have usually lost a step from not keeping up with the true risks (what's really being attacked, how, and then what do the threat actors do) and controls (ie. is a WAF just a blinky thing or is it effective? More/less effective per dollar than secure SDLC in their environment? Is it good enough?).

This dovetails with a conversation I'm having with some other folks around whether or not you can make defense sexy. My response is that it needs to be done through making good audit, through very efficient realtime visibility and risk decisions, through good security process, through getting business ownership, etc. If you think that's sexy then heck you're probably just right for the job. It's certainly a hard problem to tackle and can be fun as well. But it's more along the lines of what a CPA or Auditor gets up in the morning for, rather than a Pentester. Just a different mentality.
Ian Tibble I think there's a big difference between being at a shell prompt following a checklist (this is what auditors do – or they use some gnarly script) and being at a command shell asking oneself "now how would I elevate privileges on this box?", or being quizzed by ops ninjas about the latest OS security standard: "why should we do this?", "this control means we run a serious risk of production trouble-shooting issues because..."

OS and app sec is the front line right now. It's totally critical. It's an area that allows us to really get to know our networks. It allows us to gauge our risks in the most effective and efficient way, and it is completely possible to automate the mining of security control configurations on all of our OSs and databases and get all this info in one place.
Unfortunately so many businesses either don't understand the importance of these issues, or they think they can substitute this with pen testing or use of scanners.

When we're not sure of the importance of OS security, think zero days. We can't patch against this problem. And zero days are very real, because the economics of malware dictate that there are lots and lots of zero day exploits out there.

Call it white box pen testing if you will.

>But that's an area that's not done very well by anyone right now, IMHO, because there's not enough technical security understanding among auditors

you sound like me now and I pity you for that :) No, but you're totally completely right
One example: a Linux shell script used by big 4 auditors. It tests only 6 aspects of OS security, but that's not the worst part. They got the concept of /etc/ftpusers wrong for RHEL! That file lists those blocked from FTP, not those allowed!
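[Editor's sketch of the deny-list semantics in question, checked against an inline sample rather than a live system; the sample content is illustrative.]

```python
# On RHEL-style systems /etc/ftpusers is a DENY list: any user named in
# it is refused FTP login. The sample below stands in for the real file.
SAMPLE_FTPUSERS = """\
# ftpusers: users DENIED ftp access (one per line)
root
daemon
bin
"""

def ftp_denied(user, ftpusers_text=SAMPLE_FTPUSERS):
    """True if the user appears in the file, i.e. is BLOCKED from FTP."""
    denied = {
        line.strip()
        for line in ftpusers_text.splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    }
    return user in denied

print(ftp_denied("root"))   # root is listed, so root is blocked
print(ftp_denied("alice"))  # not listed: this file does not block alice
```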

I worked with a big 4. They had a very good web apps team. They had a really superb intranet with security config guides for most popular OSs. But - they didn't use this for auditing!
Beau Woods If you get to a shell and are pulling settings that are irrelevant or aren't pulling relevant ones then that's a problem with either the script or the ability to query a setting. You should be able to run a virtual pentest with the right information, or darn close to it.

I disagree that 0-days are as big a threat as everyone says. When they're discovered in OSes these days they're signs that it might be a nation-state funding it. Third party apps (looking at you Adobe, Oracle and Sun), there's usually some defense in depth measures that could be put in place to prevent them from having much effect. Like sandboxing the app (IE, Firefox, Flash, etc.) or running with limited privileges.

Big audit firms are at least thorough and consistent - or at least that's the reputation they have. Most of their auditors don't know tech or security, though (had to explain to 4 Barbies fresh out of college that the dozens of racks WERE the servers; met another one with my job title billing out at more, with a college degree in finance and a week of hotel conference room training), and so they can't do anything that's outside of a playbook. And that's a failure. IT Audit is not where you want the stupid and inexperienced people.