Open Source Code in the Enterprise - Keys to Avoiding Vulnerabilities

Wednesday, April 18, 2012

Rafal Los


Way, way back in December 2008 I wrote a piece on this blog called "Open or Closed [source]? Which is more secure?" and it got some people talking and debating... some of you may actually remember that post if you've been reading my stuff for a while.

Now we appear to be back at this question again, thanks to a study Aspect Security recently conducted... so it's time for me to revisit the idea... again.

There is no real debate in the open vs. closed source software question.  Either can be built well or poorly; either can be relatively secure or riddled with easy-to-exploit holes.  We don't need to rehash this again... but there appears to be some new data.

I'll let you read the CIO article "Do Open Source Components Threaten Your Apps?" on your own because you don't need me to parrot the well-written piece.  I do want to add a bit of color commentary to this story though, from the perspective of the software security program.

Here's the headline out of this that really caught my attention... "Global 500 Firms Downloaded 2.8 Million Insecure Components"... and whether you work in the Fortune 1,000 or not, odds are someone in your organization has incorporated an insecure open-source component into one of your applications.

Maybe the vulnerable, high-risk piece of open-source code is in one of your core business applications.  Maybe that vulnerable code is in an application no one really uses...  Or maybe, just maybe, that vulnerable open-source component is sitting in one of your customers' data centers running their business-critical functions.

What I'm telling you is that the risks run all the way up and down the scale: low-risk, high-risk (internal), and high-risk (external).  Getting sued because a vulnerable open-source component you incorporated into an application you sold caused a customer to be compromised isn't some scary FUD - it's very, very real in today's litigious, risk-passing business world.  So the question isn't whether your organization has incorporated vulnerable open-source code into the software you develop... it's how you go about doing it.

This post isn't a call to action against using open-source components, or an attempt to crack down on developers who copy/paste code they find on the Internet into your corporate source code.

First off, open-source components are a great asset to your development capability (it helps not to have to re-invent the wheel, right?), and second, you can't possibly stop developers from being lazy (or from seeking help from the almighty Google search engine).

What I'm offering here is a three-step process to help the CISO sleep better at night: even though your developers are likely using vulnerable open-source components and code from the Internet, it's OK as long as it's being vetted in a safe manner.

Here's what I mean -

  • Always perform a full audit of critical source code that will drive the business, both internally and externally.  This extra step somewhere in the software development lifecycle (and it still has to happen whether you're using traditional waterfall, agile, or something like DevOps) helps the head of a development project know the actual source of the code for which he or she is responsible.  Fair warning: people don't always tell the truth, so this needs to be a governance step - but know that it won't catch 100% of the abuse cases.  (A minimal audit sketch follows this list.)
  • Educate your developers on the dangers of blind trust in code obtained from open-source projects or other sources on the Internet where you can't get reliable assurance of that code's security.  Awareness goes a long way, and after you've educated them, ask developers to sign an "I am responsible" document.  This reinforces that they're responsible for their actions, that you're auditing those actions, and that if something goes wrong they will be held accountable.
  • Test, test, and test again.  Every application, every iteration, every release should be tested for security defects, every time.  When the code changes, it must be tested.  From source code analysis, to run-time testing, to the new hybrid testing models that provide incredible insight into the relationship between a running application and its source code - test early, test often, and triage and fix critical security issues whether they come from vulnerable open-source code or vulnerable in-house code... it all needs to be tested.  (A toy regression-test sketch also appears below.)
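
To make the audit step concrete, here's a minimal sketch of what an automated component check might look like.  Everything in it is an assumption for illustration: it supposes your third-party components are JAR files under a lib/ directory and that your security team maintains a hypothetical known_vulnerable.txt blocklist (one "name-version" entry per line) built from public advisories.  A real program would tie into a vulnerability feed and your dependency manifests instead.

    #!/usr/bin/env python3
    # Minimal sketch of an automated component audit (step 1 above).
    # Assumptions, for illustration only:
    #   - third-party components are JAR files under lib/
    #   - known_vulnerable.txt is a hypothetical blocklist maintained
    #     by your security team, one "name-version" entry per line
    import os
    import sys

    def load_blocklist(path):
        # Read the known-vulnerable component list, one entry per line.
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    def audit_components(lib_dir, blocklist):
        # Flag any bundled component whose "name-version" is blocklisted.
        findings = []
        for root, _dirs, files in os.walk(lib_dir):
            for filename in files:
                if filename.endswith(".jar"):
                    component = filename[:-len(".jar")]  # e.g. "commons-foo-1.2"
                    if component in blocklist:
                        findings.append(os.path.join(root, filename))
        return findings

    if __name__ == "__main__":
        bad = audit_components("lib", load_blocklist("known_vulnerable.txt"))
        for path in bad:
            print("VULNERABLE COMPONENT:", path)
        sys.exit(1 if bad else 0)  # non-zero exit fails the build gate

Wire something like this into the build so a hit fails it; that turns the audit from a periodic chore into a continuous governance gate.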
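And for the testing step, here's a toy regression test in the same spirit: a small suite of known-bad inputs that must come out neutralized on every release.  The render_comment helper is a hypothetical stand-in for whatever output path your application really uses; the point is that the payload list runs automatically, every iteration, so a regression in any component - open-source or in-house - shows up immediately.

    import html
    import unittest

    class SecurityRegressionTests(unittest.TestCase):
        # A tiny "test every release" suite: known-bad inputs must come
        # out neutralized no matter which component handled them.
        XSS_PAYLOADS = [
            "<script>alert(1)</script>",
            '"><img src=x onerror=alert(1)>',
        ]

        def render_comment(self, text):
            # Hypothetical stand-in for your application's real output
            # path; here it simply escapes with the standard library.
            return html.escape(text)

        def test_payloads_are_neutralized(self):
            for payload in self.XSS_PAYLOADS:
                rendered = self.render_comment(payload)
                # No raw angle brackets may survive rendering.
                self.assertNotIn("<", rendered)
                self.assertNotIn(">", rendered)

    if __name__ == "__main__":
        unittest.main()

This obviously doesn't replace static or run-time security testing - it just shows how cheap it is to keep known abuse cases from creeping back in.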

No one's ever going to eliminate "cut n' paste" from the Internet, or vulnerable open-source code, from your applications - much as you'll be hard-pressed to eliminate vulnerabilities from closed-source vendors.

The burden falls on you, the organization compiling and releasing the code or application, to follow smart-practice guidelines - auditing, educating, and testing - so that risk reduction is done properly.

As a side note, if your software security program can't handle these three very simple tasks, it's time to revisit your software security program.

If you need help... well, you know who to call.

Cross-posted from Following the White Rabbit
