Reflections on Ten Years of Software Security

Saturday, April 21, 2012

Rafal Los


Today I'm taking a bit of a break from the daily news of information security to reflect on the nature of the security of 'code'.

I will readily tell you I'm a terrible programmer. I learned that way back when I wrote my first BASIC programs, then moved on to Turbo Pascal (that was sure a dead end, wasn't it?) in early high school, and then to C, C++ and so on.

I'm not very good at writing optimized, intelligent code - but what I've learned to do over the years is read an incredible array of languages for the subtleties that make them 'break' in the security sense.  I've studied Ada95, FORTRAN, MIPS-RISC and other weird and now long-dead languages and formats over the years, and it all comes down to the same thing every time.

Given a finite amount of time to write a piece of software with specified features and functionality, the security of that code will always take a back seat.  This is, at least for the time being, only natural.  The fact that a great number of universities and colleges still don't teach 'secure development' as an integral part of the computer science curriculum means that for at least the foreseeable future we're going to continue to have this problem of insecure code.

Let's face it: code breaks in strange ways that aren't always easy to understand.

I recall when I wrote my first MIPS-RISC program way, way back.  It was a calculator - the very first program for the course in college - and it was going to be a big part of how I saw my class for the rest of that semester.  I was really proud of my code.  It could add, subtract, multiply and divide with speed and efficiency.

I tried all kinds of number combinations and large sets of numbers, and the calculator still worked, so when the professor asked who would be willing to volunteer their program for a demonstration for the class, my hand shot up proudly.  Quickly he added, multiplied, subtracted and divided numbers large and small, and I was very impressed with myself.  My app didn't crash, it was very small in code size, and it was efficient.  Then he did something I will never forget.

The professor added 12 and Q.  My application core dumped because I had never accounted for anyone entering a letter into the calculator... I mean, who would do that?  Why would anyone do that?!  At the time this was simply a feature failure - a failure to protect the end user from an accidental mistake - but as I look at it now I wonder what else I could have done to that program to exploit the system.  I probably don't want to know the answer.

That's pretty much how it still is today.  Programs are being written at an alarming and astounding rate.  Some places commit code several times a day to their repository, and I've recently talked to some that deploy code to production-like environments at least daily in small batches.  Thinking about this and my experiences as a software security consultant and pre-sales engineer... I just sit and wonder what the outcome of this type of action will be.

I fully realize a development methodology like DevOps puts the responsibility for the functionality of deployed code on the developer who wrote it and maintains it - but what about security?  It's one thing to deploy a code change that adds a button or alters a workflow, affecting the operation of an application.

It's even possible (although I won't say guaranteed) that the developer pushing the code, and given a great tool-set, can test the functionality and impact of that change on the rest of the application.  But what about the impact to security?

I recently gave a talk at OWASP (various OWASP conferences, actually) about the quantity vs. quality balance in software security testing.  The bottom line is you can't do a zero-footprint security testing cycle.  It's just not possible ... there will always be some impact to the development and release schedule.  So if organizations are trying to develop and release in a matter of hours what does that do to the security of the code? 

Let me ask that a different way - what does the security function have to do to compensate for the shrinking testing window of opportunity?  Obviously bright minds immediately turn to the integration of security more fully into the development components and better automation.  Better automation is at the heart of the DevOps movement anyway - so why shouldn't this include security?

It seems like the faster we move, the less time we have to actually think.  This is sort of like being a fighter pilot.  If you're locked in combat you don't have time to read the manual.  You rely on training and technically advanced automation to keep you alive and ahead of the battle.

I know software security is a far cry from the battlefield, but the analogy at least partly holds.  At certain times you realize that the code you're writing may actually be the difference between life and death for someone, somewhere - and that realization may act as a forcing function to get you to think about security... at least that's the hope, right?

If you're going to run faster and faster, then the only thing I can logically conclude is that security has to be second nature to you, like remembering to breathe.  There are ways to achieve this, but one of the best is repetition - performing the motions until they become instinct.

Whether this is through education, repeated exercise of security principles, or something else, I can't tell you - but I strongly suggest that 'better security' won't happen overnight, and it won't happen after a single training class.  You have to live it non-stop for it to become a part of you.  Then, once it lives within your instincts, you have to be equipped with the tools and methods to use those security instincts quickly and in diverse situations.

From a vendor perspective I'm happy to report that we're well on our way to having this part covered.  Static analysis, dynamic analysis, hybrid analysis and every other type of security technology available to you in a way that can be used quickly and with minimal intrusiveness is what we do, and I'm proud of that.

Now... if we could just magically wave a wand and get everyone on the same wavelength starting Monday morning...

Cross-posted from Following the White Rabbit

The views expressed in this post are the opinions of the Infosec Island member that posted this content. Infosec Island is not responsible for the content or messaging of this post.

Unauthorized reproduction of this article (in part or in whole) is prohibited without the express written permission of Infosec Island and the Infosec Island member that posted this content--this includes using our RSS feed for any purpose other than personal use.