Developer interaction is paramount if an organization is going to have a successful Software Security Assurance (SSA) program - it's a point very few people will argue, since it's fundamental to the cause.
The level of interaction is something of an art form, and when executed improperly it can lead to a near-catastrophic failure I like to call the "Infinite Feedback Loop".
Work with me here...
Developers have goals. Their goals are to write code to specification, make sure that it compiles and "works", and get it delivered on or ahead of schedule. Period.
Generally there are performance bonuses for delivering code ahead of schedule, and I've witnessed more than one individual (or manager) get let go because an application was delivered late.
Notice I didn't say because the application was delivered with vulnerabilities... but such is life at this point in the game.
Information Security folk think we've got it figured out, and it's true - some of us actually do - but what we're failing to realize is the psychology behind the interactions between ourselves and the folks who actually write the code.
We really need to get a few things burned into our brains when it comes to software, security, and why the state of things is as it is...
1. Excluding malicious intent, developers rarely knowingly write vulnerable code
2. Security is not a measure most development shops are evaluated against
3. Security is seen as an additional layer of complexity, and complexity causes issues
So what in the world does this have to do with anything, or with an "infinite loop"? I was just getting to that, as a matter of fact! It's all in the conversation we have after we've performed an analysis of some soon-to-go-to-production code.
Inevitably you'll find vulnerabilities, whether you're using tools or performing security analysis by hand.
Stop me if you've seen this one already... A security analyst does a vulnerability analysis and prints up a big, fat 100+ page PDF. He or she then walks it over to the developer who needs to do the actual fixing of the code only to find out he may as well be speaking Klingon...
The conversation then very quickly devolves into something like this:
InfoSec Analyst: "You have a bunch of vulns in your app, these have to be fixed before the app can go live"
Developer: "No I don't"
InfoSec Analyst: "Yes you do, they're right there in the report"
... (pause while developer thumbs quickly through report)
Developer: "No, none of these make sense, and besides - there are hundreds of vulns here and I've already found one that can't possibly be real"
InfoSec Analyst: (dumbfounded)
Developer: "Yup, I have to go work on the new app, let me know which of those you need fixed, how to fix them, then I'll prioritize and we'll go from there"
This turns into an infinite loop because you'll never actually produce a report the developer understands, has time to act on, or knows how to fix.
Now, none of you real App Sec folks have ever had this problem, because you've got it figured out - but it's out there, and it happens every... single... day. So how do you get out of this?
Simple - we go back to fundamentals. Developers are having trouble understanding us as InfoSec people, which means we need to modify our language. We need to be less alarmist and more sensitive to their timelines and goals.
We also need to be able to speak "developer" (which I've been informed is a derivative of Klingon), which means not sending over huge reports with hundreds of pages of vulnerabilities, but sitting down and actually talking about stack traces, taints, and sources/sinks - and getting into developers' heads.
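To make that "sources and sinks" conversation concrete, here's a minimal Python sketch (the table and function names are hypothetical, purely for illustration) of the kind of taint flow a scanner report is really describing: attacker-controlled input (the source) flowing unmodified into a SQL query (the sink), and the parameterized version that breaks the flow.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # SOURCE: 'username' is attacker-controlled input.
    # SINK: string concatenation places the tainted value directly
    # into the SQL text - this is the flow a taint analysis flags.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # A parameterized query keeps the tainted value out of the SQL
    # text entirely, severing the source-to-sink path.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo: a classic injection payload dumps every row through the unsafe sink.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 - both rows leak
print(len(find_user_safe(conn, payload)))    # 0 - no user named the payload
```

Walking a developer through one flow like this, in their own terms, lands far better than page 37 of a PDF.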
We also need to start enabling these folks to actually write better code, not because they really want to but because it is just easier than dealing with you later.
Trust me on this...
Cross-posted from Following the White Rabbit