Solving Problems from the Security Viewpoint

Thursday, June 07, 2012

Rafal Los


Why is it that very smart security-minded people produce elegant solutions to common problems, only to watch them sit on a shelf and seldom see implementation?

I've personally wrestled with that question for years, because there really are some elegant and very usable solutions to commonly identified security problems that organizations have, and they never seem to get implemented.

This was highlighted again by a brief Twitter exchange about the OWASP ESAPI for application security mitigation.  Chris Eng of Veracode cited that, from their metrics, just about 2% of Java applications use the ESAPI in some form. That's miserably low adoption.

What makes this whole matter worse is that the ESAPI is a pre-vetted, pre-built library designed to help developers decrease the effort and time it takes to produce lower-risk code.  

The ESAPI is designed both to be retrofitted into existing applications and to slot into new code - so why isn't it being adopted in huge numbers?  For that matter, why aren't many other things just like the OWASP ESAPI being adopted?
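To give a sense of how little code the library actually asks of a developer, here is a minimal sketch of contextual output encoding with the ESAPI Java encoder; the servlet, the "name" parameter, and the page content are illustrative, not taken from any particular application:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.owasp.esapi.ESAPI;

// Illustrative servlet: it echoes an untrusted "name" parameter back into an
// HTML page, routing it through the ESAPI encoder so any markup or script in
// the input is rendered inert instead of executing in the reader's browser.
public class GreetingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String name = req.getParameter("name");                 // untrusted input
        if (name == null) {
            name = "";
        }
        String safeName = ESAPI.encoder().encodeForHTML(name);  // contextual output encoding
        resp.setContentType("text/html");
        resp.getWriter().write("<p>Hello, " + safeName + "</p>");
    }
}
```

The defensive step is a single library call in the output path - the library, rather than each individual developer, carries the encoding logic.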

From experience, there are three clearly identifiable causes of poor adoption of well-intentioned, security-built technology in everyday development and systems building... Let's take a look at each and see what can be done to raise the level of adoption.

Security-centricity: Unfortunately, even the best-intentioned solutions developed by people with a security background are doomed to fail at broad-scale adoption because of the mindset required to implement them.  While I can't say this is the problem with the ESAPI, plenty of other security-centric innovations are built entirely from the security viewpoint.

They don't take into account the fact that developers, architects, and users aren't security professionals and don't think like security people.  When we look at something and immediately think, "Gee, this should be easy to secure if you only do A, B, and C," a developer may think, "Sure, that may be a security problem, but that solution is simply too complex or difficult to implement."

This happens all the time, and it used to happen in the various development organizations whose code my group, in my previous role, would validate for security sanity.

I even had a group develop an anti-XSS module that was guaranteed to work every time, all the time, in every application we were developing.  But because it was too "security-centric" (which can mean difficult to implement, complex, poorly documented, etc.), my team's assumption that it would simply be adopted proved wrong.

Solution: When developing solutions to security problems, we should be consulting UX experts, developers, architects, business analysts, and those who will be implementing the components we're advocating.

Making sure we have a solid security solution which will also be popular and usable by those who are meant to use it is critical. Security solutions developed in the vacuum of the security silo rarely catch high adoption rates outside of our own spheres of direct influence.

"Not developed here" (lack of trust): I'm confident this applies more to the world of software development than anywhere else, but the "not developed here" mentality is difficult to overcome in nearly any sphere.  

I ran into exactly that problem in the example above: a ready-to-use module was simply ignored by teams that had no direct hand in developing it, because of the "added risk" of implementing it, or so they claimed.

Rather than use the relatively simple anti-XSS module we were providing, which had been proven effective against attack, each development team chose to write its own, tailored to its application, which was not only not reusable but also scaled poorly from one developer and code-house to the next.
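The module itself isn't described here, so purely to illustrate what a shared, drop-in anti-XSS helper can look like (the class and method names below are hypothetical), it can be as small as one well-tested entry point that every team reuses instead of re-implementing its own escaping:

```java
// Hypothetical sketch of a shared anti-XSS helper of the kind described above:
// a single small, well-tested entry point that any team can drop into its
// output path, rather than each application writing its own escaping logic.
public final class XssEncoder {

    private XssEncoder() { }

    /** Escapes the characters that are significant in an HTML body context. */
    public static String forHtml(String input) {
        if (input == null) {
            return "";
        }
        StringBuilder out = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

A team would call something like XssEncoder.forHtml(request.getParameter("comment")) wherever untrusted data reaches an HTML page - the gain is that the escaping logic lives, and is tested, in exactly one place.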

I don't think this mindset is endemic only to developers; I've seen other groups, from project managers to architects to analysts, each solve problems in their own way even though a readily available solution was staring them in the face, ready to be implemented.

Solution: Unfortunately I don't have a readily available solution here as this is clearly a cultural problem.  The only way to break anyone of this type of behavior is to root it out organically.  Find those willing to adopt pre-built security features, empirically demonstrate their gains in time, resources and risk mitigation, and hopefully others will follow.

Applicability: I've learned that people tend to find and latch onto any reasonable excuse not to do something that requires effort, even if it is for the sake of risk reduction and that risk reduction is a board-level mandate.  Applicability is often cited as one of these excuses.

There is a very fine line between making something generic enough to be usable nearly universally and making it applicable to specific use-cases.  This line is difficult to walk in any circumstance, but if you waver too far in either direction, you end up with the excuses.

If you err too far on the universal side of the fix, the implementer will have to work too hard to tailor it to a specific application.  If you err too far on the use-case side, you'll lose your ability to implement it across the organization or more broadly... it's a very difficult balance.

Solution: There is no simple solution, except that it takes practice and knowing your organization well in the first place.  Knowing the environment you're operating in (situational awareness), and how its people, processes, and existing technologies work, will serve you well in developing something that satisfies both the security need and the applicability requirement.

While there is no magic solution or pixie dust that will get the target audience to adopt your security "fix", there are plenty of ways to fail.  It is only in failing that we learn to succeed, and what it takes to succeed... and that's written on the faces and resumes of those who have tried to put out fantastic fixes to hard security problems that never got adopted.

Remember, whether you're trying to eliminate cross-site scripting in your aging code or trying to architect a network component for optimal risk reduction and functionality, your fix only matters if someone willingly adopts it.

Cross-posted from Following the White Rabbit
