Fighting Alert Fatigue

Wednesday, July 06, 2016

Mike Paquette


While there’s been a great deal of discussion surrounding the high-level value of behavioral analytics in mitigating losses due to cyberattacks, the realization of this benefit usually begins with relieving an organization’s employees from the dreaded condition known as "alert fatigue."

Security professionals are under more pressure than ever before to identify and address advanced security threats within their organization’s IT infrastructure. The problem is that humans lack the capacity to sift through massive amounts of machine-generated log data on their own – let alone pinpoint the needle in the haystack.

Traditionally, security teams have employed monitoring tools that rely on threshold-based rules to detect cyberattack-related activity, but these rules are notoriously difficult and time-consuming to create and maintain in the face of rapidly changing environments. In addition, these tools tend to flood security teams with low-priority or false positive alerts, rather than prioritizing the alerts that could have the greatest impact.

Overwhelmed by thousands of low-value alerts, security teams spend more of their time troubleshooting false positives than identifying real threats and responding before they impact the business. Half of IT professionals and security managers say false positives negatively impact their security readiness, according to research from Enterprise Management Associates (EMA).

With staff plagued by alert fatigue, issues hidden deep within an organization’s data often go unnoticed for months – or worse, can be missed altogether. Despite several high-profile data breaches at Anthem, Home Depot and JP Morgan, the time gap between attack and detection is still unacceptably long. As such, security professionals are beginning to realize the benefits of turning to behavioral analytics to detect suspicious behavior early and better protect their organizations – as well as preserve their own sanity.

Behavioral analytics solutions can employ technologies such as unsupervised machine learning to analyze millions of data points each minute, creating a statistical baseline of normal behavior within an organization’s data and then flagging behaviors that are statistically unusual or anomalous. Security teams can deploy virtual “algorithmic assistants” – automated routines that continuously model selected data fields and accurately detect anomalies. This capability is often referred to as machine learning-based anomaly detection.
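To make the idea concrete, here is a minimal sketch in Python of the statistical baselining principle – not any particular product’s implementation. It learns the typical event volume per time interval from historical counts and flags intervals that deviate sharply from that baseline; the three-sigma threshold and the sample numbers are illustrative assumptions.

    import statistics

    def build_baseline(historical_counts):
        """Learn a simple statistical baseline (mean and standard deviation)
        from historical per-interval event counts."""
        mean = statistics.mean(historical_counts)
        stdev = statistics.stdev(historical_counts)
        return mean, stdev

    def is_anomalous(count, mean, stdev, sigma=3.0):
        """Flag an interval whose event count deviates from the baseline
        by more than `sigma` standard deviations (illustrative threshold)."""
        if stdev == 0:
            return count != mean
        return abs(count - mean) / stdev > sigma

    # Example: hourly log volumes for one data field over recent history
    history = [120, 115, 130, 125, 118, 122, 127, 119, 124, 121]
    mean, stdev = build_baseline(history)
    print(is_anomalous(480, mean, stdev))   # True  - an unusual spike
    print(is_anomalous(126, mean, stdev))   # False - within the normal range

A production system would maintain baselines per entity (user, host, data field) and update them continuously, but the principle – model normal, alert on deviation – is the same.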

Two Ways Behavioral Analytics Can Fight Alert Fatigue

  1. Prioritize investigation of existing alerts based on how unusual they are

A very simple principle can be applied to an existing stream of security alerts to relieve alert fatigue - investigate the most unusual alert behaviors first. By using anomaly detection to identify unusual alert behaviors early on, such as a rare event ID, an unusual volume of alerts for a given destination, or an unusual number of distinct event IDs in a given time period, analysts can avoid wasting time on the same false positive events day after day. As an added benefit, they now have a documented, mathematically grounded basis for their prioritization (see the sketch below).
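As a rough illustration of this prioritization principle, the sketch below ranks a batch of alerts so that the rarest event IDs surface first. The alert fields and sample event IDs are assumptions made for the example, not a specific product’s schema.

    from collections import Counter

    def prioritize_by_rarity(alerts):
        """Rank alerts so the rarest event IDs come first.
        `alerts` is assumed to be a list of dicts with an 'event_id' key."""
        freq = Counter(a["event_id"] for a in alerts)
        total = len(alerts)
        # Rarity score: the smaller an event ID's share of the stream,
        # the earlier it sorts for investigation.
        return sorted(alerts, key=lambda a: freq[a["event_id"]] / total)

    alerts = [
        {"event_id": "4625", "dest": "10.0.0.5"},   # common failed-logon noise
        {"event_id": "4625", "dest": "10.0.0.6"},
        {"event_id": "4625", "dest": "10.0.0.7"},
        {"event_id": "1102", "dest": "10.0.0.9"},   # rare event ID
    ]
    for alert in prioritize_by_rarity(alerts):
        print(alert["event_id"], alert["dest"])
    # The rare event ID (1102) is surfaced first for investigation.

The same scoring idea extends to the other dimensions mentioned above, such as alert volume per destination or the count of distinct event IDs in a time window.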

  2. Replace threshold-based rules with automated anomaly detection

The original idea behind monitoring rules - to use automated monitoring to alert the security analyst to known bad behaviors - is sound, but the actual implementations have caused alert fatigue due to the shortcomings and complexities in writing rules that work well in the face of dynamic data patterns.

Security professionals understand that elementary attack behaviors, even those associated with previously unknown threat vectors, can be detected using the anomaly detection capabilities of behavioral analytics. Threshold-based rules can be replaced with algorithmic assistants that accurately model the normal behaviors in the data and generate alerts only when unusual behaviors are seen.

For example, a threshold-based rule created to detect data exfiltration over HTTP might trigger any time a user transfers more than, say, 100MB of data in a given day. In a large organization, such a rule might trigger thousands of times per day, creating a flood of uninteresting alerts. By contrast, a machine learning-based algorithmic assistant would accurately and automatically model the behavior of HTTP transfers for each user on the network and generate an alert only when a user’s behavior differs from what is normal for that user at that time of day and day of the week. Such an approach would typically reduce the number of alerts from thousands per day to perhaps just a handful per week, again helping to relieve alert fatigue (and simultaneously identifying actual unusual events worthy of investigation).
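The contrast can be sketched in a few lines of Python. The static rule below mirrors the 100MB threshold, while the baseline check models each user’s typical HTTP transfer volume for a given day of week and hour and alerts only on large deviations. The field names and the three-sigma heuristic are illustrative assumptions; a real algorithmic assistant would use a richer unsupervised model.

    import statistics
    from collections import defaultdict

    # history[(user, day_of_week, hour)] -> list of observed HTTP bytes sent
    history = defaultdict(list)

    def record_transfer(user, day_of_week, hour, bytes_sent):
        """Accumulate observed transfer volumes per user and time slot."""
        history[(user, day_of_week, hour)].append(bytes_sent)

    def threshold_rule(bytes_sent, limit=100 * 1024 * 1024):
        """The static rule from the example: alert on any transfer over ~100MB."""
        return bytes_sent > limit

    def baseline_alert(user, day_of_week, hour, bytes_sent, sigma=3.0):
        """Alert only when a user's transfer is far above their own typical
        volume for this day of week and hour (three-sigma heuristic)."""
        past = history[(user, day_of_week, hour)]
        if len(past) < 5:                        # too little history to model yet
            return False
        mean = statistics.mean(past)
        stdev = statistics.stdev(past) or 1.0    # guard against zero-variance history
        return bytes_sent > mean + sigma * stdev

    # A user whose nightly 150MB backup is routine trips the static rule every day,
    # but the per-user baseline stays quiet once that pattern has been observed.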

Both incidents and the attackers who perpetrate them are only going to grow more complex. To succeed in this new landscape, security professionals will need to accept that human effort and intelligence alone are no match for today’s advanced threats. By augmenting their efforts with behavioral analytics and machine learning, teams can reduce alert noise and fatigue while quickly identifying and addressing the issues that actually matter – before they hurt their customers or the bottom line.
