Using Artificial Intelligence for Security Automation, Orchestration and Response

Wednesday, January 11, 2017

Nathan Burke


Artificial Intelligence is a term being used to describe everything from chat bots to self-driving cars, and marketers are jumping on the bandwagon to take advantage of the trend. In this article, we will define and delineate AI, machine learning, and deep learning, and examine the consequences each is expected to have on information security.

The Cybersecurity Capacity Problem

The way companies approach cybersecurity is evolving, and can be examined in three phases:

  1. Prevention: Just 10 years ago, companies focused their efforts on avoiding compromise. They built walls and fortified networks to keep their adversaries out.
  2. Detection: Based on an increase in the volume and sophistication of attacks, organizations then implemented detection systems to alert them when potentially malicious threats made it through their defenses.
  3. Response: Looking at prevention and detection systems, you’ll notice that these technologies are automated and very fast. However, until now, organizations have relied on people to make sense of the alerts generated by these products and have expected them to perform manual tasks to investigate whether the threats are real or benign. The resulting response is slow and repetitive, and incident response teams are drowning in alerts with no chance of keeping up.

The incident response challenge, coupled with a staggering cybersecurity skills gap, presents a cybersecurity capacity problem. As Doug Graham, CISO at Nuance Communications, puts it:

"It’s easy to end up in a cycle where one buys more tools, gets more alerts and, despite working hard to correlate those alerts, still finds the volume of resulting actions staggering.

Companies need to find ways to break this cycle or turn down the volume of alerts, as there will never be enough staff bandwidth to properly process every alert."

The only way organizations can keep up with the volume of threats and subsequent alerts is through security automation, and artificial intelligence is a critical capability of security automation technology.

Defining the Terms

An article in the Wall Street Journal by Yann LeCun, director of artificial-intelligence research at Facebook, asks the question “What’s Next for Artificial Intelligence?” From the article:

The traditional definition of artificial intelligence is the ability of machines to execute tasks and solve problems in ways normally attributed to humans. Some tasks that we consider simple—recognizing an object in a photo, driving a car—are incredibly complex for AI. Machines can surpass us when it comes to things like playing chess, but those machines are limited by the manual nature of their programming; a $30 gadget can beat us at a board game, but it can’t do—or learn to do—anything else.

The article then goes on to delineate AI, machine learning, and deep learning, and the expected consequences each will have on careers, the economy, and a fundamental change in the way humans interact with machines. And while the movement to involve systems more in functions traditionally attributed to human cognition is well underway, let’s take a step back to see what these terms actually mean.

What is Artificial Intelligence?

A quick look at the Wikipedia definition of AI:

Artificial intelligence (AI) is the intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at an arbitrary goal.[1] Colloquially, the term "artificial intelligence" is likely to be applied when a machine uses cutting-edge techniques to competently perform or mimic "cognitive" functions that we intuitively associate with human minds, such as "learning" and "problem solving".

Without wandering too far down the rabbit hole, the definition of rational agent:

In economics, game theory, decision theory, and artificial intelligence, a rational agent is an agent that has clear preferences, models uncertainty via expected values of variables or functions of variables, and always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions. A rational agent can be anything that makes decisions, typically a person, firm, machine, or software.

Artificial intelligence, in the context of a computer system, means being able to solve problems and execute tasks in a way that mimics the human cognitive process, including:

  • Understanding the scope of the problem at hand
  • Knowing where to find sources of information to help solve the problem
  • Being able to ingest data from the outside
  • Having the capacity to analyze data
  • Deciding what actions to take based on data analysis
  • Determining whether those actions solved the problem
  • Running an analysis to see whether what was uncovered in the course of the above process can be applied elsewhere

Let’s take these one-by-one as they relate to cybersecurity automation and orchestration.
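As a rough map before we go through them individually, here is a minimal sketch, in Python, of how those seven capabilities might be arranged as stages of an automated response pipeline. The `Alert` and `ResponsePipeline` names, and every stubbed method body, are hypothetical illustrations rather than a description of any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str           # e.g. "FireEye", "AV", "DLP"
    indicator: str        # e.g. an IP address or a file hash
    details: dict = field(default_factory=dict)

class ResponsePipeline:
    """Hypothetical skeleton mapping the seven capabilities to pipeline stages."""

    def scope(self, alert):
        # 1. Understand how widespread the problem is (stubbed).
        return {"affected_hosts": ["host-01"]}

    def gather_sources(self, alert):
        # 2. Know where to look: network logs, endpoint agents, threat intel (stubbed).
        return ["proxy_logs", "endpoint_agent", "threat_intel"]

    def ingest(self, sources):
        # 3. Pull in data from each outside source (stubbed).
        return {source: [] for source in sources}

    def analyze(self, alert, data):
        # 4. Work out the content, context, and meaning of the alert (stubbed).
        return {"verdict": "suspicious"}

    def decide_and_act(self, analysis):
        # 5. Choose and execute remediation actions (stubbed).
        return ["quarantine_file", "block_ip"]

    def verify(self, actions):
        # 6. Confirm the actions actually resolved the problem (stubbed).
        return True

    def generalize(self, alert, analysis):
        # 7. Apply what was learned to the rest of the environment (stubbed).
        return ["re-scan all endpoints for the same indicator"]

    def run(self, alert):
        scope = self.scope(alert)
        data = self.ingest(self.gather_sources(alert))
        analysis = self.analyze(alert, data)
        actions = self.decide_and_act(analysis)
        resolved = self.verify(actions)
        follow_up = self.generalize(alert, analysis) if resolved else []
        return {"scope": scope, "actions": actions, "resolved": resolved, "follow_up": follow_up}

if __name__ == "__main__":
    print(ResponsePipeline().run(Alert(source="FireEye", indicator="203.0.113.7")))
```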

Understanding the Scope of a Cyber Threat

An automated system that aims to investigate, evaluate, and then remediate a cyber threat must also be able to understand the scope and breadth of the threat. Without knowing the magnitude of the problem, such a system would never be able to fully solve the problem.

Let’s look at a common incident response scenario from the perspective of a human cyber analyst.

When a detection system like FireEye sends an alert about a known malicious IP address to a cyber analyst, the analyst could perform the following logical steps:

  1. Determine which machine on the network has connected to the offending IP address
  2. Inspect the endpoint and perform an investigation to see if the machine has malware that is connecting to the IP address
  3. Take remediation steps to clean the machine and make sure there’s nothing left behind
  4. Add a firewall block rule to stop any other machine from accessing the IP address

Those four steps can solve the issue as it was presented, and you could argue that the analyst did what they were expected to do. However, a system that uses artificial intelligence and security automation would need to perform additional steps:

  1. Query network resources to determine what other machines on the network have accessed (or attempted to access) the IP address
  2. Automatically trigger additional investigations on each machine to kill processes, quarantine files, and remove anything malicious from memory
  3. Send the results of each investigation back to a ticketing system

In many cases, a single alert is a symptom of a much larger issue and an artificially intelligent system must be able to understand the bigger picture.
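To make the contrast concrete, below is a minimal sketch of those three automated steps. The helper functions (`query_proxy_logs`, `investigate_host`, `create_ticket`) are hypothetical stand-ins for whatever log store, endpoint agent, and ticketing integrations a given environment actually provides.

```python
# Hypothetical sketch: expanding a single malicious-IP alert across the network.

def query_proxy_logs(ip):
    """Return hosts that connected (or tried to connect) to the given IP (stubbed)."""
    return ["laptop-042", "srv-db-03"]

def investigate_host(host, ip):
    """Kill malicious processes, quarantine files, sweep memory (stubbed)."""
    return {"host": host, "indicator": ip, "artifacts_removed": 2}

def create_ticket(result):
    """Record the outcome of an investigation in a ticketing system (stubbed)."""
    print(f"ticket created: {result}")

def expand_and_remediate(malicious_ip):
    for host in query_proxy_logs(malicious_ip):        # step 1: who else touched the IP?
        result = investigate_host(host, malicious_ip)  # step 2: investigate and clean each host
        create_ticket(result)                          # step 3: report the results back

if __name__ == "__main__":
    expand_and_remediate("203.0.113.7")
```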

Knowing Where to Find Sources of Information to Help Solve the Problem

Keeping with the example of a FireEye alert about a malicious IP address, we saw that the artificially intelligent system was able to query network resources to determine what other machines had accessed the offending IP address. In that one step, the system had to perform a complex chain of actions that is necessary for it to be considered AI:

  • The system must know where and how to access additional network resources
  • It must know the purpose of these resources and what data should be there
  • It must have the ability to parse through the data to find what is relevant and actionable
  • The system is required to apply the relevant findings to translate what it has found into a series of subsequent actions

All of these steps seem elementary to us because they are both logical and a reflection of how our brains function. However, being able to codify the decision-making process involved when looking for additional information to solve a problem is incredibly complex and a hallmark of artificial intelligence.
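One hedged way to picture that codification is a small registry that records, for each kind of question, where the answer lives, what the source contains, and how to parse it. Everything below, including the source names and row formats, is an illustrative assumption.

```python
# Hypothetical sketch: a registry that records, for each kind of question, which
# source holds the answer, what that source contains, and how to parse it.
SOURCE_REGISTRY = {
    "who_contacted_ip": {
        "source": "proxy_logs",
        "contains": "outbound connections per host",
        "parse": lambda rows, ip: [r["host"] for r in rows if r["dest_ip"] == ip],
    },
    "what_ran_on_host": {
        "source": "endpoint_agent",
        "contains": "process execution history per endpoint",
        "parse": lambda rows, host: [r["process"] for r in rows if r["host"] == host],
    },
}

def answer(question, raw_rows, key):
    entry = SOURCE_REGISTRY[question]
    return entry["parse"](raw_rows, key)

# Example with stubbed proxy-log rows: which hosts contacted the flagged IP?
rows = [{"host": "laptop-042", "dest_ip": "203.0.113.7"},
        {"host": "srv-db-03", "dest_ip": "198.51.100.9"}]
print(answer("who_contacted_ip", rows, "203.0.113.7"))  # ['laptop-042']
```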

The Ability to Ingest Data from Outside

Resourcefulness is an innate human trait. Just think of how often you look for external sources of information every day. From checking the weather to reading a paper on artificial intelligence, we are constantly querying data from the outside to help us make decisions.

In the cybersecurity world, the ability to access up-to-date information about known threats is essential for any security tool to function. The volume and sophistication of threats require constant updates to things like AV signatures and threat intel feeds in order to thwart attacks at scale.

An artificially intelligent incident response system must be able to access an array of different threat intelligence sources constantly if it aims to evaluate every cyber alert it sees. In doing so, the system is able to always incriminate or exonerate potential threats with the highest level of confidence possible.
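As a sketch of what that constant enrichment might look like, the example below checks a single indicator against several threat intelligence feeds and combines their verdicts. The feed names and lookup functions are hypothetical stubs; note that "unknown to the feeds" is treated as grounds for further investigation rather than as safe.

```python
# Hypothetical sketch: checking one indicator against several threat intelligence
# feeds and combining their verdicts. Feed names and lookups are illustrative stubs.

def lookup_feed_a(indicator):
    # A real integration would call the feed's API; here the data is hard-coded.
    return "malicious" if indicator in {"203.0.113.7"} else "unknown"

def lookup_feed_b(indicator):
    return "unknown"

FEEDS = {"feed_a": lookup_feed_a, "feed_b": lookup_feed_b}

def enrich(indicator):
    verdicts = {name: lookup(indicator) for name, lookup in FEEDS.items()}
    if any(v == "malicious" for v in verdicts.values()):
        decision = "incriminate"
    elif all(v == "benign" for v in verdicts.values()):
        decision = "exonerate"
    else:
        decision = "investigate_further"  # unknown to the feeds is not the same as safe
    return decision, verdicts

print(enrich("203.0.113.7"))  # ('incriminate', {'feed_a': 'malicious', 'feed_b': 'unknown'})
```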

The Capacity to Analyze Data

Analysis of data by an artificially intelligent system can only be accomplished by determining content, context, and meaning.

  • Content: Put simply, what are we looking at? In the case of an alert, what pieces of data should the system be looking for in order to take the next step? Examples could include the IP address and location of the potential threat.
  • Context: What type of alert is this? Was it sent by an AV? A DLP system? A SIEM?
  • Meaning: Given content and context, what should the system do next? (A brief sketch of this triage step follows the list.)
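A minimal sketch of that triage step follows, with all field names and routing rules assumed purely for illustration.

```python
# Hypothetical sketch: extracting content (what is in the alert), context (what
# kind of product sent it), and meaning (what to do next) from a raw alert.
import ipaddress

def is_ip(value):
    try:
        ipaddress.ip_address(value)
        return True
    except ValueError:
        return False

def triage(raw):
    content = {
        "indicator": raw.get("ip") or raw.get("file_hash"),
        "location": raw.get("source_host"),
    }
    context = raw.get("product_type")  # e.g. "AV", "DLP", "SIEM"
    if context == "AV":
        meaning = "verify_removal_and_sweep_host"
    elif content["indicator"] and is_ip(content["indicator"]):
        meaning = "expand_scope_across_network"
    else:
        meaning = "detonate_in_sandbox"
    return content, context, meaning

print(triage({"ip": "203.0.113.7", "source_host": "laptop-042", "product_type": "SIEM"}))
```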

Deciding on a Course of Action Based on Data Analysis

Once an artificially intelligent system has performed the requisite analysis, it must know what to do next based on codified logic. And while a similar investigation flow can be applied to multiple alerts, the remediation process can be vastly different. Some examples:

  • Phishing Email: Who is the sender? What files are attached? Has anyone clicked the attachment? Downloaded and run the executable? Given up their credentials? The resulting remediation actions based on the answers to these questions are conditionally dependent and require advanced decision logic.
  • Malicious IP Address: If an IP address deemed to be malicious is accessed by a device on a network, what happens next? Is the IP address just a symptom of a malware-based infection on an endpoint? What kind? Is it ransomware making a call to the IP address and encrypting files? How many other machines are making calls to the IP? Once the root problem is cleaned on the endpoint, does it make sense to automatically add a firewall block rule to prohibit others from accessing that IP?
  • AV Alert: If the system gets an alert about a Trojan on a laptop and sees that the AV has successfully removed the offending files, is that a sign of a successful remediation? Or should the system instead run a full investigation to ensure that the Trojan wasn’t just an entry point used to spawn malicious processes and morph into something the AV has missed?

Knowing what to do after a determination has been made about a potential threat is arguably the most critical capability of an artificially intelligent cybersecurity solution. Understanding how to rigorously investigate, remediate, and continue the cycle is what makes an AI solution valuable.
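One way to picture that codified decision logic is a simple mapping from alert type to a conditional playbook, as in the sketch below. The playbook step names are hypothetical labels that mirror the questions in the examples above, not the workflow of any specific product.

```python
# Hypothetical sketch: routing an analyzed alert to a remediation playbook based
# on its type. The step names are illustrative labels mirroring the questions above.

def phishing_playbook(alert):
    steps = ["identify_sender", "detonate_attachment"]
    if alert.get("attachment_opened"):
        steps += ["investigate_recipient_host", "hunt_for_same_attachment_elsewhere"]
    if alert.get("credentials_entered"):
        steps += ["force_password_reset", "review_account_activity"]
    return steps

def malicious_ip_playbook(alert):
    steps = ["investigate_endpoint", "identify_malware_family"]
    if alert.get("other_hosts_calling_ip", 0) > 0:
        steps.append("investigate_additional_hosts")
    steps.append("add_firewall_block_rule")
    return steps

def av_alert_playbook(alert):
    # Even a "successful" AV removal triggers a full investigation of the host.
    return ["verify_files_removed", "full_host_investigation", "check_for_spawned_processes"]

PLAYBOOKS = {
    "phishing": phishing_playbook,
    "malicious_ip": malicious_ip_playbook,
    "av": av_alert_playbook,
}

def decide(alert):
    return PLAYBOOKS[alert["type"]](alert)

print(decide({"type": "phishing", "attachment_opened": True, "credentials_entered": False}))
```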

Determining Whether Actions Taken Solved the Problem

Evaluating whether the actions taken actually solve the entirety of the problem is the critical last step of the alert-to-investigation-to-remediation workflow. While some products and processes will stop at the remediation phase, any artificially intelligent system must be able to verify both that the remediation actions have been successful and that no additional actions are necessary.

Keeping with the AV example referenced earlier, an AI-based cybersecurity solution would verify that the AV product successfully removed the files and processes at the root of the infection, check for anything left in memory, launch parallel investigations to determine if there was any lateral movement, and re-investigate to make sure those steps have fully remediated all traces of the infection environment-wide.
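A minimal sketch of that verify-and-repeat loop appears below, with stubbed remediation and re-investigation functions and a bounded number of passes so the cycle always terminates. These are assumptions for illustration, not a description of any specific product.

```python
# Hypothetical sketch: remediate, re-investigate, and repeat until nothing is left,
# with a bounded number of passes so the loop always terminates.

def remediate(host):
    """Run remediation actions on a host (stubbed)."""
    print(f"remediating {host}")

def residual_findings(host):
    """Re-investigate a host and return anything still suspicious (stubbed)."""
    return []  # an empty list means the host came back clean

def remediate_and_verify(hosts, max_passes=3):
    for _ in range(max_passes):
        for host in hosts:
            remediate(host)
        unresolved = [h for h in hosts if residual_findings(h)]
        if not unresolved:
            return True     # remediation verified everywhere
        hosts = unresolved  # re-run only where something was left behind
    return False            # still dirty after max_passes: escalate to a human analyst

print(remediate_and_verify(["laptop-042", "srv-db-03"]))
```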

Applying Results Elsewhere

Finally, once an AI-based cybersecurity solution has completed the end-to-end flow from alert to remediation and verification, it must be able to apply its findings universally. For example, if an alert from a detection system is determined to be an unknown threat, the system can then detonate the suspicious entity in a sandbox to examine behavior and incriminate or exonerate based on characteristics observed. Just because a threat is unknown to threat intelligence feeds, for instance, does not mean investigation should stop. When a new threat is uncovered, an artificially intelligent system is able to apply its newly-found knowledge to all other systems in its network, launching investigations to find out whether other machines exhibit evidence of the threat or threat type.
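As a closing sketch, the example below shows what "applying findings universally" might reduce to in code: record the newly incriminated indicator and launch a hunt for it on every known host. The indicator store, asset inventory, and hunt function are all hypothetical stubs.

```python
# Hypothetical sketch: once a previously unknown threat is confirmed (for example
# after sandbox detonation), record the new indicator and hunt for it everywhere.

KNOWN_BAD = set()  # stands in for a shared indicator store or intel platform

def record_new_indicator(indicator):
    KNOWN_BAD.add(indicator)

def hosts_in_environment():
    return ["laptop-042", "srv-db-03", "ws-117"]  # stubbed asset inventory

def hunt(host, indicator):
    """Launch an investigation for the indicator on a host (stubbed)."""
    return {"host": host, "indicator": indicator, "found": False}

def apply_everywhere(indicator):
    record_new_indicator(indicator)
    return [hunt(host, indicator) for host in hosts_in_environment()]

print(apply_everywhere("example-file-hash-0001"))
```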

About the Author: Nathan Burke is Vice President of Marketing at Hexadite, where he is responsible for bringing Hexadite's intelligent security orchestration and automation solutions to market. For 10 years, Nathan has taken on marketing leadership roles in information security-related startups. He has written extensively about the intersection of collaboration and security, focusing on how businesses can keep information safe while accelerating the pace of sharing and collaborative action. 

Related Reading: The Role of Artificial Intelligence in Cyber Security
