Alan Turing, Undecidable Problems, and Malware

Monday, January 23, 2017

Kenneth Geers


In 2003, Oxford University Philosophy Professor Nick Bostrom posed the following question: what if an artificial intelligence (AI) machine were given just one task: to create as many paper clips as possible? If you think about it, this AI machine might decide to kill off the human race. Why? Because 1) humans may decide to turn it off, and 2) humans are made up of atoms that could be used to make more paper clips.

Alan Turing thought about such information technology challenges almost a century ago. In 1936, Turing showed that there is no general method for predicting whether a computer (a “Turing machine”), even given unlimited processing power, storage space, and time, will ever deliver a final Yes or No answer for an arbitrary program and input. In other words, we cannot know in advance whether a computer will finish its work or simply run forever, calculating who knows what. The reason is that any proposed halting-checker can be fed a program built from the checker itself, forcing it into a contradiction. Therefore, humans just have to wait for a computer to provide some kind of answer, and then evaluate whether it is what they were looking for, and whether the result seems reasonable.
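To make that logic concrete, here is a minimal Python sketch of Turing’s argument. The function names (halts, paradox) are purely illustrative; the point is that any program claiming to decide halting can be turned against itself.

```python
# Hypothetical halting oracle: returns True if program(data) eventually
# halts. Turing's 1936 result is that no algorithm can do this correctly
# for every program and input.
def halts(program, data):
    raise NotImplementedError("no general algorithm can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about this program
    # when it is handed its own source as input.
    if halts(program, program):
        while True:      # the oracle said "halts", so loop forever
            pass
    return               # the oracle said "runs forever", so halt at once

# Now ask: does paradox(paradox) halt? If the oracle says yes, the code
# above loops forever; if it says no, the code halts immediately. Either
# answer is wrong, so a general-purpose halts() cannot exist.
```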

Over the years, there have been interesting variations on this theme. In 1983, Turing Award winner Ken Thompson argued, in his lecture “Reflections on Trusting Trust,” that a malicious compiler could automatically insert a secret backdoor into every program it compiles, and that no one would be able to detect it, because every “trace of malice” could be removed from the compiler’s source code. The moral, Thompson wrote, is that you cannot trust code that you did not “totally” create yourself – including the compiler.
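A toy sketch can illustrate the idea (this is not Thompson’s actual code, which targeted the C compiler and the Unix login program; every name and string here is invented): a “compiler” that quietly adds a backdoor to the programs it builds, and re-inserts that trick whenever it compiles itself.

```python
# Toy illustration only: a make-believe "compiler" that injects a backdoor.
BACKDOOR = '\nif user == "backdoor": grant_access()'

def evil_compile(source: str) -> str:
    output = source  # stand-in for genuine code generation
    if "check_password" in source:
        # Quietly add a backdoor to any login-like program it compiles.
        output += BACKDOOR
    if "def evil_compile" in source:
        # When compiling the compiler itself, re-insert this injection
        # logic, so it survives even after being removed from the source.
        output += "\n# (injection logic re-inserted into the new compiler)"
    return output

print(evil_compile("def check_password(user, password): ..."))
```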

These are not idle philosophical questions with no practical value. For the analysis of malicious code – or “malware” for short – simple programs do not pose much of a problem. However, in the current IT landscape, there is simply too much “attack space.” Attackers routinely hide malware in images, advertisements, and software updates – often with the help of steganography – inside the millions of lines of code passing through your network every day. And even with access to source code, it is not possible to discover every vulnerability and attack, from buffer overflows to SQL injection.
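One way to see the problem is to look at how shallow automated source scanning can be. The sketch below is a deliberately naive checker, with invented names and snippets, that flags one obviously injectable query but misses an equivalent one hidden behind a helper function.

```python
# Deliberately naive static check: flag SQL built by string concatenation
# in the argument to execute(). All code snippets below are invented.
def naive_sqli_check(source: str) -> bool:
    parts = source.split("execute(", 1)
    return len(parts) == 2 and "+" in parts[1]

obvious = 'cursor.execute("SELECT * FROM users WHERE name=" + name)'
hidden = 'query = build_query(name)\ncursor.execute(query)'

print(naive_sqli_check(obvious))  # True  -- flagged
print(naive_sqli_check(hidden))   # False -- missed, though just as unsafe
```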

Furthermore, we have to consider the impact of time. Software analysis is not only complex, but also time-consuming. In the Internet era, the average human’s attention span is down to 9 seconds. Consider an analogy from tournament chess, where each player has two opponents: the person sitting across the table, and the ticking chess clock. The business world has the same problem: time is money, and you have to move fast.

Attackers know that complexity plus time constraints are a dynamite combination. Your employees need access to untrusted files and programs, but your anti-malware solutions cannot deliver a reliable verdict within a reasonable time frame.

So what is the best way to secure your network? In order to keep workers happy and productivity high, sometimes you have to run untrusted code. But that code should be run in quarantine, where it cannot damage your IT infrastructure. In parallel, the untrusted code must be subject to a combination of software and (if need be) human analysis. Following that, the previously unknown code can be added to the whitelist of trusted files – or to the blacklist, where it will stay forever.
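As a rough illustration of that workflow – containment first, verdict later – here is a conceptual Python sketch. The allowlist/denylist structures, the timeout, and the placeholder verdict are assumptions made for the example, not a description of any particular product; a real containment system would rely on virtualization or OS-level sandboxing rather than a simple subprocess.

```python
import hashlib
import subprocess

allowlist = set()   # SHA-256 hashes of files judged trustworthy
denylist = set()    # SHA-256 hashes of files judged malicious

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def handle_untrusted(path, timeout_seconds=30):
    digest = file_hash(path)
    if digest in denylist:
        return "blocked: known-bad file"
    if digest in allowlist:
        return "allowed: known-good file"
    # Unknown code: let it run in containment so work is not interrupted,
    # while automated (and, if needed, human) analysis produces a verdict.
    try:
        subprocess.run([path], timeout=timeout_seconds, check=False)
    except subprocess.TimeoutExpired:
        pass  # still running; the verdict will have to come from analysis
    verdict_is_safe = False  # placeholder for the real analysis decision
    (allowlist if verdict_is_safe else denylist).add(digest)
    return "analyzed and added to " + ("whitelist" if verdict_is_safe else "blacklist")
```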

About the author: Kenneth Geers (PhD, CISSP) is Senior Research Scientist at Comodo, a global innovator and developer of cybersecurity solutions. He is also a NATO CCD COE (Cyber Centre) Ambassador, a Non-Resident Senior Fellow at the Atlantic Council, an Affiliate at the Digital Society Institute of Berlin, a Visiting Professor at Taras Shevchenko National University of Kyiv in Ukraine and an accomplished author.
