Should we give AI the key to our security?
Posted: 27 November 2017 | By Sam Curry
The cyber security industry is a good example of a field where artificial intelligence (AI) is both hailed as a near-magical cure-all and already being deployed in practical ways every day. But can we trust it?
The cyber world is notoriously unbalanced, with the hostile attackers having their pick of thousands of vulnerabilities to launch their strikes, along with deploying an ever-increasing arsenal of tools to evade detection once they have breached a system. While they only have to be successful once, the security teams tasked with defending a system have to stop every attack, every time.
The inhuman speed and power of an advanced AI would be able to tip these scales at last, levelling the playing field for the security practitioners who are constantly on the back foot.
The perfect AI would be able to detect and thwart even the most well-planned, high-level attacks – all without the need for any human intervention.
What we currently have – machine learning helping people
While we wait for our perfect, genius artificial intelligence to appear, AI is currently being heavily used in the security industry in the form of machine learning (ML).
Essentially a system that can learn without being explicitly programmed to do so, ML lacks the self-awareness that is popularly ascribed to AI. However, it is still incredibly valuable when it comes to handling large amounts of data and identifying patterns and trends.
This capability is used by cyber security practitioners to better get to grips with the vast amount of potential evidence they need to sift through after a cyber attack.
One of the earliest tenets of forensic criminal investigation is Locard’s Exchange Principle: the idea that every crime scene involves an exchange, with the perpetrator taking something away and leaving something behind in return. Investigators are tasked with finding and understanding these traces to piece together what happened and, hopefully, track down the criminal.
Cybercrime has upended Locard’s Exchange Principle because the average attack creates a vastly larger amount of potential evidence to be examined – and many attacks are specifically designed to conceal or disrupt that evidence and hinder the investigation.
Analytical tools powered by ML enable cyber investigators to regain the advantage by handling the heavy lifting of sorting through the enormous piles of digital evidence and breaking it down into key points and trends. Rather than having to tediously comb through everything themselves, the human practitioners can focus on the most important evidence first.
Every minute counts when it comes to investigating a breach, so the support of ML is proving to be increasingly invaluable.
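To make the idea concrete, here is a minimal, hypothetical sketch of the kind of unsupervised pattern-finding described above – using scikit-learn’s Isolation Forest to surface the oddest records in a pile of synthetic log data so an analyst can triage them first. The feature names and numbers are purely illustrative, not any vendor’s actual method.

```python
# Illustrative sketch: an Isolation Forest surfacing anomalous events
# from synthetic log features, so an analyst can triage the strangest
# records first. All data and thresholds here are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" activity: transfer size (KB) and login hour.
normal = np.column_stack([
    rng.normal(500, 50, 1000),   # typical transfer sizes
    rng.normal(13, 2, 1000),     # logins during office hours
])

# A few suspicious records: huge transfers in the middle of the night.
suspicious = np.array([[50_000.0, 3.0], [80_000.0, 2.0], [65_000.0, 4.0]])

events = np.vstack([normal, suspicious])

# Fit on the mixed data; the model isolates outliers without any labels.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(events)

# Rank records by anomaly score (lower = stranger) so the analyst
# sees the most suspicious evidence first.
scores = model.score_samples(events)
top_three = np.argsort(scores)[:3]
print(sorted(top_three))
```

The point is not the specific model but the workflow: the machine does the heavy lifting of ranking thousands of records, and the human investigates the short list.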
But humans still have a place – turtle stuff
As powerful an asset as ML is, however, I believe it will be some time before the self-sufficient cyber security AI the industry dreams of becomes reality. One of the biggest challenges facing AI developers in every field is that, unlike a real human brain, an artificial mind is not truly capable of intuition or assumption, and runs purely on data.
A powerful example of this issue appeared in October when MIT researchers were consistently able to trick Google’s AI into identifying a 3D printed model of a turtle as a rifle. MIT’s Labsix team achieved this bizarre feat by adding visual noise designed to confuse the AI – a concept known as an adversarial image. The turtle was so successful it fooled the AI from every angle, and the researchers also had success in convincing it that a baseball was an espresso and a kitten was a bowl of guacamole.
This same technique could also be applied by cyber criminals to fool a forensic AI and throw it off the scent of an investigation.
If attackers are aware of the data points and trends used by the AI, they can hide their activity within noise that will have the AI thinking nothing is amiss, or lead it in the wrong direction.
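The mechanism behind such tricks is simple to sketch. The turtle attack perturbed pixels against an image classifier; the same principle applies to any model whose decision boundary an attacker can probe. Below is a hypothetical toy example: a linear "benign vs. malicious" classifier whose verdict is flipped by a small, deliberate nudge to each feature. The weights and inputs are invented for illustration only.

```python
# Illustrative sketch of an adversarial perturbation: a small,
# deliberate nudge to each feature flips a toy linear classifier's
# verdict. Weights and inputs are invented for illustration.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # toy model weights
b = 0.1                          # toy bias term

def predict(x):
    """Return 1 ("flagged") if the linear score is positive, else 0."""
    return int(w @ x + b > 0)

x = np.array([2.0, 0.5, 1.0])    # original input: score 1.6, flagged
print(predict(x))                # prints 1

# Adversarial step (FGSM-style): move each feature a small amount
# against the sign of the corresponding weight, pushing the score
# below the decision threshold.
eps = 0.8
x_adv = x - eps * np.sign(w)

print(predict(x_adv))            # prints 0 -- the model is fooled
```

A human analyst looking at the raw values would see two nearly identical records; the model, relying purely on the score, sees two different classes – which is exactly the gap the article describes.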
Just as the human eye can obviously tell the difference between a turtle and a rifle, a real security professional will be able to see through this adversarial noise.
The human brain’s capacity to make logical leaps and work by intuition rather than cold data means that the expertise of real human professionals will be a vital part of cyber security for many years to come.
Sam Curry is Chief Security Officer at Cybereason, the world’s most powerful cybersecurity analytics platform