The turtle that Google’s AI thought was a gun – and the intelligence of AI in cyber security
Posted: 8 November 2017 | By Jamie Graves
Earlier this week, a story surfaced about Google’s artificial intelligence (AI) being duped. By a turtle. Researchers showed that, by understanding how an AI system classifies images, they could 3D-print a turtle that Google’s systems identified as a rifle from every angle.
It’s a funny story (and I haven’t even mentioned the baseball classed as an espresso or cat categorised as guacamole), but one that serves to prove a wider point regarding the fragility of AI systems and the extent to which they can actually be deemed ‘intelligent’.
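The trick behind the turtle is worth a moment's pause. The sketch below is a toy illustration of the idea, not Google's actual model: a two-feature logistic classifier stands in for the image system, and the input is nudged a small step in the direction that most increases the "rifle" score. Applied to pixels and printed textures, the same principle is what fooled the real classifier.

```python
import numpy as np

# Toy linear "classifier": score > 0 means "rifle", else "turtle".
# This is a two-feature stand-in, NOT Google's model.
w = np.array([1.0, -2.0])
b = 0.1

def predict(x):
    return "rifle" if x @ w + b > 0 else "turtle"

x = np.array([-1.0, 0.5])      # an input the model calls "turtle"

# Adversarial step: move the input a small amount (eps) in the direction
# that raises the "rifle" score -- for a linear model that is sign(w).
eps = 2.0
x_adv = x + eps * np.sign(w)

print(predict(x))              # → turtle
print(predict(x_adv))          # → rifle
```

The unsettling part is how small the change can be relative to the input: the model's decision flips while, to a human, the object is still obviously a turtle.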
Why isn’t artificial intelligence more…intelligent?
I’ve had back-and-forth arguments about the definition of AI with more than one person, and quite often it comes down to which definition you are actually using. Let’s be very clear – there is no one in the world, not the giants behind self-driving cars or the experts at DeepMind, who actually has an artificial intelligence system. Your self-driving car may well get you home in one piece, but it’s not going to get your bags out of the boot, no matter how nicely you ask it.
This is because what is deemed to be artificial intelligence is, in nearly all cases within cybersecurity, actually machine learning. Machine learning is a subset that falls under the umbrella of artificial intelligence and, for all intents and purposes, is some clever counting and statistics layered into technology – not a fully autonomous system capable of thought. Spam filters are a good reference for how machine learning technology was initially used.
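The spam-filter example makes the "clever counting and statistics" point concrete. The sketch below is a minimal naive Bayes classifier – the classic early spam-filtering technique – with invented training messages; it literally counts word frequencies per class and compares smoothed log-probabilities.

```python
from collections import Counter
import math

# Tiny invented training sets, purely for illustration.
spam = ["win money now", "free money offer", "win free prize"]
ham  = ["meeting at noon", "project status update", "lunch at noon"]

def counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_c, ham_c = counts(spam), counts(ham)
spam_total, ham_total = sum(spam_c.values()), sum(ham_c.values())
vocab = set(spam_c) | set(ham_c)

def score(msg, c, total):
    # Log-probability of the message under one class,
    # with add-one (Laplace) smoothing for unseen words.
    return sum(math.log((c[w] + 1) / (total + len(vocab)))
               for w in msg.split())

def classify(msg):
    return ("spam" if score(msg, spam_c, spam_total)
                    > score(msg, ham_c, ham_total) else "ham")

print(classify("free money"))   # → spam
```

No understanding, no intent – just counting which words tend to appear in which pile. That is the gap between what the marketing says and what the statistics do.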
Much of the concern around artificial intelligence within the sector stems from expectation management; with AI at the peak of the current Gartner Hype Cycle, those expectations are inflated. This is why we, for instance, use the term ‘augmented intelligence’ when referring to our machine learning capabilities. The technology available is incredibly useful and can carry out sophisticated analysis to benefit security teams, but that’s where the line needs to be drawn clearly; it’s there to augment and benefit workers, not replace them entirely.
The three pillars of artificial intelligence in cybersecurity
There certainly are instances of AI being presented as a silver bullet within cybersecurity. But, as the example of the turtle and rifle shows, that’s simply not the case. When looking at what an artificial intelligence or machine learning system can bring to the table, it needs to be viewed as one of three pillars: people, process and technology.
These disparate areas all need to work closely together for a company to get the results it is expecting. Experienced people and sensible processes need to be backed up by relevant, intelligent tech and, just as importantly, brilliant AI needs to be guided by people educated in how it works. AI is the equivalent of putting a Ferrari engine in your Ford Focus: it still needs someone at the wheel.
As a huge supporter of artificial intelligence and the impact it can have in cybersecurity, I’d like to see a better understanding from the sector of how AI is presented to and evaluated by the end user. We are now past the point of ‘a rule will be triggered or it won’t’, and are firmly in the realm of ‘fuzzy outcomes’ and probability-based systems.
This is far harder to communicate to those who don’t work within the industry; weather forecasts show the problem well – people are annoyed if it rains on a day with only a 30% chance of rain. This, along with honesty regarding the capability of systems and an emphasis on education, is what will make cybersecurity AI a force to be reckoned with.
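The same tension appears the moment a probability-based detector has to page a human. The sketch below uses invented anomaly scores and ground truth to show why there is no single “right” alert threshold: lower it and you catch more incidents but raise more false alarms; raise it and the reverse.

```python
# Hypothetical detector output: (anomaly probability, was it actually bad?).
# All scores and labels are invented for illustration.
events = [
    (0.95, True), (0.80, True), (0.65, False),
    (0.40, True), (0.30, False), (0.10, False),
]

def alerts(threshold):
    """Count (false alarms, missed incidents) at a given alert threshold."""
    false_alarms = sum(1 for p, bad in events if p >= threshold and not bad)
    misses = sum(1 for p, bad in events if p < threshold and bad)
    return false_alarms, misses

# The 30%-chance-of-rain problem in security clothing: any cut-off
# trades noise for misses, and someone has to own that trade-off.
print(alerts(0.30))   # → (2, 0): noisy, but nothing slips through
print(alerts(0.70))   # → (0, 1): quiet, but one incident is missed
```

This is exactly the conversation a security team needs to have with its vendor: not “does it detect threats?”, but “at what probability do we act, and what does that cost us?”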
Jamie Graves is the CEO of ZoneFox, which aligns your security with your business needs.