AI isn't just for chess games anymore. Today, it could outwit cybercriminals vying for control of your company network.
When scientists first raised the concept of artificial intelligence over 60 years ago, they thought it would change the world. Pop science articles about smart home robots hyped the topic, but the reality was different. The technology just couldn't deliver. For decades, artificial intelligence floundered in the backwaters of technology research during what became known as the AI winter.
Then, thanks to the evolution of graphics processing units (GPUs), things changed. These fast processors excel at the mathematics needed to crunch mountains of data and spit out the statistical models that AI uses for its analysis. The last decade has seen a quantum leap in AI technology, and now, sci-fi scenarios like self-driving cars finally seem within our grasp.
AI is showing promise in another area: cybersecurity. The same machine learning concepts that are driving technologies like digital virtual assistants and computer vision can also protect our data, say experts. Capgemini surveyed 850 CIOs and CISOs in Europe, the US, India, and Australia for its report on AI in cybersecurity to gauge their interest in the technology. Of these executives, 69% said that AI would be necessary to respond to cyberattacks. Only one in five had used the technology for cybersecurity purposes before this year, but two in three planned to implement it by 2020.
What is it about AI that makes it suitable for cybersecurity? It excels at two things: small, repetitive tasks that would otherwise eat up human operators' time, and analysis that isn't black and white. AI decisions fall into a grey area, in which the result is a judgement call. An example is deciding how legitimate something is. A bank transaction isn't simply either trustworthy or fraudulent: it has a probability of being legitimate, and AI tells us what that is.
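That probabilistic judgement call can be sketched in a few lines. The snippet below uses a logistic function over a handful of made-up risk signals; the feature names and weights are purely illustrative assumptions (a real fraud model would learn its weights from labelled transaction history), but it shows how a transaction gets a legitimacy probability rather than a yes/no verdict.

```python
import math

# Hypothetical, hand-set weights for illustration only -- a production
# model would learn these from labelled historical transactions.
WEIGHTS = {"amount_zscore": -1.2, "new_merchant": -0.8, "foreign_ip": -1.5}
BIAS = 3.0

def legitimacy_probability(features):
    """Return the probability (0..1) that a transaction is legitimate,
    using a simple logistic model over a few risk signals."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-score))

# A routine domestic purchase vs. an unusually large foreign one.
routine = {"amount_zscore": 0.1, "new_merchant": 0, "foreign_ip": 0}
unusual = {"amount_zscore": 2.5, "new_merchant": 1, "foreign_ip": 1}
```

The routine transaction scores close to 1 (almost certainly legitimate), while the unusual one lands in the grey area near the fraud end of the scale; a policy layer would then decide where to draw the blocking threshold.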
Cybersecurity analysts make these judgement calls on lots of things every day. They may have to decide whether a malformed network packet was malicious, or whether an endpoint communicating with a new IP address has become part of a botnet. They might have to decide whether a login attempt by a senior executive from a public network is allowable.
Analysts are having problems doing this work at scale. Of 267 cybersecurity professionals surveyed by the Enterprise Strategy Group, 74% said that a shortage of skills had affected organizations they'd worked for. The biggest effect was an increased workload on existing staff, while 30% complained of burnout among security pros. Many said that they were unable to properly investigate security alerts, and when they did, they often made mistakes.
Cybersecurity AI tools look for those incidents and anomalies at scale and in real time, surfacing the most critical issues for analysts to investigate. They use machine learning tools to scour historical network data, establishing a baseline that represents normal activity. Then, they quickly flag and prioritise anything unusual that happens. The idea is to remove a lot of the heavy lifting, freeing up analysts to do what they're good at: looking at the incidents that the AI tool deems serious enough to show them, and then using their human intelligence to close the circle, deciding what to do next.
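The baseline-then-flag idea above can be illustrated with a deliberately simple statistical test. This sketch assumes the "baseline" is just the mean and standard deviation of historical traffic readings and flags anything more than a few standard deviations away (a z-score test); commercial tools use far richer models, but the workflow is the same: learn normal, then surface the outliers.

```python
import statistics

def flag_anomalies(history, current, threshold=3.0):
    """Flag (timestamp, value) readings that deviate from the historical
    baseline by more than `threshold` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [(ts, value) for ts, value in current
            if abs(value - mean) > threshold * stdev]

# Historical bytes-per-second readings establish the baseline...
history = [100, 102, 98, 101, 99, 103, 97, 100]
# ...then a sudden traffic spike stands out against it.
current = [("09:00", 101), ("09:01", 480)]
```

Here only the 09:01 spike is flagged for an analyst to investigate; the routine 09:00 reading never reaches them.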
Some AI solutions focus on related areas like identity and access management (IAM), using historical data to decide whether a user login attempt is legitimate.
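A toy version of that IAM decision might compare a login attempt's context against what the user's history has established as normal. The attribute names below (device, country, network) are illustrative assumptions, not any vendor's schema; the point is that the risk score grows with how much of the attempt's context is unfamiliar.

```python
def login_risk(history, attempt):
    """Score a login attempt between 0 (all context attributes previously
    seen for this user) and 1 (nothing matches their history)."""
    seen = sum(1 for key, value in attempt.items()
               if value in history.get(key, set()))
    return 1 - seen / len(attempt)

# What we've seen from this user before...
history = {"device": {"laptop-01"}, "country": {"UK"}, "network": {"corp"}}
# ...vs. a login from a known device and country, but a public network.
attempt = {"device": "laptop-01", "country": "UK", "network": "public"}
```

A fully familiar attempt scores 0 and sails through; partially unfamiliar ones, like the executive-on-public-Wi-Fi case above, get a non-zero score that a policy engine can map to step-up authentication or a block.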
The Dark Side
So, AI can be a force for good in cybersecurity. Like many technologies, though, it can also be a force for evil. Attackers can use AI to think faster than people can and automate many of the attacks that we see human cybercriminals mounting today.
National security agencies already see the threat. In 2016, the United States' DARPA military research agency held the Cyber Grand Challenge, a contest that pitted AI algorithms against each other. Competing algorithms rushed to fix security holes in their own software while finding holes in their opponents'.
Three years later, AI continues to evolve, spawning new approaches such as generative adversarial networks that can be more creative about the solutions they come up with. If the likes of DARPA are watching this space carefully, you can be sure that others ranging from nation states to cybercriminals are too. In the cybersecurity arms race, defenders may have no choice but to use the same technology that attackers do.