AI versus humans in the fight against cybercrime

Or why the future of cybersecurity might be one where algorithms occasionally augmented by human intelligence do battle against one another, writes Andrew Huntley of Barracuda Networks

Reliance on artificial intelligence (AI) to combat cybersecurity threats looks set to increase dramatically over the next few years. A recent report from P&S Market Research suggests that the global AI cybersecurity market will reach US$18.1 billion by 2023.

As the overall attack surface continues to expand, it's simply not feasible for cybersecurity teams to defend against every threat without additional help. AI is playing an important role in automating threat detection and response, which in turn eases the burden on these teams.

Social engineering attacks such as spear phishing and business email compromise (BEC) are extremely hard to detect. Cybercriminals use social engineering to mimic legitimate user behaviour, getting around known defences to infiltrate organisations.

This gets to the heart of the challenge for IT security teams. Email is the number one threat vector because it allows malicious third parties to directly target what has long been regarded as the organisation’s weakest link: its employees.

Yet most cybersecurity investment in recent years has been directed at securing networks and computers. That work is of course necessary, but it has pushed attackers to focus their attention elsewhere: exploiting human weaknesses.

AI is increasingly being used to detect and block social engineering attacks and to identify the employees at highest risk of being targeted.

In some cases, this is done by combining information from multiple signals to learn the unique communications patterns of employees and identify anomalous signals in message metadata and content. This messaging intelligence can determine with a high degree of accuracy whether an email is part of a social engineering attack and block it. 
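As a loose illustration of the idea (not any vendor's actual system), a toy anomaly scorer over sender-to-recipient communication patterns might look like the sketch below. The class name, scoring formula and example addresses are all invented for this example:

```python
from collections import defaultdict

class EmailAnomalyScorer:
    """Toy model: learns how often each sender writes to each
    recipient, then flags pairs it has rarely or never seen."""

    def __init__(self):
        self.pair_counts = defaultdict(int)
        self.sender_counts = defaultdict(int)

    def observe(self, sender, recipient):
        # Training phase: record one legitimate message.
        self.pair_counts[(sender, recipient)] += 1
        self.sender_counts[sender] += 1

    def anomaly_score(self, sender, recipient):
        # 0.0 = routine pair, 1.0 = never-before-seen pair.
        total = self.sender_counts[sender]
        if total == 0:
            return 1.0  # unknown sender: maximally suspicious
        seen = self.pair_counts[(sender, recipient)]
        return 1.0 - seen / total

scorer = EmailAnomalyScorer()
for _ in range(9):
    scorer.observe("ceo@example.com", "cfo@example.com")
scorer.observe("ceo@example.com", "intern@example.com")

# A routine pair scores low; a look-alike domain scores maximally high.
print(round(scorer.anomaly_score("ceo@example.com", "cfo@example.com"), 2))
print(scorer.anomaly_score("ceo0@examp1e.com", "cfo@example.com"))
```

Production systems combine many more signals than this single frequency count, such as message content, timing, device and location metadata, but the underlying principle is the same: learn what normal looks like, then score deviations from it.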

Will AI replace cybersecurity experts?

Most AI deployments are expected to augment rather than replace cybersecurity professionals. As mentioned earlier, cybersecurity teams find it difficult to defend against every threat without additional help.

There are already millions of cybersecurity positions that aren’t being filled because of a chronic skills shortage. At the rate at which cybersecurity students are being trained, most of those positions might never be filled. 

There’s no denying that AI will have a significant impact on the cybersecurity profession. Many entry-level cybersecurity positions will be eliminated because most of the tasks performed by those individuals will become automated. 

The salaries that cybersecurity professionals can command might also be affected. Salaries are largely a function of supply and demand, and as AI becomes more pervasive, demand for human cybersecurity expertise will ease somewhat.

Like it or not, AI applications, unlike humans, never suddenly move on to a new job, fall sick or forget what they've learned. The cost of deploying them will also continue to fall, to the point where employing a machine to perform a security task will be cheaper than hiring a human to perform the same one.

It won’t happen overnight

Naturally, none of this is going to happen overnight. AI applications are only as good as the algorithms on which they're based, and those algorithms require access to massive amounts of data to identify patterns.

Someone with cybersecurity expertise needs to train those algorithms to recognise various types of attacks. As the attack vectors shift, someone will also need to train new AI models capable of recognising those new patterns. 
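To make the retraining point concrete, here is a deliberately simplified sketch using a toy keyword-frequency classifier rather than any production model. The function names and sample messages are invented for the example; the point is that recognising a new attack pattern means retraining on freshly labelled data:

```python
from collections import Counter

def train(messages):
    """Count token frequencies per label from (text, label) pairs."""
    counts = {"phish": Counter(), "ham": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text, smoothing=1):
    """Score each label by smoothed token frequency; pick the higher."""
    scores = {}
    for label, ctr in counts.items():
        total = sum(ctr.values()) + smoothing * len(ctr)
        score = 1.0
        for tok in text.lower().split():
            score *= (ctr[tok] + smoothing) / total
        scores[label] = score
    return max(scores, key=scores.get)

# When the attack vector shifts, "training a new model" is just
# another call to train() with fresh labelled samples. In practice
# the expensive part is collecting and labelling that data.
model = train([
    ("urgent wire transfer needed today", "phish"),
    ("verify your account password now", "phish"),
    ("quarterly report attached for review", "ham"),
    ("meeting moved to thursday afternoon", "ham"),
])
print(classify(model, "urgent password verification needed"))  # phish
```

Real systems use far richer features and models, but the workflow the article describes, where an expert curates labelled attack examples and retrains as tactics shift, is exactly this loop at a much larger scale.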

While the time required to train an AI model is decreasing, it still takes significant effort to train algorithms to handle cybersecurity tasks. In fact, new algorithms will likely need to be developed specifically for cybersecurity: most of the algorithms employed in AI applications today were designed decades ago.

Cloud security: An imperative 

The one thing that will change sooner rather than later is an increased reliance on cloud security services. The only way to cost-effectively aggregate all the data required to drive an AI application is to employ a cloud service. In effect, the shift to cloud security is a prerequisite for building AI applications.

Cybersecurity professionals should assume their adversaries are making similar investments. It's already clear that cybercriminals will leverage AI algorithms to discover vulnerabilities that bots can then target with greater precision, at much faster rates than ever before.

Increased reliance on AI to help thwart those threats will be required. In fact, the future of cybersecurity might be one where algorithms occasionally augmented by human intelligence do battle against one another. 

In the meantime, cybersecurity professionals should start getting used to the idea that their next team member may not be a human. They'll be able to communicate with that new team member via natural language processing (NLP) interfaces crafted by the developers who created the application. But don't expect that new team member to be overly empathetic the next time you make a mistake.

Andrew Huntley is the regional director of ANZ and Pacific Islands for Barracuda Networks