AI: the challenges and opportunities for cyber security

Artificial Intelligence (AI) is a branch of computer science that enables machines to imitate human intelligence by ‘thinking’ and learning from experience. Although the true implications of AI remain a matter of speculation, it is certain that AI will play a pivotal role in the future of technology.

According to Boston Consulting Group (BCG), 70% of executives expect AI to play a significant role at their companies in the next five years. Meanwhile, Senseon figures reveal that 88% of SMEs have a dedicated AI security budget, with more than half (53%) believing greater expenditure would help them better deal with their cyber security workload. In this article, we explore what AI means for cyber security departments.

Why is AI useful for cyber security?

There is no doubt AI has huge potential for cyber security. In fact, according to Senseon, 82% of security SMEs believe AI is crucial to the industry’s future. Security professionals are likely to soon have toolsets at their disposal that can understand and react to security threats far more efficiently than most practitioners could. So, as we stand on the brink of a potential golden age in our field, how can AI be a good thing for security?

AI-based SIEM tools

We are already seeing real-world applications of AI in security, notably new SIEM-supplementary tooling such as Darktrace. AI can automate the detection of threats and combat them without human involvement. In theory, this keeps your data more secure by removing the margin for human error entirely; because the tooling is machine-learning driven, it promises error-free cyber security, or so vendors would have us believe. As BCG’s research has suggested, companies have also started to allocate more resources than ever before to AI-driven technologies. In the long run, this should save an organisation money in areas such as staffing and training, as well as lead to a more effective cyber defence team.
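To make this concrete, below is a minimal sketch of the kind of unsupervised anomaly detection that underpins many AI-based SIEM supplements: a model learns what ‘normal’ network sessions look like, then flags sessions that deviate from that baseline. The features, figures and library choice (scikit-learn’s IsolationForest) are illustrative assumptions, not a description of how Darktrace or any other product actually works.

```python
# A minimal sketch of anomaly-based threat detection. All features and
# values are hypothetical; real AI-based SIEM tools model far richer
# behavioural signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: bytes sent, bytes received,
# duration (seconds). Normal traffic clusters around typical values.
normal = rng.normal(loc=[5_000, 20_000, 30],
                    scale=[1_000, 4_000, 10],
                    size=(500, 3))

# Simulated suspicious sessions, e.g. a large outbound transfer
# that could indicate data exfiltration.
suspicious = np.array([
    [250_000, 1_000, 600],
    [90_000, 500, 2],
])

# Train on normal traffic only; the model learns a baseline.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for outliers; a real tool would raise an alert
# or trigger an automated response at this point.
print(model.predict(suspicious))   # expected: [-1 -1]
print(model.predict(normal[:3]))   # expected: mostly [1 1 1]
```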

Come out, come out, wherever you are: insider threats

Cyber criminals often leave ‘backdoors’ into systems they have illegally accessed, allowing straightforward re-entry the next time they want to create mayhem for their victims. While these entry points have proven relatively easy to hide from human analysts, AI tooling can constantly scan system behaviour to spot these rogue operators and shut them out.
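As a simplified illustration of the principle, the sketch below compares the ports a host is currently listening on against a baseline learned during normal operation and flags anything new as a potential backdoor. The baseline, scan results and port numbers are all hypothetical, and real AI tooling models far richer behavioural signals than open ports alone.

```python
# Illustrative baseline-deviation check for backdoor indicators.
# BASELINE_PORTS stands in for behaviour learned during normal
# operation; the scan data is simulated.
BASELINE_PORTS = {22, 80, 443}

def find_suspect_ports(observed_ports: set[int]) -> set[int]:
    """Return listening ports never seen in the learned baseline."""
    return observed_ports - BASELINE_PORTS

# Simulated scan of a host: 4444 is a port commonly used by backdoors.
current_scan = {22, 80, 443, 4444}

for port in sorted(find_suspect_ports(current_scan)):
    print(f"ALERT: unexpected listening port {port} – possible backdoor")
```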

And breathe: space for security personnel to think

Senior security professionals are, on the whole, an intelligent bunch. Most have big plans for how they can develop, evolve and improve their function’s security profile. When they have the time to focus on these improvement programmes, their firms often prosper. However, many senior staff find their time is consumed by putting out fires and working on the proverbial front line.

According to the World Economic Forum, the application of AI to cyber operations promises to upset the status quo, offering more efficient and effective tools for defending against attacks that occur at machine speed. This is where the cavalry of AI tooling can come to the rescue: with AI fighting many of the battles that have typically required senior attention, supported by human oversight, senior staff can return to what they do best – planning how to protect their employer’s vital resources.

Why is AI a risk to cyber security?

We know AI is already playing a crucial part in enhancing breach detection and relieving some of the cyber security workload. Capgemini research shows 61% of enterprises say they cannot detect breach attempts without the use of AI technologies. But what happens if cyber criminals are also using this technology against us?

Identifying vulnerabilities has traditionally been a laborious and time-consuming process; thanks to AI’s speed of learning, it will be able to find and exploit them far more quickly than its human counterparts. Examples could include AI generating tailored email content that achieves a significantly higher click rate in phishing campaigns, or using machine learning to mimic behavioural patterns and mask its own activity during a breach. AI could also predict a target system’s response before attacking, in order to avoid triggering its defences. This level of sophistication may leave firms wide open if they do not have sufficient detection and defence capabilities.

Furthermore, firms using AI may be lulled into a false sense of security. Because AI is usually designed to work autonomously and is empowered to make its own decisions, corruption of its defences can go undetected for some time, leaving firms unaware they have been breached.

Finally, one of the biggest risks is the lack of understanding of AI and the cost of integrating it into a security function, both of which create significant barriers to adoption. Security leaders may be leaving themselves exposed if they fail to understand the implications of AI, as well as the repercussions for cyber security once the technology becomes widely available to cyber criminals.

Ethics and AI

While AI and machine learning represent a big opportunity within security, as we’ve discussed, there remain numerous challenges. In addition to the technical aspects of getting these technologies up and running, there is a considerable amount of concern regarding ethics and privacy with AI programs and products.

From our conversations with those in the industry, many professionals suggest there simply aren’t enough people within organisations who both understand the technology and are focused on ethics. Technology and AI move far faster than skillsets or an organisation’s culture can develop.

As such, tech teams are currently left to make ethical decisions by themselves. Not only is that really not their job (and having people mark their own homework is always problematic), but they also lack the privacy, sociological and psychological training and knowledge required for such a complex task. Data protection is probably the closest thing most organisations have to an ‘ethics’ department with technological understanding. While privacy is catching up, your privacy team will require a large amount of upskilling before it can make the correct ethical calls on AI.

That said, what happens with AI and machine learning responses can itself be a privacy issue. We’ve seen this to some extent with voice assistants such as Siri, Cortana and Google Home Hub. All of these systems rely on user data to improve the services they provide. However, there have been significant privacy concerns after reports emerged that these systems pick up conversations even when not active, with Siri being the most recent to fall foul of this. Furthermore, employees and contractors at these organisations have been tasked with listening to customer conversations.

This is both an ethical and a privacy issue, so both ethics and privacy professionals are needed to advise organisations. Warnings and recommendations should cover the risk of falling foul of the GDPR and other privacy regulations, as well as the less tangible ethical concerns that can hit an organisation’s bottom line through falling stock value or fines.

The human element

To summarise, AI is a powerful tool that can be used by organisations and cyber criminals alike. It is a technology that cannot be ignored, as it will play a pivotal role in the future of cyber security. AI has the power to greatly streamline threat detection and defence, reducing costs and freeing up resources for other parts of the function.

Nevertheless, in the hands of cyber criminals, AI could greatly accelerate the discovery and exploitation of vulnerabilities. A lack of understanding and the cost of adoption could mean cyber criminals gain the upper hand once the technology becomes more widely available. Organisations should also be aware of how AI uses personal data, as this raises several ethical issues worth considering.

As AI develops and becomes an integral part of cyber security, it will still require security professionals with their fingers on the pulse of new and emerging technologies to effectively integrate, govern and manage these systems. Falling behind on this trend could mean game over. Ultimately, technology isn’t always the answer, and it will never be the full solution; people are always the strongest defence.

If you would like to discuss your cyber security recruitment needs in relation to emerging technologies such as AI, please get in contact with me on 0207 936 2601 or via email at jem@barclaysimpson.com.

Our 2019 Market Report combines our review of the prevailing conditions in the security and resilience recruitment market with the results of our latest employer and candidate surveys.

Image credit: Hitesh Choudhary via Unsplash