Blog

AI for cybersecurity or cybersecurity for AI, or both?

Satu Korhonen Product Owner of AI Empowerment, Solita

Published 30 May 2025

Reading time 7 min

We live in a world of increasing data volumes, where we are increasingly able to benefit from different data types. Furthermore, as the geo-political situation has grown more uncertain, the number of cyber threats has increased and will most likely increase in the future. The recent development of generative AI, for instance, allows for more sophisticated deep fakes and the generation of harmful code while requiring only a tiny amount of technical knowledge. Both the domains of AI and cybersecurity are indeed in a phase of fast evolution.  

With these increasing capabilities, risks, and latest regulations like NIS2 and the AI Act, it is also increasingly clear that these topics are becoming more entwined. NIS2 emphasises the cybersecurity of all IT solutions, including those having AI features, while the AI Act categorises AI systems based on their risk levels and imposes obligations to ensure safety, transparency, and accountability.

To understand how these two areas overlap, it is important to distinguish two differing viewpoints. On the one hand, cybersecurity practices can and should be used to enable the creation of secure AI systems, so the viewpoint in question is AI security or, phrased differently, cybersecurity for AI systems. On the other hand, AI algorithms, like anomaly detection, can be used to increase the cybersecurity of any system. So, the viewpoint here in question is AI for cybersecurity. As there is a clear overlap between these two areas, advances in one can also benefit the other.

This post focuses on the intersection of these fields where AI and cybersecurity meet. We’ll look at the differences between these two viewpoints, their challenges, and the importance of addressing them both. Let’s begin by considering how cybersecurity can be used to make AI solutions safer and more secure.

Cybersecurity of AI

At the core of every AI system is firstly data, secondly one or more AI models, and thirdly, the system moving data from storage to model, data processing, monitoring and so forth. These parts work within some network, in some location and interact with other systems to enable the use of the outcomes of the AI model1.

AI systems differ from other IT systems mostly in having the model making predictions based on probabilities, and the immediate system around it to make sure it works. These systems also share the vulnerabilities that come from hardware, software, data, users, and so forth. In short, they are mostly IT systems that can fail, that can be hacked, that can leak data and secrets and so forth. The cybersecurity of AI focuses on addressing and mitigating the number of vulnerabilities the AI system is exposed to, known as its attack surface.  

ML system's image

The goal of securing AI is to enable the data and model to remain confidential, enable the logic to be unmodified by adversaries (integrity), and the system to be available when and where needed. The security of AI systems is a topic that is rapidly growing in importance as AI is increasingly included and embedded in all the systems we use at work and in our everyday lives.

There are three main sources to consider when thinking about AI system security. Firstly, there is human error and compromise. Humans can input data into AI that should not be added either on purpose or by mistake. They can also rely too much on AI, leading to less-than-ideal decisions, or they can rely on it too little, causing useful and efficient tools and processes to be unused. The main way to mitigate human-related issues is training on cybersecurity and AI, guidelines, access management, risk assessment, and data protection.

The second source is malicious activity, also known as adversarial attacks. In an adversarial attack, malicious or curious actors may, for instance, subtly manipulate input data to deceive AI models into making incorrect predictions or classifications. These kinds of attacks can take many forms. The data is a major attack vector that can be influenced either before the original training of the AI model or in the re-training phases that are needed to keep the model up to date and functional. By inserting erroneous or misleading data into the training or re-training dataset, an adversary can undermine the model’s integrity through data poisoning. Denial of Service attacks, on the other hand, aim at taking a system offline and unusable. Prompt injections are ways to try to make generative AI models behave against their safety guardrails. These kinds of issues can be mitigated by good MLSecOps and DevSecOps practices, like endpoint security, proper access management, supply chain monitoring for vulnerabilities, code checks, data management, input monitoring, intent detection, and so forth.

AI can also have silent model problems that need to be specifically tested and monitored for. They can stem, for instance, from biased models giving out discriminatory, incorrect or harmful output. These are issues that can be hard to detect through alerts, require thorough testing, risk identification and mitigation that focuses on these. Also, identifying unintended consequences of the system is important, as well as exploring the potential for compliance issues. With new regulations on both cybersecurity and AI, it is important to consider the changing requirements to avoid ending up in a situation where non-compliance leads to real-life consequences for organisations. Outside these three issues, there are always hardware failures, losing key employees and any sources of system disruptions that can cause problems at any stage of an AI systems’ life cycle.

Basically, AI systems are continually more present and prevalent. They also handle increasingly more critical tasks and processes, so their cybersecurity becomes vital to consider from the moment of ideation of the AI system ideation and design to end-of-life considerations. Security requires risk assessment, threat modelling, testing, incident reaction, and system monitoring. Organisations can respond to and mitigate attacks more rapidly if they have clear processes and an incident response plan in place, as well as a routine for checking AI systems for indicators of compromise.

Since AI cybersecurity is a speciality field within AI, and cybersecurity specialists might be hard to come by, one solution to increase the cybersecurity of AI solutions is to have security champions in development teams handling AI systems that also understand AI. These, let’s call them AI security champions, should have allocated time for improving the security of the solution. Their role is to help the team adhere to security conventions and processes and actively communicate with other security champions and cybersecurity specialists.

Using AI to improve cybersecurity

AI for cybersecurity, on the other hand, refers to the use of AI technologies to improve and strengthen cybersecurity efforts. For instance, anomaly detection is a machine learning technology that can identify unusual patterns and data points in a vast amount of data to identify potentially unknown attack types. Generative AI can be used to detect malicious code or deep fakes meant to defraud. As more of our lives happen through our devices, we create more and more data. With the sheer volume of data continually increasing, traditional methods of threat detection and prevention become inadequate and probability-based AI technologies become needed.

AI technologies can present one way of handling this increase in data quality, quantity and variability as they can find statistical patterns, basically what is common and what is odd, in massive datasets and use them to create predictions that can be used to detect and mitigate cyberattacks and vulnerabilities. It also enables detecting novel patterns, as in the example of anomaly detection above. This capability is quite important as the threat landscape is ever-changing. AI technologies can be used to detect trends, problems, and patterns in network traffic, user behaviour, log files, emails and so forth. The list of potential use cases is ever-increasing as AI technologies develop further. It can potentially enable focusing resources on the critical threats from the false positives, as well as identifying security vulnerabilities and mitigating them before they are taken advantage of.

Using AI approaches in cybersecurity has benefits. These models can process and analyse data faster and on a larger scale than humans. They offer a more dynamic defence against changing cyberthreats since they can recognise the common from the uncommon and change to new threats over time. Furthermore, AI technologies can lessen the need for human intervention, freeing up cybersecurity experts to concentrate on more strategic duties.

Of course, all technologies have their challenges as well. For instance, generative AI in cybersecurity is an interesting field to look at, but the development and discovery of valuable use cases is still heavily ongoing. Also, with AI being able to scan more data, the risk of false positives and false negatives is something to be very mindful of. While a false negative could let a genuine threat go unnoticed, a false positive could result in needless alerts and resource waste. Another challenge is the need for large amounts of the needed kinds of data to train AI models effectively. If the training data is incomplete or biased, the model may not work as desired.

Conclusion

AI and cybersecurity are areas that can bring substantial value to each other. This overlap between them can be viewed from two different angles. We can and should use both. We should utilise the methods, tools, and processes from cybersecurity to enable our AI solutions to be secure. And we can utilise AI technologies to improve our cybersecurity, also outside our AI solutions. However, as experts for both of these fields can be hard to come by, utilising ideas like the security champion can help, as well as collaborating with experts from companies like Solita.  

  1. Business
  2. Data