AI is an additional weapon for cybersecurity

In the security world, there is much talk about artificial intelligence’s impact. But what is AI doing on the attackers’ side? How is it being deployed within security products designed to make companies more secure? Is it also possible to secure the AI used within organizations? We will address those questions on Techzine.

We will address each question in a separate article featuring experts from the field. They participated in a roundtable discussion at the beginning of Cybersecurity Awareness Month. The participants are: André Noordam of SentinelOne, Patrick de Jong of Palo Alto Networks, Daan Huybregts of Zscaler, Joost van Drenth of NetApp, Edwin Weijdema of Veeam, Pieter Molen of Trend Micro, Danlin Ou of Synology, Daniël Jansen of Tesorion and Younes Loumite of NinjaOne. This second article focuses on the role of AI within security solutions.

Also read the first story in our AI in/and cybersecurity series, which provides an accurate picture of the current state of attacks.

Sifting through data quickly

In theory, AI within security tools makes more possible every day. For years it has been an integral part of tooling, for example to automatically detect malicious movement within the network. Now more is becoming possible, as Huybregts of Zscaler observes. AI is increasingly becoming a layer of cyber defense, and a model is only as good as the data you feed it. The more data available, the better the models can be trained and the stronger the outcomes. Huybregts emphasizes that Security Operations Centers (SOCs) currently have an enormous amount of data to deal with. “They are looking at the huge amount of data, looking for the needle in the haystack,” Huybregts notes. AI is crucial to correlate that data and turn it into valuable insights.
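To make that needle-in-the-haystack idea a little more concrete, here is a minimal sketch of unsupervised anomaly detection over SOC-style event features. The feature set, values and threshold are illustrative assumptions, not a representation of how any of the vendors’ products work.

```python
# Illustrative only: flag outliers in hypothetical per-host, per-hour features
# such as megabytes sent, login count, distinct destinations and hour of day.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=[500.0, 5.0, 3.0, 12.0],
                    scale=[100.0, 2.0, 1.0, 4.0], size=(1000, 4))
suspicious = np.array([[9000.0, 40.0, 60.0, 3.0]])   # exfiltration-like outlier
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(events)                    # -1 marks an anomaly

for idx in np.where(labels == -1)[0]:
    print(f"Event {idx} flagged for analyst review: {events[idx].round(1)}")
```

In a real SOC the input would be correlated telemetry from many sources rather than four synthetic columns, but the principle is the same: let the model surface the handful of events that are worth an analyst’s time.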

Improvements in AI modeling will also spark new discussions. How capable will AI become at analyzing data? Can it make the SOC significantly more efficient? Drawing on Tesorion’s SOC expertise, Jansen has insight into this. In addition to correlating data with AI, he sees detection models that put anomalies into perspective. In the near future, Jansen expects tier 1 analysts to become less crucial. “The focus will be more on incident response and the reports that really need to be investigated, the suspicious situations,” Jansen foresees. More engineering and data science skills will be needed in the SOC to explain the models well, because understanding them is required to leverage them for risk management.

Cyber defense to new heights

From left to right: André Noordam, Patrick de Jong, Pieter Molen and Danlin Ou

Capable models are taking cybersecurity across the board to new heights. The roundtable participants agree on that, although the improvements will vary by field. Loumite of NinjaOne, for example, emphasizes that AI helps endpoint security by providing better insight into the devices that need protecting. In conversations with managed service providers, he often sees a gap between what organizations think they are protecting and what is actually protected, due to incomplete inventories, especially with BYOD devices in circulation. Devices with no visibility risk remaining unprotected. “It is crucial to close the gap. AI can help to gain better visibility and cover all devices,” Loumite said.

Molen of Trend Micro agrees that AI can genuinely raise an organization’s security posture. He sees a strong application today in using data to estimate how easily certain attack paths can be exploited and acting on that with preventive measures. “It’s about staying ahead of the attacker and proactively strengthening your systems,” Molen explains.

By recognizing the patterns in previous attacks, organizations can prevent greater damage. Think of eliminating a vulnerability that could otherwise have been abused for ransomware, or acting quickly on an incipient attack; both can make a big difference in the scope of the damage. Investigating and resolving an attack can sometimes take weeks, and depending on the size of the company, the costs can run into the tens of millions. NetApp’s Van Drenth therefore sees prevention and a shorter response time as a welcome boost. “It’s hard to say no to something that can reduce damage within minutes,” Van Drenth argues.
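As a rough illustration of the attack-path estimation Molen describes, the sketch below treats the infrastructure as a small graph in which edge weights stand for assumed exploit difficulty. The assets, hop scores and scoring method are hypothetical and are not any vendor’s actual approach; the point is simply that the cheapest path to a crown-jewel asset is the first candidate for preventive hardening.

```python
# Illustrative attack-path scoring on a tiny, hypothetical asset graph.
# Lower edge weight = easier hop for an attacker.
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("internet", "web-server", 2.0),
    ("web-server", "app-server", 3.5),
    ("internet", "vpn-gateway", 6.0),
    ("vpn-gateway", "app-server", 1.5),
    ("app-server", "database", 2.5),
])

path = nx.shortest_path(g, "internet", "database", weight="weight")
cost = nx.shortest_path_length(g, "internet", "database", weight="weight")
print(f"Easiest attack path: {' -> '.join(path)} (total difficulty {cost})")
```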

The rise of the security platform

An integrated security approach is becoming increasingly important, especially when weighing the influence of AI. Solutions will increasingly work together, which enlarges the data pool. AI will not solve everything, as it is not a silver bullet, but an ecosystem is crucial to deploy it correctly, Veeam’s Weijdema foresees. “That’s why we integrate solutions with vendors like Palo Alto Networks and NetApp, so the backup data gives more context and produces cleaner results,” he explains. The average user does not want to become an AI expert but does expect results. That requires clear insights without having to dive too deeply into the technology.

So when embracing AI, the security industry looks at how it can support businesses through partnering. The point is not to embrace AI just to keep up with the times, but to come up with a genuine use case; a stronger line of defense is then a welcome side benefit. Here, Palo Alto Networks’ De Jong sees a strong preference emerging for best-of-breed, using the best solution for a given situation, combined with a platform approach. “They want rich data that can be collected anywhere, while remaining open to other tooling,” says De Jong. In doing so, companies indicate that they do not want yet another tool to tackle AI, but rather that artificial intelligence is integrated into their existing systems and uses the data already available.

Noordam of SentinelOne points to the trend toward a single pane of glass. “Whether it’s SIEM, XDR or another tool – organizations are aiming for a single view to consolidate data and avoid silos,” Noordam said. Fewer security solutions but more integration is the future, according to Noordam. Only then can attacks be effectively identified and stopped.

How far can automation go?

From left to right: Daniël Jansen, Younes Loumite and Edwin Weijdema

With large amounts of data available and capable models, automation becomes a potential next big step for many organizations. The complexity of enterprise IT alone creates a great need for it. Networking, storage, security tools: everything has expanded significantly over the years. Although the intention was to work more efficiently and effectively, practice shows otherwise. According to Synology’s Ou, this increases the need for centralized management systems, which can better scan and secure all points in the infrastructure. “AI plays a crucial role in automating device discovery,” Ou points out. This leads to better visibility and a stronger security posture, especially in complex network environments.

This increasingly brings the discussion about the degree of automation to the table. Loumite, following up on his earlier point about endpoint detection, sees opportunities for organizations to work more efficiently. Take the many tools that are already running at organizations but are not used to their full potential for assessing the attack surface, risks and attack paths. “You should really embrace automation,” Loumite said. However, he notes that automation can only be implemented at a certain maturity level within the organization; your infrastructure and security posture must be ready for it.

Plan B

Besides the potential gains, you can also ask how far along we are with automation, and whether the mistakes automation makes can be caught. AI technology is generally not error-free, and you may have to accept that. Weijdema is cautious here. He has seen situations where automated decisions locked down entire Active Directory environments. “If you let AI do actions automatically, situations arise where everyone is locked out because the system says ‘no,’” Weijdema outlines. Full automation without human intervention will, at least for the time being, carry this risk. Still, Weijdema sees a future where AI can make more and more decisions autonomously, as long as there is sufficient trust in the data fed to the AI models.

Ou recognizes both sides of the story. She sees the potential for automation, but even the best models can fail. Ou therefore advocates a “plan B” to recover data if necessary. “What if things go wrong? Can you then recover the data?” Ou wonders. She emphasizes that immutability and air-gap techniques are crucial to ensuring data security. Even with the most advanced AI solutions, errors by both AI and humans remain a risk. In Ou’s view, it is therefore essential to establish recovery strategies that minimize the impact of an attack.
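As one concrete example of the immutability Ou mentions, the sketch below marks a backup object in S3-compatible object storage as unchangeable for a fixed period using Object Lock. The bucket and object names are hypothetical, and the sketch assumes Object Lock was enabled when the bucket was created and that the backup has already been uploaded.

```python
# Sketch of a "plan B" control: make an existing backup object immutable for 30 days,
# so neither misfiring automation nor an attacker can delete or overwrite it.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

s3.put_object_retention(
    Bucket="backup-vault",                # hypothetical bucket with Object Lock enabled
    Key="nightly/db-2024-10-01.bak",      # hypothetical backup object
    Retention={
        "Mode": "COMPLIANCE",             # cannot be shortened or removed before expiry
        "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=30),
    },
)
```

An air gap goes a step further by keeping a copy offline or on a separate network entirely, out of reach of any automated process.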

Humans remain important

No matter how good AI soon becomes within the security layer, it seems unlikely to be able to secure everything fully autonomously. Especially in the coming years, human input will remain necessary. Jansen emphasizes that AI sometimes falls short in recognizing even obvious threats. “AI is very capable, but still lacks skills,” he explains. He gives the example of a business email compromise attack that AI mistook for normal behavior. The AI labeled the action as legitimate because it matched the characteristics it had been trained on. However, it did not consider additional factors that a security expert would immediately flag as malicious.

For Molen, Jansen’s example shows that AI will not drive cybersecurity tools 100 percent of the time. It is a matter of constantly determining where artificial intelligence should and should not be applied. “It is also about knowledge of what was happening years ago and what is happening in the world today,” Molen outlines. There are many opportunities to analyze and map the attack surface and to support the detection of malicious activity. Still, at least for now, additional technology and human assessment remain necessary.

Reliable detection and blocking

From left to right: Daan Huybregts and Joost van Drenth

In that respect, the lights are on green for more AI in tooling, as long as it is useful and we remain aware that it cannot do everything. De Jong adds that the models must be reliable. According to him, organizations must be able to trust that the decisions AI makes, such as blocking or isolating suspicious devices, are correct. “AI obviously has to be very reliable,” De Jong notes. Although there is always a small risk of AI making a mistake, he says such incidents are rare. He foresees that AI will increasingly make decisions autonomously, without the need for human intervention, especially when it comes to recognizing anomalous behavior in IT environments.

Once all that is in place, nothing will stand in the way of a wave of innovation. From then on, Van Drenth sees a more prominent role for data storage as a component that is further integrated with security tools. “Data storage needs AI to detect and prevent incidents,” he argues. Applying AI across multiple layers, such as storage and SOC systems, helps detect and address incidents faster. Van Drenth emphasizes that integrating AI within the different layers of the IT infrastructure requires a holistic approach, in which collaboration between teams is essential.

The OT gap

So far, the discussion has mostly been about what AI can do on the IT side. Yet you cannot discuss improving security with AI without addressing the impact on OT and IoT, Huybregts points out. As far as he is concerned, the convergence of OT and IT needs to continue. “In OT, AI is becoming increasingly important to make decisions,” Huybregts observes. Factories, for example, use lasers with IoT sensors to let AI make decisions independently. Strong defense mechanisms, themselves built with AI, are needed around that so the devices remain well protected.

Cybercriminals are well aware of what they can do on the OT side and are attacking these types of systems in increasing numbers. You don’t want to put control in the hands of hackers: once they take over the machines, they can do whatever they want. “The impact of a plant going down is huge,” says Noordam. If a hacker gains access to a particular device, a hazardous machine could even explode and destroy an entire plant. In theory, that is even more damaging than a hacker gaining access to part of the IT system.

Strategic weapon in security

All in all, AI is a broadly powerful weapon in the fight against a variety of threats. It requires careful implementation and cooperation between the different layers of the IT infrastructure. The technology must be used not only to detect threats but also to respond dynamically to anomalous behavior. Automation offers opportunities, although it must be deployed cautiously to avoid mistakes. Organizations that want to use AI effectively must rely on high-quality data and be prepared for scenarios in which AI does not function flawlessly. AI will increasingly become an integral part of security solutions, but only in combination with human expertise and recovery strategies will it reach its full potential.

This was the second story in our AI in/and cybersecurity series. The next article will look at securing the AI that organizations use.
