Is AI a Cyber Security Ally or a Threat for CNI Organisations?
Short answer? It’s both.
The speed of AI tools makes them an ally for cyber security experts. AI is useful for intelligence functions, as it has improved the pace at which incidents can be checked and reported. IBM’s recent Cost of a Data Breach report also found that organisations using AI extensively in their security operations were able to speed up detection and containment. This ‘saved an average $1.9 million in breach costs and reduced the breach lifecycle by an average of 80 days’ [1].
That same speed is also a threat. For bad actors, AI tools can scan vast amounts of data to identify vulnerabilities in systems, networks, or applications far faster than was previously possible. For example, machine learning can be used to detect outdated software versions or misconfigured servers.
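To make the automation concrete, here is a toy sketch of that kind of check. It is deliberately simplified: it uses hand-written rules rather than machine learning, and the service banners and “minimum patched version” thresholds are invented for illustration, not real vulnerability data.

```python
# Toy sketch: flag outdated software from service banners.
# The banners and minimum patched versions below are invented
# for illustration; real scanners use far richer fingerprinting
# and threat intelligence feeds.

MIN_PATCHED = {
    "OpenSSH": (9, 6),
    "nginx": (1, 25),
}

def parse_banner(banner: str):
    """Split a 'Name/major.minor' banner into (name, (major, minor))."""
    name, _, version = banner.partition("/")
    major, minor = (int(p) for p in version.split(".")[:2])
    return name, (major, minor)

def outdated(banners):
    """Return banners whose version is below the known patched minimum."""
    flagged = []
    for banner in banners:
        name, version = parse_banner(banner)
        minimum = MIN_PATCHED.get(name)
        if minimum and version < minimum:  # tuple comparison: (8, 2) < (9, 6)
            flagged.append(banner)
    return flagged

print(outdated(["OpenSSH/8.2", "nginx/1.25", "OpenSSH/9.6"]))
# Only OpenSSH/8.2 is below its minimum patched version.
```

The point is not the code itself but the economics: a check like this runs across thousands of hosts in seconds, and the same automation that helps defenders patch faster helps attackers find targets faster.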
There is a further risk from shadow AI use. Adoption is greatly outpacing security and governance. Users are keen to benefit from the ability to code or write at pace, and organisations fear that competitors will take advantage of the new intelligence age while businesses that get left behind will fail. As a result, AI is already in use in many organisations, but without any security guardrails in place. Organisations are playing catch-up with AI use. The IBM report found that of the companies that reported breaches of AI models or applications, 97% lacked proper AI access controls.
Critical National Infrastructure organisations can benefit from a policy-first approach. It may come as a revelation, but organisational information and code is already being shared with LLMs, and data is already being held outside the UK. As users make requests of the likes of ChatGPT, even for a simple summary, swathes of information land on servers in California or Texas. To counter this, it’s important for organisations to get their AI procedures in order.
We recommend:
Start with the policies you already have. Examine how they can be updated
Make sure they are put into practice. It’s not enough to have policies on paper; they have to work in daily office life too
Identify change management procedures and create a security management plan that includes AI
Bring together a team to implement a shift in organisational behaviour to incorporate AI risks
A policy isn’t effective if it’s never properly implemented. Once in place, as with all security measures, AI policies and procedures need to be maintained. This is an area that organisations traditionally find difficult; in fact, IBM found that ‘of the organizations that have AI governance policies in place, only 34% perform regular audits for unsanctioned AI’ [2].
So AI is both an ally and a threat for CNI organisations. It’s an advantage to potential system infiltrators because of the speed at which it can assess and learn from data. Shadow use is an immediate threat, as users test AI tools with organisational information, sharing it with global LLMs. But it can also be an effective defensive tool when backed by regularly updated policies and procedures. In the ever-changing world of cyber security, AI can be harnessed to limit the impact of security incidents.
[1] IBM Report: 13% Of Organizations Reported Breaches Of AI Models Or Applications, 97% Of Which Reported Lacking Proper AI Access Controls
[2] Cost of a data breach 2025 | IBM
Author: Shannon Simpson