One in four organisations hit by AI data poisoning

Around a quarter of organisations have been victims of AI data poisoning in the past year, according to new research from information security and privacy platform IO (formerly ISMS.online).

Based on a survey of 3,000 cyber security and information security managers in the UK and US, the IO State of Information Security Report also reveals mounting risks from shadow AI.

The study found that deepfake or cloning incidents were reported by 20% of organisations in the past 12 months. A further 28% of respondents see deepfake impersonation in virtual meetings as a rising threat over the next year. Misinformation and disinformation generated by AI were named by 42% of security professionals as a top emerging threat, ahead of generative AI-driven phishing which concerned 38%. Shadow AI misuse – or employees using GenAI tools without permission – was reported by 37%.

More than half of organisations said they had moved too fast with AI deployment and now struggle to scale back or secure its use responsibly. Some 39% identified securing AI and machine learning technologies as a key current challenge, sharply up from 9% in the previous year.

IO's survey indicates that in the coming year many more organisations plan to invest in generative AI-powered threat detection and defence, deepfake detection and validation tools, and governance and policy enforcement around AI. Some 79% of UK and US organisations report using AI, machine learning or blockchain for security – up from 27% in 2024.

Commenting on the findings, Chris Newton-Smith, chief executive of IO, said: “AI has always been a double-edged sword. While it offers enormous promise, the risks are evolving just as fast as the technology itself. Too many organisations rushed in and are now paying the price. Data poisoning attacks, for example, don’t just undermine technical systems, but they threaten the integrity of the services we rely on. Add shadow AI to the mix, and it’s clear we need stronger governance to protect both businesses and the public.”

The UK’s National Cyber Security Centre has issued warnings that AI will almost certainly increase the effectiveness and efficiency of cyber intrusion operations over the next two years.
