NCSC warns of the hidden risks of AI chatbots

Businesses have been warned to be vigilant of the potential risks from integrating AI-driven chatbots into their business, with research suggesting that they can be tricked into performing harmful tasks.

The National Cyber Security Centre warns that understanding is still limited about the potential security threats posed by algorithms that can generate human-sounding interactions – known as large language models (LLMs).

Specifically, it points to a potential security weakness for LLMs in their vulnerability to ‘prompt injection’ attacks, which is when a user creates an input designed to make the model behave in an unintended way. This could mean causing it to generate offensive content, reveal confidential information, or trigger unintended consequences in a system that accepts unchecked input from the LLM.

In a blog post on its website, NCSC said: “As LLMs are increasingly used to pass data to third-party applications and services, the risks from malicious prompt injection will grow. At present, there are no failsafe security measures that will remove this risk. Consider your system architecture carefully and take care before introducing an LLM into a high-risk system.”

Jake Moore, global cyber security advisor at ESET, added: “The potential weakness of chatbots and the simplicity with which prompts can be exploited might lead to incidents like scams or data breaches. However, when developing applications with security in mind and understanding the methods attackers use to take advantage of the weaknesses in machine learning algorithms, it is possible to reduce the impact of cyber-attacks stemming from AI and machine learning.

“Unfortunately, speed to launch or cost savings can typically overwrite standard and future proofing security programming, leaving people and their data at risk of unknown attacks. It is vital that people are aware of what they input into chatbots is not always protected.”

    Share Story:


Deborah Ritchie speaks to Chief Inspector Tracy Mortimer of the Specialist Operations Planning Unit in Greater Manchester Police's Civil Contingencies and Resilience Unit; Inspector Darren Spurgeon, AtHoc lead at Greater Manchester Police; and Chris Ullah, Solutions Expert at BlackBerry AtHoc, and himself a former Police Superintendent. For more information click here

Modelling and measuring transition and physical risks
CIR's editor, Deborah Ritchie speaks with Giorgio Baldasarri, global head of the Analytical Innovation & Development Group at S&P Global Market Intelligence; and James McMahon, CEO of The Climate Service, a S&P Global company. April 2023