The UK’s National Cyber Security Centre (NCSC) has issued a timely warning about the cyber risks for organisations introducing artificial intelligence (AI) chatbots into their businesses.

In a series of blog posts, the NCSC said that large language models (LLMs) like ChatGPT, Google Bard and Meta’s LlaMA do warrant some caution, due to the growing cybersecurity risks of individuals manipulating the prompts through “prompt injection” attacks.

Some have already flagged security concerns about AI chatbots and LLMs. Earlier this year the Italian data regulator for a time banned ChatGPT in the country and opened probe into whether chatbot’s data collection complies with GDPR.

Image credit: Matheus Bertelli/Pexels

NCSC warning

But the NCSC has warned there are specific vulnerabilities associated with LLMs, namely ‘prompt injection’ attacks; and the risk of these systems being corrupted by manipulation of their training data.

What are the cyber risks and rewards of chatbots?

The UK cybersecurity agency said that risk assessments not only apply to LLMs, but that “cyber security fundamentals still apply when it comes to machine learning (ML).”

Academics and researchers have repeatedly found ways to subvert chatbots by feeding them rogue commands or fool them into circumventing their own built-in guardrails.

Prompt injection attacks

The first method is via ‘Prompt injection attacks’.

This is when a user creates an input designed to make the model behave in an unintended way, said the NCSC. This could mean causing it to generate offensive content, reveal confidential information, or trigger unintended consequences in a system that accepts unchecked input from the LLM.

Hundreds of examples of prompt injection attacks have been published, the NCSC warned. They can be mischievous (such as this example from Simon Willison, that ends up with Bing questioning its existence), but the real-world consequences could be scary too.

The NCSC cited one example, where a prompt injection attack was demonstrated against MathGPT, a model designed to convert natural language queries into code for performing mathematical operations. A security researcher identified that the model worked by evaluating user-submitted text as code, and used that knowledge to gain access to the system hosting the model. This allowed them to extract a sensitive API key before disclosing the attack.

And the NCSC said that as LLMs are increasingly used to pass data to third-party applications and services, the risks from malicious prompt injection will grow. At present, there are no failsafe security measures that will remove this risk. It advised organisations to consider thier system architecture carefully and take care before introducing an LLM into a high-risk system.

Data poisoning attacks

Another NCSC concern centres around ‘data poisoning attacks.’

An ML model is only as good as the data it is trained on, and LLMs are no exception, the NCSC noted. Their training data is typically scraped from the open internet in truly vast amounts, and will probably include content that is offensive, inaccurate or controversial.

Attackers can also tamper with this information to produce undesirable outcomes, both in terms of security and bias, it added.

The NCSC said prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate.

However, no model exists in isolation, so organisations should design the whole system with security in mind.

NCSC advice

The NCSC also issued the following specific advice, saying risks can be mitigated by applying established cyber secure principles to ML development. For example:

  1. Think before arbitrarily execute code is downloaded from the internet (models)
  2. Keep up to date with published vulnerabilities and upgrade software regularly
  3. Understand software package dependencies
  4. Think before running arbitrarily execute code downloaded from the internet (packages)

Disastrous consequences

The NCSC warning about AI chatbots, ML, and LLMs was echoed by Oseloka Obiora, chief technology officer at cyber security intelligence specialist RiverSafe.

“The race to embrace AI will have disastrous consequences if businesses fail to implement basic necessary due diligence checks,” said Obiora. “Chatbots have already been proven to be susceptible to manipulation and hijacking for rogue commands, a fact which could lead to a sharp rise in fraud, illegal transactions, and data breaches.”

“Instead of jumping into bed with the latest AI trends, senior executives should think again, asses the benefits and risks as well as implementing the necessary cyber protection to ensure the organisation is safe from harm,” Obiora added.

“For example, an AI-powered chatbot deployed by a bank might be tricked into making an unauthorised transaction if a hacker structured their query just right,” said Obiora.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...

Recent Posts

Microsoft Beats Expectations Thanks To AI Investments

Customer adoption of AI services embedded in cloud services continues to deliver results for Microsoft,…

21 hours ago

Google Delays Removal Of Third-Party Cookies, Again

For third time Google delays phase-out of third-party Chrome cookies after pushback from industry and…

2 days ago