Staying Safe in Our New AI World: How Organisations Can Protect Themselves

Jesper Trolle, CEO of Exclusive Networks

The advent of AI models such as ChatGPT and, more recently, GPT-4 has been a game-changer for communication and information exchange across entire industries – automating time-consuming tasks and increasing the speed of interactions with customers. For instance, ChatGPT’s ability to generate human-like text has enabled it to pass exams, write letters, and even draft song lyrics. So, given the potential to improve our lives drastically, what are the cybersecurity risks associated with these models, and what can we do to better protect ourselves?

Research shows the world experienced a 38% increase in cyberattacks in 2022 compared to 2021, and in Q4 attacks reached an all-time high, averaging 1,168 a week. Now, with advances in AI, cybercriminals are looking to scale up and enhance their attacks. With business operations and reputational damage on the line, companies need to understand the cybersecurity risks associated with AI tools like ChatGPT and what they can do to better safeguard themselves.

ChatGPT: Lowering the barrier to entry into cybercrime

From the outset, ChatGPT has been praised for its ability to create software in various programming languages, with the potential to transform the software industry. However, its ability to generate code from plain-language prompts, requiring no coding skills or development knowledge from the user, allows less-skilled cybercriminals to carry out attacks that would otherwise be beyond their capabilities.

Studies of underground hacking forums found that cybercriminals were using the chatbot to “recreate malware strains”, and that some were using ChatGPT to create their first-ever scripts. Evidently, ChatGPT is widening the pool of cybercriminals, helping less-skilled actors create malicious software ranging from viruses to ransomware – and it will only be a matter of time before more skilled actors enter the picture.

Crafting phishing emails will become easier

Tools like ChatGPT and GPT-3 specialise in creating believable text. They draw on a vast vocabulary and body of information to mimic speech patterns and generate text with minimal errors. Unfortunately, this ability to string words together is improving the email-writing skills of threat actors who are not fluent in English. Until now, poorly written English and unnatural expressions have been the hallmark of phishing emails.

Now, however, ChatGPT’s ability to turn detailed prompts into fluent, natural-sounding text is helping bad actors overcome language barriers. Researchers recently found it hard to distinguish genuine emails from those generated by ChatGPT from tailored prompts. As these more sophisticated phishing attacks trick more victims into downloading and opening malicious attachments, cybercriminals gain access to IT estates and sensitive data. Clearly, ChatGPT is driving an evolution in the quality of phishing emails, and the success rate of scam emails will increase as a result.

Sensitive queries typed into the chatbot at risk

Another major issue for businesses is the risk of sensitive queries – prompts typed into AI chatbots by employees that expose confidential business data or privacy-protected information.

For example, a CEO might ask a chatbot about the best way to lay off employees, or an employee might feed it market-sensitive information. The company operating the chatbot can read these queries and might inadvertently leak the information; the chatbot itself can also repeat it, given that the model learns from previous interactions. The consequences of such leaks could be catastrophic for businesses, causing reputational harm or even serious financial damage. Companies such as Amazon and JPMorgan have recognised this, restricting chatbot use amongst their workers.

How can organisations better protect themselves?

With the ever-growing popularity of ChatGPT, we recognise that this technology delivers numerous business benefits. But as such AI tools are embraced, companies must ensure they do so safely. For example, businesses can protect themselves by:

  • Investing in cybersecurity training. Engaging employees in ongoing and relevant training will improve their ability to recognise phishing emails and thus reduce the chances of cybercriminals gaining entry to networks.
  • Looking at how third parties are using data. Given that data comes from third-party sources and the wider ecosystem, businesses need to verify the provenance of that data to ensure it can be trusted.
  • Prioritising security checks within the organisation. Businesses need to focus on security checks, such as running automated testing processes to identify existing vulnerabilities.

Undoubtedly, given that ChatGPT is streamlining business operations and boosting productivity, businesses will continue to adopt the technology. At the same time, the model’s capabilities – and the ways they can be misused – are likely to become even more dangerous over time. To better protect themselves, businesses must stay abreast of the latest security concerns and information regarding ChatGPT and urgently implement measures to mitigate these risks.

