
Staying Safe in Our New AI World: How Organisations Can Protect Themselves

Research shows the world experienced a 38% increase in cyberattacks in 2022 compared to 2021, with attacks reaching an all-time high in Q4 at an average of 1,168 a week. Now, with advances in AI, cybercriminals are looking to scale up and enhance their attacks. With business operations and reputations on the line, companies need to understand the cybersecurity risks associated with AI tools like ChatGPT and what they can do to better safeguard themselves.

ChatGPT: Lowering the barrier to entry into cybercrime

From the outset, ChatGPT has been praised for its ability to create software in various programming languages, transforming the software industry. However, because it can generate working code from a plain-language prompt – no coding skills or development knowledge required – it also enables less-skilled cybercriminals to carry out attacks that would otherwise be beyond their capabilities.

Studies of underground hacking forums found that cybercriminals were using the chatbot to “recreate malware strains”, and some were even using ChatGPT to create their first-ever scripts. Evidently, ChatGPT is widening the pool of cybercriminals, helping less-skilled actors create malicious software ranging from viruses to ransomware – and it will only be a matter of time before more skilled actors enter the picture.

Crafting phishing emails will become easier

Tools like ChatGPT and GPT-3 specialise in creating believable text. They tap into a vast vocabulary and body of information to mimic human speech patterns and generate text with minimal errors. Unfortunately, this ability to string words together fluently is improving the email-writing skills of threat actors who are not fluent in English. To date, poorly written English and unnatural language expressions have been the hallmark of phishing emails.

Now, however, ChatGPT’s ability to turn detailed prompts into polished text is helping bad actors overcome language barriers. Researchers recently found it hard to distinguish genuine emails from those tailored with the chatbot. As phishing attacks grow more sophisticated, cybercriminals can trick more victims into downloading and installing malicious attachments, giving attackers access to the IT estate and sensitive data. Clearly, ChatGPT is supporting an evolution in the quality of phishing emails, and consequently, the success rate of scam emails will increase.

Sensitive queries typed into the chatbot at risk

Another major issue for businesses is the risk of sensitive queries – prompts typed into AI chatbots by employees that expose sensitive business data or privacy-protected information.

For example, a CEO might ask a chatbot about the best way to lay off employees, or an employee might feed it market-sensitive information. The companies operating the chatbot can read these queries and might leak the information, and the chatbot itself may later repeat it, given that the model learns from previous interactions. The consequences of such leaks could be catastrophic for businesses, causing reputational harm or even serious financial damage. Companies such as Amazon and JPMorgan have recognised this, restricting chatbot use among their workers.
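
One practical safeguard, sketched below purely as an illustration, is to screen employee prompts for sensitive patterns before they leave the corporate network. This is a minimal sketch: the regular-expression rules and the example prompt are hypothetical, and a production deployment would rely on a dedicated data-loss-prevention (DLP) tool rather than a handful of regexes.

    import re

    # Hypothetical patterns for data that should never leave the network;
    # a real DLP tool would use far richer detection than these examples.
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def screen_prompt(prompt: str) -> str:
        """Redact sensitive substrings before a prompt is sent to an external chatbot."""
        for label, pattern in SENSITIVE_PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
        return prompt

    # The redacted version is what would actually be forwarded to the chatbot.
    print(screen_prompt("Draft a layoff memo and cc counsel@example.com."))
    # -> Draft a layoff memo and cc [REDACTED email address].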

How can organisations better protect themselves?

With the ever-growing popularity of ChatGPT, we recognise that this technology delivers numerous business benefits. But as such AI tools are embraced, companies must ensure they adopt them safely. For example, businesses can protect themselves by:

  • Investing in cybersecurity training. Engaging employees in ongoing, relevant training will improve their ability to recognise phishing emails and thus reduce the success rate of cybercriminals’ attempts to enter networks.
  • Looking at how third parties are using data. Given that data comes from third-party sources and the wider ecosystem, businesses need to verify the provenance of data to ensure it can be trusted (a minimal integrity check is sketched after this list).
  • Prioritising security checks within the organisation. Businesses need to focus on security checks, such as running automated testing processes to identify existing vulnerabilities (an example of one such automated check also follows below).
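
Regarding provenance, one lightweight check, shown here as a minimal sketch rather than a complete provenance solution, is to verify that a third-party data file matches the checksum its supplier published. The file name and PUBLISHED_DIGEST below are hypothetical placeholders.

    import hashlib

    def verify_sha256(path: str, expected_digest: str) -> bool:
        """Compare a file's SHA-256 digest with the value published by its supplier."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_digest.lower()

    # Hypothetical usage: refuse to ingest a dataset that fails the check.
    # if not verify_sha256("market_data.csv", PUBLISHED_DIGEST):
    #     raise ValueError("Dataset failed its provenance check; do not ingest.")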
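
As for automated checks, the sketch below shows one possible approach in a Python environment: invoking the open-source pip-audit tool, which scans installed dependencies against known-vulnerability advisories and exits non-zero when it finds a match. It assumes pip-audit is installed and is only one example of such a testing process.

    import subprocess
    import sys

    def run_dependency_audit() -> int:
        """Run pip-audit (assumed installed) against the current Python environment."""
        result = subprocess.run(["pip-audit"], capture_output=True, text=True)
        print(result.stdout)
        # pip-audit exits non-zero when known vulnerabilities are found.
        if result.returncode != 0:
            print("Known vulnerabilities detected; failing the check.", file=sys.stderr)
        return result.returncode

    if __name__ == "__main__":
        sys.exit(run_dependency_audit())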

Undoubtedly, given that ChatGPT is streamlining business operations and boosting productivity, businesses will continue to adopt the technology. At the same time, the capabilities of the model – and their potential for misuse – are likely to become even more dangerous over time. To better protect themselves, businesses must stay abreast of the latest security concerns and information regarding ChatGPT and urgently implement measures to mitigate these risks.

Jesper Trolle, CEO of Exclusive Networks.

