The EU’s law enforcement agency has issued a stark warning about criminals’ growing use of AI technologies like ChatGPT, saying such language models can fuel fraud, cybercrime, and disinformation.

In a recent report, the European Union Agency for Law Enforcement Cooperation, Europol, said the potential exploitation of the artificial intelligence-powered chatbot ChatGPT, and other similar systems, by criminals gives cause for concern.

The agency further claimed that criminals are already using such tools to carry out illegal activities.

“The impact these types of models might have on the work of law enforcement can already be anticipated,” Europol stated in its report.

“Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT.”

ChatGPT is an artificial intelligence chatbot built on top of OpenAI’s GPT-3.5 and GPT-4 families of large language models.

Since its launch in November 2022, ChatGPT has taken the internet by storm, becoming the fastest-growing app in history.

Top Three Areas of Crime for ChatGPT Misuse

Europol noted that the harmful use of ChatGPT is most prominent in three areas of crime.

First, ChatGPT’s ability to generate highly realistic text makes it a useful tool for fraud and phishing. Its ability to mimic the writing style of specific individuals or groups makes it even more dangerous in the hands of criminals.

The EU enforcement agency also voiced concern that ChatGPT could be used to spread disinformation as the tool is well-trained to produce authentic-sounding text at speed and scale.

“In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages,” the agency said, mentioning that criminals with little technical knowledge could turn to ChatGPT to produce malicious code.

Criminals Can Bypass OpenAI’s Content Filter System

While ChatGPT is better than other AI-powered chatbots at refusing potentially harmful requests, loopholes still allow users to bypass its content filter system.

For instance, some users have managed to trick the chatbot into revealing instructions for making a pipe bomb or crack cocaine.

“If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps,” the agency warned.

“As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime and child sexual abuse.”

Europol acknowledged that while all the information criminals need to carry out specific crimes may already be publicly available, AI-powered chatbots make it much easier to find the required details and can even provide step-by-step guidance.

The agency concluded that as new models become available, law enforcement needs “to stay at the forefront of these developments to anticipate and prevent abuse.”