Like every other technology, generative artificial intelligence (AI) has been turned to negative and harmful ends. Continued advances in generative AI have opened the world up to a plethora of scams and cybercriminal threats.
AI: A Double-Edged Sword Empowering Scammers
Since generative AI went mainstream in November 2022 with the launch of OpenAI’s ChatGPT, the technology has advanced rapidly. These developments have made systems more powerful and have automated many processes in areas such as Natural Language Processing (NLP), image generation, and even voice generation.
In equal measure, hackers and scammers have been handed new and more efficient ways to carry out their malicious acts. So far, generative AI has been used to create malware, extort victims for ransom, carry out phishing attacks, and even spread false information.
False Information
The simplest attack to carry out using generative AI tools has been the spread of false information. Tools such as ChatGPT have proven useful in producing fake content and digital interactions, including real-time dialogues.
This has enabled cybercriminals to impersonate high-profile figures and people in authority. It has also facilitated the generation of fake news, fueling the spread of misinformation.
Moreover, it helps non-native English speakers polish their communications and steer clear of common grammatical errors, making false information harder to spot.
In a recent experiment, the security company WithSecure fed GPT-3 a bit of background information on Russia’s invasion of Ukraine. When prompted to fabricate opinions insinuating that US submarines may have carried out a covert attack on the Nord Stream gas pipeline, GPT-3 generated very compelling news stories, all of which were false.
Earlier this year, an AI-generated photo showing former President Donald Trump being arrested was widely shared on social media.
Making pictures of Trump getting arrested while waiting for Trump's arrest. pic.twitter.com/4D2QQfUpLZ
— Eliot Higgins (@EliotHiggins) March 20, 2023
This exploitation of generative AI tools has raised the sophistication of social engineering and phishing attacks. Such tools can tailor the tone of a message to fit the profile and personality of a person the victim trusts.
Phishing
Another common attack accelerated by generative AI is phishing. Using the technology, cyber attackers can now craft custom emails and messages that convince users to open attachments and click malicious links.
Generative #AI is making spear-phishing emails more personalized & convincing. Be careful about what emails you open & links you click on. Protect yourself with AI-based security solutions.#cybersecurity #security #spearphishing #deepfake https://t.co/xCICEM4Cbj
— Milan Jed (@_jedm) June 21, 2023
Reports by botco.ai show that attackers were using generative AI for phishing even before the technology went mainstream. According to the report, cybercriminals used generative AI techniques to create personalized phishing emails targeting particular businesses and individuals in 2021.
When the attachments were opened, the malicious content encrypted the victims’ files and demanded a ransom in exchange for the decryption key.
Last year, experts discovered that fraudsters were using AI-generated images to create phony LinkedIn profiles for social engineering. The bogus profiles were then used to contact real professionals in order to gather data or gain access to networks.
Deep Fakes
Con artists have also misused AI to create deep fake videos and audio that extort victims for ransom by posing as their loved ones. Cases of parents receiving fake videos of their children, or of people being threatened that their friends will be harmed, have been on the rise.
In a public service alert, the FBI stated that the agency “continues to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content.”
Last year alone, the FBI received 7,000 reports of online extortion involving minors. That number rose dramatically by April of this year, and the cases have evolved to include “sextortion scams.”
The increase in deep fake cases has contributed to the delay of more advanced generative AI tools and applications over safety concerns. One such tool is Meta’s Voicebox, which can transform text input into any voice output. The tool is so efficient that it requires only a two-second audio recording to mimic a person’s voice.
Mitigating AI Scams
Sadly, as AI continues to advance, malicious actors can be expected to devise more ways to leverage it against other users. As such, it is difficult to be completely protected from the risk of being scammed.
However, it is possible to mitigate the risk and reduce how often one falls victim to such attacks. One way is to validate any information, especially information spread on social media platforms. Turning to reliable sources helps users identify false information and steer clear of it.
Secondly, users can try to keep up with developments in the AI space. This will help them learn about the latest forms of attack and, hopefully, recognize them when they are used against them.