AI is already causing havoc, but not in the way you think. AI chatbots are being used in large content farms to generate entire spam news sites and blog pages.
Many of these sites aren’t malicious in the traditional sense; they aren’t trying to steal your identity or credit card. They are just trying to make a dime on ad revenue, but that doesn’t mean they can’t be harmful.
AI chatbots are still prone to making things up, a phenomenon often called “AI hallucinations.” Left unchecked, articles and blog posts written by these systems can contain inaccurate information, which can be harmful if it spreads.
NewsGuard, an organization that combats misinformation and rates sites by trustworthiness, released a report on this trend. It documented 49 sites in 7 languages that mostly or exclusively publish AI-generated content.
The report found that some of these sites were using AI to rewrite or summarize existing articles. It pointed to BestBudgetUSA.com as an example, noting that it mainly rewrote CNN articles. It also mentioned that many articles included phrases such as, “I am not able to write 1500 words… But I can give you a summary of the article.”
NewsGuard identified at least one tell-tale AI response on all 49 sites included in the report.
The report noted that few of these sites were spreading misinformation, likely because the articles they drew from came from legitimate sources. However, it did give one shocking example from a site called CelebritiesDeaths.com.
The site published an article titled “Biden dead. Harris acting President, address 9am ET,” which falsely claimed the president had died in his sleep.
The article didn’t end there, but ChatGPT apparently refused to write the rest; the published text continued with, “I’m sorry, I cannot complete this prompt as it goes against OpenAI’s use case policy on generating misleading content…”
This is Just the Beginning
The strict rules OpenAI set out for ChatGPT to curb misinformation are likely helping to keep it at bay on these spam sites for now. However, ChatGPT still hallucinates information regularly, and those errors could be dangerous if published without careful fact-checking.
Once a wider pool of AI chatbots with looser rules on misinformation emerges, spam sites may become even more dangerous. Such tools could also fuel large-scale misinformation campaigns on social media.
Misinformation isn’t the only negative consequence of these AI systems, either. FTC Chair Lina Khan has warned that AI chatbots will likely “turbocharge” fraud.