China Arrests Person for Using ChatGPT to Spread Fake News

In what’s perhaps the first case of its type, Chinese police have arrested a person surnamed Hong for allegedly using ChatGPT to disseminate fake news online.

Many have expressed concerns over AI's ability to turbocharge fraud, and this case exemplifies how generative AI models like ChatGPT can be misused.

Police in China's Gansu province arrested Hong for "using artificial intelligence technology to concoct false and untrue information," alleging that he spread fake news of a train crash.

Chinese authorities said that more than 20 accounts spread the fake article on a blogging site owned by Baidu, and together the posts got more than 15,000 views.

Incidentally, Baidu also unveiled its AI chatbot, called Ernie, in the first quarter. While the model got mixed reviews on debut, Baidu said the bot would get better over time.

Alibaba has also launched its AI chatbot. Over the weekend, China’s iFLYTEK unveiled a general AI model called SparkDesk and said that the model would surpass ChatGPT’s abilities next year.

Coming back to the arrest in China, it is the first made under the country's rules governing "deep synthesis technologies," which prohibit using AI to spread fake news.

The Cyberspace Administration of China enacted the rules in January amid concerns about the risks associated with AI.

The rules also mandate that media created using deep synthesis technologies be labeled so as to prevent confusion.

China Arrests Person for Using ChatGPT to Spread Fake News

Incidentally, China has banned ChatGPT, but users can still access it through virtual private networks.

Chinese police said that Hong posted the fake news in the hope of making a profit, alleging that he "used modern technology to fabricate false information, spreading it on the internet, which was widely disseminated."

Meanwhile, US lawmakers are also cognizant of the risks associated with AI, and last week Vice President Kamala Harris met with executives of companies including Google, OpenAI, and Microsoft over the matter.

In her Congressional testimony last month, FTC Chair Lina Khan said, "I think we've already seen ways in which it could be used to turbocharge fraud and scams."

She added, “We’ve been putting market participants on notice that instances in which AI tools are effectively being designed to deceive people can place them on the hook for FTC action.”

Countries Look at Regulating AI

Several other countries are also actively considering AI regulations, and earlier this month the UK's Competition and Markets Authority (CMA) launched an initial review of AI models.

The European Commission has been working on the AI Act for around two years. However, a sense of urgency now seems to be creeping in amid the growing popularity of generative AI models like ChatGPT and the multiple concerns associated with them.

Microsoft has committed billions of dollars to ChatGPT parent OpenAI and sees the partnership driving its growth in the long term.

Microsoft's stock surged after it released its earnings for the March quarter, even as many other tech companies, including Netflix, Alphabet, and Amazon, fell after their earnings releases.

Meanwhile, the arrest in China over the alleged misuse of ChatGPT exemplifies the risks associated with generative AI and will only strengthen the calls for regulatory oversight of the technology.
