Global leaders in the field of artificial intelligence (AI) have warned that the technology, which has been growing exponentially over the past few years, poses an existential risk to humans similar to that posed by nuclear war and pandemics.
Experts Sound the Alarm on Humanity’s Safety
Hundreds of experts and researchers in the AI industry, including executives from tech giants leading the field such as OpenAI, Microsoft, and Google, have signed a statement arguing that the very technology they are developing may one day pose a genuine threat to humanity.
The statement, posted by the Center for AI Safety, a nonprofit advocating for the development of safe AI, said only:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
According to the statement, AI poses a danger on par with nuclear war or a worldwide pandemic, yet these titans of technology chose not to elaborate on their foreboding warning.
So far, the statement has been signed by OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google DeepMind CEO Demis Hassabis, and Geoffrey Hinton, the ‘godfather’ of AI, along with professors and scientists from notable universities such as MIT, Harvard, Carnegie Mellon University, and the University of Oxford.
We’ve released a statement on the risk of extinction from AI.
Signatories include:
– Three Turing Award winners
– Authors of the standard textbooks on AI/DL/RL
– CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic
– Many more https://t.co/mkJWhCRVwB
— Center for AI Safety (@ai_risks) May 30, 2023
For years, philosophers have discussed the possibility that AI could become difficult to control and either accidentally or deliberately kill humans. However, the topic has been debated far more frequently and seriously over the last six months, following some startling and unsettling advances in the performance of AI systems.
This is not the first time tech leaders have warned of such dangers. In March, researchers and executives led by Tesla CEO Elon Musk called for a six-month moratorium on the training of AI systems more powerful than GPT-4, the model behind the most recent version of the ChatGPT chatbot.
The open letter, which now bears over 30,000 signatures, warned:
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.”
The Risks of AI
Notably, none of these warnings spell out the specific risks the world faces from this technology. However, the Center for AI Safety offers examples of disaster scenarios that could result from the continued development of AI.
The nonprofit warns that AI could be weaponized and repurposed to become highly destructive and fuel political instability. The Center for AI Safety warned:
“In recent years, researchers have been developing AI systems for automated cyberattacks, military leaders have discussed giving AI systems decisive control over nuclear silos, and superpowers of the world have declined to sign agreements banning autonomous weapons.”
The organization also listed the possibility of power-seeking behavior among agents developed by companies or governments to accomplish various goals. Such agents have instrumental incentives to acquire power, potentially making them harder to control; they could also collude with other AIs and overpower their monitors.
In an interview with the BBC, Hinton, who has been dubbed the godfather of AI for his work on deep learning, was asked what risk he fears AI poses to humanity. He answered that the intelligence AI possesses is different from human intelligence.
He warned that because digital systems can communicate instantly, whatever one AI learns can be shared with every other copy, making the technology collectively more knowledgeable than any single human being on earth.