European Union lawmakers have voted to incorporate tougher amendments into the region’s widely anticipated AI Act, which could become the first comprehensive AI regulation globally.
The EU’s upcoming AI Act would cover several areas, including biometric surveillance, emotion recognition, and other applications such as generative AI.
The European Commission has been working on the AI Act for around two years, but a sense of urgency now seems to be creeping in amid the growing popularity of generative AI and the multiple concerns associated with it.
Members of the European Parliament (MEPs) agreed to a ban on emotion recognition, biometric surveillance, and predictive policing AI systems.
Artificial Intelligence: new transparency and risk-management rules for AI systems have been endorsed by Parliament's internal market and civil liberties committees.
All MEPs are expected to vote on the mandate in June so that negotiations can start with the Council.
— European Parliament (@Europarl_EN) May 11, 2023
The draft proposes to classify AI tools based on their perceived level of risk, ranging from minimal to unacceptable.
Greens MEP Kim van Sparrentak told Reuters, “This vote is a milestone in regulating AI, and a clear signal from the Parliament that fundamental rights should be a cornerstone of that.”
Van Sparrentak added, “AI should serve people, society, and the environment, not the other way around.”
The proposed Act would now be put before a plenary vote of the European Parliament in June. It would then be followed by a “trilogue” between representatives of the European Parliament, the Council of the European Union, and the European Commission.
EU Moves Ahead on the AI Act
According to the EU, “The aim of a trilogue is to reach a provisional agreement on a legislative proposal that is acceptable to both the Parliament and the Council, the co-legislators.”
Once enacted, the law provides a two-year grace period for compliance.
While multiple countries including the US are contemplating AI regulations, Europe is at the forefront of framing regulations for the industry.
Meanwhile, after lagging behind Europe, the US is also getting its act together. Earlier this month, US Vice President Kamala Harris met CEOs of Anthropic, Microsoft, Alphabet, and OpenAI.
OpenAI CEO Sam Altman will testify before the Senate Judiciary subcommittee on privacy, technology, and the law next Tuesday in a hearing titled “Oversight of AI: Rules for Artificial Intelligence.”
Countries Are Looking to Regulate Artificial Intelligence
Earlier this month, the UK’s Competition and Markets Authority (CMA) also launched an initial review of AI models.
The CMA said its initial review would examine three aspects:
- Examining how the markets for foundation models and their use could evolve
- Exploring the opportunities and risks with respect to competition and consumer protection
- Coming up with guidelines to protect consumers and competition
Other Western countries are also contemplating how to regulate AI. Last month, the G7’s digital ministers issued a joint statement after the conclusion of their two-day meeting in Japan, saying that the development of artificial intelligence should be based on democratic values.
While authoritarian regimes have censored AI, the Western world is looking to embrace it with enabling regulations that would help realize the technology’s potential while reducing the associated risks.