The European Union has proposed additional safeguards against the use of artificial intelligence (AI) for biometric surveillance and identification.

Members of the European Parliament approved amendments to the proposed Artificial Intelligence Act, initially introduced in April 2021, agreeing on a ban on remote biometric identification, including AI-aided facial recognition, in public places.

The compromise text received strong support from European lawmakers: 84 members voted in favor, seven against, and 12 abstained.

The ban applies to both real-time and after-the-fact algorithms, marking a deviation from the original proposal of the Commission and the stance supported by EU member countries in the Council.

If passed, the regulation would require the creation of a public database of “high-risk” AI systems deployed by governments and public authorities.

The aim is to ensure that EU citizens are informed about when and how they are being affected by this technology.

The law would also ban mass facial recognition programs in public spaces and predictive policing algorithms that try to identify future offenders using personal data.

Moreover, there would be prohibitions on biometric categorization systems using sensitive characteristics, emotion recognition systems in law enforcement, border management, workplace and educational institutions, and indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

The EU Takes AI Danger Seriously

The new draft of the EU’s AI Act, which was approved by two key committees, suggests that the EU has taken the potential dangers of this fast-moving technology seriously.

“[It’s] globally significant,” Sarah Chander, senior policy advisor at digital advocacy group European Digital Rights, told The Verge.

“Never has the democratic arm of a regional bloc like the EU made such a significant step on prohibiting tech uses from a human rights perspective.”

The EU is a significant market. Therefore, tech companies often comply with EU-specific regulations on a global scale to reduce the friction of maintaining multiple sets of standards.

Information demanded by the EU on AI systems will also be available globally, potentially benefitting users in the US, UK, and elsewhere.

The draft legislation also includes new measures aimed at controlling so-called “general-purpose AI” or “foundational” AI systems.

These AI systems, created by tech giants like Microsoft, Google, and OpenAI, are large-scale models that can be put to a range of uses.

The creators of these systems will have new obligations to assess and mitigate various risks before these tools are made available.

Another key provision in the draft AI Act is the creation of a database of general-purpose and high-risk AI systems to explain where, when, and how they’re being deployed in the EU. The creation of such a database has been a long-standing demand of digital rights campaigners.

“This database should be freely and publicly accessible, easily understandable, and machine-readable,” says the draft.

“The database should also be user-friendly and easily navigable, with search functionalities at minimum allowing the general public to search the database for specific high-risk systems, locations, categories of risk [and] keywords.”

AI Act to Face Plenary Vote Next Month

After its approval by EU parliamentary committees, the act will now face a plenary vote next month before going into trilogues.

Trilogues are a series of closed-door negotiations involving EU member states and the bloc’s governing institutions. Some of the prohibitions most prized by campaigners, including those on biometric surveillance and predictive policing, are expected to cause “a major fight” in the trilogues.

The AI Act itself is a sprawling document that’s been in the works for around two years.

However, the recent surge in popularity of generative AI products, which have taken the internet by storm since the launch of OpenAI’s ChatGPT in November last year, has pushed lawmakers to accelerate regulation of this nascent industry.

As reported, the US government has put out a formal public request for comment regarding AI chatbots to help formulate advice for US policymakers about how to approach these emerging technologies.

Furthermore, the Biden administration has already recommended five principles companies should uphold in developing AI technologies through a voluntary “bill of rights.”
