Rapid advances in artificial intelligence have regulators in the EU and around the world rushing to formulate rules to oversee the industry. The success of OpenAI's ChatGPT, a generative AI application, has exposed the lack of proper laws in a field believed to pose unknown dangers.

A report by Reuters, citing sources familiar with the matter, revealed that lawmakers in the European Union (EU) are divided over the landmark AI legislation proposed two years ago.

The European Commission's draft rules focus mainly on protecting consumers from the dangers associated with the emerging technology, which has gained popularity over the last few months and attracted investment from big companies like Microsoft, Google, and Meta.

Although regulating the AI sector is the prudent thing to do, the European Commission's draft rules are meant to apply across all EU member states. They must therefore pass through a trilogue among the Commission, the Council, and the Parliament before becoming law.

EU Lawmakers Debate Proposed AI Act, But Time Is Ticking

Reuters reports that several lawmakers were hoping to reach a consensus on the 108-page bill in February during a meeting convened in Strasbourg, France, ahead of a trilogue over the next few months.

However, the five-hour meeting hit a snag, with lawmakers failing to reach a resolution. They disagreed over several facets of the Act, as confirmed by three sources privy to the matter.

The industry is gearing up for an agreement by the end of 2023, but doubt is growing over whether lawmakers can break the stalemate. Some believe that with lawmakers at loggerheads, the legislation could be pushed to 2024.

A delay until 2024 does not sit well with most observers, who say that the European elections could see MEPs adopt a different set of priorities as they assume office, a situation likely to leave the AI industry in limbo.

“The pace at which new systems are being released makes regulation a real challenge,” Daniel Leufer, a senior policy analyst with rights group Access Now, said in a statement to Reuters. “It’s a fast-moving target, but there are measures that remain relevant despite the speed of development: transparency, quality control, and measures to assert their fundamental rights.”

The Lawmakers’ Race Against Time

EU lawmakers are sprinting to work through more than 3,000 tabled amendments, covering everything from a proposed new AI office to the scope of the Act itself. According to Brando Benifei, an Italian MEP, “negotiations are quite complex because there are many different committees involved.”

Benifei is one of the two members tasked with leading negotiations on the region’s much-awaited AI Act. The lawmaker admits “the discussions can be quite long,” as negotiators must talk to many MEPs, as many as 20 at a time.

EU lawmakers have sought to strike a balance between promoting innovation and safeguarding the fundamental rights of the public.

With that in mind, legislators classified AI tools by perceived risk level: minimal, limited, high-risk, and unacceptable. So far, there are indications that high-risk tools will not be banned, but their creators will be expected to be highly transparent about their operations.
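The tiered scheme described above can be sketched as a simple lookup. Note that the obligations shown are paraphrased from news coverage of the draft, not the Act's legal text, and the mapping is purely illustrative:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers described in the draft AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Illustrative summary of what each tier entails (paraphrased, not legal text).
OBLIGATIONS = {
    RiskLevel.MINIMAL: "no additional obligations",
    RiskLevel.LIMITED: "transparency requirements",
    RiskLevel.HIGH: "risk assessments, activity logging, data access for inspections",
    RiskLevel.UNACCEPTABLE: "banned outright",
}

print(OBLIGATIONS[RiskLevel.HIGH])
```

The key design point of the Act is exactly this kind of graduated response: obligations scale with the tier rather than applying uniformly to all AI systems.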

Despite the debate, not much has been said regarding rapidly expanding generative AI technologies such as OpenAI’s ChatGPT and Stable Diffusion, which have taken the world by storm in just a few months, drawing user fascination and controversy in equal measure.

Within a few weeks of launch, ChatGPT is reported to have gained a vast user base that continues to grow to date, especially with the release of GPT-4, OpenAI’s multimodal large language model.

Exploring The AI Act – What Is The Scope?

The AI Act is expansive and has been designed to govern any individual or entity providing a product or service that uses emerging AI technologies. Reuters states that it will provide oversight of all systems that produce output in the form of content, predictions, recommendations, or decisions likely to influence environments.

In addition to governing the use of AI by companies and businesses, the Act will cover the use of AI in the public sector and in law enforcement. It will not operate alone but hand in hand with existing laws like the General Data Protection Regulation (GDPR).

AI systems that engage with people, conduct surveillance, or can create “deepfake” content are subject to strict transparency requirements.

What Do Lawmakers Consider As ‘High-Risk’?

Some AI tools have been categorized as high-risk, including systems used in critical infrastructure, education, migration, product safety, the administration of justice, and law enforcement. Since they sit a level below the “unacceptable” category, they will not be banned.

Individuals or organizations utilizing high-risk artificial intelligence (AI) may be required to undergo thorough risk evaluations, maintain a record of their activities, and grant access to data for regular inspections. This may result in an escalation in regulatory compliance costs for businesses.

What Is A General Purpose AI System?

A General-Purpose AI System (GPAIS) is a category proposed by European Union legislators to cover AI tools with more than one application, for example, OpenAI’s language models such as those behind ChatGPT.

Lawmakers in the EU are still deliberating whether to classify all forms of GPAIS as high-risk. If they do, it is still unknown how the rule would affect tech companies looking to integrate AI into their products. The Act is not clear about the obligations AI system manufacturers would need to abide by.

Companies whose AI tools are likely to fall under the GPAIS classification have roundly rejected the move, urging lawmakers to instead trust their in-house guidelines, which they claim are robust enough to ensure safe deployment and use.

The companies are lobbying for an opt-in clause that would leave the decision to comply with the regulations for this category solely with the organizations themselves.

What Happens If A Company Disregards The Law?

Breaking the AI law would cost companies up to 30 million euros in fines or 6% of their global annual turnover, whichever is higher.

For a tech company like Microsoft, which backs OpenAI, the fines could rally to the tune of $10 billion if found in violation of the law.
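As a rough illustration of how the penalty formula works (the turnover figures below are hypothetical, chosen only to show when each branch of the rule dominates):

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Return the maximum fine under the draft AI Act:
    the higher of EUR 30 million or 6% of global annual turnover."""
    FLAT_CAP_EUR = 30_000_000
    TURNOVER_SHARE = 0.06
    return max(FLAT_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# A smaller firm with EUR 100M turnover: 6% is only EUR 6M,
# so the EUR 30M flat figure applies.
print(max_ai_act_fine(100_000_000))       # 30000000

# A large firm with EUR 200B turnover: 6% dominates at EUR 12B.
print(max_ai_act_fine(200_000_000_000))   # 12000000000.0
```

The second case shows why the percentage clause is what worries Big Tech: for companies of Microsoft's scale, the 6% branch far exceeds the flat cap, which is consistent with the roughly $10 billion exposure cited above.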

For that reason, Big Tech firms are currently involved in extensive lobbying, especially those that have already invested billions of dollars in disruptive AI technologies and products.

They are fighting to keep their innovations outside the purview of the high-risk classification, which would result in higher compliance costs and greater accountability for their products, the unnamed sources told Reuters.

According to a survey conducted by appliedAI, over half of the participants predict a deceleration in the progress of artificial intelligence (AI) initiatives following the implementation of the AI Act.

Should Companies In AI Participate In the Lawmaking Process?

DeepMind, an AI company owned by Google that is currently trialing its AI chatbot Sparrow, told Reuters in a statement that rules governing multi-purpose systems are complex and that there “needs to be an inclusive process” allowing all affected parties to participate.

“We believe the creation of a governance framework around GPAIS needs to be an inclusive process, which means all affected communities and civil society should be involved,” Alexandra Belias, DeepMind’s head of international policy, said.

She argued that this is the most suitable time to come up with a regulatory framework that will stand the test of time and “still be adequate tomorrow.”

Spotify, an audio streaming platform, recently released “AI DJ,” an artificial intelligence tool that delivers a curated, personalized playlist to listeners. Daniel Ek, Spotify’s CEO, believes that AI technology is a “double-edged sword” but that, with close collaboration with regulators, it could benefit more people and still be safe.

“There are lots of things that we have to take into account. Our team is working very actively with regulators, trying to make sure that this technology benefits as many as possible and is as safe as possible,” Ek said.

Lawmakers in the EU have told stakeholders in the AI industry that the Act, although a hot topic at the moment, would be subject to regular reviews, allowing for updates when new issues arise.

“Discussions must not be rushed, and compromises must not be made just so the file can be closed before the end of the year,” Leufer told Reuters. “People’s rights are at stake.”

For now, the race is against time for the lawmakers who are hoping to make significant strides before the European elections in 2024.
