Eight years after Google Photos’ artificial intelligence (AI) mislabeled Black people as gorillas and the company removed the label, the app still will not identify gorillas, or any other primate, when shown an image of one, despite the massive advances in the technology since then.

Unresolved AI Mistakes

In May 2015, Google launched Google Photos, which could analyze photographs to identify the people, locations, and things in them. The classification feature intrigued users because it was one of the few of its kind available to consumers at the time.

However, a few months later, software developer Jacky Alciné discovered that the app’s AI had mistakenly tagged images of him and a Black friend as “gorillas,” a label that is particularly hurtful because it evokes centuries-old racist stereotypes.

When Alciné tweeted at the tech giant about the issue, Google apologized, saying it was “appalled and genuinely sorry” about the mistake. It promised to fix the problem, explaining that the model had been trained on too few images of Black people, which caused the error.

In response to the error, Google removed the gorilla tag as a temporary fix. Yonatan Zunger, the company’s chief architect of social at the time, said Google was “working on longer-term fixes around both linguistics — words to be careful about in photos of people — and image recognition itself — e.g. better recognition of dark-skinned faces.”
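Conceptually, this kind of stopgap can be implemented as a post-processing blocklist on a classifier’s output. The sketch below is purely illustrative; the label names and the prediction format are assumptions, not Google’s actual code:

```python
# Hypothetical sketch: suppressing sensitive labels after classification.
# BLOCKED_LABELS and the (label, confidence) prediction format are
# illustrative assumptions, not Google's actual implementation.

BLOCKED_LABELS = {"gorilla", "chimpanzee", "monkey", "ape", "baboon", "orangutan"}

def safe_labels(predictions, min_confidence=0.5):
    """Drop blocked labels entirely rather than risk a harmful mislabel.

    `predictions` is assumed to be a list of (label, confidence) pairs
    produced by an image classifier.
    """
    return [
        (label, conf)
        for label, conf in predictions
        if conf >= min_confidence and label.lower() not in BLOCKED_LABELS
    ]

# Even a correct, high-confidence primate prediction is removed.
print(safe_labels([("gorilla", 0.97), ("tree", 0.88)]))  # [('tree', 0.88)]
```

The trade-off is visible in the last line: the filter prevents the offensive mislabel but also discards correct answers, which is exactly the behavior the later experiments observed.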

A recent experiment by The New York Times, which tested the app on a variety of animal images, showed that it still refuses to label gorillas or other primates, including baboons, chimpanzees, orangutans, and monkeys.

Extending the experiment to similar services revealed that several other tech giants, including Apple, Amazon, and Microsoft (in OneDrive), have disabled the tag in their own products, potentially out of fear of making the same mistake.

While customers may rarely need to run this type of search, the problem raises more serious concerns about other unresolved, or unfixable, flaws in computer-vision services and other AI-powered products.

Trust at Stake

Despite the differences between computer vision and natural language processing models such as chatbots, both types of AI rest on the same principle: the data used to train a model shapes its output, including any biases it picks up.
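The effect is easy to demonstrate on synthetic data: a classifier trained on a heavily imbalanced dataset learns to favor the overrepresented class. The following is a minimal sketch; the data and the 50-to-1 ratio are invented for illustration:

```python
# Hypothetical sketch: training-data imbalance skewing a model's output.
# Purely synthetic illustration; all numbers are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two overlapping classes, but class A has 50x more training examples.
class_a = rng.normal(loc=0.0, scale=1.0, size=(5000, 2))
class_b = rng.normal(loc=1.0, scale=1.0, size=(100, 2))

X = np.vstack([class_a, class_b])
y = np.array([0] * len(class_a) + [1] * len(class_b))

model = LogisticRegression().fit(X, y)

# An ambiguous point halfway between the two classes: the model leans
# heavily toward the overrepresented class, mirroring how groups that
# are underrepresented in training data get misclassified more often.
print(model.predict_proba([[0.5, 0.5]]))
```

The ambiguous point sits equidistant from both class centers, yet the model assigns it overwhelmingly to the majority class, which is the same mechanism Google cited when it blamed the gorilla error on too few training images of Black people.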

Similar to Google’s approach, Microsoft also opted to restrict customers’ access to the chatbot integrated into its search engine, Bing, after it started inappropriate conversations. This shows a typical practice among tech companies of blocking access to broken technology capabilities rather than fixing them.

Due to Google’s failure to fix the problem, Alcine said “I’m going to forever have no faith in this AI.” It is important to solve these issues otherwise how will users trust the software for other scenarios, said Vicente Ordóñez, a professor at Rice University who studies computer vision.

However, according to Google spokesman Michael Marconi, the company concluded that the benefit of the feature “does not outweigh the risk of harm,” and it has therefore also barred its camera app from classifying anything as a monkey or ape.

Marconi said that “our goal is to prevent these types of mistakes from ever happening,” adding that the company had improved its technology “by partnering with experts and diversifying our image datasets.”
