Experts caution that artificial intelligence (AI) systems absorb prejudiced inclinations from their training data, leading machines to mirror human biases. This concern grows more pressing as AI is adopted more widely, raising the risk of entrenched racial bias.
A BuzzFeed writer used Midjourney, an AI image generator, to produce Barbie doll representations from different countries. The results drew strong disapproval: the German Barbie was depicted in a Nazi SS uniform, the South Sudanese Barbie was portrayed holding a firearm, and the Lebanese Barbie was shown standing on “top of the rubble.”
While this incident may seem relatively minor, it points to the potential for deeper and more far-reaching consequences as AI is applied to real-world scenarios. Nor is it the first time AI has been accused of exhibiting bias.
Racial bias long before Midjourney
More recently, Google’s Cloud Vision API wrongly labeled an image of a dark-skinned hand holding a thermometer as carrying a “firearm,” while the same object in a lighter-skinned hand was identified as an “electronic device.”
In 2009, Nikon’s facial recognition software mistakenly asked Asian users whether they were blinking. Then, in 2016, an AI tool used by U.S. courts to assess the likelihood of reoffending wrongly flagged black defendants at nearly twice the rate (45%) of white defendants (23%), according to an analysis by ProPublica.
AI’s inclination toward racial bias has prompted the UK Information Commissioner’s Office (ICO) to launch an investigation, citing concerns about the harm such systems could inflict on people’s lives.