A Racist Beauty Bot
Created by ‘Youth Laboratories’ in 2015, Beauty.AI was an AI system built to judge an international beauty contest. Participants took selfies on a standardized app and submitted the photos to the Beauty.AI website. Once all submissions were gathered, the system used facial recognition to rate each person’s beauty, comparing one photo against the others on attributes such as age, ethnicity, nationality, and wrinkles.
The contest attracted considerable excitement, receiving roughly 6,000 submissions from around the world. The results, however, raised a major red flag: of the 44 winners, only one had dark skin. The suggestion that few dark-skinned people had entered was dismissed, as many submissions came from India and Africa.
How did this happen?
Alex Zhavoronkov, Beauty.AI’s chief science officer, stated that the key issue was the training data the machine used to establish what counted as attractive. The dataset did not include enough minorities, and because the algorithm was trained to recognize patterns, the patterns for minority groups were weaker; the system ended up treating light skin as more beautiful. This points to a broader issue: human bias is embedded in the technology we create. When we feed computers datasets entrenched with our own prejudices, they extract insights from that basis, potentially perpetuating prejudicial beliefs and sparking conflict.
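The mechanism described above can be sketched in a few lines. This is a hypothetical toy model, not Beauty.AI’s actual algorithm: faces are reduced to a single made-up “skin brightness” feature, and a new face is scored by its similarity to the nearest training example that human raters labeled attractive. Because the training set contains almost no dark-skinned examples, dark-skinned faces score low even though the scoring rule itself never mentions skin tone.

```python
# Hypothetical sketch of a similarity-based "beauty" scorer trained on
# unrepresentative data. The feature, examples, and scoring rule are all
# illustrative assumptions, not Beauty.AI's real system.

# Training photos labeled "attractive" by human raters, represented only
# by a skin-brightness value in [0, 1]. Nearly all are light-skinned,
# mirroring a dataset short on minorities.
attractive_examples = [0.80, 0.85, 0.90, 0.95, 0.75, 0.88]

def beauty_score(brightness):
    """Score a new face by closeness to the nearest 'attractive' example.

    Faces unlike anything in the training set land far from every
    example and score low -- the bias is inherited from the data,
    not written into the rule itself.
    """
    nearest = min(abs(brightness - ex) for ex in attractive_examples)
    return round(1.0 - nearest, 2)

print(beauty_score(0.85))  # light-skinned face -> 1.0
print(beauty_score(0.25))  # dark-skinned face  -> 0.5
```

The point of the sketch is that nothing in `beauty_score` is explicitly racist; the disparity comes entirely from which faces were present in the training data, which is exactly the failure mode Zhavoronkov described.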
The same underlying problem has surfaced in other human-built programs: Microsoft’s chatbot Tay advocated for neo-Nazis and used racial slurs, online advertisements for high-profile jobs were shown mostly to men, and Google’s photo-tagging feature classified black people as gorillas.
Given the moral of Beauty.AI’s failure, it becomes clear that programmers should take great care not to embed their biases in their inventions. A completely neutral machine may not be possible, but measures can be taken to come close. Allowing the public to evaluate AI algorithms, or bringing in external firms to pinpoint biases that internal staff cannot see, are just a few of many techniques that can be employed. One day, if bias no longer acts as a barrier in AI, the capabilities of these machines will truly become limitless.
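One simple audit of the kind an external reviewer might run is to compare a model’s average scores across demographic groups and flag large gaps. The sketch below is illustrative only: the group names, scores, and the 10% tolerance are assumptions, not a standard from any real auditing firm.

```python
# Hypothetical sketch of a basic fairness audit: compare mean model
# scores per demographic group and flag any group that trails the
# best-scoring group by more than a chosen tolerance.

def disparity_report(scores_by_group, tolerance=0.10):
    """Return {group: gap} for groups whose mean score falls more
    than `tolerance` below the best group's mean score."""
    means = {g: sum(s) / len(s) for g, s in scores_by_group.items()}
    best = max(means.values())
    return {g: round(best - m, 2)
            for g, m in means.items()
            if best - m > tolerance}

# Made-up audit data: per-group scores emitted by some model under review.
audit_scores = {
    "light": [0.90, 0.80, 0.95, 0.85],
    "dark":  [0.50, 0.45, 0.60, 0.55],
}
print(disparity_report(audit_scores))  # {'dark': 0.35}
```

An audit like this cannot explain *why* the gap exists, but it turns a vague suspicion of bias into a concrete number that developers can be asked to investigate and reduce.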
Written by Amanda Y