Facial Recognition Technology: Echoing Historical Racism
Facial recognition and artificial intelligence (AI) have become prominent themes in science fiction and dystopian narratives. For instance, in the 2002 film Minority Report, billboards utilize facial recognition to identify passersby, making it nearly impossible for the protagonist to remain undetected. Similarly, 2001: A Space Odyssey features an AI capable of discerning complex emotions and intentions.
Currently, AI-driven facial recognition is widely implemented across the globe, being utilized by companies, educational institutions, and law enforcement agencies. Marketed as a tool for interpreting emotions through facial expressions and body language, this technology has achieved a multi-billion dollar market presence. However, the algorithms behind this technology are fundamentally flawed and racially biased.
This article will examine the historically racist underpinnings of facial recognition technology, shedding light on how contemporary practices reflect the prejudices of the 1800s, albeit cloaked in the allure of advanced technology and big data.
The Racist Roots of Facial Recognition
Skull Shape and Morality
In the 1800s, Franz Joseph Gall became well-known for his controversial theories regarding the mind. Gall posited that the contours and bumps of the skull could reveal significant insights about an individual's character. This discredited theory, known as phrenology, linked skull shapes to personality traits and moral standing.
Phrenology surged in popularity in the United States during the 1830s and 1840s, particularly in response to the growing abolitionist movement, as it was used to rationalize slavery and the mistreatment of Native Americans. Physician Charles Caldwell asserted that skull measurements indicated mental inferiority among Africans, claiming they were "tameable" and required "a master."
Similarly, Samuel Morton applied a comparable rationale to argue that Native Americans were also inferior. These erroneous scientific beliefs served as justifications for Andrew Jackson’s oppressive colonial policies towards Native populations. It is crucial to understand that there are no inherent differences among the skulls of various racial groups.
Facial Features and Criminality
The notion that facial characteristics could indicate criminality has its roots in pseudoscientific beliefs dating back to ancient civilizations, including Mesopotamia and Greece. This practice, referred to as physiognomy, held that a person's character and disposition could be read from their facial features.
In the 19th century, Cesare Lombroso began to analyze facial features to identify criminals, claiming that specific facial structures could reveal a predisposition to criminal behavior. Lombroso argued:
> “Thus were explained anatomically the enormous jaws, high cheek bones, prominent superciliary arches, solitary lines in the palms...”
This perspective inherently supports a primitive form of eugenics, suggesting that some individuals are born evil and cannot be rehabilitated, echoing the flawed skull-measuring ideas of phrenology.
Despite lacking any scientific foundation, these beliefs have persisted into modern times, and the internet has given these erroneous concepts new reach.
Facial Recognition in the Present Day
Facial recognition technology is routinely employed by governments and corporations globally. Unfortunately, racism and bias often masquerade as objectivity. Without credible scientific support, facial recognition algorithms are claimed to assess everything from sexuality to criminality based on facial features.
AI and the Detection of Sexual Orientation
In a controversial study, Michal Kosinski developed a deep learning algorithm that, he claimed, could identify sexual orientation from facial features. The algorithm was trained on a database of images from a dating site and later applied to Facebook photos to assess sexual orientation.
While the algorithm performed relatively well at flagging gay individuals, it also misclassified many straight individuals as gay, drawing significant criticism of the study's methodology and conclusions. The study also relied on the contentious assumption that prenatal testosterone levels influence sexual orientation, a premise that has faced substantial scrutiny from experts.
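Part of the problem is simple arithmetic: when the trait being predicted is rare in the population, even a classifier that looks accurate on a balanced test set will flag far more false positives than true positives. The sketch below is a minimal illustration of that base-rate effect, using hypothetical numbers rather than figures from the study itself.

```python
# A minimal sketch (hypothetical numbers, not figures from the Kosinski study)
# showing why a classifier that looks accurate on a balanced test set still
# mislabels many people when the trait it predicts is rare in the population.

def false_discovery_rate(sensitivity, specificity, prevalence):
    """Fraction of positive predictions that are actually wrong."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return false_positives / (true_positives + false_positives)

# Assume a classifier with 80% sensitivity and 80% specificity,
# applied to a population where roughly 7% of people are gay.
rate = false_discovery_rate(sensitivity=0.80, specificity=0.80, prevalence=0.07)
print(f"{rate:.0%} of people flagged by the model would be straight")  # roughly 77%
```

In other words, even under generous assumptions, most of the people such a system labels would be labeled incorrectly.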
AI and Law Enforcement
In 2016, researchers sought to utilize algorithms to detect criminality through facial features. Authors Xiaolin Wu and Xi Zhang contended that computers could discern cues that human observers might miss, claiming a 90% accuracy rate in identifying criminals from headshots.
However, the dataset used in this study was flawed: the non-criminal images were sourced from promotional or professional contexts, while the criminal images came exclusively from convicted individuals. This methodology ignores the possibility of wrongful convictions and may inadvertently capture superficial traits, such as attractiveness or expression, that correlate with where the photo came from rather than with criminality.
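A toy simulation makes the danger concrete. In the sketch below (synthetic data, not the Wu and Zhang dataset), the only informative feature is a photo-source artifact, yet a standard classifier still reports high accuracy, because it has learned where the photos came from, not anything about the people in them.

```python
# A toy simulation (synthetic data, not the Wu & Zhang dataset) illustrating how
# a classifier can hit high "accuracy" by learning the photo source rather than
# anything about criminality: promotional headshots tend to show smiles,
# mugshots tend not to.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

labels = rng.integers(0, 2, n)                # 0 = "non-criminal" set, 1 = "criminal" set
smile_intensity = np.where(labels == 0,
                           rng.normal(0.8, 0.2, n),   # promotional photos: mostly smiling
                           rng.normal(0.2, 0.2, n))   # mugshots: mostly neutral
noise = rng.normal(0, 1, (n, 5))              # features unrelated to either label
X = np.column_stack([smile_intensity, noise])

model = LogisticRegression().fit(X[:1500], labels[:1500])
print("held-out accuracy:", model.score(X[1500:], labels[1500:]))
# The score is high, but the model has only learned the photo-source artifact.
```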
Facial recognition technology has been controversial in law enforcement, as illustrated by a 2019 scandal in which ICE ran facial recognition searches on driver's license photos without consent in order to locate undocumented immigrants. The technology itself has been repeatedly criticized for its racial and gender biases.
AI and Emotion Recognition
Many organizations and governments are betting on AI's ability to accurately detect human emotions. However, numerous studies have shown that facial expressions do not reliably reflect internal emotional states. Research indicates that AI systems are more prone to identify negative emotions in Black men compared to their white counterparts, and some algorithms struggle to recognize Black faces altogether.
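Disparities like these are typically surfaced by auditing a model's error rates per demographic group. The sketch below (hypothetical data and column names) shows the basic calculation: among faces that are not truly angry, how often does the model call them angry, broken down by group.

```python
# A minimal auditing sketch (hypothetical data and column names) showing how a
# disparity like "negative emotions are detected more often in Black men" would
# be measured: compare the false positive rate for an "angry" prediction
# across demographic groups on a labeled evaluation set.

import pandas as pd

df = pd.DataFrame({
    "true_emotion": ["neutral", "neutral", "happy", "neutral", "angry", "neutral"],
    "predicted":    ["angry",   "neutral", "happy", "angry",   "angry", "neutral"],
    "group":        ["black",   "white",   "black", "black",   "white", "white"],
})

# False positive rate for "angry": of faces that are NOT truly angry,
# how often does the model predict anger, per group?
not_angry = df[df["true_emotion"] != "angry"]
fpr_by_group = (not_angry["predicted"] == "angry").groupby(not_angry["group"]).mean()
print(fpr_by_group)  # a large gap between groups signals the kind of bias described above
```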
In an ideal scenario, society would acknowledge the flaws in these AI systems and cease their use. Regrettably, facial recognition technology continues to be employed for various applications, including emotion detection in vehicles, monitoring academic integrity during online learning, assessing job suitability, and identifying potential threats.
The underlying racism present in these technologies is obscured by claims of objectivity. AI systems frequently fail to accurately assess Black faces or their emotional expressions, and many algorithms disregard cultural differences in facial cues.
Proctoring software is now commonplace in educational settings, while emotion recognition technology is being trialed on Uyghurs in China’s Xinjiang province.
Despite the multi-billion dollar industry surrounding AI facial recognition, its foundation rests on discredited pseudoscience that perpetuates racial and gender biases without any credible empirical support.