Racial Discrimination in Face Recognition Technology

The Automated Discrimination of Facial Recognition

Facial recognition technology is ubiquitous in modern life. We use it to unlock our iPhones, tag ourselves in photos on social media, and verify our identities at airports. But beyond these conveniences lies a darker side of facial recognition, where the technology is used for law enforcement surveillance, airport passenger screening, and even employment and housing decisions. Despite its widespread adoption, cities like Boston and San Francisco have recently banned its use by police and local agencies. The reason? Among the dominant biometrics (fingerprint, iris, palm, voice, and face), facial recognition is the least accurate and the most fraught with privacy concerns.

The Inherent Biases of Facial Recognition

Facial recognition technology is deeply flawed, especially for people of color, women, and nonbinary individuals. The “Gender Shades” study by Joy Buolamwini and Timnit Gebru of the MIT Media Lab found that commercial facial analysis systems misclassified gender with error rates as low as 0.8% for lighter-skinned men and as high as 34.7% for darker-skinned women. Further assessments by the National Institute of Standards and Technology (NIST) confirmed these disparities across 189 algorithms, revealing the poorest accuracy for women of color.
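The core of the Gender Shades methodology is straightforward to express in code: rather than reporting a single overall accuracy figure, evaluate the model separately for each demographic subgroup, since the aggregate number can mask enormous disparities. The following sketch (in Python, with entirely illustrative data; the group labels, records, and function name are hypothetical, not taken from the study itself) shows such a disaggregated evaluation:

```python
# A minimal sketch of a disaggregated accuracy audit in the spirit of
# "Gender Shades": compute error rates per demographic subgroup rather
# than one overall number. The records below are illustrative stand-ins,
# not real benchmark data.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical predictions from a gender-classification model:
sample = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # misclassification
    ("darker-skinned female", "female", "female"),
]

for group, rate in error_rates_by_group(sample).items():
    print(f"{group}: {rate:.1%} error rate")
```

A model that scores well overall can still fail one group at many times the rate of another, which is exactly the pattern the study documented.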

These biases have severe consequences. Law enforcement’s use of facial recognition technology has led to false arrests and detentions, disproportionately affecting Black and Brown communities. The case of Kylese Perryman in Minnesota, who was wrongfully arrested due to an incorrect facial identification, underscores these dangers. Furthermore, over 117 million American adults, nearly half of all adults in the United States, have their photos in facial recognition networks used by law enforcement, often without their consent or awareness. This lack of oversight exacerbates the technology’s racial biases.

The technology also enables indiscriminate surveillance, allowing authorities to track individuals without their consent. In Minnesota, law enforcement’s use of facial recognition operates with minimal oversight, infringing on personal privacy and freedom. ACLU-MN Policy Associate Munira Mohamed has likened this unregulated tracking to walking around with your driver’s license on display.

The technology’s deployment often targets vulnerable groups, such as immigrants and refugees. The Department of Homeland Security and its sub-agencies, Immigration and Customs Enforcement (ICE) and Customs and Border Protection, have used facial recognition to locate and arrest family members of unaccompanied migrant children, separating families and leaving children in detention. These cases demonstrate how facial recognition can be turned against marginalized communities.

Constitutional Rights Violations

Facial recognition also poses significant threats to constitutional rights. The First Amendment protects the right to protest, but the fear of being recorded and identified by facial recognition can deter individuals from exercising that right. The Fourth Amendment protects against unreasonable searches and seizures, yet faulty facial recognition has led to the wrongful arrest of innocent people, forcing them to prove their innocence against computer-generated accusations.

While facial recognition technology has beneficial uses, such as identifying missing persons or victims of disasters, its rapid development outpaces the creation of necessary legal protections. The disparity between technological advancement and legislative action leaves significant gaps in safeguarding individual rights. Advocacy groups, including the ACLU, are pushing for comprehensive legislation to regulate the use of facial recognition technology, especially by law enforcement.

Efforts to address these inequities include improving the accuracy of facial recognition algorithms through diverse and representative training datasets and ensuring consent for the inclusion of individuals in these datasets. Companies like IBM and Microsoft have begun taking steps to reduce bias by modifying their testing cohorts and data collection practices. Additionally, regular and ethical auditing of these technologies by independent sources is crucial for holding companies accountable.
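One concrete step in such an audit is checking whether the training data itself represents each group before any model is trained. The sketch below is a simplified illustration; the ten-percent tolerance and the group labels are assumptions for demonstration, not a standard drawn from any published auditing framework. It flags subgroups whose share of the dataset deviates substantially from parity:

```python
# A minimal sketch of a dataset-representation check, one small piece of
# the auditing practices described above. The threshold and labels are
# illustrative assumptions.
from collections import Counter

def check_representation(group_labels, tolerance=0.10):
    """Flag groups whose share deviates from an equal split by more than `tolerance`."""
    counts = Counter(group_labels)
    parity = 1 / len(counts)  # equal share per group
    flagged = {}
    for group, count in counts.items():
        share = count / len(group_labels)
        if abs(share - parity) > tolerance:
            flagged[group] = share
    return flagged

# A hypothetical dataset skewed 70/30 toward one subgroup:
labels = ["lighter-skinned male"] * 700 + ["darker-skinned female"] * 300
print(check_representation(labels))  # flags both groups as far from parity
```

Real audits go much further, covering consent, provenance, and downstream performance, but even a simple check like this makes skew visible before it is baked into a model.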

Building a More Equitable Future

Several legislative and voluntary efforts aim to monitor and regulate the use of facial recognition technology. For instance, the Safe Face Pledge calls on organizations to address bias in their technologies and evaluate how they are applied. The Algorithmic Accountability Act, introduced in 2019, would empower the Federal Trade Commission to regulate companies by mandating assessments of algorithmic training, accuracy, and data privacy. In response to the murder of George Floyd and the protests that followed, companies like IBM, Amazon, and Microsoft took significant steps to halt or limit the use of their facial recognition technologies by law enforcement until federal regulations are in place.

The movement for equitable facial recognition is intertwined with broader efforts to reform the criminal justice system. Addressing racial bias in facial recognition and its applications is essential to making these algorithms equitable. As facial recognition becomes further embedded in our daily lives, it is crucial to implement policies that protect our rights and ensure fairness for all individuals.
