Last week’s post examined the development and use of facial recognition software (FRS) as a tool for discriminatorily surveilling Black persons. The post explored the history of government surveillance and racist laws used to monitor BIPOC communities, the biases present in current FRS programs, and the implications of using FRS as a law enforcement tool. This week’s post considers the ways FRS is shaping current events, looking first at its use to monitor bar examinees and then at police surveillance of Black Lives Matter protesters.
Bar Exam Racial Bias
As much of American professional and educational life has gone virtual due to COVID-19, so too has the bar examination. In many states, the company ExamSoft has been the platform of choice for administering the remote bar exam. For test takers of color, however, FRS has caused many complications. The October exam locked many test takers out because the software failed to recognize their faces.
Leading up to the February bar exam, many test takers encountered the same problems. During practice exams, a number of students received error messages stating that, “due to poor lighting,” the software was unable to identify their faces. Students shared in interviews that even after switching rooms and trying different lighting, they were unable to take the practice exams. One workaround students use is to shine a light on their faces during the exam. While this is one way to trick the facial recognition software into “seeing” them, the method presents its own challenges for test performance.
Kiana Caton, a Black student in California, shared her test taking method with VentureBeat.
Following challenges with the October bar, a number of civil rights groups, including the Lawyers’ Committee for Civil Rights Under Law, the ACLU of California, and United for Diploma Privilege, threatened to sue the California State Bar if it did not discontinue the use of FRS in administering the online exams. In response, ExamSoft doubled down on its claim that the program does not discriminate by race. In a letter to U.S. Senators who had expressed concern, ExamSoft’s CEO adamantly denied problems with its use of FRS and stated that a human always reviews flagged users, which, he claimed, removes any bias.
Student Alivardi Khan tweets about their experience with ExamSoft.
Despite pleas from organizations interested in protecting the rights of disenfranchised groups, the California Bar proceeded with the remote, FRS-monitored administration via ExamSoft for the February exam. A letter from the bar association alleges the civil rights groups’ claims “fail to make a case that the technology is discriminatory” and only discuss the technology generally. Pilar Escontrias, co-founder of United for Diploma Privilege, explains in an interview with Bloomberg Law that the use of FRS in administering the bar exam affects thousands of students. FRS not only impacts students of color but also disproportionately disadvantages cisgender women and transgender people. The software continues to frustrate the many students preparing to take the bar exam. Areeb Khan, who sat for the New York bar exam just a few weeks ago, illustrated why the use of FRS as part of the bar exam is problematic: “There are so many systemic barriers preventing people like me from obtaining these degrees—and this is just another example of that.”
Bar associations and legal institutions around the country must remove the barriers applicants of color face in order to create welcoming, inclusive environments and to develop opportunities for applicants to enter the legal field. For decades, the legal system, including law schools, has promoted a facade of care and concern for issues that disproportionately impact Americans outside the “norm.” Today, law students ask for more. It is no longer enough to be invited but not made welcome, to use acknowledgments as placeholders for tangible action, or to use controversial cases as a substitute for crucial conversation. Moving forward, bar associations and law schools have a duty to ensure the inclusion of BIPOC voices in the legal system.
Surveillance of Black Dissent
Another disturbing use of FRS is its deployment to surveil protesters peaceably demonstrating for more just policing. Across the nation, government agencies and local law enforcement have been monitoring the activities of thousands of protesters. Some jurisdictions began using FRS for surveillance following the death of Eric Garner in 2014, while others have continued a practice dating back decades, to when J. Edgar Hoover led the FBI and targeted surveillance efforts at Black intellectuals. The FBI even began labeling protesters “Black Identity Extremists” as a way to rationalize its surveillance initiatives.
Illustration by Aïda Amer for Axios.
Some civil rights organizations are concerned that law enforcement agencies are not only keeping tabs on protesters in real time but also using FRS to comb through social media posts and compile databases. The fear is that these posts could later be weaponized by police against demonstrators.
Increasingly, people are discovering they have been watched by police. Just last week, the ACLU of Nebraska released public records it had requested demonstrating that the Omaha Police Department was tracking Black community members based simply on their participation in peaceful protests. The released emails appear to show an intent to monitor protesters based on their beliefs about police reform and their presence at events, rather than a genuine concern about safety or criminal conduct. For organizations that seek to protect the rights of protesters, the concern thus becomes the true purpose of this mass surveillance and its impact on public trust in police. The legal director for the Nebraska ACLU, Adam Sipple, states, “Unnecessary, biased surveillance damages public trust and our shared public safety goals—especially among communities that suffer the most from police misconduct and over-policing.” The city attorney and police department deny the allegations of improper surveillance.
While the details are still unfolding in Nebraska, cities like Boston and Memphis have a more established history of law enforcement spying via social media. Following the death of Freddie Gray in Baltimore in 2015, police used FRS to compare images of protesters from police body camera footage with social media profiles. More recently, following the murder of George Floyd, which sparked national outrage and protests in a number of U.S. cities, there is evidence of police using drones to record footage sent in real time to federal officers off-site. While some officials state this footage is not recorded at a distance close enough to be used for facial recognition, others contend there is still the possibility of abuse. Much of the recorded footage was sent to a Department of Homeland Security network called “Big Pipe,” which stores the data for at least five years and can be accessed by both federal and local law enforcement agencies.
Photo of murals painted in honor of James Scurlock in Omaha’s Old Market captured by Noise.
As mass surveillance rages on, many activists are concerned with protecting their First Amendment rights and are calling on tech companies to play their part in ending law enforcement’s use of FRS. Unfortunately, large tech companies have a long history of facilitating racism and discrimination around the globe. More importantly, many believe the response must be holistic, coming not only from those providing the technology but from those misusing it as well.
Let’s Talk Solutions!
To start, some good news. In 2019, a number of states, including California, New Hampshire, and Oregon, banned the use of FRS in police body cameras. In 2020, at least 19 states considered legislation to limit law enforcement’s use of biometric data. In June of last year, Amazon* announced a pause on police use of its FRS. The company stated it hoped this timeframe would give Congress the space to develop the regulations called for by civil rights activists. This past December, the governor** of New York signed legislation pausing the use of FRS in the state until 2022. These bans help give state actors the time necessary to develop an in-depth plan for moving forward. Thankfully, Congress seems interested in establishing parameters for law enforcement’s use of FRS. MIT researcher Joy Buolamwini testified about her work before the House of Representatives and was surprised to find members of the committee felt there was “support and agreement across both sides of the aisle” to create regulatory oversight of the technology. Could it be because Amazon’s FRS mismatched 28 members of Congress with mugshots from police databases?
The reality is this nation is unlikely to move away from using technology as a law enforcement tool, so how can FRS be made more accurate and less biased? The first step is to build better data sets: programs and algorithms can “train” on databases of more diverse images, inclusive of all racial backgrounds. Second, developing racial literacy within tech companies is paramount to acknowledging the racial discrimination and bias in the software itself. Previous arguments that “AI can’t be biased” no longer hold up. Lastly, don’t be like ExamSoft. It’s high time tech companies put people before profits and took accountability for their products. Pausing sales and use of FRS is a start, but companies must pursue better, longer-term solutions.
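To make the data-set point concrete, bias in FRS is typically exposed by comparing error rates across demographic groups, the kind of disaggregated evaluation Buolamwini’s research popularized. The sketch below is purely illustrative: the group labels and numbers are invented for demonstration, not real audit data.

```python
# Hypothetical audit sketch: compare how often an FRS system fails to
# recognize test takers, broken down by demographic group. A system that
# is "accurate overall" can still fail one group far more often.
from collections import defaultdict

def failure_rates(attempts):
    """attempts: iterable of (group, recognized) pairs.
    Returns {group: fraction of attempts the software failed to recognize}."""
    totals = defaultdict(int)
    failures = defaultdict(int)
    for group, recognized in attempts:
        totals[group] += 1
        if not recognized:
            failures[group] += 1
    return {g: failures[g] / totals[g] for g in totals}

# Invented sample: group A fails 5 of 100 attempts, group B fails 20 of 100.
sample = ([("A", True)] * 95 + [("A", False)] * 5 +
          [("B", True)] * 80 + [("B", False)] * 20)
print(failure_rates(sample))  # group B is locked out four times as often
```

An aggregate accuracy figure for this sample (87.5%) hides the disparity entirely, which is why auditors insist on per-group reporting before FRS is deployed in high-stakes settings like bar exams.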
The uncertain future of FRS should concern Americans. The unsettling truth is best summed up by the Technology for Liberty Program’s Kade Crockford: “face surveillance is dangerous when it works and when it doesn’t.”
**Likewise, the inclusion of New York in this article should not be overshadowed by Cuomo’s alleged wrongdoings.