
How white engineers built racist code – and why it's dangerous for black people
As facial recognition tools play a bigger role in fighting crime, inbuilt racial biases raise troubling
questions about the systems that create them

Ali Breland
Mon 4 Dec 2017 04.00 EST

“You good?” a man asked two narcotics detectives late in the summer of 2015.

The detectives had just finished an undercover drug deal in Brentwood, a predominately black neighborhood in Jacksonville, Florida, that is among the poorest in the country, when the man unexpectedly approached them. One of the detectives responded that he was looking for $50 worth of “hard” – slang for crack cocaine. The man disappeared into a nearby apartment and came back out to fulfill the detective’s request, swapping the drugs for money.

“You see me around, my name is Midnight,” the dealer said as he left.

Before Midnight departed, one of the detectives was able to take several photos of him, discreetly
snapping pictures with his phone held to his ear as though he were taking a call.

Two weeks later, police wanted to make the arrest. The only information they had about the dealer was the smartphone pictures, the address where the exchange had taken place, and the
nickname Midnight. Stumped, the Jacksonville sheriff’s office turned to a new tool to help them
track down the dealer: facial recognition software.

The technology helped them pin down a suspect named Willie Lynch. Lynch, who has been described by close observers of the case, such as the Georgetown University researcher Clare Garvie, as a “highly intelligent, highly motivated individual” despite having only graduated from high school – he even filed his own case motions, which could be mistaken for ones written by an actual lawyer – was eventually convicted and sentenced to eight years in prison. He is now appealing his conviction.

Whether or not Willie Lynch is “Midnight” remains to be seen. But many experts see the facial recognition technology used against him as flawed, especially when applied to black individuals.
Moreover, the way the Jacksonville sheriff’s office used the technology – as the basis for
identifying and arresting Lynch, not as one component of a case supported by firmer evidence –
makes his conviction even more questionable.

The methods used to convict Lynch weren’t made clear during his court case. The Jacksonville
sheriff’s office initially didn’t even disclose that they had used facial recognition software.
Instead, they claimed to have used a mugshot database to identify Lynch on the basis of a single
photo that the detectives had taken the night of the exchange.

An ‘imperfect biometric’
The lack of answers the Jacksonville sheriff’s office have provided in Lynch’s case is
representative of the problems that facial recognition poses across the country. “It’s considered
an imperfect biometric,” said Garvie, who in 2016 created a study of facial recognition software called The Perpetual Line-Up, published by the Center on Privacy and Technology at Georgetown Law. “There’s no consensus in the scientific community that it provides a positive
identification of somebody.”

The software, which has taken an expanding role among law enforcement agencies in the US over
the last several years, has been mired in controversy because of its effect on people of color.
Experts fear that the new technology may actually be hurting the communities that the police claim they are trying to protect.

“If you’re black, you’re more likely to be subjected to this technology and the technology is more
likely to be wrong,” House oversight committee ranking member Elijah Cummings said in a
congressional hearing on law enforcement’s use of facial recognition software in March 2017.
“That’s a hell of a combination.”

Cummings was referring to studies such as Garvie’s. Her report found that black individuals – as with so many other aspects of the justice system – were the most likely to be scrutinized by facial recognition software. It also suggested that the software was most likely to be incorrect when used on black individuals – a finding corroborated by the FBI’s own research. This combination, which is making Lynch’s and other black Americans’ lives excruciatingly difficult, is born from
another race issue that has become a subject of national discourse: the lack of diversity in the
technology sector.

Algorithms are usually written by white engineers who dominate the technology sector. Photograph: Dominic Lipinski/PA

Racialized code
Experts such as Joy Buolamwini, a researcher at the MIT Media Lab, think that facial recognition
software has problems recognizing black faces because its algorithms are usually written by white
engineers who dominate the technology sector. These engineers build on pre-existing code
libraries, typically written by other white engineers.

As the coder constructs the algorithms, they focus on facial features that may be more visible in one race than in another. These considerations can stem from previous research on facial
recognition techniques and practices, which may have its own biases, or the engineer’s own
experiences and understanding. The code that results is geared to focus on white faces, and
mostly tested on white subjects.

And even though the software is built to get smarter and more accurate with machine learning
techniques, the training data sets it uses are often composed of white faces. The code “learns” by
looking at more white people – which doesn’t help it improve with a diverse array of races.
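
That training-data effect can be sketched in miniature. The toy example below is purely illustrative – plain NumPy, not any vendor’s actual code – and every group name, distribution and number in it is an assumption invented for the demonstration. It fits a single-threshold “detector” to data that is 90% one group, then measures accuracy for each group separately.

```python
# Hypothetical sketch: a skewed training set yields skewed per-group accuracy.
# Groups "A" and "B", their feature distributions and the 90/10 split are all
# invented for illustration; no real faces or real software are involved.
import numpy as np

rng = np.random.default_rng(0)

def sample(group, n):
    # One made-up 1-D "feature" per example. Faces from group A sit further
    # from the background than faces from group B, so the feature suits A better.
    centre = {"A": 2.0, "B": 0.8}[group]
    faces = rng.normal(centre, 1.0, n)        # positive examples (faces)
    background = rng.normal(-1.0, 1.0, n)     # negative examples (not faces)
    X = np.concatenate([faces, background])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

# Training set: 90% group A, 10% group B.
Xa, ya = sample("A", 900)
Xb, yb = sample("B", 100)
X_train = np.concatenate([Xa, Xb])
y_train = np.concatenate([ya, yb])

# "Learning": pick the threshold that maximises accuracy on the training data.
candidates = np.linspace(X_train.min(), X_train.max(), 500)
threshold = candidates[int(np.argmax(
    [np.mean((X_train > t) == y_train) for t in candidates]))]

# Evaluate on balanced held-out data from each group.
for group in ("A", "B"):
    X_test, y_test = sample(group, 1000)
    accuracy = np.mean((X_test > threshold) == y_test)
    print(f"group {group}: accuracy {accuracy:.2f}")
# Group A scores around 0.9; group B scores noticeably lower, because the
# threshold was fitted almost entirely to group A examples and the chosen
# feature separates group B less cleanly in the first place.
```

In this toy setup, rebalancing the training mix shifts the learned threshold toward group B and narrows the gap – which is the intuition behind calls for more representative training data and for auditing the data sets commercial systems actually use.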

Technology spaces aren’t exclusively white, however. Asians and south Asians tend to be well
represented. But this may not widen the pool of diversity enough to fix the problem. Research in
the field certainly suggests that the status quo simply isn’t working for all people of color –
especially for groups that remain underrepresented in technology. According to a 2011 study by
the National Institute of Standards and Technology (Nist), facial recognition software is actually
more accurate on Asian faces when it’s created by firms in Asian countries, suggesting that who
makes the software strongly affects how it works.

In a TEDx lecture, Buolamwini, who is black, recalled several moments throughout her career
when facial recognition software didn’t notice her. “The demo worked on everybody until it got
to me, and you can probably guess it. It couldn’t detect my face,” she said.

Unregulated algorithms
Even as the use of facial recognition software increases in law enforcement agencies across the
country, the deeper analysis that experts are demanding isn’t happening.

Law enforcement agencies often don’t review their software to check for baked-in racial bias –
and there aren’t laws or regulations forcing them to. In some cases, like Lynch’s, law enforcement
agencies are even obscuring the fact that they’re using such software.

Garvie said she is confident that police are using facial recognition software more than they let on – a practice she referred to as “evidence laundering”. This is problematic because it obscures just
how much of a role facial recognition software plays in law enforcement. Both legal advocates
and facial recognition software companies themselves say that the technology should only supply
a portion of the case – not evidence that can lead to an arrest.

“Upon review, all facial recognition matches should be treated no differently than someone
calling in a possible lead from a dedicated tip line,” writes Roger Rodriguez, an employee at facial
recognition vendor Vigilant Solutions, in a post defending the software. “The onus still falls on
the investigator in an agency to independently establish probable cause to effect an arrest,” he
continues – probable cause that “must be met by other investigatory means”.

Even if facial recognition software is used correctly, however, the technology has significant
underlying flaws. The firms creating the software are not held to specific requirements on racial bias, and in many cases they don’t even test for it.

One company I spoke to, CyberExtruder, a facial recognition technology company that markets
itself to law enforcement, also said that they had not performed testing or research on bias in
their software. Vigilant Solutions declined to say whether or not they tested for it. CyberExtruder
did note that certain skin colors are simply harder for the software to handle given current
limitations of the technology. “Just as individuals with very dark skin are hard to identify with
high significance via facial recognition, individuals with very pale skin are the same,” said Blake
Senftner, a senior software engineer at CyberExtruder.

As for Lynch, his case is currently playing out in the Florida courts. And the clock can’t be turned back for others like him, who may have been unfairly tried as a result of less-than-perfect software and the absence of transparent standards for its use.

Facial recognition software raises many questions that need clear answers. Obtaining those
answers will take more than commissioning studies, as vital as they are. It’s also essential that
laws catch up with the technology, in order to provide people like Lynch with the opportunity to
know the tools that are being used against them. Most importantly, we need to take a closer look
at who’s making these algorithms, and how they’re doing it.

Ali Breland is a reporter at The Hill, where he focuses on the intersection of technology and politics. A
longer version of this piece appears in the upcoming “Justice” issue of Logic, a magazine about
technology. Visit logicmag.io to learn more.
