How police manipulate facial recognition

January 17, 2020 By Peter Engel

– So, this is gonna sound weird, but a few years ago, the NYPD used facial recognition to catch a shoplifter. And they didn't even have a clear picture of his face. The clerk said the guy kind of looked like the actor Woody Harrelson. So they just pulled up a photo of Harrelson and put it in the system. It worked, they caught the guy, and it turns out he really did look like Woody Harrelson. But facial recognition systems were never built to be used this way. And incidents like this raise the question of whether police should have this technology at all.

It's not the only time something like this has happened. This past year, Georgetown University conducted a study of some of these use cases, and they're pretty wild. Some police departments have actually pasted in different facial features in an effort to get the system to produce a match. If the left eye is blocked in your picture, just paste in a new left eye. If the suspect's mouth is open, paste in a new mouth that's closed. These are delicate algorithms, but in most cases there are no strict rules for how police use them. And whatever the machine produces can be used as grounds for a police stop.

Okay, but before we get into that, let's talk about how facial
recognition really works. At its core, facial recognition is about tracking key facial landmarks from photo to photo: the distance between your pupils, the angle of your nose, the shape of your cheekbones, basically all the details of your face that make it distinctive. That works best from a straight-on photo with at least 80 pixels between the pupils. Think like a passport photo or a driver's license. But once you've got that basic pattern, sophisticated programs can recognize the same features at an angle. They can even work if part of your face is blocked, as long as there are two pupils and enough features to be sure.

Vendors like NEC, Morpho, and Cognitec pioneered these systems, selling their software to local and federal police forces. But in the past few years, Amazon and Google have been building it into their computing clouds too, which makes it a lot easier to get. With a couple hundred bucks and some coding skills, almost anyone can create a facial recognition system.

These programs work off accuracy thresholds. The tighter the match,
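The threshold idea is easy to sketch in code. This is a toy illustration, not any vendor's actual algorithm: the landmark measurements, the similarity formula, and the threshold numbers are all invented here, but it shows how the same comparison score can be a miss or a "hit" depending on where the bar is set.

```python
import math

def similarity(a, b):
    """Return a 0-1 score comparing two landmark vectors; 1.0 = identical."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return 1.0 / (1.0 + dist)

# Toy "face templates": a handful of landmark measurements, normalized by
# inter-pupil distance (real systems use far more features than this).
suspect_photo = [1.00, 0.42, 0.77, 0.31]   # e.g. pupil distance, nose angle...
lookalike     = [1.00, 0.45, 0.70, 0.36]   # similar face, but not the same person

score = similarity(suspect_photo, lookalike)

strict_threshold = 0.95   # a tight match is required
loose_threshold  = 0.80   # the bar, dialed down until something registers

print(score >= strict_threshold)  # False: not a hit at the strict setting
print(score >= loose_threshold)   # True: becomes a "hit" once the bar drops
```

Nothing about the faces changed between the two checks; only the threshold moved, which is exactly the knob the transcript describes officers adjusting.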
the higher the number. But there's no firm rule about how high the number needs to be. Which means that at the same time police are playing with the photos they upload to the system, they're also playing with the standard for what counts as a match. So if you look even a little bit like Woody Harrelson, an officer could adjust the accuracy threshold until it registers as a hit. And then if you ask why you're being stopped, they can just say: the machine said it was you.

If you talk to the people making these tools, you're really not supposed to do any of this. It's like steering a car with your feet: you can make it work, but it's bizarre and dangerous. And with police, the end result of all of that is stopping someone, maybe for no reason.

That's even worse because the algorithms are less accurate for women and people of color. It's not totally clear why that's true; a lot of people think it's just a result of algorithms that are mostly trained on white men. But government testing shows it really consistently across the industry. You can see on this chart: the red lines are the error rates for black people, and the green lines are the error rates for white people. The red lines are almost always higher, which means the person getting stopped for no reason is more likely to be from a community at risk.

Now, the NYPD says that
no one has been arrested on the basis of facial recognition alone. And that's true, but facial recognition has been involved somehow in more than 2,800 arrests in the five and a half years the program has been running. Even when there's no arrest at all, a false match can still lead to a police stop, which has dangers of its own. There's supposed to be a clear legal bar for making those stops, but facial recognition is short-circuiting that.

Now, defenders of facial recognition will say that despite the problems, it's still an effective tool for police to protect their communities. Detroit's Project Green Light is a network of connected surveillance cameras recently upgraded with facial recognition, and it's credited with a 23%
drop in crime in the city. But it's still controversial: some community members say there's no transparent oversight, and the flood of new tips is overwhelming the police force. The fight's gotten really heated, so heated that one of the city's police commissioners was actually arrested at a hearing trying to speak out against the system.

Other cities have passed local laws banning the use of facial recognition by police for just that reason. San Francisco, home to some of the largest tech firms in the world, banned it last year. San Francisco supervisor Aaron Peskin was particularly critical, calling it Big Brother technology. But this isn't just a tech problem. Fundamentally, San Francisco is saying the government just can't be trusted with this technology. Not because it's so bad, but because we don't have enough oversight over how police departments will actually use it. That's a problem that goes much deeper than just recognizing faces. And as we find more powerful ways to peer into the average person's life, it's a problem that's not going away.

Thanks for watching, like and subscribe if you want some more. And if you're looking for another video, my colleague Casey Newton has an incredible report about Facebook moderators and just what a difficult and disturbing job it is. So, check that out.