https://www.wired.com/story/secret-history-facial-recognition/

Business

01.21.2020 06:00 AM

The Secret History of Facial Recognition


Sixty years ago, a sharecropper’s son invented a technology to identify faces. Then the record of
his role all but vanished. Who was Woody Bledsoe, and who was he working for?

Photograph: Dan Winters

Woody Bledsoe was sitting in a wheelchair in his open garage, waiting. To anyone who had
seen him even a few months earlier—anyone accustomed to greeting him on Sundays at the local
Mormon church, or to spotting him around town on his jogs—the 74-year-old would have been
all but unrecognizable. The healthy round cheeks he had maintained for much of his life were
sunken. The degenerative disease ALS had taken away his ability to speak and walk, leaving him
barely able to scratch out short messages on a portable whiteboard. But Woody’s mind was still
sharp. When his son Lance arrived at the house in Austin, Texas, that morning in early 1995,
Woody immediately began to issue instructions in dry-erase ink.

He told Lance to fetch a trash can from the backyard—one of the old metal kinds that Oscar the
Grouch lives in. Lance grabbed one and set it down near his father. Then Woody sent him into
the house for matches and lighter fluid. When Lance got back, Woody motioned to two large file
cabinets inside the garage.

They’d been around ever since Lance could remember. Now in his late thirties, Lance was pretty
sure they hadn’t been opened since he was a kid. And he knew they weren’t regular file cabinets.
They were the same kind he’d seen when he worked on sonar equipment for US nuclear
submarines—fireproof and very heavy, with a strong combination lock on each drawer. His
father slowly began writing numbers on the whiteboard, and to Lance’s astonishment, the
combination worked. “As I opened the first drawer,” he tells me almost 25 years later, “I felt like
Indiana Jones.”

A thick stack of old, rotting documents lay inside. Lance began removing them and placing them
in his father’s hands. Woody looked over the piles of paper two inches at a time, then had his son
toss them into the fire he’d started in the burn barrel. Some, Lance noticed, were marked
“Classified” or “Eyes only.” The flames kept building until both cabinets were empty. Woody
insisted on sitting in the garage until all that remained was ash.

Lance could only guess at what he’d helped to destroy. For nearly three decades, his father had
been a professor at the University of Texas at Austin, working to advance the fields of automated
reasoning and artificial intelligence. Lance had always known him to be a wide-eyed scientific
optimist, the sort of man who, as far back as the late 1950s, dreamed of building a computer
endowed with all the capabilities of a human—a machine that could prove complex
mathematical theorems, engage in conversation, and play a decent game of Ping-Pong.

But early in his career, Woody had been consumed with an attempt to give machines one
particular, relatively unsung, but dangerously powerful human capacity: the ability to recognize
faces. Lance knew that his father’s work in this area—the earliest research on facial-recognition
technology—had attracted the interest of the US government’s most secretive agencies. Woody’s
chief funders, in fact, seem to have been front companies for the CIA. Had Lance just incinerated
the evidence of Washington’s first efforts to identify individual people on a mass, automated
scale?

Today, facial recognition has become a security feature of choice for phones, laptops, passports,
and payment apps. It promises to revolutionize the business of targeted advertising and speed the
diagnosis of certain illnesses. It makes tagging friends on Instagram a breeze. Yet it is also,
increasingly, a tool of state oppression and corporate surveillance. In China, the government uses
facial recognition to identify and track members of the Uighur ethnic minority, hundreds of
thousands of whom have been interned in “reeducation camps.” In the US, according to The
Washington Post, Immigration and Customs Enforcement and the FBI have deployed the
technology as a digital dragnet, searching for suspects among millions of faces in state driver’s
license databases, sometimes without first seeking a court order. Last year, an investigation by
the Financial Times revealed that researchers at Microsoft and Stanford University had amassed,
and then publicly shared, huge data sets of facial imagery without subjects’ knowledge or
consent. (Stanford’s was called Brainwash, after the defunct café in which the footage was
captured.) Both data sets were taken down, but not before researchers at tech startups and one of
China’s military academies had a chance to mine them.

Woody’s facial-recognition research in the 1960s prefigured all these technological
breakthroughs and their queasy ethical implications. And yet his early, foundational work on the
subject is almost entirely unknown. Much of it was never made public.

Fortunately, whatever Woody’s intentions may have been that day in 1995, the bulk of his
research and correspondence appears to have survived the blaze in his garage. Thousands of
pages of his papers—39 boxes’ worth—reside at the Briscoe Center for American History at the
University of Texas. Those boxes contain, among other things, dozens of photographs of
people’s faces, some of them marked up with strange mathematical notations—as if their human
subjects were afflicted with some kind of geometrical skin disease. In those portraits, you can
discern the origin story of a technology that would only grow more fraught, more powerful, and
more ubiquitous in the decades to come.

An image of Woody Bledsoe from a 1965 study. The computer failed to recognize that two
photos of him, from 1945 and 1965, showed the same person.
Photograph: Dan Winters

Woodrow Wilson Bledsoe—always Woody to everyone he knew—could not remember a time
when he did not have to work. He was born in 1921 in the town of Maysville, Oklahoma, and
spent much of his childhood helping his father, a sharecropper, keep the family afloat. There
were 12 Bledsoe kids in all. Woody, the 10th, spent long days weeding corn, gathering wood,
picking cotton, and feeding chickens. His mother, a former schoolteacher, recognized his
intelligence early on. In an unpublished essay from 1976, Woody described her as an
encouraging presence—even if her encouragement sometimes came from the business end of a
peach-tree switch.

When Woody was 12 his father died, plunging the family even deeper into poverty in the middle
of the Great Depression. Woody took on work at a chicken ranch while he finished high school.
Then he moved to the city of Norman and began attending classes at the University of
Oklahoma, only to quit after three months to join the Army on the eve of World War II.

Showing an aptitude for math, Woody was put in charge of a payroll office at Fort Leonard
Wood in Missouri, where wave after wave of US soldiers were being trained for combat. (“Our
group handled all black troops,” wrote the Oklahoman, “which was a new experience for me.”)
Then on June 7, 1944, the day after D-Day, Woody was finally deployed to Europe, where he
earned a Bronze Star for devising a way to launch large naval vessels—built for beach landings
—into the Rhine.

Having landed in the European theater just as Allied troops were accelerating to victory, Woody
seemed to have an unusually positive experience of war. “These were exciting times,” he wrote.
“Each day is equivalent to a month of ordinary living. I can see why men get enamored with war.
As long as you are winning and don’t sustain many casualties, everything is fine.” He spent the
following summer in liberated Paris, his mind and his experience of the world expanding wildly
in an atmosphere of sometimes euphoric patriotism. “The most sensational news I ever heard
was that we had exploded an atomic bomb,” Woody wrote. “We were glad that such a weapon
was in the hands of Americans and not our enemies.”

Woody couldn’t wait to get back to school once the war ended. He majored in mathematics at the
University of Utah and finished in two and a half years, then went off to Berkeley for his PhD.
After grad school, he got a job at the Sandia Corporation in New Mexico, working on
government-funded nuclear weapons research alongside such luminaries as Stanislaw Ulam, one
of the inventors of the hydrogen bomb. In 1956 Woody flew to the Marshall Islands to observe
weapons tests over Enewetak Atoll, parts of which to this day suffer worse radioactive
contamination than Chernobyl or Fukushima. “It was satisfying to me to be helping my own dear
country remain the strongest in the world,” he wrote.

Sandia also offered Woody his first steps into the world of computing, which would consume
him for the rest of his career. At first, his efforts at writing code tied directly to the grim
calculations of nuclear weapons research. One early effort—“Program for Computing
Probabilities of Fallout From a Large-Scale Thermonuclear Attack”—took into account
explosive yield, burst points, time of detonation, mean wind velocity, and the like to predict
where the fallout would land in the case of an attack.

But as his romance with computing grew, Woody took an interest in automated pattern
recognition, especially machine reading—the process of teaching a computer to recognize
unlabeled images of written characters. He teamed up with his friend and colleague Iben
Browning, a polymath inventor, aeronautical engineer, and biophysicist, and together they
created what would become known as the n-tuple method. They started by projecting a printed
character—the letter Q, say—onto a rectangular grid of cells, resembling a sheet of graph paper.
Then each cell was assigned a binary number according to whether it contained part of the
character: Empty got a 0, populated got a 1. Then the cells were randomly grouped into ordered
pairs, like sets of coordinates. (The groupings could, in theory, include any number of cells,
hence the name n-tuple.) With a few further mathematical manipulations, the computer was able
to assign the character’s grid a unique score. When the computer encountered a new character, it
simply compared that character’s grid with others in its database until it found the closest match.

The beauty of the n-tuple method was that it could recognize many variants of the same
character: Most Qs tended to score pretty close to other Qs. Better yet, the process worked with
any pattern, not just text. According to an essay coauthored by Robert S. Boyer, a mathematician
and longtime friend of Woody’s, the n-tuple method helped define the field of pattern
recognition; it was among the early set of efforts to ask, “How can we make a machine do
something like what people do?”
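
The mechanics are simple enough to sketch in a few lines of modern code. What follows is a loose reconstruction, not Bledsoe and Browning's actual program (which was never published); the grid size, tuple size, and scoring rule are assumptions made for illustration:

```python
import random

GRID_CELLS = 100            # a 10x10 grid of cells, flattened to 100 binary values
N = 2                       # ordered pairs of cells, as described above; any n works
rng = random.Random(0)      # fixed seed so every image uses the same random grouping
cells = list(range(GRID_CELLS))
rng.shuffle(cells)
TUPLES = [tuple(cells[i:i + N]) for i in range(0, GRID_CELLS, N)]

def signature(bitmap):
    """Reduce a flattened 0/1 bitmap to one symbol per tuple: the bits it contains."""
    return [tuple(bitmap[c] for c in group) for group in TUPLES]

def score(a, b):
    """Count how many tuples show the same bit pattern in both images."""
    return sum(x == y for x, y in zip(signature(a), signature(b)))

def classify(unknown, database):
    """Return the label of the stored character grid that matches best."""
    return max(database, key=lambda label: score(unknown, database[label]))
```

In this framing, most Qs score close to other Qs because enough of the random cell groupings land on the strokes that all Qs share.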

Around the time when he was devising the n-tuple method, Woody had his first daydream about
building the machine that he called a “computer person.” Years later, he would recall the “wild
excitement” he felt as he conjured up a list of skills for the artificial consciousness:

"I wanted it to read printed characters on a page and handwritten script as well. I could see it,
or a part of it, in a small camera that would fit on my glasses, with an attached earplug that
would whisper into my ear the names of my friends and acquaintances as I met them on the
street … For you see, my computer friend had the ability to recognize faces."

In 1960, Woody struck out with Browning and a third Sandia colleague to found a company of
their own. Panoramic Research Incorporated was based, at first, in a small office in Palo Alto,
California, in what was not yet known as Silicon Valley. At the time, most of the world’s
computers—massive machines that stored data on punch cards or magnetic tape—resided in
large corporate offices and government labs. Panoramic couldn’t afford one of its own, so it
leased computing time from its neighbors, often late in the evenings, when it was cheaper.

Panoramic’s business, as Woody later described it to a colleague, was “trying out ideas which we
hoped would ‘move the world.’ ” According to Nels Winkless, a writer and consultant who
collaborated on several Panoramic projects and later became a founding editor of Personal
Computing magazine, “Their function was literally to do what other people find just too silly.”
The company attracted an odd and eclectic mix of researchers—many of whom, like Woody, had
grown up with nothing during the Great Depression and now wanted to explore everything. Their
inclinations ranged from brilliant to feral. Browning, who came from a family of poor farmers
and had spent two years of his youth eating almost nothing but cabbage, was a perpetual tinkerer.
At one point he worked with another Panoramic researcher, Larry Bellinger, to develop the
concept for a canine-powered truck called the Dog-Mobile. They also built something called the
Hear-a-Lite, a pen-shaped device for blind people that translated light levels into sound.

Bellinger, who had worked as a wing-walker as a teenager (he kept the pastime secret from his
mother by playing off his bruises from bad parachute landings as bicycle injuries), had also
helped design the Bell X-1, the sound-barrier-breaking rocket plane made famous in Tom
Wolfe’s The Right Stuff. Later he created the Mowbot, a self-propelled lawnmower “for cutting
grass in a completely random and unattended manner.” (Johnny Carson featured the device on
The Tonight Show.)

Then there was Helen Chan Wolf, a pioneer in robot programming who started at Panoramic a
couple of years out of college. She would go on to help program Shakey the Robot, described by
the Institute of Electrical and Electronics Engineers as “the world’s first robot to embody
artificial intelligence”; she has been called, by one former colleague, “the Lady Ada Lovelace of
robotics.” In the early 1960s, when Wolf’s coding efforts could involve stacks of punch cards a
foot and a half high, she was awed by the range of ideas her Panoramic colleagues threw at the
wall. At one point, she says, Woody decided that he “wanted to unravel DNA, and he figured out
that it would take 30 or 37 years to do it on the computers that we had at the time. I said, ‘Well, I
guess we won’t do that.’ ”

Perhaps not surprisingly, Panoramic struggled to find adequate commercial funding. Woody did
his best to pitch his character-recognition technology to business clients, including the Equitable
Life Assurance Society and McCall’s magazine, but never landed a contract. By 1963, Woody
was all but certain the company would fold.

But throughout its existence, Panoramic had at least one seemingly reliable patron that helped
keep it afloat: the Central Intelligence Agency. If any direct mentions of the CIA ever existed in
Woody’s papers, they likely ended up in ashes in his driveway; but fragments of evidence that
survived in Woody’s archives strongly suggest that, for years, Panoramic did business with CIA
front companies. Winkless, who was friendly with the entire Panoramic staff—and was a
lifelong friend of Browning—says the company was likely formed, at least in part, with agency
funding in mind. “Nobody ever told me in so many words,” he recalls, “but that was the case.”

According to records obtained by the Black Vault, a website that specializes in esoteric Freedom
of Information Act requests, Panoramic was among 80 organizations that worked on Project
MK-Ultra, the CIA’s infamous “mind control” program, best known for the psychological
tortures it inflicted on frequently unwilling human subjects. Through a front called the Medical
Sciences Research Foundation, Panoramic appears to have been assigned to subprojects 93 and
94, on the study of bacterial and fungal toxins and “the remote directional control of activities of
selected species of animals.” Research by David H. Price, an anthropologist at Saint Martin’s
University, shows that Woody and his colleagues also received money from the Society for the
Investigation of Human Ecology, a CIA front that provided grants to scientists whose work
might improve the agency’s interrogation techniques or act as camouflage for that work. (The
CIA would neither confirm nor deny any knowledge of, or connection to, Woody or Panoramic.)

But it was another front company, called the King-Hurley Research Group, that bankrolled
Woody’s most notable research at Panoramic. According to a series of lawsuits filed in the
1970s, King-Hurley was a shell company that the CIA used to purchase planes and helicopters
for the agency’s secret Air Force, known as Air America. For a time King-Hurley also funded
psychopharmacological research at Stanford. But in early 1963, it was the recipient of a different
sort of pitch from one Woody Bledsoe: He proposed to conduct “a study to determine the
feasibility of a simplified facial recognition machine.” Building on his and Browning’s work
with the n-tuple method, he intended to teach a computer to recognize 10 faces. That is, he
wanted to give the computer a database of 10 photos of different people and see if he could get it
to recognize new photos of each of them. “Soon one would hope to extend the number of persons
to thousands,” Woody wrote. Within a month, King-Hurley had given him the go-ahead.

In one approach, Woody Bledsoe taught his computer to divide a face into features, then
compare distances between them.

Photograph: Dan Winters

Ten faces may now seem like a pretty pipsqueak goal, but in 1963 it was breathtakingly
ambitious. The leap from recognizing written characters to recognizing faces was a giant one. To
begin with, there was no standard method for digitizing photos and no existing database of
digital images to draw from. Today’s researchers can train their algorithms on millions of freely
available selfies, but Panoramic would have to build its database from scratch, photo by photo.

And there was a bigger problem: Three-dimensional faces on living human beings, unlike two-
dimensional letters on a page, are not static. Images of the same person can vary in head rotation,
lighting intensity, and angle; people age and hairstyles change; someone who looks carefree in
one photo might appear anxious in the next. Like finding the common denominator in an
outrageously complex set of fractions, the team would need to somehow correct for all this
variability and normalize the images they were comparing. And it was hardly a sure bet that the
computers at their disposal were up to the task. One of their main machines was a CDC 1604
with 192 KB of RAM—about 21,000 times less working memory than a basic modern
smartphone.
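
The memory comparison is easy to verify, taking 4 GB as the "basic modern smartphone" figure (an assumption; the text states only the ratio):

```python
cdc_1604_bytes = 192 * 1024                # 192 KB of working memory
smartphone_bytes = 4 * 1024 ** 3           # assumed 4 GB in a basic modern phone
print(smartphone_bytes // cdc_1604_bytes)  # ~21,845, roughly the 21,000x in the text
```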

Fully aware of these challenges from the beginning, Woody adopted a divide-and-conquer
approach, breaking the research into pieces and assigning them to different Panoramic
researchers. One young researcher got to work on the digitization problem: He snapped black-
and-white photos of the project’s human subjects on 16-mm film stock. Then he used a scanning
device, developed by Browning, to convert each picture into tens of thousands of data points,
each one representing a light intensity value—ranging from 0 (totally dark) to 3 (totally light)—
at a specific location in the image. That was far too many data points for the computer to handle
all at once, though, so the young researcher wrote a program called NUBLOB, which chopped
the image into randomly sized swatches and computed an n-tuple-like score for each one.
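
NUBLOB itself was never published; the sketch below only illustrates the chopping-and-scoring shape of the idea, with the swatch count and the per-swatch score invented for the example:

```python
import random

def random_swatches(image, count, rng):
    """Chop a 2-D grid of intensity values (0-3) into randomly sized rectangles."""
    h, w = len(image), len(image[0])
    swatches = []
    for _ in range(count):
        x0, y0 = rng.randrange(w - 1), rng.randrange(h - 1)
        x1, y1 = rng.randrange(x0 + 1, w), rng.randrange(y0 + 1, h)
        swatches.append([row[x0:x1 + 1] for row in image[y0:y1 + 1]])
    return swatches

def swatch_scores(image, count=50, seed=0):
    """One crude per-swatch score: the sum of its intensity values."""
    rng = random.Random(seed)
    return [sum(sum(row) for row in s) for s in random_swatches(image, count, rng)]
```

With a fixed seed, two images of the same size are scored over the same swatches, so their score lists can be compared directly.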

Meanwhile, Woody, Helen Chan Wolf, and a student began studying how to account for head
tilt. First they drew a series of numbered small crosses on the skin of the left side of a subject’s
face, from the peak of his forehead down to his chin. Then they snapped two portraits, one in
which the subject was facing front and another in which he was turned 45 degrees. By analyzing
where all the tiny crosses landed in these two images, they could then extrapolate what the same
face would look like when rotated by 15 or 30 degrees. In the end, they could feed a black-and-
white image of a marked-up face into the computer, and out would pop an automatically rotated
portrait that was creepy, pointillistic, and remarkably accurate.
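
The article gives no equations for this step, but one plausible geometric model makes the extrapolation concrete: assume each marked cross rides a circle around the head's vertical axis, so its horizontal position follows x(theta) = r * sin(theta + phi). Two views then pin down r and phi; the circular-path assumption and the focus on horizontal positions are mine, not Bledsoe's:

```python
import math

def fit_landmark(x_front, x_45):
    """Fit x(theta) = r*sin(theta + phi) to a landmark's horizontal position
    in the frontal (0 degree) and 45-degree portraits."""
    s45 = math.sin(math.radians(45))
    r_cos_phi = (x_45 - x_front * s45) / s45   # uses cos(45) == sin(45)
    phi = math.atan2(x_front, r_cos_phi)       # since x_front = r*sin(phi)
    return math.hypot(x_front, r_cos_phi), phi

def rotated_x(x_front, x_45, theta_deg):
    """Extrapolate the landmark's horizontal position at, say, 15 or 30 degrees."""
    r, phi = fit_landmark(x_front, x_45)
    return r * math.sin(math.radians(theta_deg) + phi)
```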

These solutions were ingenious but insufficient. Thirteen months after work began, the
Panoramic team had not taught a computer to recognize a single human face, much less 10 of
them. The triple threat of hair growth, facial expressions, and aging presented a “tremendous
source of variability,” Woody wrote in a March 1964 progress report to King-Hurley. The task,
he said, was “beyond the state of the art of the present pattern recognition and computer
technology at this time.” But he recommended that more studies be funded to attempt “a
completely new approach” toward tackling facial recognition.

Over the following year, Woody came to believe that the most promising path to automated
facial recognition was one that reduced a face to a set of relationships between its major
landmarks: eyes, ears, nose, eyebrows, lips. The system that he imagined was similar to one that
Alphonse Bertillon, the French criminologist who invented the modern mug shot, had pioneered
in 1879. Bertillon described people on the basis of 11 physical measurements, including the
length of the left foot and the length from the elbow to the end of the middle finger. The idea was
that, if you took enough measurements, every person was unique. Although the system was
labor-intensive, it worked: In 1897, years before fingerprinting became widespread, French
gendarmes used it to identify the serial killer Joseph Vacher.

Throughout 1965, Panoramic attempted to create a fully automated Bertillon system for the face.
The team tried to devise a program that could locate noses, lips, and the like by parsing patterns
of lightness and darkness in a photograph, but the effort was mostly a flop.

So Woody and Wolf began exploring what they called a “man-machine” approach to facial
recognition—a method that would incorporate a bit of human assistance into the equation. (A
recently declassified history of the CIA’s Office of Research and Development mentions just
such a project in 1965; that same year, Woody sent a letter on facial recognition to John W.
Kuipers, the division’s chief of analysis.) Panoramic conscripted Woody’s teenage son Gregory
and one of his friends to go through a pile of photographs—122 in all, representing about 50
people—and take 22 measurements of each face, including the length of the ear from top to
bottom and the width of the mouth from corner to corner. Then Wolf wrote a program to process
the numbers.

At the end of the experiment, the computer was able to match every set of measurements with
the correct photograph. The results were modest but undeniable: Wolf and Woody had proved
that the Bertillon system was theoretically workable.
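
The matching step itself reduces to a nearest-neighbor search over measurement vectors. A minimal sketch, assuming Euclidean distance (Wolf's program was not published, so the metric is a guess):

```python
import math

def match(query, database):
    """Given one face's 22 hand-taken measurements, return the ID of the stored
    photo whose measurement vector lies closest in Euclidean distance.
    database maps a photo ID to its list of 22 numbers."""
    return min(database, key=lambda photo_id: math.dist(query, database[photo_id]))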

Their next move, near the end of 1965, was to stage a larger-scale version of much the same
experiment—this time using a recently invented piece of technology to make the “man” in their
man-machine system far more efficient. With King-Hurley’s money, they used something called
a RAND tablet, an $18,000 device that looked something like a flatbed image scanner but
worked something like an iPad. Using a stylus, a researcher could draw on the tablet and produce
a relatively high-resolution computer-readable image.

Woody and his colleagues asked some undergraduates to cycle through a new batch of
photographs, laying each one on the RAND tablet and pinpointing key features with the stylus.
The process, though still arduous, was much faster than before: All told, the students managed to
input data for some 2,000 images, including at least two of each face, at a rate of about 40 an
hour.

Even with this larger sample size, though, Woody’s team struggled to overcome all the usual
obstacles. The computer still had trouble with smiles, for instance, which “distort the face and
drastically change inter-facial measurements.” Aging remained a problem too, as Woody’s own
face proved. When asked to cross-match a photo of Woody from 1945 with one from 1965, the
computer was flummoxed. It saw little resemblance between the younger man, with his toothy
smile and dark widow’s peak, and the older one, with his grim expression and thinning hair. It
was as if the decades had created a different person.

And in a sense, they had. By this point, Woody had grown tired of hustling for new contracts for
Panoramic and finding himself “in the ridiculous position of either having too many jobs or not
enough.” He was constantly pitching new ideas to his funders, some treading into territory that
would now be considered ethically dubious. In March 1965—some 50 years before China would
begin using facial pattern-matching to identify ethnic Uighurs in Xinjiang Province—Woody had
proposed to the Defense Department Advanced Research Projects Agency, then known as Arpa,
that it should support Panoramic to study the feasibility of using facial characteristics to
determine a person’s racial background. “There exists a very large number of anthropological
measurements which have been made on people throughout the world from a variety of racial
and environmental backgrounds,” he wrote. “This extensive and valuable store of data, collected
over the years at considerable expense and effort, has not been properly exploited.” It is unclear
whether Arpa agreed to fund the project.

What’s clear is that Woody was investing thousands of dollars of his own money in Panoramic
with no guarantee of getting it back. Meanwhile, friends of his at the University of Texas at
Austin had been urging him to come work there, dangling the promise of a steady salary. Woody
left Panoramic in January 1966. The firm appears to have folded soon after.

With daydreams of building his computer person still playing in his head, Woody moved his
family to Austin to dedicate himself to the study and teaching of automated reasoning. But his
work on facial recognition wasn’t over; its culmination was just around the corner.

In 1967, more than a year after his move to Austin, Woody took on one last assignment that
involved recognizing patterns in the human face. The purpose of the experiment was to help law
enforcement agencies quickly sift through databases of mug shots and portraits, looking for
matches.

As before, funding for the project appears to have come from the US government. A 1967
document declassified by the CIA in 2005 mentions an “external contract” for a facial-
recognition system that would reduce search time by a hundredfold. This time, records suggest,
the money came through an individual acting as an intermediary; in an email, the apparent
intermediary declined to comment.

Woody’s main collaborator on the project was Peter Hart, a research engineer in the Applied
Physics Laboratory at the Stanford Research Institute. (Now known as SRI International, the
institute split from Stanford University in 1970 because its heavy reliance on military funding
had become so controversial on campus.) Woody and Hart began with a database of around 800
images—two newsprint-quality photos each of about “400 adult male caucasians,” varying in
age and head rotation. (I did not see images of women or people of color, or references to them,
in any of Woody’s facial-recognition studies.) Using the RAND tablet, they recorded 46
coordinates per photo, including five on each ear, seven on the nose, and four on each eyebrow.
Building on Woody’s earlier experience at normalizing variations in images, they used a
mathematical equation to rotate each head into a forward-looking position. Then, to account for
differences in scale, they enlarged or reduced each image to a standard size, with the distance
between the pupils as their anchor metric.
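
The scale-normalization step can be sketched directly from that description. This is a minimal version under stated assumptions; the rotation-to-frontal step is omitted because the article gives no equation for it:

```python
def normalize(points, left_pupil, right_pupil):
    """Center landmarks on the midpoint between the pupils and scale so the
    interpupillary distance equals 1. points is a list of (x, y) coordinates
    as recorded from the RAND tablet."""
    cx = (left_pupil[0] + right_pupil[0]) / 2
    cy = (left_pupil[1] + right_pupil[1]) / 2
    ipd = ((right_pupil[0] - left_pupil[0]) ** 2 +
           (right_pupil[1] - left_pupil[1]) ** 2) ** 0.5
    return [((x - cx) / ipd, (y - cy) / ipd) for x, y in points]
```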

The computer’s task was to memorize one version of each face and use it to identify the other.
Woody and Hart offered the machine one of two shortcuts. With the first, known as group
matching, the computer would divide the face into features—left eyebrow, right ear, and so on—
and compare the relative distances between them. The second approach relied on Bayesian
decision theory; it used 22 measurements to make an educated guess about the whole.
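
A rough sketch of the group-matching shortcut follows. The assignment of coordinates to features and the scoring rule are hypothetical, since the 1968 paper was never made public:

```python
import math

# Hypothetical grouping of some of the 46 recorded coordinates into named
# features; the real assignments were in the unpublished paper.
FEATURES = {"left_eyebrow": [0, 1, 2, 3], "nose": [4, 5, 6, 7], "mouth": [8, 9]}

def feature_distances(points):
    """For each feature, the pairwise distances among its landmark points."""
    out = {}
    for name, idx in FEATURES.items():
        pts = [points[i] for i in idx]
        out[name] = [math.dist(p, q)
                     for i, p in enumerate(pts) for q in pts[i + 1:]]
    return out

def group_score(a, b):
    """Total disagreement between two faces' feature distances; lower is closer."""
    da, db = feature_distances(a), feature_distances(b)
    return sum(abs(x - y) for name in da for x, y in zip(da[name], db[name]))
```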

In the end, the two programs handled the task about equally well. More important, they blew
their human competitors out of the water. When Woody and Hart asked three people to cross-
match subsets of 100 faces, even the fastest one took six hours to finish. The CDC 3800
computer completed a similar task in about three minutes, reaching a hundredfold reduction in
time. The humans were better at coping with head rotation and poor photographic quality,
Woody and Hart acknowledged, but the computer was “vastly superior” at tolerating the
differences caused by aging. Overall, they concluded, the machine “dominates” or “very nearly
dominates” the humans.

This was the greatest success Woody ever had with his facial-recognition research. It was also
the last paper he would write on the subject. The paper was never made public—for “government
reasons,” Hart says—which both men lamented. In 1970, two years after the collaboration with
Hart ended, a roboticist named Michael Kassler alerted Woody to a facial-recognition study that
Leon Harmon at Bell Labs was planning. “I’m irked that this second rate study will now be
published and appear to be the best man-machine system available,” Woody replied. “It sounds
to me like Leon, if he works hard, will be almost 10 years behind us by 1975.” He must have
been frustrated when Harmon’s research made the cover of Scientific American a few years later,
while his own, more advanced work was essentially kept in a vault.

In the ensuing decades, Woody won awards for his contributions to automated reasoning and
served for a year as president of the Association for the Advancement of Artificial Intelligence.
But his work in facial recognition would go largely unrecognized and be all but forgotten, while
others picked up the mantle.

In 1973 a Japanese computer scientist named Takeo Kanade made a major leap in facial-
recognition technology. Using what was then a very rare commodity—a database of 850
digitized photographs, taken mostly during the 1970 World’s Fair in Suita, Japan—Kanade
developed a program that could extract facial features such as the nose, mouth, and eyes without
human input. Kanade had finally realized Woody’s dream of eliminating the man from the man-
machine system.

Woody did dredge up his expertise in facial recognition on one or two occasions over the years.
In 1982 he was hired as an expert witness in a criminal case in California. An alleged member of
the Mexican mafia was accused of committing a series of robberies in Contra Costa County. The
prosecutor had several pieces of evidence, including surveillance footage of a man with a beard,
sunglasses, a winter hat, and long curly hair. But mug shots of the accused showed a clean-
shaven man with short hair. Woody went back to his Panoramic research to measure the bank
robber’s face and compare it to the pictures of the accused. Much to the defense attorney’s
pleasure, Woody found that the faces were likely of two different people because the noses
differed in width. “It just didn’t fit,” he said. Though the man still went to prison, he was
acquitted on the four counts that were related to Woody’s testimony.

Only in the past 10 years or so has facial recognition started to become capable of dealing with
real-world imperfection, says Anil K. Jain, a computer scientist at Michigan State University and
coeditor of Handbook of Face Recognition. Nearly all of the obstacles that Woody encountered,
in fact, have fallen away. For one thing, there’s now an inexhaustible supply of digitized
imagery. “You can crawl social media and get as many faces as you want,” Jain says. And thanks
to advances in machine learning, storage capacity, and processing power, computers are
effectively self-teaching. Given a few rudimentary rules, they can parse reams and reams of data,
figuring out how to pattern-match virtually anything, from a human face to a bag of chips—no
RAND tablet or Bertillon measurements necessary.

Even given how far facial recognition has come since the mid-1960s, Woody defined many of
the problems that the field still sets out to solve. His process of normalizing the variability of
facial position, for instance, remains part of the picture. To make facial recognition more
accurate, says Jain, deep networks today often realign a face to a forward posture, using
landmarks on the face to extrapolate a new position. And though today’s deep-learning-based
systems aren’t told by a human programmer to identify noses and eyebrows explicitly, Woody’s
turn in that direction in 1965 set the course of the field for decades. “The first 40 years were
dominated by this feature-based method,” says Kanade, now a professor at Carnegie Mellon’s
Robotics Institute. Now, in a way, the field has returned to something like Woody’s earliest
attempts at unriddling the human face, when he used a variation on the n-tuple method to find
patterns of similarity in a giant field of data points. As complex as facial-recognition systems
have become, says Jain, they are really just creating similarity scores for a pair of images and
seeing how they compare.
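
In today's systems, that typically means a deep network maps each face to an embedding vector, and recognition reduces to comparing vectors. A minimal sketch; embed() is a hypothetical stand-in for a trained network, and the threshold is an assumption that varies by model:

```python
import math

def cosine_similarity(u, v):
    """Similarity score for two embedding vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def same_person(image_a, image_b, embed, threshold=0.6):
    """embed(image) -> vector; returns True if the two faces score above threshold."""
    return cosine_similarity(embed(image_a), embed(image_b)) >= threshold
```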

But perhaps most importantly, Woody’s work set an ethical tone for research on facial
recognition that has been enduring and problematic. Unlike other world-changing technologies
whose apocalyptic capabilities became apparent only after years in the wild—see: social media,
YouTube, quadcopter drones—the potential abuses of facial-recognition technology were
apparent almost from its birth at Panoramic. Many of the biases that we may write off as being
relics of Woody’s time—the sample sets skewed almost entirely toward white men; the
seemingly blithe trust in government authority; the temptation to use facial recognition to
discriminate between races—continue to dog the technology today.

Last year, a test of Amazon’s Rekognition software misidentified 28 NFL players as criminals.
Days later, the ACLU sued the US Justice Department, the FBI, and the DEA to get information
on their use of facial-recognition technology produced by Amazon, Microsoft, and other
companies. A 2019 report from the National Institute of Standards and Technology, which tested
code from more than 50 developers of facial-recognition software, found that white males are
falsely matched with mug shots less frequently than other groups. In 2018, a pair of academics
wrote a broadside against the field: “We believe facial recognition technology is the most
uniquely dangerous surveillance mechanism ever invented.”

In the spring of 1993, nerve degeneration from ALS began causing Woody’s speech to slur.
According to a long tribute written after his death, he continued to teach at UT until his speech
became unintelligible, and he kept up his research on automated reasoning until he could no
longer hold a pen. “Always the scientist,” wrote the authors, “Woody made tapes of his speech
so that he could chronicle the progress of the disease.” He died on October 4, 1995. His obituary
in the Austin American-Statesman made no mention of his work on facial recognition. In the
picture that ran alongside it, a white-haired Woody stares directly at the camera, a big smile
spread across his face.

Shaun Raviv (@ShaunRaviv) is a writer living in Atlanta. He wrote about the neuroscientist
Karl Friston in issue 26.12.

This article appears in the February issue.

https://www.wired.com/story/hard-ban-facial-recognition-tech-iphone/

Tom Simonite and Gregory Barber

Business
12.19.2019 07:00 AM

It's Hard to Ban Facial Recognition Tech in the iPhone Era
San Francisco quietly amends its municipal surveillance law to allow for Apple's Face ID,
though the ban on facial recognition still applies.

After San Francisco in May placed new controls, including a ban on facial recognition, on
municipal surveillance, city employees began taking stock of what technology agencies already
owned. They quickly learned that the city owned a lot of facial recognition technology—much of
it in workers’ pockets.

City-issued iPhones equipped with Apple’s signature unlock feature, Face ID, were now illegal
—even if the feature was turned off, says Lee Hepner, an aide to supervisor Aaron Peskin, the
member of the local Board of Supervisors who spearheaded the ban.

Around the same time, police department staffers scurried to disable a facial recognition system
for searching mug shots that was unknown to the public or Peskin’s office. The department
called South Carolina’s DataWorks Plus and asked it to disable facial recognition software the
city had acquired from the company, according to company vice president Todd Pastorini. Police
in New York and Los Angeles use the same DataWorks software to search mug shot databases
using photos of faces gathered from surveillance video and other sources.

The two incidents underscore how efforts to regulate facial recognition—enacted by a handful of
cities and under consideration in Washington—will prove tricky given its many uses and how
common it has become in consumer devices as well as surveillance systems. The technology,
criticized as insufficiently accurate, particularly for people of color, is cheaper than ever and is
becoming a standard feature of police departments.

After SF's ban, nearby Oakland and Somerville, Massachusetts, adopted similar rules. As other
cities join the movement, some are moving more carefully and exempting iPhones. A facial
recognition ban passed by Brookline, Massachusetts, last week includes exemptions for personal
devices used by city officials, out of concerns about both Face ID and tagging features on
Facebook. The city of Alameda, in San Francisco Bay, is considering similar language in its own
surveillance bill, which is modeled on San Francisco’s trend-setting legislation. “Each city is
going to do it in their own way,” says Matt Cagle, an attorney at the ACLU of Northern
California who has been working with cities considering bans. “There are going to be some
devices that have [facial recognition] built in and they’re trying to figure out how to deal with
that.”

On Tuesday, San Francisco supervisors voted to amend their law to allow the use of iPhones
with Face ID. The amendments allow municipal agencies to obtain products with facial
recognition features—including iPhones—so long as other features are deemed critically
necessary and there are no viable alternatives. The ban on using facial recognition still applies.
City workers are blocked from using Face ID, and must tap in passcodes.

When the surveillance law and facial recognition ban were proposed in late January, San
Francisco police officials told Ars Technica that the department stopped testing facial recognition
in 2017. The department didn’t publicly mention that it had contracted with DataWorks that
same year to maintain a mug shot database and facial recognition software as well as a facial
recognition server through summer 2020, nor did the department reveal that it was exploring an
upgrade to the system.

WIRED learned details of the contract, and of the 2019 testing, through a public records request.
OneZero previously published an email from DataWorks that claimed SFPD as a customer.

Records from the San Francisco Police Department related to facial recognition systems.

The documents WIRED obtained included an internal police department email—sent on the
same day in January that the San Francisco ban was proposed—mentioning tests of a new facial
recognition “engine.” Asked about the tests, department spokesperson Michael Andraychak
acknowledged that SFPD had started a 90-day pilot of a new facial recognition engine in
January, but said access to it was disabled after the trial ended. After the law banning facial
recognition took effect in July, he said, SFPD “dismantled the facial recognition servers
connected with DataWorks.”

Prior to that, SFPD appears to have been in a position to use facial recognition relatively easily,
and without public knowledge. That was news to Brian Hofer, a lawyer and privacy activist who
helped draft the SF ban and similar ordinances passed in nearby Berkeley and Oakland. He says
the episode shows the need to restrict the acquisition of surveillance technology itself, because
departments can obtain such systems without the public’s knowledge. “That’s
one of the reasons why we’ve been pushing these ordinances everywhere,” he adds.

San Francisco's ordinance allows the sheriff and district attorney to ask the Board of Supervisors
for exceptions from the facial recognition ban. The iPhone-related amendments could make it
easier for city agencies to purchase surveillance systems equipped with facial recognition,
provided other features are justified as critically necessary and without alternatives. That might
raise hackles from some privacy advocates, but the ACLU’s Cagle says the important thing is
that the ban on using facial recognition is maintained. “San Francisco is working to future-proof
the ban and strengthen it,” he says.

https://www.wired.com/story/microsoft-looms-privacy-debate-home-state/

Microsoft Looms Over the Privacy Debate in Its Home State
The software company helped torpedo a facial recognition bill last year, though a state senator—
who's also a Microsoft program manager—has a new bill in the works.

Two Microsoft employees sat opposite one another in a Washington State Senate hearing room
last Wednesday. Ryan Harkins, the company’s senior director of public policy, spoke in support
of a proposed law that would regulate government use of facial recognition. “We would applaud
the committee and all of the bill sponsors for all of their work to tackle this important issue,” he
said.

On a dais among his fellow lawmakers sat the bill’s primary sponsor, Joseph Nguyen, a state
senator and program manager at Microsoft.

The occasion was a legislative hearing on Nguyen’s proposal and a second, broader privacy bill,
also supported by both Nguyen and Microsoft, that restricts some private uses of facial
recognition.

The two bills are drawing interest from the tech industry, in part because they are seen as
potential models for other states, or even federal lawmakers. The Washington bills, which have
bipartisan support, would introduce restrictions on facial recognition, which is unregulated in
most places today. The laws would require stores using the technology to disclose its use to
consumers, and force police to obtain a warrant before scanning faces in public. But the
proposals stop far short of the bans on the technology adopted by cities including San Francisco
and Cambridge, Massachusetts, and being considered by the European Union.

“People are really paying attention,” says Daniel Castro, vice president of the Information
Technology and Innovation Foundation, a Washington, DC, think tank that takes funding from the
tech industry. Microsoft’s interest in the bills, and the state’s status as a tech hub also home to
companies including Amazon, make the legislation potentially influential nationally, he says.

A Microsoft spokesperson declined to discuss the Washington legislation. Other industry voices
at last week’s hearing came from tech industry lobbying group TechNet; Motorola, which sells
facial recognition surveillance systems to customers including schools; and Axon, which makes
police body cameras.

That corporate interest and the text of the draft bills worry some people who favor tighter privacy
regulations, such as the ACLU, and certain other Washington lawmakers. Norma Smith, a
Republican state representative, calls the draft law with rules for private facial recognition “a
corporate-centric bill that big tech supports to further their national interest.”

Smith says the bill lacks bite because it wouldn’t allow consumers to sue companies that violate
the law. Smith has introduced a bipartisan bill of her own that prohibits use of any artificial
intelligence technology to track personal characteristics such as gender or ethnicity in publicly
accessible places.

The two laws proposed in Washington’s Senate recast a bill introduced last year that included
rules on use of facial recognition by both government agencies and private interests, but failed
despite—or perhaps because of—the fact that Microsoft helped draft it. The bill passed
Washington’s Senate but Microsoft opposed the House version of its own bill after legislators
added a requirement that companies certify their technology worked equally well for all skin
tones and genders before deployment. Lawmakers couldn’t reconcile the different versions of the
bill, and it foundered.

This year’s bills divide the work of last year’s effort. Together their facial recognition provisions
mostly match suggestions published by Microsoft in 2018, when company president Brad Smith
called for regulating the technology. The company has continued to offer facial recognition
technology through its cloud unit in the meantime; rival Google has said it won’t launch a similar
service until appropriate safeguards are in place.

One of the new proposed bills is a broad privacy law, which, like a California law that took
effect this month, allows consumers to ask companies to delete some personal data, or refrain
from selling it. Microsoft has said it already offers the core rights provided by California’s law to
all US customers.

The proposed Washington privacy law also requires companies to inform consumers when facial
recognition technology is in use in publicly accessible places. That could mean posting notices in
stores, for example. Companies operating such systems could not add a person’s face to their
database without consent, unless there’s reason to believe they were involved in a specific
criminal incident, such as shoplifting.

The second bill, with Nguyen as the lead sponsor, is concerned with government use of facial
recognition. It requires agencies to publish accountability reports in advance of procuring the
technology with information including the system’s capabilities and limitations, and what data it
will use. It specifies that law enforcement agencies need a warrant before using the technology
for ongoing surveillance, unless there is imminent danger of serious physical injury.

Nguyen, the son of Vietnamese immigrants, says his work on the issue springs from a concern
over potential misuse of facial recognition, not his day job at Microsoft. State legislators in
Washington are part-time. “I’d love to not have to work at Microsoft, but because I have three
kids and a mortgage that’s reality,” he says, pointing out that his bill will need many votes
besides his own to become law. “I’m putting more regulation and oversight over the tech
industry.” He says he’s met with representatives of large tech companies, including Facebook,
Google, Amazon, Apple—and Microsoft.

Nguyen describes his bill as designed to prevent harmful uses of facial recognition, like tracking
protestors, while allowing beneficial uses, such as finding a kidnapper. The ACLU of
Washington says he has struck the wrong balance.

Nguyen got into a testy exchange at last week’s hearing with Jennifer Lee, a project manager at
the ACLU, after she said his bill ignored the interests of marginalized communities. Nguyen said
he had met plenty of community representatives; Lee said truly respecting such groups would
require pausing use of facial recognition until the public could say whether it wanted the
technology to be used or not. “Washingtonians deserve good privacy regulations,” she says. “Just
because Microsoft is here doesn’t mean we should have a corporate-friendly privacy bill.”

Nguyen’s facial recognition bill may become more corporate-friendly. Senator Reuven Carlyle,
primary sponsor of the larger privacy bill, said he is talking with Delta Air Lines, which wants to
make sure its rollout of facial recognition check-in will not be disrupted.

TechNet, the tech lobbying group whose members include big tech companies such as Amazon,
Facebook, and Apple, asked that any rules governing private use of facial recognition exempt
apps a person uses on their own device, for example to edit photos, or perhaps Face ID. Motorola
said requiring a warrant for law enforcement use of the technology in public was too onerous,
while Axon expressed concern that a requirement to let outsiders test facial recognition
technology could allow leakage of private data or trade secrets.

Nguyen’s bill and the privacy bill with commercial facial recognition rules must pass through
committees and floor votes in both houses of Washington’s legislature to become law. They will
also have to withstand criticism from within state government.

At last week’s hearing, a representative of the state attorney general supported allowing
consumers to initiate lawsuits for violations. The Washington Association of Police Chiefs
complained that since no one has an expectation of privacy in public, the requirement for a
warrant before using facial recognition for public surveillance was unnecessary.

https://www.wired.com/story/ai-great-things-burn-planet/

Business

01.21.2020 07:00 AM

AI Can Do Great Things—if It Doesn't Burn the Planet

The computing power required for AI landmarks, such as recognizing images and defeating
humans at Go, increased 300,000-fold from 2012 to 2018.

One algorithm that lets a robot manipulate a Rubik's Cube used as much energy as 3 nuclear
plants produce in an hour. Photograph: Getty Images

Last month, researchers at OpenAI in San Francisco revealed an algorithm capable of learning,
through trial and error, how to manipulate the pieces of a Rubik's Cube using a robotic hand. It
was a remarkable research feat, but it required more than 1,000 desktop computers plus a dozen
machines running specialized graphics chips crunching intensive calculations for several months.

The effort may have consumed about 2.8 gigawatt-hours of electricity, estimates Evan Sparks,
CEO of Determined AI, a startup that provides software to help companies manage AI projects.
That’s roughly equal to the output of three nuclear power plants for an hour. A spokesperson for
OpenAI questioned the calculation, noting that it makes several assumptions. But OpenAI
declined to disclose further details of the project or offer an estimate of the electricity it
consumed.
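
The comparison checks out as back-of-the-envelope arithmetic, assuming roughly 1 gigawatt of output per large plant (the per-plant figure is my assumption; the 2.8 GWh estimate is Sparks'):

```python
estimate_gwh = 2.8        # Sparks' estimate for the project, in gigawatt-hours
plant_output_gw = 1.0     # assumed output of one large nuclear plant, in gigawatts
hours = 1.0
print(estimate_gwh / (plant_output_gw * hours))  # ~2.8, i.e. roughly three plants for an hour
```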

Artificial intelligence routinely produces startling achievements, as computers learn to recognize
images, converse, beat humans at sophisticated games, and drive vehicles. But all those advances
require staggering amounts of computing power—and electricity—to devise and train
algorithms. And as the damage caused by climate change becomes more apparent, AI experts are
increasingly troubled by those energy demands.

“The concern is that machine-learning algorithms in general are consuming more and more
energy, using more data, training for longer and longer,” says Sasha Luccioni, a postdoctoral
researcher at Mila, an AI research institute in Canada.

It’s not just a worry for academics. As more companies across more industries begin to use AI,
there’s growing fear that the technology will only deepen the climate crisis. Sparks says that
Determined AI is working with a pharmaceutical firm that’s already using huge AI models. “As
an industry, it’s worth thinking about how we want to combat this,” he adds.

Some AI researchers are thinking about it. They’re using tools to track the energy demands of
their algorithms, or taking steps to offset their emissions. A growing number are touting the
energy efficiency of their algorithms in research papers and at conferences. As the costs of AI
rise, the AI industry is developing a new appetite for algorithms that burn fewer kilowatts.

Luccioni recently helped launch a website that lets AI researchers roughly calculate the carbon
footprint of their algorithms. She is also testing a more sophisticated approach—code that can be
added to an AI program to track the energy use of individual computer chips. Luccioni and
others are also trying to persuade companies that offer tools for tracking the performance of code
to include some measure of energy or carbon footprint. “Hopefully this will go toward full
transparency,” she says. “So that people will include in the footnotes ‘we emitted X tons of
carbon, which we offset.’”

The energy required to power cutting-edge AI has been on a steep upward curve for some time. Data published by OpenAI shows that the computing power required for key AI landmarks over the past few years, such as DeepMind’s Go-playing program AlphaZero, has doubled roughly every 3.4 months—increasing 300,000 times between 2012 and 2018. That’s faster than the rate at which computing power historically increased, the phenomenon known as Moore’s Law (named after Gordon Moore, cofounder of Intel).
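
Those two figures are consistent with each other, as a quick sanity check shows; the exact 2012 and 2018 endpoints are OpenAI’s and aren’t reproduced here.

```python
import math

# A quantity that doubles every 3.4 months grows by 2 ** (months / 3.4).
# Months required for a 300,000-fold increase:
months_needed = 3.4 * math.log2(300_000)
print(f"{months_needed:.0f} months (~{months_needed / 12:.1f} years)")
# -> about 62 months, i.e. just over five years, fitting the 2012-2018 window
```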

Recent advances in natural language processing—an AI technique that helps machines parse,
interpret, and generate text—have proven especially power-hungry. A research paper from a
team at UMass Amherst found that training a single large NLP model may consume as much
energy as a car over its entire lifetime—including the energy needed to build it.

Training a powerful machine-learning algorithm often means running huge banks of computers for days, if not weeks. The fine-tuning required to perfect an algorithm, by, for example, searching through different neural network architectures to find the best one, can be especially computationally intensive. For all the hand-wringing, though, it remains difficult to measure how much energy AI actually consumes, and even harder to predict how much of a problem it could become.
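
To see why that kind of search adds up, note that total compute scales roughly with the number of candidate configurations tried. A toy illustration, with both numbers assumed rather than drawn from any real project:

```python
# Toy cost model for hyperparameter / architecture search: total compute
# is roughly one training run's cost times the number of candidates tried.
single_run_gpu_hours = 120   # assumed cost of training one candidate model
candidates_tried = 500       # assumed number of configurations sampled

total_gpu_hours = single_run_gpu_hours * candidates_tried
print(f"{total_gpu_hours:,} GPU-hours, {candidates_tried}x one training run")
```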

The Department of Energy estimates that data centers account for about 2 percent of total US
electricity usage. Worldwide, data centers consume about 200 terawatt hours of power per year
—more than some countries. And the forecast is for significant growth over the next decade,
with some predicting that by 2030, computing and communications technology will consume
between 8 percent and 20 percent of the world’s electricity, with data centers accounting for a
third of that.

In recent years, companies offering cloud computing services have sought to address spiraling
power consumption and offset carbon emissions with varying measures of success. Google, for
example, claims “zero net carbon emissions” for its data centers, thanks to extensive renewable
energy purchases. Microsoft last week announced a plan to become “carbon negative” by 2030,
meaning it would offset all of the carbon produced by the company over its history. OpenAI
signed a deal to use Microsoft's cloud last July.

It isn’t clear how the AI boom will fit with the bigger picture of data center energy use, or how it
might alter it. Cloud providers do not disclose the overall energy demands of machine-learning
systems. Microsoft, Amazon, and Google all declined to comment.


Jonathan Koomey, a researcher and consultant who tracks data center energy use, cautions against drawing too many conclusions from cutting-edge AI demos. He notes that AI algorithms often run on specialized chips that are more efficient, so new chip architectures may offset some of the projected demand for compute power. He also says that the IT industry has in the past offset rising energy demands in one domain by lowering energy use in others. “People are likely to take isolated anecdotes and extrapolate to get eye-popping numbers, and these numbers are almost always too high,” Koomey says.

Still, as companies and other organizations increasingly use artificial intelligence, experts say it
will become important to understand the technology’s energy footprint, both in data centers and
in other devices and gadgets. “I would agree that the analysis community needs to get a handle
on it,” says Eric Masanet, a professor at Northwestern University who leads its Energy and
Resource Systems Analysis Laboratory.

Some AI researchers aren’t waiting for the industry to wake up. Luccioni of Mila helped
organize a workshop on climate change last month at an important AI conference, NeurIPS, and
she was pleased to find that the event was standing room only. “There’s a lot of interest in this,”
she says.

The Allen Institute for AI, a research institute founded by the late Microsoft cofounder Paul
Allen, has also called for greater awareness of AI’s environmental impact. The institute’s CEO,
Oren Etzioni, says he is encouraged by the efforts of researchers, as many papers now include
some account of the computational intensity of a particular algorithm or experiment.

Etzioni adds that the industry as a whole is gradually waking up to energy efficiency. Even if this
is largely because of the cost involved with training large AI models, it could help prevent AI
from contributing to a looming climate catastrophe. “AI is clearly moving toward lighter models
and greener AI,” he says.


Will Knight is a senior writer for WIRED, covering artificial intelligence. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China’s AI boom. Before that, he was an editor and writer at New Scientist. He studied anthropology and journalism in...
*************-

The Current for Jan. 21, 2020


The end of anonymity? Facial recognition app used by police raises serious concerns, say privacy advocates
Clearview A.I. scrapes billions of photos from public sites like Facebook and LinkedIn
CBC Radio · Posted: Jan 21, 2020 7:24 PM ET | Last Updated: January 21

A secretive facial recognition software used by hundreds of police forces is raising concerns after
a New York Times investigation said it could "end privacy as we know it."

Clearview A.I. has extracted more than three billion photos from public websites like Facebook, Instagram, employment sites and others, and used them to create a database used by more than 600 law enforcement agencies in the U.S., Canada and elsewhere, according to Times reporter Kashmir Hill.

"It is being used to solve many murder cases, identity fraud cases, child exploitation cases," Hill
told The Current's host Matt Galloway. 

Police officers who spoke to Hill said the app was a far more powerful tool for cracking cases than any government database they had used before. The company claims its software finds a match in three out of four cases.

The software is so effective, Hill said, that even when she covered her face and mouth for a
photo, it still pulled up seven images of her.

"I was just shocked at how well this face recognition algorithm works," she said.

Kashmir Hill investigated Clearview A.I. for the New York Times. (Submitted by Kashmir Hill; Earl
Wilson/New York Times)

The end of anonymity?

Hill said investors and police officers she interviewed expect this software, or another similar
technology, to be available to the public within the next five years.

"If you were in a restaurant having a sensitive conversation about family secrets or work secrets,
a stranger next to you could snap your photo and know who you are, and understand that
conversation in context," she said.

"You can imagine stalkers using this tool — just really malicious use cases."

The potential uses for this kind of software are ringing alarm bells for privacy advocates.

"What you lose, if this technology gets out into the wild, is the possibility of any anonymity in
public ever. That's something that we need to think about," said Brenda McPhail, director of the
Canadian Civil Liberties Association's Privacy Technology and Surveillance Project.

McPhail said this kind of facial recognition technology could also make it easier for governments
to monitor protesters.

"It's a threat to the fundamental freedoms that we value in a democracy," she said.

A secretive company

When Hill started looking into Clearview, she initially came up against a lot of dead ends.

Its website was only accessible to law enforcement, and its listed New York address led her to a building that didn't exist. For a long time, the company declined to speak to her.


But they did find her.

While interviewing police officers about the app, she would ask them to scan a photo of her, to
see how the software worked. 

"The police officers would then get a call from the company saying, 'Are you talking to the
media?' So they were actually tracking who was talking to me while they weren't talking to me,"
she said. 

"So I found that a bit disturbing."

The end of anonymity? Facial recognition app used by police raises serious concerns, say privacy advocates

Guests: Kashmir Hill, Michael Arntfield, Brenda McPhail

MG: Good morning. I'm Matt Galloway. You're listening to The Current.
[Music: Theme]

MG: Still to come, how anyone, including your humble correspondent, could become a maths
whiz with the right kind of teaching. But first, a new police tool that scrapes the Internet for
photos of suspects and has privacy advocates worried.

[Music: Theme]

SOUNDCLIP

VOICE 1: The manhunt here in New York City after a bomb scare shut down a major subway
station during the morning commute. Soon after the NYPD releasing these surveillance images.

VOICE 2: These 19 men went on social media to have sexual conversations with kids.

VOICE 3: In today's cyber age, we're dealing with the new breed of criminal.

MG: That's an ad for the company, Clearview AI. Clearview uses online images to help police
track down criminals. If you've never heard of this company, well, you're not alone. The
company has worked hard to keep a low corporate profile, which is ironic because profiles are a
big part of Clearview's business. Kashmir Hill is a technology reporter for The New York Times.
She took a deep dive into Clearview's work. Kashmir, good morning.

KASHMIR HILL: Good morning, Matt.

MG: What exactly does Clearview AI do?

KASHMIR HILL: So Clearview is a company that has scraped the web of 3 billion photos from public websites like Facebook, Venmo, Instagram, educational sites, employment sites in order to create a big database where it now has a facial recognition app that it can run on all those photos. So you can take a photo of somebody and it will bring back all of the photos it scraped of that person along with links to those sites.

MG: Just to be clear, where are these pictures coming from? When you say it scraped the photos from those sites, these are photos that people would have posted up on their social media accounts or what have you?

KASHMIR HILL: Yeah, I mean, it's maybe photos you posted on a public social media account or, you know, photos that people have posted of you. Like, for example, I saw a tax professional site among the results that were coming in on the app. And so, yeah, it's just photos that are up on the web. And this is something people have long feared would happen because we have put so many photos of ourselves out there. But companies that were capable of building a tool like this, like Google, have said, you know, this is the one technology they held back because it could be used in such a bad way. But now it's been done. That taboo has been broken, and Clearview AI has been working with over 600 law enforcement agencies in the U.S. and Canada and other places for the last year. And no one had any idea except for the company itself and the police until this article came out this weekend.

MG: Which rattled a lot of people and was shared all around the world. Tell me a little bit, just
briefly about the company itself. Who started this thing up?

KASHMIR HILL: So it's founded by a technologist named Hoan Ton-That, who is from Australia, and Richard Schwartz, who is kind of a longtime New Yorker who worked for Mayor Rudolph Giuliani in the 1990s; he was the editorial page editor of the New York Daily News. The two, Hoan is 31, Richard is 61, met at an event at the Manhattan Institute, which is a conservative think tank in New York, and discovered they had a lot in common and decided to build this facial recognition app together. It's funded by Peter Thiel, among other investors. And Peter Thiel, of course, is famous for backing Facebook and Palantir, the surveillance company. And I mean, as I was looking into this company, I just was amazed by how quickly it grew. You know, the two founders met in 2016, and they only really got a product up and running in 2018. And so over the course of a year, they have just spread like wildfire through law enforcement.

MG: As you did your research, what did you think when you when you first not only figured out
what Clearview was doing, but also who was buying the product? Who had access to this
database of three billion photos?

KASHMIR HILL: Well, as you mentioned, it was hard to do my research on the company initially. I first was tipped off to the company by a couple of FOIA researchers who had seen it turn up in public records requests. When I went to the company's website, it was closed to the public. It was only accessible by law enforcement. It had an address that was just a couple of blocks away from the New York Times office, and when I walked over there, I discovered that the building didn't exist. When I checked LinkedIn, they had a fake employee that later turned out to be a fake name that was being used by one of the founders, and they wouldn't return any of my calls or emails. And I was reaching out to a lot of people I found affiliated with the company. And so I ended up going instead to the users of the tool. So I started reaching out to police departments that I had determined were using it. And in a couple of cases, I was able to talk to detectives who used the app. And I was sceptical at that point because of how little I could find on the company. I was like, maybe this is just snake oil or fake, but the police officers said it was incredible. That it had helped them solve dozens of cases, dead-end cases that they had abandoned. They went back and ran the suspects' photos through the app and were able to identify them. And they just said it worked so much better than the government-provided databases they had been using before, which only have mug shots and driver's licence photos and only really work when you have a photo that's just a head-on photo of the person. In this case, you know, it could be a partial photo, the person could be wearing glasses, a hat, and I saw that for myself when the company eventually started talking to me. I did an interview with their founder and he ran the app on me and it pulled up photos of me that I didn't know were online.

MG: Photos of you that you'd never seen before.

KASHMIR HILL: Photos I'd never seen before. And then I covered my face. I covered my
mouth and my nose with my hand and he took another and it still pulled out seven photos of me,
including one from ten years earlier, and I was just shocked at how well this face recognition
algorithm works.

MG: You mentioned police forces in the United States are using this and also say that some
Canadian police forces are using this. Who's using it here in Canada?

KASHMIR HILL: A condition of the interview with officers who are using it in Canada is that I cannot say where they are or who they are, but it is being used to solve many cases: murder cases, identity fraud cases, child exploitation cases. I mean, it's just any case where you have a face of somebody and you don't know who that is, you can run it through the app, and according to the company, it works up to 75 percent of the time. Three out of four searches, it's going to find a match.

MG: What concerns you the most about how this technology could be used in the future and
who might use this technology?

KASHMIR HILL: I have a lot of concerns. I mean, I think that face recognition to solve crimes is a great tool. And I definitely want police to be able to solve these crimes. One thing that worried me was just how the technology works, that it was this kind of little-known company. Most of the departments had done no vetting of them, and they're sending sensitive information, you know, police suspects, victims, to the company servers, and so the company has this vast database of everyone that the police department is interested in. And in my case, they actually abused that power. While they weren't talking to me, it turned out that anytime I talked to a police officer, I would ask them to run my photo to see what the results were, and the police officers would then get a call from the company saying, are you talking to the media? So they were actually tracking who was talking to me while they weren't talking to me. So I found that a bit disturbing. That's a power they could certainly abuse in terms of manipulating results or kind of knowing who's in trouble. And then as I was talking to an investor behind the company, and to officers, they all predicted that this is an app that will be in public hands, either Clearview's app or another company's copycat version of it, and that we'll all have this power within the next five years.

MG: An early investor told you that there's never going to be privacy and sure, this might lead to
some sort of dystopian future, but you can't ban it.

KASHMIR HILL: Right.

MG: That's terrifying to a lot of people, I would think.

KASHMIR HILL: I know, it was a quote where my jaw kind of dropped as he was saying it. But, you know, I've always kind of thought that face recognition may be ubiquitous just because of, you know, the advance in technology and all the photos of ourselves. But if that were truly the reality, it would end public anonymity. If you were in a restaurant having a sensitive conversation about family secrets or work secrets, a stranger next to you could snap your photo and know who you are, and understand that conversation in context. And I mean, I don't know, that's terrifying to me because I'm a reporter and I'm always afraid of getting scooped. But you can imagine, you know, stalkers using this tool, just really malicious use cases. I mean, it would be nice to go to a cocktail party and never have to worry about remembering anyone's name, but I think that the harms may outweigh the benefits.

MG: Kashmir, good to talk to you about this. Thank you.

KASHMIR HILL: Thank you for having me on.

MG: Kashmir Hill, technology reporter for The New York Times. She mentioned that there are police services here in Canada that are using this technology, although she wouldn't say who. We did reach out to a number of police forces in Canada to see which might be using Clearview AI's products. The Vancouver police say they've never used or tested facial recognition from Clearview AI and have no intention of doing so. The Toronto Police Service says it does use facial recognition, but not through Clearview AI. The Ontario Provincial Police say they have used facial recognition technology for various types of investigations, but they wouldn't specify which product they use. Similarly, the RCMP would not comment on which techniques it uses, but the RCMP did say, and this is a quotation, "we continue to monitor new and evolving technology." Privacy advocates may be concerned. According to my next guest though, this could be a useful tool in catching criminals. Michael Arntfield is a criminologist and former police officer. Michael, good morning to you.

MICHAEL ARNTFIELD: Good morning.

MG: What do you make of the services that Clearview A.I. is offering?

MICHAEL ARNTFIELD: Well, I think, as you mentioned, facial recognition has to some extent already been used for years by law enforcement and intelligence agencies. This, obviously, based on the results that have anecdotally been provided, and are in the article, about cold cases being solved literally in seconds, obviously presents a new, I think, expediency to matches and a way to sort of circumvent attempts to obscure the face or just shortcomings in the quality of the image. So this, I think, is a more sophisticated tool, based on what we've heard, which obviously will be appealing to law enforcement, where quite frankly, in a great deal of investigations, when using any biometric data, whether it be a face or fingerprints or the movement of how somebody ambulates, or even DNA for that matter, there's always a second-step verification process required, so you want the first step to be as accurate as possible. So I think that should assuage listeners' concerns that, if a match gets made, someone is instantly locked up and there's no due process. Like a fingerprint, this needs to be corroborated independently by someone as part of sort of a peer review process before any further action is undertaken.

MG: Tell me more about facial recognition. As I mentioned, there are a lot of police forces in
this country that are using it already. How important is it in catching criminals?

MICHAEL ARNTFIELD: Well, I mean, it's hugely important. I mean, the first sort of breakthrough in Canada, at least in terms of understanding automated systems for identifying people, as individuals and offenders sort of become more itinerant, is with licence plate scanning, which of course allowed for stationary and mobile scanning of vehicles on the road, where the licence plate was scanned within, you know, half a second, and you had the registered owner information, the insurance information. And these are not plates that are being queried because they've done anything suspicious; it's done at random. So, you know, privacy advocates said, well, you know, this is somewhat arbitrary and we're just going fishing. And, you know, the courts have said you have no reasonable expectation of privacy in public, we already know that. You're on a road operating a vehicle. You have a lessened degree of privacy in a vehicle versus, say, your home; that's also well-established. So we have the groundwork laid for the fact that this is efficient, that this is minimally invasive, and that whatever the optics are, the return is utilitarian in that the upsides far outweigh the downsides and are in the public interest, quite frankly.

MG: The optics here though are that there are three billion pictures that have been scooped up
from various social media sites, they're stored in a database that people can access. How is that
not a massive invasion of privacy?

MICHAEL ARNTFIELD: Well, I think we're deluding ourselves if we think that we have any privacy whatsoever. I mean, people throw around the word, this is Big Brother. I mean, if you know anything about Orwellian fiction or dystopian sci-fi, the dystopian future is one where technocracy and private corporations sort of usurp the government in controlling day-to-day affairs and regulating behaviour, and we're already there. So, I mean, three billion images for use by police in apprehending offenders and cold case files versus, I'm sure, far, far more than that already in the troves of Alphabet Inc. and the various media conglomerates that control the technologies we use day to day, and we know are trafficking it, and we know are selling it to advertisers.

MG: So your point is that privacy is already gone. The horse, as they say, has left the barn.

MICHAEL ARNTFIELD: Precisely. And at this point, I mean, people get up in arms when they hear that there's a practical law enforcement application to it and wrongly invoke terms like Big Brother, and I'm not sure why. Because, I mean, again, there has been no ruckus about what's been going on already with these companies. So to think that now we can actually use this for a productive purpose, for a public safety purpose, I'm not sure now -- why is the alarm being sounded now?

MG: What about when it ends up not in the hands of the police, but it ends up as Kashmir
saying, maybe in the hands of a stalker, maybe in the hands of somebody who is spying on
somebody else or who's listening in on other people's conversations? Isn't that an alarm bell that
should be ringing loudly as we see the spread of this technology?

MICHAEL ARNTFIELD: So that is the one concern I do have, which is why, quite frankly, these types of overt relationships with partners in law enforcement, while outwardly appearing, you know, sort of surreptitious and cloak-and-dagger and weird, in reality, that's the type of containment I think you want. My worst-case scenario would be this getting into the hands of, for instance, a lobbying group or political consulting group who could, for instance, obtain information from polling stations at demonstrations and begin engaging in what we now refer to as doxxing, which is identifying and publicly releasing the name, image and corresponding financial and family information of those people on the Internet, which quite frankly, as we know, places these people in extreme danger. And that is the foremost misuse I see of this system.

MG: Michael, we'll leave it there. It's good to hear from you. Thank you.

MICHAEL ARNTFIELD: Thanks for having me on.

MG: Michael Arntfield is a criminologist, former police officer. Brenda McPhail is director of
the Privacy Technology and Surveillance Project with the Canadian Civil Liberties Association.
She's been listening to these conversations and is with me in our studio in Toronto. Good
morning.

BRENDA MCPHAIL: Good morning.

MG: How comfortable are you with what Clearview AI seems to be doing?

BRENDA MCPHAIL: I'm profoundly uncomfortable with what Clearview AI is doing.

MG: Profoundly uncomfortable and why is that the case?

BRENDA MCPHAIL: Well, this is exactly the sort of scenario that we have been concerned about for a long time, and that people have been saying, no, no, it's OK, big tech is being careful. We're moving forward cautiously on this. Look, our technology companies are even calling on our regulators to think about this before we implement these technologies. And then you've got a hotshot startup who comes in with a fake-it-till-you-make-it attitude and puts this technology out there in the wild.

MG: A technology that police services, and we heard from Michael on this, say works. And we heard him saying that you can solve crimes, cold cases, in seconds, and the police services that are using this technology, according to Kashmir Hill's reporting, are delighted with it. If you look at society and the benefits to society, shouldn't that be considered as a plus?

BRENDA MCPHAIL: It should absolutely be considered, but it needs to be weighed. The benefits of what we get from this technology need to be weighed against what we lose. We had the reporter at the beginning of the story identifying that what you lose if this technology gets out into the wild is the possibility of any anonymity in public ever. That's something that we need to think about.

MG: Michael's point is that, I mean, I'll use that phrase again, that horse has left the barn. We live in a society, it doesn't matter whether you're in a big city or in a small community, where there are cameras everywhere, and it's not just social media, in terms of what you're uploading, but your movements are marked, your movements are recorded. Have we already given up a reasonable expectation of privacy?

BRENDA MCPHAIL: We have not. Our courts agree that we have not. It's actually not true
that our courts have said we have no reasonable expectation of privacy in public because we do.
It's limited. It's less, but it exists. And one reason that we're holding firm on that line is because
we have a charter of rights and freedoms that says that if you're human, you deserve a certain
threshold of rights.

MG: How complicit are we in this? The app works in part by scooping up these photos off the
Internet, off social media sites, publicly available photos that we presumably have uploaded. So
how complicit are we in this?

BRENDA MCPHAIL: I think rather than thinking of it as complicity, I think of it as being duped. We have been told for a long time, hey, if you share this, it's convenient. If you share this, it's fun. If you share this, it's OK because all of our platforms have terms of service that won't let anybody do anything bad with your photos. And here you've got an app that is in complete defiance of all those terms of service, scooping up the images that you put up to share with your friends and your family and your loved ones, handing them over to law enforcement and rendering you subject to search, millions of times a day, potentially, across the number of police services that are using it.

MG: We asked Facebook about Clearview AI and the statement that Facebook sent reads in part,
"Scraping Facebook information or adding it to a directory are prohibited by our policies. We are
reviewing the claims about this company and will take appropriate action if we find that they are
violating our rules." What do you make of that statement?

BRENDA MCPHAIL: Well, the short version of that is they have a responsibility to do
something to protect their customers. And if they don't that speaks volumes about their corporate
attitude towards profit over people.

MG: What is that responsibility, whether it's Facebook or YouTube or Venmo, to ensure that the
photos that you might upload stay on that site and don't end up elsewhere?

BRENDA MCPHAIL: They have actually entered into a contract with individuals. It's a take it
or leave it contract that we don't have any choice but to accept their terms, which places the onus
on them to respect it and to safeguard our privacy.

MG: The other argument that people might make is and people make this around privacy issues
all the time, if you've done nothing wrong, then what are you worried about? If the police run
your photo and you're innocent, you're not going to come up. If you are somebody who has done
something and they run your photo and that photo comes up, then you are going to get caught
because you're guilty. What do you make of that argument?

BRENDA MCPHAIL: The argument that if you have nothing to hide, you have nothing to
worry about is passé. Honestly, if you have nothing to hide, what do you have to lose? And what
we have to lose is our privacy, our ability to be anonymous, our ability to function as a dignified
human without being watched, without being listened to, without being tracked. And it's
different when law enforcement does those things because they've got a great deal of power over
us, up to and including taking away our liberty.

MG: This is an attitude, I think, that a lot of people might share, though. We went out and spoke
with some students at the University of Toronto for their thoughts on this technology. Have a
listen to what they said.

SOUNDCLIP

VOICE 1: There is a lot of cybercrime happening now and there's a lot of threats, especially to
younger children with predators and such. I believe that this will help solve that.

VOICE 2: I have an international student visa. So in case if I protest anywhere, like it could be
revoked or something.

VOICE 3: Yes, I am OK with the idea of being monitored just because I don't think I do have
anything to hide. And also, I think like government officials and people who are taking
advantage of the sort of technology are not using it to spy on every single individual, but only
individuals that sort of pose a threat to national safety.

VOICE 4: It's an invasion of privacy. Like, I don't feel comfortable knowing that people know
what my face looks like and they can just pick me out of a crowd and know my name and my
history and my background. So that's creepy is the word I would use.

MG: Is it creepy or is it more than that?

BRENDA MCPHAIL: It's creepy and it's dangerous. It's a threat to democracy. It's a threat to
the fundamental freedoms that we value in a democracy.

MG: Do you think that one student's concern about attending a protest and being picked up because, you know, they were at that protest, is that something that's a legitimate concern?

BRENDA MCPHAIL: That's an extremely legitimate concern. We know in other countries in the world, and potentially in Canada, that protesters are very much monitored. And we also know that if people are afraid to stand up in public and protest and to speak out for what they believe in, then governments cannot be properly held to account. So it's a fundamental risk if people are afraid to attend a protest.

MG: And yet the cameras are everywhere, the photos have been uploaded. Now they're being scraped, but they're there. Where are we at, just finally, in this conversation around privacy in this country?

BRENDA MCPHAIL: Well, when we talk about the cameras being out there right now, we
have laws governing who gets to access this data and what it gets to be used for. They're not
always followed. There's problems with enforcement, but we have rules that underlie this. What
happens with this kind of AI technology and this app is that those rules are being tossed out the
window. The way the data is collected is probably against Canadian law and then we've got law
enforcement agencies using it without confirming that the use of the tool is compliant with
Canadian law. That's a big problem.

MG: Brenda McPhail, thank you. Brenda McPhail, director of the Privacy Technology and Surveillance Project with the Canadian Civil Liberties Association. We did ask Clearview AI for comment on this. We didn't hear back. We'd love to hear from you. Is privacy a long-gone subject of concern or something that you are worried about? Let us know at cbc.ca/thecurrent or on Twitter @TheCurrentCBC. And coming up in about 90 seconds, right after your regional update, I'm going to get a crash course in dividing fractions. One of the concepts that might put people off math, but according to John Mighton, it does not need to be that way. And if we did a better job teaching math, the world would be a better place. All of that coming up in 90 seconds. I'm Matt Galloway and this is The Current on CBC Radio One.

Kashmir Hill & CLEARVIEW (CBC Reportage 21.01.20)



I'm a privacy pragmatist, writing about the intersection of law, technology, social media and our
personal information. If you have story ideas or tips, e-mail me at khill@forbes.com. PGP key here.
These days, I'm a senior online editor at Forbes. I was previously an editor at Above the Law, a legal
blog, relying on the legal knowledge gained from two years working for corporate law firm Covington &
Burling -- a Cliff's Notes version of law school. In the past, I've been found slaving away as an intern in
midtown Manhattan at The Week Magazine, in Hong Kong at the International Herald Tribune, and in
D.C. at the Washington Examiner. I also spent a few years traveling the world managing educational
programs for international journalists for the National Press Foundation. I have few illusions about
privacy -- feel free to follow me on Twitter: kashhill, subscribe to me on Facebook, Circle me on Google+,
or use Google Maps to figure out where the Forbes San Francisco bureau is, and come a-knockin'.

https://www.cbc.ca/news/technology/chinese-snooping-tech-1.5322428

Chinese snooping tech spreads to countries vulnerable to abuse
Systems installed in Serbia, Turkey, Russia, Kenya and even Germany, France, Italy
The Associated Press · Posted: Oct 16, 2019 9:32 AM ET | Last Updated: October 16, 2019

A surveillance camera is mounted near the Huawei headquarters in Shenzhen in south China's
Guangdong province. Huawei systems consisting of cameras equipped with facial recognition
technology are being rolled out across hundreds of cities around the world. (Andy
Wong/Associated Press)

When hundreds of video cameras with the power to identify and track individuals started
appearing in the streets of Belgrade as part of a major surveillance project, some protesters began
having second thoughts about joining anti-government demonstrations in the Serbian capital.

Local authorities assert the system, created by Chinese telecommunications company Huawei,
helps reduce crime in the city of two million. Critics contend it erodes personal freedoms, makes
political opponents vulnerable to retribution and even exposes the country's citizens to snooping
by the Chinese government.

The cameras, equipped with facial recognition technology, are being rolled out across hundreds
of cities around the world, particularly in poorer countries with weak track records on human
rights where Beijing has increased its influence through big business deals.

With the United States claiming that Chinese state authorities can get backdoor access to Huawei
data, the aggressive rollout is raising concerns about the privacy of millions of people in
countries with little power to stand up to China.

"The system can be used to trail political opponents, monitor regime critics at any moment,
which is completely against the law," said Serbia's former commissioner for personal data
protection, Rodoljub Sabic.

In this photo taken Sept. 25, members of a rights group paint a face with makeup to confuse the Huawei
surveillance video cameras with face-recognition software in Belgrade, Serbia. (Darko
Vojinovic/Associated Press)

Groups opposed to Serbian President Aleksandar Vucic say police are leaking video of protests
to pro-government media, which publish the images, along with the identities of participants.
Vucic himself has boasted the police have the capability to count "each head" at anti-government
gatherings.

During a recent rally, protesters climbed up a pole and covered a camera lens with duct tape
scrawled with the word "censored."

'Complies with all applicable laws'

Serbian police deny any such abuse of the Huawei system, which will eventually encompass
1,000 cameras in 800 locations throughout Belgrade. Huawei said in a statement that it
"complies with all applicable laws and regulations" in Serbia and anywhere else it does business.

While facial recognition technology is being adopted in many countries, spurring debate over the
balance between privacy and safety, the Huawei system has gained extra attention due to
accusations that Chinese laws requiring companies to assist in national intelligence work give
authorities access to its data.
As a result, some countries are reconsidering using Huawei technology, particularly the superfast
5G networks that are being rolled out later this year.

Still, Huawei, which denies accusations of any Chinese government control, has had no trouble
finding customers eager to install its so-called Safe Cities technology, particularly among
countries that China has brought closer into its diplomatic and economic orbit.

A high-tech video camera hangs from a lamppost in Belgrade, Serbia, in July. A video
surveillance system with facial recognition by Chinese tech giant Huawei is being rolled out
across hundreds of cities worldwide, particularly in poor countries with weak track records on
human rights where Beijing has increased its influence through big business deals. (Darko
Vojinovic/Associated Press)

Besides Serbia, that list includes Turkey, Russia, Ukraine, Azerbaijan, Angola, Laos,
Kazakhstan, Kenya and Uganda, as well as a few liberal democracies like Germany, France and
Italy. The system is used in some 230 cities, exposing tens of millions of people to its screening.

In a promotional brochure, Huawei says its video surveillance technology can scan over long
distances to detect "abnormal behaviour" such as loitering, track the movement of cars and
people, calculate crowd size and send alerts to a command centre if it detects something
suspicious. Local authorities can then act upon the information they receive.

In one case advertised on its website, the company says a suspect in a hit-and-run accident in
Belgrade was later discovered in China with the help of face recognition data shared by the
Serbian police with their Chinese counterparts.

In view of the cybersecurity accusations levelled by the U.S. and international rights groups
against Huawei, the relationship between China and countries that use the company's technology
is coming under renewed scrutiny.

China's influence in Serbia, a European Union candidate that Beijing views as a gateway to the
continent, has significantly expanded in recent years through Beijing's global Belt and Road
investment programs.

The populist Serbian regime has been keen to develop closer ties and the country's fragile
democracy allows China's economic interests to grow relatively unchecked, without raising too
many questions about human rights, environmental standards or transparency.

China's state investment bank has granted billions of dollars in easy-term loans to build coal-
powered plants, roads, railroads and bridges. Chinese police officers even help patrol the streets
of Belgrade, a security presence officially billed as assisting the growing number of Chinese
tourists who visit the city.

Huawei said in a statement that it 'complies with all applicable laws and regulations' in Serbia and
anywhere else it does business. (Mark Schiefelbein/Associated Press)
It's a similar story in Uganda, where China has invested heavily in infrastructure like highways
and a hydropower dam on the Nile.

When longtime President Yoweri Museveni launched a $167 million Cdn project to install
Huawei facial recognition systems a year ago, he said the cameras were "eyes, ears and a nose"
to fight rampant street crime in the sprawling capital, Kampala. Opposition activists say the real
goal is to deter street protesters against an increasingly unpopular government.

"The cameras are politically motivated," said Joel Ssenyonyi, a spokesman for the musician and
activist known as Bobi Wine who has emerged as a powerful challenger to Museveni. "They are
not doing this for security. The focus for them is hunting down political opponents."

In neighbouring Kenya, the government has also renewed its focus on public safety after a spate
of extremist attacks. It has been pushing to register people digitally, including by recording
DNA, iris and facial data. To do so, it turned to China, which helped finance the installation of
surveillance cameras in Kenya as far back as 2012.

The Kenyan government wants to pool into one database all the information from public and
private CCTV cameras, including those with facial recognition technology, a move that activists
warn would vastly expand its surveillance powers in a country that does not have comprehensive
data protection laws.

50 countries

A growing number of countries are following China's lead in deploying artificial intelligence to
track citizens, according to the Carnegie Endowment for International Peace.

The group says at least 75 countries are actively using AI tools such as facial recognition for
surveillance — and Huawei has sold its systems in 50 of those countries, giving it a far wider
reach than competitors such as Japan-based NEC and U.S.-based IBM.

"It's very unclear what safeguards are being put in place," said Steven Feldstein, a Carnegie
Endowment fellow who authored a report on the issue.

"Where are images being stored? How long are they being stored for? What kind of
accountability procedures will there be? What type of operations will be linked to these
surveillance cameras?"

Huawei said in an emailed statement that it "complies with all applicable laws and regulations in
our countries of business. This is the most fundamental principle of our business operations. We
are dedicated to bringing people better connectivity, eliminating digital gaps, and promoting the
sustainable development of our societies and economies."

A young Serbian rights group activist has her face painted to confuse the Huawei surveillance
video cameras with face-recognition software installed in Belgrade, Serbia. With public
authorities disclosing little about how the cameras work, the group has set up a tent to ask
pedestrians whether they know they are being watched. (Darko Vojinovic/Associated Press)

In Belgrade's bustling downtown Republic Square, high-tech video cameras are pointed in all
directions from an office building as pedestrians hurry about their everyday business.

With public authorities disclosing little about how the cameras work, a rights group has set up a
tent to ask pedestrians whether they know they are being watched.

"We don't want to be in some kind of Big Brother society," said rights activist Ivana Markulic.
"We are asking: Where are the cameras, where are they hidden, how much did we pay for them
and what's going to happen with information collected after this surveillance?"
