Start-Up Tech Company Uses Facial Recognition to Find Victims of Child Sex Abuse
What if a software start-up company said it could identify the victims pictured in child pornography and notify them when criminal charges are filed against their abusers? Would it be the newest, best tool for law enforcement or an invasion of victims’ privacy? It turns out the technology to use facial recognition to find victims of child sex abuse is already out there and being used across the country. Now, state legislators and law enforcement officers are debating the ethics and legality of such a program.
What is Clearview AI’s Facial Recognition Software, and How is It Used?
Clearview AI is a start-up tech company that developed an app allowing law enforcement officers to search a database of over 3 billion images as part of their investigations, including investigations into child sexual exploitation. Users, who at this time are exclusively law enforcement agencies and private security firms, can upload a cropped image of a person’s face and receive back identifying information, such as where the person lives or frequently visits, based on where else that face appears, including in the background of other people’s photos. In its marketing materials, Clearview AI says it builds its database by scraping images from publicly available sources, including news sites, mugshot sites, and social media platforms like Facebook, Twitter, Venmo, and YouTube.
However, the start-up tech company is also strangely secretive. A New York Times reporter, Kashmir Hill, found that its publicly posted business address doesn’t exist and that no one answers the company’s phone or email. The company also actively interfered with her investigation, calling law enforcement officers who had run searches on her photo and telling them not to talk to her.
Despite the secrecy, the FBI and state and local law enforcement offices across the country are currently using the software to close old and cold cases. Officers say the app is far better at identifying sex abuse offenders than existing databases because it can work from partial facial images, such as when the person is wearing a hat or glasses, rather than requiring full-face images. The app then provides identifying information about the alleged perpetrator, such as where else the person’s image appears online.
Facial Recognition Turns to Identifying Victims, Not Perpetrators
Originally, Clearview AI and other facial recognition technologies were used to identify perpetrators of sexual misconduct and other crimes. More recently, however, the tool has been turned toward identifying the victims pictured in known perpetrators’ imagery. In one Indiana case, for example, detectives uploaded cropped facial images of 21 victims of the same sexual offender into the app and received identifications for 14 of them, including victims as young as 13. Charles Cohen, the retired chief of the Indiana State Police, told the New York Times:
“These were kids or young women, and we wanted to be able to find them to tell them we had arrested this guy and see if they wanted to make victim statements.”
Making those victim impact statements can have a substantial effect on the length of a convicted criminal’s sentence. It can also sometimes lead to victims receiving restitution for the harm the perpetrator’s actions caused, including harm to their mental and physical health. However, because the same child exploitative imagery is shared again and again, the same individual may face reliving the trauma of their experiences every time their face comes up in another search.
Facial Recognition Raises Privacy Concerns Among Advocates
The use of the Clearview AI facial recognition software to identify victims has raised significant concerns about those victims’ privacy. While the app is currently only sold to U.S. law enforcement and private security firms, there is no regulation in place to stop a company like Clearview AI from making similar software for public use. When asked about privacy, New York Times reporter Kashmir Hill said:
“We’ve been building the technology to make this possible for years now. Facebook building this huge database of our photos with our names attached to it, advances in image recognition and search technologies, it all led us here. But there’s been no accompanying regulation or rules around how the technology should be used. There’s no real law or regulation that makes this illegal. The scraping seems to be O.K. We don’t have a big ban on facial recognition. We don’t need to give consent for people to process our faces. And so in terms of holding this tool back, we’re just relying on the moral compasses of the companies that are making this technology and on the thoughtfulness of people like Hoan Ton-That [the founder of Clearview AI].”
Privacy advocates are also concerned that the app itself has not been submitted to a technical audit by an independent company. The Clearview AI database stores sensitive data about child victims of sexual abuse and exploitation and never deletes it unless an administrator adjusts a user’s settings to purge the data.
State Legislators and Lawsuits Try to Ban Facial Recognition by Law Enforcement
Social media companies have their own concerns about the software. Many have issued cease and desist letters directing Clearview AI to stop scraping images from their websites and to dispose of the databases it has already collected. There is also a class-action lawsuit underway in Illinois based on that state’s strong biometric privacy law, which prohibits the use of a person’s faceprint without their consent.
The New Jersey attorney general, Gurbir Grewal, banned state officers from using the app and asked the state’s Division of Criminal Justice to investigate its use in New Jersey. In support of his decision, Mr. Grewal said:
“I’m not categorically opposed to using any of these types of tools or technologies that make it easier for us to solve crimes, and to catch child predators or other dangerous criminals. . . . But we need to have a full understanding of what is happening here and ensure there are appropriate safeguards.”
Here in New York, the issue has gone to the legislature. On January 27, 2020, State Senator Brad Hoylman (D/WF-Manhattan), the chair of the New York State Senate Judiciary Committee, announced he was introducing a bill to prohibit law enforcement from using facial recognition and other biometric surveillance technology, citing concerns over privacy and residents’ civil liberties. He said in a statement:
“Facial recognition technology threatens to end every New Yorker’s ability to walk down the street anonymously. In the wrong hands, this technology presents a chilling threat to our privacy and civil liberties – especially when evidence shows this technology is less accurate when used on people of color, and transgender, non-binary and non-conforming people. New York must take action to regulate this increasingly pervasive and dangerously powerful technology, before it’s too late.”
There is a delicate balance to be struck between safety and privacy, and the line between effective law enforcement and an invasion of privacy can be extraordinarily thin. Which side of that line facial recognition software like Clearview AI falls on remains to be seen. However these new technologies are used, the victims identified through them need to know they have advocates standing up for their best interests, against both the perpetrators who commit these crimes and the companies that improperly use their images.
At Eisenberg & Baum, our sexual abuse attorneys stand beside child victims and their families against their abusers and companies that would improperly use or store their images. We know how to use all the tools the law provides to protect their rights and their identity. Contact us today to schedule a free consultation.