A new software tool – Fawkes

The use of facial recognition systems by large companies has serious ramifications for privacy. Fawkes, a free tool from UChicago computer scientists, is one way to fight back.

A new tool designed by University of Chicago researchers protects against facial recognition software.

The rapid rise of facial recognition systems has embedded the technology in many facets of our daily lives, whether we know it or not. What may seem trivial when Facebook identifies a friend in an uploaded photo becomes far more worrisome at Clearview AI, a private company that has trained its facial recognition system on billions of images scraped from social media and the wider Internet without consent.

But so far, people have had little protection against this use of their images, other than not sharing photos publicly at all.

A new research project from the Department of Computer Science at the University of Chicago provides a powerful new protection mechanism. Named Fawkes, the software tool “masks” photos to trick the deep learning models that fuel facial recognition, without making changes noticeable to the human eye. With enough masked photos in circulation, a third-party tracker will be unable to identify a person even from an unmodified image, protecting people’s privacy from unauthorized and malicious intrusion. The tool targets the unauthorized use of personal images and has no effect on models built from legitimately obtained images, such as those used by law enforcement.

“It’s about empowering people to take action,” said Emily Wenger, a third-year doctoral student and co-leader of the project with first-year doctoral student Shawn Shan. “We have no illusions that this will resolve all privacy breaches, and there are likely both technical and legal solutions to come that will help ward off abuse of this technology. But the purpose of Fawkes is to provide individuals with some power to defend themselves, because at the moment, no such thing exists.”

“The purpose of Fawkes is to provide individuals with some power to defend themselves, because at the moment, no such thing exists.”

—PhD student Emily Wenger

The technique relies on the fact that machines “see” images differently from humans. To a machine learning model, an image is simply an array of numbers representing each pixel, which systems known as neural networks mathematically organize into features they use to distinguish objects or individuals. Fed enough different photos of a person, these models learn that person’s unique characteristics and can identify them in new photos, a capability used in security systems, smartphones and, increasingly, law enforcement, advertising and other controversial applications.
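To make that concrete, here is a minimal sketch of recognition as feature comparison. The embed() function is a hypothetical stand-in for a trained neural network; a fixed random projection is used purely so the example runs on its own:

```python
# Minimal sketch: images are pixel arrays, a feature extractor maps them to
# vectors, and "recognition" is a comparison of vectors, not of pixels.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical feature extractor. A real system would run a deep
    neural network here; a fixed random projection stands in."""
    rng = np.random.default_rng(0)                 # fixed "weights"
    weights = rng.normal(size=(128, image.size))
    features = weights @ image.ravel()             # pixels -> 128-d vector
    return features / np.linalg.norm(features)

photo_a = np.random.rand(64, 64, 3)   # stand-ins for two face photos
photo_b = np.random.rand(64, 64, 3)

similarity = float(embed(photo_a) @ embed(photo_b))
print(f"feature similarity: {similarity:.3f}")
# Recognition systems decide "same person or not" from scores like this,
# never by comparing raw pixels directly.
```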

With Fawkes – named after the Guy Fawkes mask worn by the revolutionary in the graphic novel V for Vendetta – Wenger and Shan, with their collaborators Jiayun Zhang, Huiying Li and UChicago professors Ben Zhao and Heather Zheng, exploit this difference between human and computer perception to protect privacy. By altering a small percentage of pixels in ways that drastically change how the person is perceived by the computer’s “eye,” the approach corrupts the facial recognition model, so that it labels real photos of the user with someone else’s identity. To a human observer, however, the image appears unchanged.
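The released tool optimizes against real deep feature extractors, but the shape of the computation can be sketched with a toy stand-in: nudge pixels so the image’s feature vector drifts toward a different identity, while clamping every change to a budget small enough to be imperceptible. Everything below (the linear feature map, the budget, the step size) is illustrative, not the authors’ algorithm:

```python
# Toy sketch of "masking": small, bounded pixel changes that move the
# image's feature vector toward another identity's features.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 64 * 64 * 3))   # stand-in for a deep feature extractor

def features(img: np.ndarray) -> np.ndarray:
    return W @ img.ravel()

user_photo = rng.random((64, 64, 3))      # photo to protect, pixels in [0, 1]
decoy_photo = rng.random((64, 64, 3))     # a different identity's photo

masked = user_photo.copy()
budget = 0.03                             # max per-pixel change: hard to see
for _ in range(100):
    # Gradient of ||features(masked) - features(decoy)||^2 w.r.t. the pixels.
    err = features(masked) - features(decoy_photo)
    grad = (W.T @ err).reshape(masked.shape)
    masked -= 1e-4 * grad                 # step toward the decoy in feature space
    # Project back into the imperceptibility budget around the original photo.
    masked = np.clip(masked, user_photo - budget, user_photo + budget)
    masked = np.clip(masked, 0.0, 1.0)

print("max pixel change:", float(np.abs(masked - user_photo).max()))   # ~0.03
print("feature shift:", float(np.linalg.norm(features(masked) - features(user_photo))))
```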

Masked images to block facial recognition

Original and masked portraits of the study’s authors show how the changes Fawkes introduces are invisible to human viewers while disrupting facial recognition software. Credit: Image courtesy of SAND Lab at UChicago

In a paper to be presented at the USENIX Security Symposium next month, the researchers found the method was nearly 100 percent effective at blocking recognition by leading models from Amazon, Microsoft and other companies. While it cannot disrupt existing models already trained on unmodified images scraped from the Internet, posting masked images can potentially erase a person’s online “footprint,” the authors said, rendering future models unable to recognize that person.

“In many cases, we don’t control all of the images of ourselves online; some might be posted from a public source or posted by our friends,” Shan said. “In this scenario, Fawkes succeeds when the number of masked images exceeds the number of unmasked ones. So, for users who already have a lot of images online, one way to improve their protection is to post even more images of themselves, all masked, to balance the ratio.”
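Why this ratio matters can be illustrated with a toy simulation (not the paper’s experiment): a tracker that models a user by averaging the feature vectors of whatever photos it scrapes ends up anchored wherever the majority of those photos point, so once masked images dominate, a fresh unmodified photo no longer matches. The feature vectors below are simulated:

```python
# Toy model: a tracker averages scraped feature vectors into one "identity".
import numpy as np

rng = np.random.default_rng(1)
true_center = rng.normal(size=128)    # where the user's real photos embed
decoy_center = rng.normal(size=128)   # where masked photos point instead

def photos(center: np.ndarray, n: int) -> np.ndarray:
    """n feature vectors scattered around an identity's center."""
    return center + 0.1 * rng.normal(size=(n, 128))

for n_masked in (2, 20):              # masked images in the minority vs. majority
    scraped = np.vstack([photos(true_center, 10), photos(decoy_center, n_masked)])
    model = scraped.mean(axis=0)      # the tracker's notion of "this user"
    fresh = photos(true_center, 1)[0] # a new, unmodified photo of the user
    dist = float(np.linalg.norm(fresh - model))
    print(f"{n_masked:2d} masked vs 10 real -> distance to fresh photo: {dist:.2f}")
```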

In early August, Fawkes was featured in the New York Times, though the researchers clarified a few points from the article. As of August 3, the tool had racked up nearly 100,000 downloads, and the team had updated the software to avoid the significant distortions described in the article, which were due in part to outliers in a public dataset.

Zhao also responded to Clearview CEO Hoan Ton-That’s claims that it was too late for such technology to be effective, given the billions of images the company has already collected, and that Clearview could use Fawkes to improve its model’s ability to decipher altered images.

“Fawkes is based on a poisoning attack,” Zhao said. “What the Clearview CEO suggested is akin to adversarial training, which does not work against a poisoning attack. Training their model on masked images will corrupt the model, because the model will not know which photos are masked for any single user, let alone the hundreds of millions of people they are targeting.

“As for the billions of images already online, these photos are spread across several million users. Photos of other people do not affect the effectiveness of your cloak, so the total number of photos does not matter. Over time, your masked images will outnumber the older ones, and masking will have the desired effect.”

To use Fawkes, users simply run the masking software on photos before posting them to a public site. The tool is currently free and available on the project website for users comfortable with their computer’s command-line interface. The team has also released it as standalone software for Mac and PC, and hopes that photo-sharing or social media platforms might eventually offer it as an option to their users.
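For those who want to try it, invocation at the time looked roughly like this (drawn from the project’s documentation; the installation method and flags may differ between versions):

```
pip install fawkes                 # command-line release of the tool
fawkes -d ./my_photos --mode low   # mask every image in ./my_photos
```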

“It basically resets the bar for mass surveillance… It levels the playing field a bit.”

—Prof. Ben Zhao

“This essentially resets the bar for mass surveillance to the days of facial recognition models before deep learning. It levels the playing field a bit, to prevent resource-rich companies like Clearview from really messing things up,” said Zhao, Neubauer Professor of Computer Science and an expert in machine learning security. “If this becomes part of the larger social media or Internet ecosystem, it could be a really effective tool to start tackling these kinds of intrusive algorithms.”

Given the large market for facial recognition software, the team expects model developers to try to adapt to the masking protections Fawkes provides. But in the long run, the strategy holds promise as a technical hurdle that makes facial recognition harder and more costly for companies to perform effectively without user consent, putting the choice to participate back in the hands of the public.

“I think there might be short-term countermeasures, where people come up with little things to break this approach,” Zhao said. “But in the long run, I think image-modification tools like Fawkes will continue to play an important role in protecting us from increasingly powerful machine learning systems.”

