
Can we really fool facial recognition systems?

Published on November 4, 2024

One picture and I can recognize you among millions

What if I told you that just by looking at a picture of your face for less than a second, I can pick you out from among your siblings? It doesn’t matter if they look alike; I can recognize you among millions of people. What if I told you that I could do it tomorrow, next week, next month, next year… That I will never forget your face? That it doesn’t matter if I saw a picture of your face when you were a kid, ten, twenty, thirty, or forty years ago, I can still recognize you.

Well, I certainly can’t do that, but I know how to do it through artificial intelligence, and I’m not the only one: hundreds, in fact, thousands of people know how to do it. These technologies are becoming better and better each year and are now more accessible than ever. It’s no secret that millions of images are downloaded each day from different websites and social media to feed these face recognition systems, and several concerns arise from this.

We’re not going to dive into the long list of concerns that come with the use of facial recognition systems (remember that facial recognition systems also have a lot of benefits; it depends on when, where, and how they are used). Today, we will understand how these systems work and if we can fool them.

We can divide facial recognition into two steps: detecting a face and then identifying that face. With all the concerns around having our faces on multiple websites, social media, and surveillance on public streets, several people are trying to fool these facial recognition systems in both of these steps. For now, we’re just going to talk about the second one: fooling face recognition systems by altering facial features. If you want to check out how people are trying to fool the first step (face detection) and how effective it is, stay tuned for our next facial recognition article.

How can an AI system identify a face by just looking at one picture?

Before diving into how to fool these systems, we should know a little bit about how they work. We said earlier that this process can be simplified into two steps, detecting faces and identifying faces. You can imagine this as a two-piece system where the output of one piece (face detector) is used as input in the other one (face identifier). You can change the face detector or the identifier for your favorite one: there are lots of them.

Face detection steps
Face detection steps. Photo by Christopher Campbell on Unsplash.

The face detector just finds a bounding box for each face. Some post-processing functions crop the faces and give them to the identifier, which turns each face into an N-dimensional feature vector (a feature vector is just a fancy mathematical way to say an ordered list of numbers, and N is the length of that list); the dimensions are usually 128, 256, 512, or 1024. This vector is what represents the face in that N-dimensional space, and once you have it you can compare two given faces simply by measuring the distance between their vectors. Here’s a two-dimensional toy example:

2-dimensional space example
2-dimensional space example. Each face has a vector, e.g. (1, 1), and you can measure the distance between two vectors.
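Here’s that toy example in a few lines of NumPy; the 2-D points are made up, just like the figure:

```python
import numpy as np

# Toy 2-D "face space" like the figure above: each face is just a point.
face_a = np.array([1.0, 1.0])
face_b = np.array([1.2, 0.9])   # a nearby face, probably the same person
face_c = np.array([4.0, 3.0])   # a far-away face, probably someone else

print(np.linalg.norm(face_a - face_b))  # small Euclidean distance (~0.22)
print(np.linalg.norm(face_a - face_c))  # large Euclidean distance (~3.6)
```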

This is just a made-up space using the Euclidean distance; there are lots of ways to measure distances between N-dimensional vectors, and some make more sense than others. In the following cases, we’re going to use cosine similarity, which gives us a score between -1 and 1: the closer the score is to 1, the higher the similarity between two faces, and the closer we get to -1, the lower the similarity.

Formula of cosine similarity
Here’s the spooky formula of cosine similarity.
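If the formula looks spooky, in code it’s just the dot product of the two vectors divided by the product of their norms. Here’s a minimal sketch with NumPy (the 512-dimensional embeddings below are random stand-ins, not the output of a real face model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """dot(a, b) / (||a|| * ||b||), a score in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
emb_a = rng.normal(size=512)  # pretend feature vector of face A
emb_b = rng.normal(size=512)  # pretend feature vector of face B

print(cosine_similarity(emb_a, emb_a))  # 1.0 -> identical vectors
print(cosine_similarity(emb_a, emb_b))  # close to 0 -> unrelated vectors
```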

The questions you might have now: how do we know, from this similarity score, whether both images belong to the same person? How can we identify a person with this value? Above what score do we consider two images to belong to the same person? Is it 0.9? Is it 0.5? Is it 0.3333? Well, the answer is not that simple. It depends: there are a lot of graphs, curves, and math stuff to take into account when setting a threshold, and there are also some tradeoffs. For now, we’re not going to dive into this statistical analysis; if you’re interested in finding better thresholds for your models, leave a comment, and if there’s enough interest we can write an article about it. In our facial recognition system we found that 0.5 is a good starting point for an identification threshold, so let’s keep that number for now.

Okay, before we move on let’s do a quick recap of how this system works: some artificial intelligence module (usually known as a face detector) detects faces in an image and gives those faces to another module that turns each face into an N-dimensional vector. Then we can compare different vectors and measure how similar they are to each other using some distance metric that results in a score, and if this score is above a certain threshold we say it is the same person. How cool is that?

In practice, each facial recognition system has some number M of people enrolled (enrolling is what we call the process of turning a face into a feature vector and then storing that feature vector for future comparisons). When we want to identify a person in an image or video, we detect the faces and extract a feature vector to compare against the ones we have stored. This gives us M different scores, and if the highest score is above the threshold we can say, with a certain level of confidence, that it is the same person.
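Here’s a minimal sketch of that enroll/identify loop, assuming the detector and identifier have already turned each face into a feature vector (the embeddings below are random stand-ins, and 0.5 is the threshold we mentioned above):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.5   # identification threshold discussed above
gallery = {}      # name -> enrolled feature vector

def enroll(name, feature_vector):
    """Store the feature vector produced by the detector + identifier for future comparisons."""
    gallery[name] = np.asarray(feature_vector)

def identify(feature_vector):
    """Compare a probe vector against the M enrolled vectors and keep the best score."""
    probe = np.asarray(feature_vector)
    best_name, best_score = None, -1.0
    for name, enrolled in gallery.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= THRESHOLD:
        return best_name, best_score   # confident match
    return None, best_score            # below threshold: unknown person

# Toy usage with made-up 512-dimensional embeddings.
rng = np.random.default_rng(42)
alice = rng.normal(size=512)
enroll("alice", alice)
enroll("bob", rng.normal(size=512))
print(identify(alice + rng.normal(scale=0.05, size=512)))  # noisy copy of alice -> match
print(identify(rng.normal(size=512)))                      # random vector -> no match
```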

For those who want to know more, dig deeper in our article on how facial recognition works.

The fooling

So, based on what we learned, to fool a facial recognition system we need to make the picture of a face score lower than a certain threshold against the same person's picture stored in the system. Easy enough, right? Just occlude your face, throw on a scarf, glasses, and what-have-you, or draw some black rectangles over the image of your face. Well… we’ve worked with facial recognition systems for quite some time and tested how different types of occlusions affect the score. You will be surprised by the results; stay tuned for our article explaining them. Spoiler alert: biometric systems are rock-solid.

Face occlusion example
Artificial face occlusion examples.

But even if this works, you wouldn’t want to upload all your photos like this to social media or use this image as a profile picture (or maybe you would… I don’t know what you’re into, but most people won’t). With this in mind, the SAND Lab at the University of Chicago came up with a clever solution: Fawkes.

The SAND Lab at the University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that allows individuals to limit how unknown third parties can track them by building facial recognition models out of their publicly available photos. At a high level, Fawkes “poisons” models that try to learn what you look like by putting hidden changes into your photos and using them as Trojan horses to deliver that poison to any facial recognition models of you. More about Fawkes here.

The basic idea is to add changes to an image at the pixel level (for the uninitiated, pixels are the tiny dots that make up a digital image), and these changes go unnoticed by the human eye. Still, A.I. models will have a hard time learning the feature vectors of your face.
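To make that concrete, here’s a toy sketch of a small, bounded pixel-level change. To be clear, this is not Fawkes’s actual algorithm (Fawkes optimizes its perturbation against a feature extractor); it’s just random noise capped at a few intensity levels, to show how subtle such changes can be:

```python
import numpy as np

def add_bounded_noise(image: np.ndarray, epsilon: int = 3) -> np.ndarray:
    """Perturb each pixel by at most `epsilon` intensity levels (out of 255)."""
    noise = np.random.randint(-epsilon, epsilon + 1, size=image.shape)
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

# Stand-in for a photo; a real script would load an actual image here.
image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
cloaked = add_bounded_noise(image)
print(np.abs(image.astype(int) - cloaked.astype(int)).max())  # at most epsilon
```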

Fawkes has been tested extensively and proven effective in a variety of environments and is 100% effective against state-of-the-art facial recognition models (Microsoft Azure Face API, Amazon Rekognition, and Face++).

That’s what they claim, so… Fawkes is available for free, we have a facial recognition system, let’s see how this claim holds up against it. We’ll be checking two things: are the changes unnoticeable to the human eye, and can this fool our facial recognition system? We won’t be training a model from scratch with “cloaked images” (as they call them in the article) because of the large number of resources it takes to train these models. We’re going to use our custom version of ArcFace with an identification threshold of 0.5. Tests will be performed with the same images they use on their website.

On the Fawkes site they have a picture of Queen Elizabeth II, so let’s start by enrolling a picture of Queen Elizabeth II into our system.

Enrolled image of the queen
This is the image we enrolled.

Now, let’s try to identify the image from their website without any alterations.

ID scores for the image of the queen
Identification scores for the uncloaked image.

We can see a result over 0.5, so we can say quite confidently that it’s Queen Elizabeth II (the other bars in the graph are there for reference, so you can get an idea of the scores for different faces). Usually you would get a score higher than 0.6, but with the angle of the face and the hat we get lower confidence (still above the 0.5 threshold). Now let’s try with the “cloaked image”.

ID scores for the cloaked image
Identification scores for the cloaked image.

The resulting score is still over 0.5, so this cloaking is not fooling our facial recognition system; it barely affects the score. But did you notice any change in the image? Is it unnoticeable to the human eye? You can compute the difference between two images quite easily with skimage, and you also get a similarity score between -1 and 1; in this case, we have a similarity of 0.986.
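Here’s a minimal sketch of how you might compute that score with skimage’s structural_similarity (SSIM), which also lives in [-1, 1]; the file names are placeholders, and channel_axis needs scikit-image 0.19 or newer (older versions used multichannel=True):

```python
from skimage import io
from skimage.metrics import structural_similarity

# Placeholder paths for the uncloaked and cloaked versions of the same photo.
original = io.imread("queen_uncloaked.png")
cloaked = io.imread("queen_cloaked.png")

# SSIM returns 1.0 for identical images; channel_axis=-1 marks the RGB axis.
score = structural_similarity(original, cloaked, channel_axis=-1)
print(f"Similarity between uncloaked and cloaked image: {score:.3f}")
```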

Comparison cloaked and uncloaked images
Comparison of the cloaked and uncloaked images.

At first glance these images might look very similar, but I will leave here an animated GIF with the two images so you can better appreciate the differences.

Image in full resolution of the Queen
Image in full resolution (compression might not do it justice).

Conclusion

I would say that the claim of being “100% effective against state-of-the-art” facial recognition models is quite a stretch. To be clear, we didn’t test this against any of the facial recognition systems they mention (Microsoft Azure Face API, Amazon Rekognition, or Face++), do any extensive testing, or train a model with “cloaked images”. But when we compared “cloaked images” of a subject against our system, the results weren’t that great.

We found that “cloaked images” still score as very similar to the “uncloaked images” of the same subject, which means that a model trained without any cloaked images can still match cloaked and uncloaked images.

Unnoticeable to the human eye? Well, it depends on your definition. I wouldn’t have noticed if you only showed me the cloaked image, but once you compare the two images you can definitely notice a change (even with the minimal perturbation setting). I don’t see any Instagram model, or the common folk, using this any time soon (it also takes quite some time to cloak an image).

I find the work that the SAND Lab has been doing very interesting, and they came up with a very clever solution (you can read the paper here). It’s good to know that people are working on this important privacy issue. I’m sure that a few years down the line we will have image-cloaking apps on our phones.

Before you go, here’s a picture of Queen Elizabeth II from 1955 compared against a picture from 2011, scoring over 0.5 in our facial recognition system. Told ya!

Queen comparison recognition system

If computer vision and biometrics are still a wonder to you, check out our what is computer vision article or our biometrics solutions.