TikToks using the AI anime face filter (via TikTok)

The dark side of surrendering your face to AI

TikTok is flooded with anime face filters, and companies like Lensa are transforming our selfies into virtual masterpieces using artificial intelligence – but who’s profiting from our faces?

We can no longer be sure that what we see is what we get. As you scroll through TikTok, an influencer’s face is replaced with a glitching anime girl. On Twitter, an anachronistic avatar is painted in the unmistakable style of Leonardo da Vinci. In an Instagram reel, a smooth, round face is chiselled square at the jaw and stubbled with dark hairs. In real time, face filters are changing the way we appear to each other online, and to ourselves through our front-facing cameras.

For a long time, AI-powered face filters occupied a small niche in online life, limited to future-facing art projects, quirky lo-fi Snapchat animations, and short-lived virtual beauty looks. Now, though, they’re increasingly integrated with the platforms we use to communicate. TikTok has built-in filters to transplant you into a manga or an Impressionist watercolour canvas, while Lensa AI has lately become the face filter du jour, turning plain old selfies into dreamily rendered digital portraits.

At the same time that these filters have become more accessible, the AI that drives them has become increasingly competent. Now, almost anyone with a phone camera can effectively commission a pretty convincing portrait in their desired style. Want to be an intergalactic warrior atop an alien volcano? Done. A Frida Kahlo painting? Easy. A photorealistic, anthropomorphic frog? Sure, why not.

These new filters come with a vague sense of empowerment, too. In 2023, they suggest, you’re free to go beyond your biological limits. You’re free to experiment, to break boundaries, and to explore different versions of yourself. You’re free to be whoever you want to be. But are these digital mutations really free? And if not, what’s the cost?

There’s an adage that’s been knocking around the internet for years, in various forms: If you’re not paying for the product, then you are the product. Actually, this quote can be traced back to a 1973 video by the artists Richard Serra and Carlota Fay Schoolman, Television Delivers People. “Commercial television delivers 20 million people a minute,” reads scrolling text. “It is the consumer who is consumed. You are the product of t.v. You are delivered to the advertiser who is the customer. He consumes you… You are the end product.” Three decades later, as social media ushered in a new age of surveillance, selling our data and deepest desires en masse, this idea was taken to its extreme; at the same time, we came to accept it as an unavoidable fact of life. If relinquishing our privacy was the cost of scrolling through Facebook, we collectively decided, it was a cost we were willing to pay.

Jak Ritger, an artist and researcher at Do Not Research and New Models, calls this awareness that social media sites are harvesting and monetising our data “surveillance realism”. When it comes to AI face filters, though, our role in the transaction becomes a bit hazier. What would advertisers want with millions of selfies, we might ask. Who stands to profit from a FaceApp photo that shows what you’ll look like as a 90-year-old? This is where things get dark.

In March 2022, the American company Clearview AI (which has previously been criticised for its far-right ties) made its world-leading facial recognition technology available to Ukraine, free of charge. Using Clearview tech, the Ukrainian government has identified the dead bodies of hundreds of Russian soldiers, matching their faces to billions of images scraped from the Russian social media site VKontakte, and sent confirmation of their deaths to family members back home. Clearview itself says that this is instrumental in countering Russian propaganda that downplays the consequences of war for its own citizens; others have suggested that it’s a form of psychological warfare aimed at eroding the enemy’s morale. Either way, it’s far from the only way that people’s faces – scraped from the public internet and stored in vast datasets – have been turned into fodder for new-era surveillance tools in the last 12 months.

There’s also the case of Kelly Conlon, who was chaperoning her nine-year-old daughter’s Girl Scout troop to see the Radio City Christmas Spectacular in December 2022, when she was refused entry after being flagged by the venue’s facial recognition system. The reason? She worked for a law firm with a case against Madison Square Garden CEO James Dolan, which landed her on a strict no-entry list. Then, early this year, there was the mistaken arrest of Randall Reid, after facial recognition technology misidentified him as another Black man – drawing renewed attention to the racial bias built into AI.

Together, these cases form an unsettling picture of where facial recognition technology is heading, and what it’s really used for: a world where the tech is spearheaded by the military, the government, and corporations, for purposes of psychological manipulation, panoptic law enforcement, and arbitrary exclusion. They also showcase the full potential of technology trained on our faces, without our explicit permission.

Are we saying that you’re helping usher in a techno-feudalist dystopia every time you generate a cute profile picture using an AI filter? Of course not. But it’s worth noting that many of these tools retain the original images you upload – the ones your followers don’t see – along with the right to use or share them in perpetuity. (Until December, this included the highly popular Lensa, though it appears to have amended its terms of use after being publicly called out on Twitter: users can now delete their data, but in the meantime their photos are still used to train its AI.) Is this any different to having an unedited selfie scraped off social media? Not really.

“Facial recognition has become ubiquitous,” says Ritger. “We are training a likeness dataset every time we post our face online. Lensa or [the] similar Portrait AI are similarly intrusive in terms of data captured. The difference is that the act of surveillance is aestheticized.”

In fact, Ritger suggests, the relatively recent fad for “surveillance-as-a-service” companies like Lensa represents a bleak acceptance that privacy is dead. “The recent Lensa hype and the backlash illustrate the sense of agency panic under tech-feudalism,” he says. “We are fully aware that we have no way to control how our likeness is used, so why not at least have fun while being surveilled?” In other words, if someone’s going to steal your data anyway, why not hand it over by turning yourself into a princess?

This may seem defeatist, but what else are we going to do? It seems like there are only two options: the playful nihilism we see in our social media feeds, or a full-on, Ted K-style retreat into technopessimism, away from the all-seeing eye of phone cameras and CCTV – reject modernity, return to monke.

“There is definitely a feeling that the only way to exit tech-feudal-surveillance-capitalism is by exiting modern life completely and going off the grid,” Ritger agrees. 

Many “primitivist” podcasters are already roleplaying this exit, of course, showing off their survival tips in the woods at the end of their garden and gnawing on raw meat. However, unless they forgo the internet and their audience altogether, they’re still willingly opting into their own surveillance. For a younger generation, which relies on social media to make professional connections, maintain a social life, and share creative work, logging off is a position of extreme privilege – dubbed “anonymity chic” – or else it doesn’t seem like an option at all. So why not throw caution to the wind, and simply enjoy the novel methods that companies devise to steal your face?

Whatever we decide – whether we want to form a whole pantheon of selves using AI face filters, or never upload a selfie again – it’s worth remembering that we are, as the saying goes, the product in the eyes of the platforms we use. Whenever we’re online, we’re pretty much always helping to train tools for surveillance, and how these tools will be used in the future depends on laws that could take years to wrap our heads around. The German government, for one, has already rejected AI systems for biometric recognition in public spaces, and there are movements to shift broader European and US laws in the same direction (following widespread controversy about Clearview AI). In the UK, despite similar legal controversies, companies remain effectively free to scrape billions of images from the public internet – essentially, while you continue to post, your face is free real estate.