The past year has been a wake-up call about the pervasiveness and sophistication of deepfakes. Whether it was fake porn created using Taylor Swift's likeness and spread across social media, or deepfake audio of Sadiq Khan talking about the Gaza war, AI-generated content is becoming more convincing – and dangerous. In what looks to be an election year in both the US and UK, the threat these images pose to our democracy feels more tangible than ever (deepfakes of Joe Biden and Donald Trump are ubiquitous, and Rishi Sunak and Keir Starmer have already been targeted).

When we talk about the dangers of deepfakes, the people we spend the most time discussing are politicians and global celebrities. But another demographic is being targeted more than any other: social media influencers, especially women. When the social media agency Twicsy surveyed more than 22,000 influencer accounts across Twitch, TikTok, Instagram, YouTube and Twitter/X in March, it found that 84% had been the victim of deepfake porn at least once (89% of the deepfakes found were of female influencers). These weren't small accounts: each had a following in at least the five figures. And some of these deepfakes garnered more than 100 million views in just a month.

Influencers are perfect subjects for deepfake technology. They upload thousands of images and videos of themselves, often from multiple angles, in a short period of time (and you need only one high-quality image to create a convincing deepfake). They speak in a similar cadence to one another, shaped by algorithmic trends, meaning their voices can be easily imitated. They may use filters that make them appear smoother and more cyborg-like than anyone you'd encounter in real life. And there are a number of apps, such as HeyGen and ElevenLabs, that anyone can download to create a deepfake; these require users to upload only a handful of images to produce something that looks highly realistic.

Influencers are also more likely to be targeted than traditional celebrities. Images of pop stars and athletes are just as plentiful, but those public figures typically have the money and resources to sue over deepfakes; influencers, by comparison, have limited means to do anything about videos and images made using their likeness. Platforms are also more likely to respond to deepfakes of celebrities than to those of less well-known people. When a pornographic deepfake of Swift became a trending topic on Twitter earlier this year, the site blocked all searches of her name, halting its spread almost immediately. It's hard to imagine the same response to a deepfake of an influencer with only a few thousand followers.

Deepfake porn is perhaps the issue of most concern for famous women. But the technology can be put to many other nefarious purposes beyond creating humiliating sexual content. Influencers' likenesses are increasingly being used to create false advertisements selling dangerous products such as erectile dysfunction supplements, and to spread propaganda and disinformation, such as a deepfake of a Ukrainian influencer praising Russia.

In addition to using deepfakes of already popular influencers in advertisements they never consented to, we are beginning to see tech entrepreneurs build entirely new, fake influencers, generated from composites of images of multiple real media figures. These accounts are filled with highly realistic computer-generated images in which fake influencers talk about fake hobbies and share fake personality quirks while securing lucrative brand deals. Some have garnered hundreds of thousands of followers, generating thousands of dollars a month for their (often male) creators. Entrepreneurs can also concoct deepfake influencers who embody sexist stereotypes of the “perfect woman” to appeal directly to male audiences, potentially making them more popular than the real human influencers whose likenesses were used.

Of course, this affects more than just social media influencers; it threatens the livelihoods of everyone who does creative work, whether in music, art, acting or writing. Just this week, the actor Scarlett Johansson claimed that OpenAI, the company behind ChatGPT, had asked to use her voice for a chatbot and, even after she declined, imitated it anyway. OpenAI withdrew the voice, but insisted it was not an imitation of Johansson.

It is easy to accuse influencers of being shallow and attention-seeking, of promoting over-consumption and narrow standards of beauty. But this trend shows that deepfakes (and other technologies that can be used misogynistically) pose a danger to all of us, especially women. Anyone who shares an image of themselves online risks having a deepfake made of them by anyone with malicious intent and an internet connection. If there is a digital representation of you online – images (even something as mundane as your Facebook profile picture or a professional headshot on LinkedIn), videos, your voice, your written work – you are vulnerable. This reality should make clear why deepfake technology requires immediate legislation – comprehensive, far-reaching laws that address the risks deepfakes pose to all of us.

[See also: What we must learn from the infected blood scandal]
