Opinion | The threat from deepfakes isn’t hypothetical. Women feel it every day.

Nina Jankowicz, the author of “How to Lose the Information War: Russia, Fake News, and the Future of Conflict,” studies disinformation at the Wilson Center.

A few weeks ago, a digital artist caused a stir by distributing a set of videos that appeared to show Tom Cruise cavorting on TikTok. The videos were actually “deepfakes”: footage digitally manipulated to look eerily authentic. It’s only a matter of time, we’re told, until deepfakes start wreaking havoc in the digital ecosystem by extending disinformation to the realm of moving images.

For one group, though, that havoc arrived long ago: Women have been enduring the trauma of deepfakes for years. Though women make up about half the world’s population, the ways that deepfakes and other online disinformation target them are routinely overlooked. The online abuse that women endure has already been weaponized by foreign adversaries seeking to influence and further polarize us. Unless we change the way we conceive of this problem, the first successful foreign deepfake will likely target an American woman. We cannot allow these campaigns to continue; women’s participation in our representative democracy is at stake.


As you read this, deepfakes and other online disinformation are being deployed against women to silence and humiliate them. The technology can take an image of a woman’s face and credibly superimpose it on a porn actress’s body. The most advanced deepfakes already generate images that look absolutely convincing.

We aren’t talking about shoddy Photoshop jobs. These images look real, and to the untrained eye, they may as well be. In 2018, investigative journalist and Post contributing columnist Rana Ayyub was targeted with a deepfake porn video intended to silence her. The video spread to millions of cellphones in India; Ayyub landed in the hospital with anxiety and heart palpitations; and neither the Indian government nor the social media networks acted until special rapporteurs from the United Nations intervened and called for her protection.

In South Korea, hundreds of thousands of citizens have signed a petition asking the president to stop the deepfake porn images targeting famous Korean women. But even less-convincing manipulated photos and videos can prove damaging. In 2017, I interviewed Svitlana Zalishchuk, then a member of parliament in Ukraine. Her public image had been tarnished by “cheap fakes”: poorly edited photos that sexualized and demeaned her and attempted to drive her out of public life. The campaign against Zalishchuk was almost certainly a Kremlin operation.


These attacks are not isolated incidents. Research by Henry Ajder and Giorgio Patrini has demonstrated that the vast majority of existing deepfakes depict women in nonconsensual pornography. Last year, Ajder also uncovered a Telegram channel through which users paid $1.25 to generate nude images of more than 680,000 women, mostly in Eastern Europe and Russia, without their knowledge or consent.


Closer to home, crudely edited images and false sexualized narratives targeting Democratic vice-presidential nominee Kamala D. Harris proliferated on social media during the 2020 campaign. That the images were unconvincing did not stop social media users from sharing them or related narratives. A research team I led found more than 250,000 instances of such abuse across six social media platforms in the two months leading up to Election Day.

Unless leaders in national security, technology and media reframe this discussion to focus on its current victims, we risk being caught unprepared, as we were by Russian disinformation in 2016, when the deepfake threat is fully unleashed on American information consumers. The consistent appeal of misogyny, which has endured for millennia and makes the Internet a dangerous place for women, means that women who speak up about online abuse are written off as emotional. We are told to suck it up, that this is simply part of being in the public eye.


But the harassment we face (questioning our fitness for our careers, policing our appearance and tone of voice) — along with the looming threat that one day we will unwillingly star in deepfake pornography — changes how we engage in discourse online and offline. Downstream, millions of our female peers, watching as social media platforms allow the abuse to stand and as governments discuss the threats as though they do not already affect us, reconsider how or whether they want to engage in public life at all.

Rather than attempt to solve the hypothetical problems of tomorrow, we must work to remedy the damage that deepfakes and gendered disinformation are causing today. When we establish regulations to crack down on nonconsensual deepfake pornography, when technology companies focus their resources on protecting women from the outsize burden of abuse they bear, and when we raise awareness of the threat as it stands, we are not only showing solidarity with the women affected; we are also securing them a more equitable place in public discourse.

On this issue of paramount importance to our national security and our democracy, let’s, for once, listen to women. Keep us at the core of the challenge. Let us lead.

