Journalist 'haunted' by AI deepfake porn video


A screengrab of Channel 4 News journalist Cathy Newman, who found a deepfake video of her likeness while investigating online harassment of high-profile people. (YouTube/Channel 4 News)

Veteran journalist Cathy Newman has weathered death threats, personal attacks and withering criticism in her 25-year career. But the presenter on Britain’s Channel 4 News says an AI-generated porn video featuring fake images of her was difficult to view.

“You know, I present the nightly news,” said Newman. “There’s a lot of grim stuff happening in the world and I am pretty inured to seeing awful videos. I thought I would take it with a pinch of salt and move on and carry on with the investigation. But I think I found it slightly more haunting than I expected.”

Newman discovered the 3½-minute AI-generated pornographic video while she and her team were investigating AI attacks against politicians, celebrities and other high-profile individuals.

“I watched every frame of it,” Newman told VOA. “It’s my face superimposed on someone else’s naked body. It swiftly became more disturbing.”

Newman’s case is the latest in a growing list of female journalists attacked with generative AI.

Channel 4 News presenter Cathy Newman. Credit: Channel 4 News/Peter Searle

“I was under no illusions about the potential dangers of AI,” Newman said. “It brought home to me that this is going to happen to potentially every woman and girl; everyone is at risk of this.”

Nabeelah Shabbir of the International Center for Journalists says AI is being weaponized to humiliate and discredit the work of women journalists, who are singled out for these attacks far more often than their male colleagues.

“It’s a completely malicious intent and the point is it is a gender problem,” Shabbir told VOA earlier this year.

“And a woman journalist will be getting this more than a male journalist, at least with the data we have so far and the cases we’ve looked at,” she added.

Shabbir was part of the team behind the ICFJ and UNESCO 2022 report “The Chilling: A global study of online violence against women journalists.” Among the findings in the 325-page report are that threats of sexual assault, physical violence and manipulated images are prominent in attacks.

“Many psychological traumas, stress, PTSD, anxiety, depression, that has been one of our top findings of what happens to a woman journalist when she is simply doing her job,” said Shabbir.

During a World Press Freedom Day event in Chile on May 7, UNESCO updated the findings from that report at a panel titled “Physical and digital violence against female journalists: new challenges for an old problem.”

The panelists said the situation has worsened for many female journalists, with upticks in virtual attacks, physical and sexual threats, and threats made against those close to women journalists.

Generative AI technology amplifies these online attacks.

Kiran Nazish of the Coalition for Women in Journalism says the use of AI to generate images targeting high-profile women is “happening all over the world.”

“This is one of the technologies available to anyone who wants to create misinformation,” she told VOA in an interview in January.

The Coalition is tracking women journalists who have been targeted.

Nazish said targeted journalists pay a heavy price when their work is discredited by trolls on social media and by false, unflattering photoshopped images. Generative AI, she said, takes those attacks to a new level.

“What AI is doing is putting that on steroids,” Nazish told VOA. “This is highly sophisticated altered audio and video that is fake, but difficult to detect because of advancement in technology.”

VOA has been targeted, too. A fake video pushing cryptocurrency circulated last year of VOA Russian Service journalist Ksenia Turkova.

“It becomes a security issue and it becomes a visibility issue for those women journalists to be able to go back on air or to report,” Nazish said.

For the British presenter Newman, the images were dehumanizing — and clearly showed her likeness. “It’s a kind of violation on multiple levels,” Newman said. “It is the most intimate and graphic activity that I have not consented to.”

Newman didn’t allow the AI attack to stop her from going on the air. She included her case in a report she aired in March. Not long after Newman’s investigation, the British government announced that it was banning the creation of AI deepfake porn.

But Newman wonders how the government will track down the makers of these digital assaults. During her investigation, she and her team exposed the five most popular deepfake sites. These sites, she said, received 100 million views and featured 4,000 individuals, 250 of whom are high-profile women.

“The people responsible for this are the people making money out of it,” said Newman. “The people who can stop this stuff is big tech, who are making money out of victimizing women and girls.”

Cecilia D'Anastasio of Bloomberg recently reported that Google, which accounts for some 80-90% of U.S. search queries, has lowered deepfake porn sites in search rankings following months of lobbying by victims, but that federal prohibition remains elusive despite some state-level bans. The Associated Press has reported that numerous big tech companies have also tried to ban deepfakes from their platforms, but keeping them off requires diligence.

Newman was able to identify the pornographic website that hosted the AI-generated video of her. But, she says, they weren’t able to track down who made the video or who profited from it.
