Thursday, May 23, 2024

Deepfake Porn Targeting Celebrities to Be Investigated by Meta’s Independent Board


The rapid advancement of artificial intelligence technology has ushered in a dark new frontier – the proliferation of non-consensual deepfake pornography targeting celebrities and public figures. As AI tools become increasingly sophisticated and accessible, the battle to combat this disturbing content on social media platforms intensifies.

In a significant development, the independent Meta Oversight Board announced today that it will scrutinize two high-profile cases involving deepfake porn depicting celebrities. This decision could force the tech giant to confront the complex challenges of moderating and removing such explicit AI-generated content from its platforms.

The Board, which serves as an independent oversight body capable of issuing binding rulings and recommendations to Meta, aims to examine the company’s policies and systems for detecting and eradicating non-consensual deepfake pornography.

“Detection is not as perfect or at least is not as efficient as we would wish,” admitted Julie Owono, a member of the Oversight Board, signaling the immense difficulties platforms face in combating this rapidly evolving threat.

The two cases under review involve explicit deepfake images of an unnamed American celebrity and an unnamed Indian celebrity. In the first instance, a deepfake porn image of the American celebrity initially evaded removal from Facebook, even though the same image had already been flagged elsewhere on the platform. It was taken down only after it was added to Meta’s Media Matching Service Bank, an automated system designed to identify and purge previously flagged violative content.


However, the second case presents a more troubling scenario. A deepfake image of an Indian celebrity remained live on Instagram even after users reported it for violating Meta’s policies on pornography. The image was removed only after the Oversight Board intervened and took up the case.

Meta prohibits content that “depicts, threatens or promotes sexual violence, sexual assault or sexual exploitation,” as well as pornography and sexually explicit advertisements, on its platforms. Notably, however, the explicit celebrity deepfakes were removed under the company’s bullying and harassment policies rather than its rules on pornography.

In a blog post accompanying the announcement, Meta stated that the posts were taken down for breaching the “derogatory sexualized photoshops or drawings” clause of its bullying and harassment policy, as well as its adult nudity and sexual activity policy.

The Oversight Board’s decision to examine these cases underscores the growing urgency to address the scourge of non-consensual deepfake pornography, which overwhelmingly targets and harms women.

“Deepfake pornography is a growing cause of gender-based harassment online and is increasingly used to target, silence, and intimidate women on- and offline,” asserted Helle Thorning-Schmidt, co-chair of the Oversight Board. “Multiple studies show that deepfake pornography overwhelmingly targets women. This content can be extremely harmful for victims, and the tools used for creating it are becoming more sophisticated and accessible.”


The board has also voiced concerns over the perceived discrepancy in how Meta handled the American and Indian celebrity deepfakes, raising questions about the platform’s ability to moderate content equitably across different markets and languages.

“We know that Meta is quicker and more effective at moderating content in some markets and languages than others,” said Thorning-Schmidt. “By taking one case from the United States and one from India, we want to see if Meta is protecting all women globally in a fair way.”

The proliferation of non-consensual deepfake pornography has become a pervasive issue, with a recent Channel 4 investigation uncovering deepfakes of more than 4,000 celebrities. In January, a viral deepfake of Taylor Swift circulated widely on Facebook, Instagram, and the platform formerly known as Twitter, garnering over 45 million views on a single post.

Platforms have struggled to contain the spread of such content, often relying on fan communities to report and block offending accounts. In March, NBC News revealed that ads for a deepfake app featuring undressed images of an underage Jenna Ortega had run on Facebook and Instagram.

The issue extends far beyond American celebrities, with major Bollywood actresses like Priyanka Chopra Jonas, Alia Bhatt, and Rashmika Mandanna among the victims of deepfake pornography in India.

Research has consistently shown that non-consensual deepfake pornography disproportionately targets women. A WIRED report from last year found that 244,625 videos had been uploaded to the top 35 deepfake porn hosting sites – more than any previous year.


The technology behind deepfakes has also become increasingly accessible, requiring as little as 15 seconds of video footage, according to a 2019 VICE investigation. Just last month, a school in Beverly Hills expelled five students for creating non-consensual deepfakes of 16 of their classmates.

In response to the growing crisis, legislators have introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which would allow victims of deepfake pornography to sue if they can prove the content was created without their consent.

“Victims of non-consensual pornographic deepfakes have waited too long for federal legislation to hold perpetrators accountable,” said Congresswoman Alexandria Ocasio-Cortez, who sponsored the bill and was herself a target of deepfake pornography earlier this year.

As the Meta Oversight Board delves into these high-profile cases, the spotlight intensifies on social media platforms’ ability to effectively moderate and combat the scourge of non-consensual deepfake pornography. With AI technology advancing at a breakneck pace, the urgency to develop robust detection and removal systems has never been greater – not just to protect the privacy and dignity of celebrities, but to safeguard all individuals from this insidious form of exploitation.

Mezhar Alee
Mezhar Alee is a prolific author who provides commentary and analysis on business, finance, politics, sports, and current events on his website Opportuneist. With over a decade of experience in journalism and blogging, Mezhar aims to deliver well-researched insights and thought-provoking perspectives on important local and global issues in society.
