Academics and researchers are addressing the evolving challenges posed by deepfake technology.

Over 95% of deepfakes are pornographic. In one notable example, an explicit deepfake image of Taylor Swift circulated online earlier this year. The “photo of Swift was reportedly viewed 47 million times before it was removed.”

As digital technology evolves, so do the risks posed by deepfakes. The term “deepfake,” derived from “deep learning” and “fake,” refers to a believable digital manipulation that superimposes an individual's face or body onto an existing image or video without their consent.

This new form of “image-based sexual abuse” poses unprecedented challenges. In 2021, the United Nations declared this form of violence against women and girls a “shadow pandemic.”

As deepfake technology evolves rapidly, current laws are struggling to keep up. Although some jurisdictions recognize the non-consensual distribution of intimate images as a criminal offense, the specific phenomenon of deepfakes often goes unpoliced.

Additionally, traditional legal frameworks designed to address privacy and copyright violations lack the nuance to combat deepfake abuse effectively. The use of deepfake technology violates privacy, causes significant psychological harm to victims, damages reputations, and fosters a culture of sexual violence.

Reform advocates argue that current law needs to be expanded to explicitly include deepfakes under the umbrella of “image-based sexual abuse.” Such reforms would recognize the creation and distribution of deepfakes as a distinct form of abuse that undermines the sexual autonomy and dignity of individuals. To combat deepfake abuse, experts recommend a multifaceted approach, including strengthening victim support services, raising public awareness of the impact of deepfakes, and fostering cooperation among technology companies, legal experts, and law enforcement.

Additionally, reform advocates are pushing social media platforms and content distribution networks to adopt stricter procedures for detecting and removing deepfake content, and are calling for digital literacy efforts that help individuals safely navigate the complexities of online spaces.

Navigating the complex landscape of deepfake regulation, however, presents significant challenges and requires a nuanced approach that balances privacy protection and freedom of expression against the need to combat online abuse and exploitation. The global nature of the internet, for example, allows deepfake content to cross national borders, complicating law enforcement. Human rights advocates point to the need for international cooperation and harmonization of laws to protect victims across borders.

In this week's Saturday Seminar, researchers and scholars examine the current state of deepfakes and sexual violence, as well as attempts to regulate this new technology.

  • Non-consensual deepfakes are an “immediate threat” to both private individuals and public figures, judicial clerk Benjamin Suslavich argues in an article in the Albany Law Journal of Science & Technology. Suslavich points out that deepfake technology can produce lifelike videos of a subject from just a single image, a capability often exploited to create non-consensual pornographic content. He argues that current legal protections are insufficient to provide redress to victims and calls for a legislative and regulatory framework that would allow individuals to reclaim their identity on the internet. Specifically, Suslavich recommends reducing the legal protections of internet service providers, which currently enjoy blanket immunity, when they fail to remove identified non-consensual pornographic deepfakes promptly.
  • In an article in the New Journal of European Criminal Law, Carlotta Rigotti of Leiden University and Clare McGlynn of Durham University discuss the European Commission's proposed directive, a “groundbreaking” move to combat “image-based sexual abuse” by criminalizing the non-consensual distribution of intimate images. Rigotti and McGlynn explain that this form of abuse involves creating, filming, sharing, and manipulating intimate images and videos without consent. They describe the Commission's proposal as ambitious but criticize its narrow scope of protection. To better protect women and girls, Rigotti and McGlynn urge the Commission to remove restrictive language from the proposal, adopt broader terminology that encompasses the evolving technological landscape, and revise its approach to online violence.
  • Deepfake porn may constitute a form of image-based sexual abuse, claims practitioner Chidera Okolie in an article in the Journal of International Women's Studies. Like other legally recognized forms of sexual abuse, deepfake porn causes psychological and reputational damage to victims, Okolie stresses. Although many countries have moved to regulate deepfake porn, Okolie criticizes recently enacted laws as being too broad and sweeping in too much legitimate content. To address this ambiguity, Okolie suggests that lawmakers enact legislation targeting the techniques and practices specific to deepfake pornography. She also calls on governments to enforce laws already in place to protect victims of sexual violence.
  • A collective international effort is needed to combat the global spread of deepfake pornography, argues practitioner Yi Yang in an article in the Brooklyn Journal of International Law. Yang argues that efforts to regulate deepfakes on an international scale remain ineffective because existing regulations are fragmented. Instead, Yang contends, states should rely on extraterritorial jurisdiction and cooperation between nation-states to target deepfake technology. As a first step, Yang suggests that countries adopt language in international law that explicitly criminalizes AI-generated revenge porn, an issue on which international law is currently silent.
  • Rather than relying on a patchwork of state laws, legislators should implement federal laws that penalize technology-facilitated sexual abuse, suggests Kweilin T. Lucas of Mars Hill University in a paper. Lucas points out that even though most states have enacted laws to curb non-consensual pornography, deepfakes escape existing regulations because the victim's own nudity does not appear in such videos, and because creators of deepfake porn who do not intend to harm or harass victims can avoid punishment under existing state revenge porn laws. To prevent people's images from being manipulated, Lucas suggests that federal law should punish the non-consensual publication of deepfakes that humiliate or harass victims or incite violence.
  • In an article in the British Journal of Criminology, Monash University's Asher Flynn and several co-authors interview survivors of online image-based abuse to identify whether specific groups are targeted for exploitation. Flynn's team investigates the harm that the spread of non-consensual sexual images inflicts on specific groups. Flynn and co-authors found that individuals with mobility needs, members of the LGBT+ community, and racial minorities were more vulnerable to image-based abuse. Victims reported experiencing severe trauma and significant changes to their lives, including limiting their online and public interactions, the team notes. Flynn and co-authors suggest that efforts to prevent image-based sexual violence must consider factors such as racism, ableism, and heterosexism to better protect groups that are disproportionately targeted.


