Empower Change with Open Letters.

Sign public letters that rally people around the world’s most critical challenges. Signatures appear immediately and then undergo peer review to ensure authenticity.

Disrupting the Deepfake Supply Chain

A call for new laws and regulations to protect everyone from the harms of deepfakes

February 21, 2024


1,565 signatures

Sign this letter

Context: Many experts have warned that artificial intelligence (“AI”) could cause significant harm to humanity if not handled responsibly[1][2][3][4]. The impact of AI is compounded significantly by its ability to imitate real humans. In the statement below, “deepfakes” refers to non-consensual or grossly misleading AI-generated voices, images, or videos that a reasonable person would mistake for real. This does not include slight alterations to an image or voice, nor innocuous entertainment or satire that is easily recognized as synthetic. Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed to protect the functioning and integrity of our digital infrastructure:

Statement


Deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes. New laws should:

  1. Fully criminalize deepfake child pornography, even when only fictional children are depicted;

  2. Establish criminal penalties for anyone who knowingly creates or knowingly facilitates the spread of harmful deepfakes; and

  3. Require software developers and distributors to prevent their audio and visual products from creating harmful deepfakes, and hold them liable if their preventive measures are too easily circumvented.

If designed wisely, such laws could nurture socially responsible businesses, and would not need to be excessively burdensome.

Reasons: Not all signers will have the same reasons for supporting the statement above, and they may not all agree on the content below. Nonetheless, at least some early signers were motivated by one or more of the following points:

  • Nonconsensual pornography: AI-generated pornography is a rapidly growing industry[5][6][7][8], and many targets are minors[9][10][11]. One report[12] found that deepfake pornography makes up 98% of all deepfake videos online, following a 400% increase in deepfake sexual content from 2022 to 2023 and monthly traffic exceeding 34 million in 2023, with 99% of those targeted being women. This follows a pre-existing trend in technology-facilitated gender-based violence: 58% of young women and girls globally have experienced online harassment on social media platforms[13], with disproportionate impact on the basis of gender, race, ethnicity, sexual orientation, religion, and other factors[12][13].

  • Fraud: Deepfake impersonation and identity theft threaten both individuals and businesses. AI can make convincing deepfake videos of private individuals from as little as one photo[14]. Deepfake fraud reportedly increased by 3,000% in 2023[15].

  • Elections: With half of the world’s population facing elections soon, the widespread creation and proliferation of deepfakes is a growing threat to democratic processes around the world[16]. True-to-life deepfakes of celebrities[17][18][19] and political figures[20][21] are already spreading rapidly.

  • Practicality: On a positive note, it is possible for cameras to generate tamper-proof digital seals on unaltered photographs and videos of the real world, using cryptographic signature techniques similar to website certificates and login credentials. If broadly employed, these seals would allow anyone to use open-source authentication apps to verify that a properly signed photo or video is authentic[22][23] (see the sketch after this list). Device manufacturers, software developers, and media companies should work together to popularize these or similar content authentication methods.

  • Urgency: Unprecedented AI progress is making deepfake creation fast, cheap, and easy. The total number of deepfakes has grown by 550% from 2019 to 2023[24].

  • Inadequate laws: Current laws do not adequately target and limit deepfake production and dissemination[5], and even requirements imposed on creators, who are often underage, are ineffective. The whole deepfake supply chain should be held accountable, just as it is for malware and child pornography.

  • Mass confusion: For a modern society to function, people need access to believable, authentic information. Using AI to mislead the public should be restricted through specific, formalized laws, and those laws should be enforced. It is becoming harder and harder to know what is real on the internet, and lines need to be drawn to protect our ability to recognize real human beings.

  • Performers: As audience members, we delight in the feats of real human performers in dance, film, magic, music, sports, and theater. If broadcast entertainment becomes saturated with deepfakes, the connection between audience and performers will erode, and deepfakes will unfairly displace the people whose works were used to “train” AI in the first place.
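
The letter does not prescribe a particular sealing scheme, but the Practicality point above can be made concrete with a minimal sketch: sign a photo’s SHA-256 digest with an Ed25519 private key at capture time, and let anyone verify the seal later with the matching public key. The key objects, the stand-in photo bytes, and the is_authentic helper below are illustrative assumptions, not a description of any specific camera or authentication app. It requires the third-party “cryptography” package.

```python
# Minimal sketch (illustrative only): a "digital seal" as a cryptographic
# signature over a photo's hash, verified later with the signer's public key.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key would live in the camera's secure hardware;
# here we simply generate one for demonstration.
camera_key = Ed25519PrivateKey.generate()
camera_public_key = camera_key.public_key()

# Stand-in for the raw bytes of a captured photo.
photo_bytes = b"...raw image bytes captured by the camera..."

# Seal at capture time: sign the SHA-256 digest of the unaltered photo.
seal = camera_key.sign(hashlib.sha256(photo_bytes).digest())

def is_authentic(photo: bytes, seal: bytes) -> bool:
    """Return True only if the seal matches this exact sequence of bytes."""
    try:
        camera_public_key.verify(seal, hashlib.sha256(photo).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(photo_bytes, seal))          # True: photo is unaltered
print(is_authentic(photo_bytes + b"x", seal))   # False: any edit breaks the seal
```

Real-world provenance schemes such as C2PA go further, embedding signed metadata in the media file itself, but the core property is the same: altering even a single byte invalidates the seal.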

How our peer verification works

Why Every Signature Counts.

Signers help verify each other — so every name on our letters earns its place.

01

Sign with Accountability

We ask for your name and verify your email and location. You can also link your X (formerly Twitter) account to further establish your authenticity.

02

Evaluate Your Peers

After signing, you’ll help verify other signers: Are they real? Are they notable?

03

Be Reviewed in Turn

Just like you’re evaluating others, they’re evaluating you. That’s how we keep the system honest.

04

Reviews Strengthen the Network

Each time you review a peer, your input helps strengthen the letter’s integrity. Behind the scenes, we use proven models to identify trustworthy voices, such as pairwise ranking for notability and trust graphs for authenticity (a simplified sketch follows these steps).

05

The Verified List Is Public

Signers appear on the public list immediately; subsequent peer reviews then improve each signer’s trust ranking.
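
We don’t publish the internals of our ranking models, so the following is only a rough sketch of what “pairwise ranking for notability” could look like, using Elo-style updates. The signer names, the starting score of 1500, and the K factor are arbitrary assumptions chosen for illustration.

```python
# Illustrative sketch of pairwise ranking (Elo-style), not OpenLetter.net's
# actual model. Each peer review says which of two signers seems more notable.
from collections import defaultdict

ratings = defaultdict(lambda: 1500.0)  # every signer starts at the same score
K = 32.0                               # update step size (assumed)

def record_comparison(more_notable: str, less_notable: str) -> None:
    """Nudge two signers' scores after one pairwise review."""
    # Expected probability that the preferred signer would win, given current scores.
    expected = 1.0 / (1.0 + 10 ** ((ratings[less_notable] - ratings[more_notable]) / 400.0))
    ratings[more_notable] += K * (1.0 - expected)
    ratings[less_notable] -= K * (1.0 - expected)

# A few hypothetical peer reviews:
record_comparison("signer_a", "signer_b")
record_comparison("signer_a", "signer_c")
record_comparison("signer_b", "signer_c")

# Higher score = judged more notable by peers so far.
for name, score in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 1))
```

A trust graph for authenticity could analogously treat reviews as signers vouching for one another, with trust propagating along the graph’s edges; again, this is a conceptual sketch rather than a description of our production system.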

Why we exist

Built for the Public Good.

OpenLetter.net is a project by Survival & Flourishing Corp, a Public Benefit Corporation funded by Jaan Tallinn. This site exists to help humanity coordinate on issues that matter most—nothing more, nothing less.

Survival & Flourishing Corp

Why verification matters

Restoring Signal to Public Petitions.

In a world flooded with bots, fake accounts, and performative outrage, OpenLetter.net restores signal to public petitions. Our goal is for every name to undergo peer review to ensure authenticity and strengthen the impact of our letters.

About Us

Have a Letter that Matters?

We focus on a small number of global-scale letters at a time to ensure maximum reach and integrity. If you have a cause worthy of collective attention, send us an email.

partnerships@openletter.net