Deepfake Laws: Global Overview and Emerging Regulations


How do you even know what’s real online today? The line between reality and deception is getting thinner than ever. AI-generated deepfakes are everywhere, convincingly mimicking people’s appearance, voice, and actions. By spreading false information, deepfakes distort reality and affect critical areas of society, from elections to personal lives.

But in 2025, governments around the world have had enough, because the sheer scale of the threat has outpaced technology. Deepfake incidents surged by 257% in 2024, and the first quarter of 2025 alone saw 19% more incidents than the entire previous year. In response, governments are finally moving from debate to decisive legal action to tackle AI-generated deception.

In this article, we will analyze the deepfake laws and regulations that are being adopted globally, why they matter, and how new rules aim to protect society at large.

What are Deepfake Laws?

A “deepfake law” isn’t a single rule. Rather, it’s a patchwork of legal frameworks designed to address specific harms caused by AI-generated synthetic media – visuals or audio manipulated to impersonate real people or events. The key areas of harm that deepfake laws target are:

  • Identity fraud and likeness misuse – blocking the non-consensual use of a person’s image or voice.
  • Misinformation and election interference – mandating disclosure for AI content that could deceive voters.
  • Non-consensual intimate imagery (NCII) – criminalizing the creation and sharing of sexual deepfakes.

Put simply, these laws make it illegal to create or distribute AI-altered content with the intent to deceive or harm.

For example, back in 2019 Texas became the first US state to prohibit the creation and distribution of deepfake videos intended to harm candidates for public office or influence elections. The law defines a “deep fake video” as any video “created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.”

In a nutshell, deepfake laws require AI-created content to be labeled so people know it’s fake, and they update existing rules about fraud, harassment, and election tampering to cover digital fakes. It’s important to note, though, that these laws try to stop harmful deepfakes while still protecting free speech: for example, jokes and parodies are usually allowed if it’s clear they’re not real.

Why Deepfake Legislation Matters

Deepfake medium distribution: video 46%, image 32%, audio 22%

Source

The proliferation of convincing deepfakes poses a new type of risk: more than harm to individuals, it is an existential threat to the way people perceive truth itself. Unlike obvious photoshops or crude frauds, today’s AI fakes can be scarily believable, threatening to erode fundamental trust in what we see and hear.

Let’s zoom in on the key concerns that the new deepfake regulations address:

Threats to democracy and trust

Spewing misinformation and lies, deepfakes can sway public opinion or incite unrest. For example, a fabricated video of a political candidate making a scandalous statement, like declaring war, released just before an election, can cause irreversible damage. Without regulation, AI fakes might manipulate campaign messages or create false scandals that undermine free elections. In particular, deepfakes can irreversibly erode public trust in social media and the news.

Targeted harassment and privacy abuses

One of the most widespread uses of deepfakes is to sexually exploit and harass women. Did you know that a staggering 96% of deepfake videos online are non-consensual porn – nearly all featuring women as victims? This is a horrifying new form of gender-based abuse that uses AI to strip women of privacy and dignity.

Most of the time, for the sake of hype, the victims are celebrities and other public figures. Yet ordinary people can fall prey to this trend too, and they often find it nearly impossible to get these fake explicit videos removed from the internet. That’s why deepfake laws are crucial: they give victims legal recourse and classify such acts clearly as crimes.

Data protection regulators note that a person’s facial image or voiceprint is sensitive biometric data, and using it in deepfakes may violate privacy laws without explicit consent.

Corporate exposure and financial scams

Businesses face massive financial risks. According to Deloitte, generative AI fraud in the US alone is expected to hit $40 billion by 2027. And the situation is grim everywhere else. For example, a Hong Kong-based company in 2024 lost a staggering $25 million when an employee was tricked by a deepfake voice of an executive into making 15 fraudulent transfers.

What’s more, businesses can be exposed to brand impersonation: a deepfake of a company’s founder making false claims could tank stock prices or damage reputations overnight. In May 2023, for example, an AI-generated fake photo of an explosion at the Pentagon went viral and even caused a brief stock market dip.

Undermining national security

Deepfakes also pose a threat to countries’ national security, because foreign adversaries can use them to spread malicious propaganda and confuse citizens. Because manipulated videos can be a potent weapon to stir social unrest or incite a diplomatic conflict, adopting laws and regulations is seen as a defense against such threats. 

Deepfake Laws in the United States

Generative AI fraud in the US is expected to hit $40 billion by 2027

Source

The US approach is a patchwork of state-level laws, with federal efforts still evolving. As of mid-2025, over 45 states have enacted some form of deepfake legislation.

Tennessee replaced its Personal Rights Protection Act with the Ensuring Likeness, Voice and Image Security Act of 2024 (ELVIS Act), which explicitly grants every individual a property right in the use of their name, photograph, voice, or likeness.

States like California, Texas, Virginia, and New York have laws specifically targeting deepfakes in the context of elections and non-consensual pornography, often allowing civil lawsuits for damages and injunctive relief. In particular, these state laws dictate that a video qualifies as a prohibited deepfake if it is so realistic that a reasonable person would believe it depicts the identifiable individual engaging in a sexual act.

When it comes to federal laws, the TAKE IT DOWN Act (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act) was enacted on May 19, 2025, after President Trump signed it into law. The Act criminalizes the knowing publication or threat to publish NCII, including both authentic and AI-generated deepfakes. 

Crucially, the Take It Down Act requires covered online platforms, including social media, to establish a process for victims to report NCII and to remove the content within 48 hours of a valid notice. Enforcement is handled by the Federal Trade Commission (FTC).
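To make the 48-hour obligation concrete in engineering terms, here is a minimal, hypothetical Python sketch of how a platform might track a takedown report and its removal deadline. The class, field names, and deadline logic are illustrative assumptions only; the Act does not prescribe any particular data model.

```python
# Minimal sketch of tracking a Take It Down Act-style NCII report: each valid
# notice gets a removal deadline 48 hours after receipt. All names and fields
# here are illustrative assumptions, not anything the statute prescribes.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class NciiReport:
    content_url: str
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    removed_at: Optional[datetime] = None

    @property
    def removal_deadline(self) -> datetime:
        """Latest time by which the reported content must be taken down."""
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: Optional[datetime] = None) -> bool:
        """True if the reported content is still up past its deadline."""
        now = now or datetime.now(timezone.utc)
        return self.removed_at is None and now > self.removal_deadline

if __name__ == "__main__":
    report = NciiReport(content_url="https://example.com/post/123")
    print("Remove by:", report.removal_deadline.isoformat())
```

A real compliance system would also log the notice, notify moderators, and record the removal action for auditing; the sketch only shows the deadline bookkeeping the 48-hour rule implies.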

There are also a few active federal proposals under consideration:

  • The DEEPFAKES Accountability Act aims to protect national security against the threats posed by deepfake technology and to provide legal recourse to victims of harmful deepfakes.
  • The DEFIANCE Act (Disrupt Explicit Forged Images and Non-consensual Edits) and the Protect Elections from Deceptive AI Act aim to create a federal civil cause of action for victims of sexual deepfakes and ban deceptive election-related deepfakes, respectively.
  • The AI Labeling Act aims to require a clear disclosure whenever content is AI-generated, i.e., compelling developers of generative AI (and those who publish AI content) to include obvious labels or watermarks on AI-created images, videos, deepfake audio, and even chat interactions.

Deepfake Regulations in the European Union

The European Union is tackling AI and deepfakes through multiple laws rather than one single regulation. The EU uses the AI Act, Digital Services Act, and GDPR to create a comprehensive approach focused on transparency and accountability for AI-generated content.

The EU AI Act

This regulation entered into force in 2024, with its obligations phasing in through 2026, and it’s central to the EU’s strategy. The Act imposes a binding transparency requirement (Article 50), requiring deployers (users) of AI systems that generate deepfakes to disclose (label) that the content has been artificially generated or manipulated. Fines for the most serious violations can reach up to 7% of a company’s global annual turnover.

For example, if a company uses an AI voice clone of a celebrity to narrate an advertisement, they must clearly state that the audio is synthetic. However, there are exceptions for evidently artistic, creative, satirical, or fictional content, allowing for a balance between freedom of expression and the need for regulation.

Digital Services Act (DSA)

The DSA fights deepfakes by making platforms accountable for harmful content. Major platforms like Facebook, YouTube, and X must identify and reduce “systemic risks”, including fake news and manipulated media, especially during elections. This law doesn’t require labeling every deepfake (the AI Act handles that), but it forces platforms to notify users, quickly remove illegal deepfakes, and adjust algorithms to stop deepfakes. 

Platforms face large fines if they don’t promptly remove illegal deepfakes like defamation or hate speech. The DSA also includes a voluntary Code of Practice where platforms agree to label deepfakes and work with fact-checkers. Overall, the DSA makes both creators and platforms handle deepfakes responsibly through labeling, moderation, and risk checks.

GDPR

Even though the EU’s GDPR doesn’t specifically mention deepfakes, it still offers important protections. When a deepfake uses someone’s face or voice, that counts as sensitive personal data under GDPR. Using this data without consent, like creating a sexual deepfake, violates GDPR’s privacy rules. EU citizens can use GDPR’s “right to erasure” to demand the removal of unauthorized deepfakes showing them. Companies that misuse someone’s likeness for AI can face GDPR fines. So, while GDPR isn’t a deepfake law, it gives Europeans privacy rights to combat harmful deepfakes by treating them as a misuse of personal data.

Deepfake Rules in the United Kingdom

Naturally, the post-Brexit UK is not under the EU regulations, but it has been developing its own approach to deepfakes through both new laws and regulatory guidance. 

Online Safety Act (OSA)

The OSA targets harmful AI-generated content and deepfakes in particular by criminalizing non-consensual deepfake pornography, with penalties up to 2 years in prison. It requires social media platforms to assess deepfake risks, proactively protect users, and prevent harmful synthetic content from spreading, like fake threats or otherwise misleading content. Platforms and websites that fail to comply face fines up to £18 million or 10% of global revenue and could even be blocked in the UK.

ICO (Information Commissioner’s Office) guidance

The UK’s data regulator (ICO) treats deepfakes as a privacy issue under the UK’s GDPR. Using someone’s image or voice in a deepfake without consent likely violates data protection laws on fairness and accuracy. The ICO has published AI governance guidance and is consulting on Generative AI rules. The UK is also developing a voluntary AI Code of Practice encouraging labeling and watermarking of AI-generated content.

Overall, the UK emphasizes safety by design, requiring platforms to adopt deepfake detection and user-reporting tools while ensuring AI respects privacy and accuracy.

Deepfake Legislation in Asia-Pacific

In the APAC region, deepfake-related fraud increased by 1,530% from 2022 to 2023, and 88% of those incidents targeted the crypto sector

Source

The APAC region features some of the most stringent and technically prescriptive deepfake regulations globally. The laws in this region prioritize government control and content traceability.

China

The country’s “deep synthesis” regulations require that AI content, like deepfake images and video, be clearly labeled with watermarks. The rules also forbid the use of deepfakes for illicit purposes such as fraud or economic disruption. Platforms must verify users’ real identities to prevent anonymous misuse. Deepfakes that harm national security or reputation are banned, with penalties including fines and detention. China’s approach is strict, focusing on controlling misinformation while requiring content moderation and labeling of all AI output.

Singapore

Singapore uses two laws to fight deepfakes. First, the Protection from Online Falsehoods and Manipulation Act (POFMA) forces platforms to label or remove false deepfake content, especially regarding elections or security, with fines up to $1 million for non-compliance. Second, 2020 amendments to Singapore’s Penal Code criminalized “synthetic intimate images”, i.e., non-consensual deepfake porn, with penalties of up to two years’ imprisonment and fines. Together, these laws address both public harms (fake news) and individual harms (pornographic deepfakes).

South Korea 

After a recent surge in deepfake pornography, South Korea strengthened deepfake porn laws in 2023, raising maximum prison sentences from five to seven years. Uniquely, even possessing or watching non-consensual deepfake porn is illegal (up to three years in jail or ₩30 million fine). The law includes mandatory minimums: one year for blackmail using deepfakes, three years for creating sexual deepfakes for distribution. The government must also help victims remove deepfake content online. South Korea has one of the world’s toughest approaches, treating deepfake porn as a serious digital sex crime.

To sum up, the Asia-Pacific legislation has a two-pronged strategy. First, strong criminalization of deepfake sexual exploitation to protect individual victims. Second, regulation of AI misinformation to protect societal interests.

Deepfake Governance in Other Regions

The volume of deepfake content is projected to increase by 900% annually

Source

In many other parts of the world, deepfake laws are also being rapidly developed. Let’s briefly survey Canada, Australia, and the Middle East, noting how each balances innovation against regulation.

Canada

Canada has no specific deepfake law as of 2025, but existing laws offer partial protection. For example, the Criminal Code bans sharing non-consensual “intimate images” (up to 5 years in prison), which could apply to deepfakes. Some provinces, like British Columbia, allow victims to sue and get removal orders. However, there’s a gap: non-sexual deepfakes (like defamatory videos) may not be clearly covered. The government introduced Bill C-63 (Online Harms Act) in 2024 to address harmful online content, including deepfakes, and is studying AI transparency rules. 

Currently, victims rely on general laws like defamation, impersonation, or intellectual property rights. Experts say Canada urgently needs deepfake legislation, but lawmakers are taking a cautious approach to balance protection with AI innovation.

Australia 

In 2024, Australia passed the Criminal Code Amendment (Deepfake Sexual Material) Act, which made it a federal crime (with up to 6 years’ imprisonment) to create or share realistic fake intimate images without consent. And Australia’s eSafety Commissioner, under the Online Safety Act, can issue takedown notices to websites hosting non-consensual deepfakes, although enforcement is challenging for content hosted abroad.

Australia’s regulator has pushed social media to label deepfake content, especially during elections, and considered laws against deepfake election misinformation. Unlike the EU, Australia hasn’t mandated labeling of all AI content, instead issuing voluntary “best practice” guidance on watermarking and authentication tools. Australia’s approach balances targeted laws for serious harms with industry self-regulation to avoid over-regulating tech companies.

The Middle East 

Middle Eastern countries use existing cybercrime and media laws to address deepfakes while developing new AI initiatives. The UAE has no specific deepfake law but can prosecute malicious deepfakes as false news or fraud under its cybercrime law. In 2021, the UAE published a “Deepfake Guide” to educate the public on identifying and reporting deepfakes, emphasizing awareness over broad bans. 

Saudi Arabia’s Anti-Cybercrime Law covers deepfakes that threaten public order or spread misinformation. In 2023, Saudi authorities released AI Ethical Principles and consulted the public on deepfake regulations. Using deepfakes in false advertising is already a criminal offense in Saudi Arabia, punishable by fines or jail. Both countries aim for “smart governance” balancing detection technology, public education, and law enforcement without stifling innovation. As deepfake incidents surge (up 600% in Saudi Arabia in early 2024), these governments are preparing stronger responses.

In all these regions, we see a common theme – finding the line between protection and progress. Policymakers don’t want to hamper the positive uses of AI in film, education, business, etc., but they recognize that without rules, the “worst actors” will cause outsized damage.

What Deepfake Laws Mean for Businesses

For any company operating internationally, deepfake laws are a legal compliance obligation. The message is clear: businesses must get proactive about AI content governance by amending their policies and practices. Here are some implications for businesses to consider:

  • Content labeling. Companies must implement AI content labeling mechanisms, such as content disclaimers or watermarks, for all publicly facing synthetic media to comply with regulations like the EU’s AI Act and China’s Deep Synthesis Rules (see the code sketch after this list).
    EXAMPLE: An online retailer using an AI model to create product review videos should include a caption like “This video is AI-generated” to stay ahead of regulations.
  • Consent policies and identity checks. Many deepfake laws require consent to use someone’s face, voice, or persona in AI content. Companies must get explicit written consent before using people’s images or voices, and verify that users have rights to uploaded content. Some countries (like China) require ID verification for deepfake apps. That’s why platforms should consider identity checks to prevent anonymous abuse of their AI services.
    EXAMPLE: A platform offering AI face-swap tools should double-check if the user actually owns the source images or has rights, to prevent misuse of strangers’ photos.
  • Employee awareness. Employees must be aware of deepfake phishing scams, like fraudulent voice calls requesting wire transfers, and of their responsibilities when using generative AI tools for work. That’s why security training should cover deepfakes, teaching employees to verify unusual requests through secondary channels and be skeptical of urgent messages. What’s more, companies should update phishing protocols for AI-generated fraud and create AI usage policies that prohibit using company resources to create abusive deepfakes.
    EXAMPLE: A company publishes a clear AI usage policy that spells out what is acceptable, like using AI for creative mockups, versus what is prohibited, like AI to impersonate someone or create explicit content. 
  • Content moderation and incident response plans. Companies should prepare deepfake incident response plans, similar to data breach protocols. This includes training moderation teams and using AI detection software to spot deepfakes on platforms targeting their brand. Businesses should monitor for fake audio or video of executives used in scams and be ready to quickly deny and debunk damaging deepfakes. Under laws like the EU’s DSA, large platforms must filter deepfake disinformation, making robust content moderation essential for brand protection.
    EXAMPLE: A bank should monitor for fake audio of its executives that scammers might use in phishing calls. 
  • Policy engagement and future-proofing. Businesses should monitor evolving deepfake laws and anticipate compliance requirements. Legal counsel should track new regulations, like advertising standards on AI labeling. Companies can join industry consortia on content authentication to help develop technologies that regulators endorse, shaping reasonable regulations while gaining consumer trust.
    EXAMPLE: An advertising firm should track if the US’s FTC or UK’s ASA (Advertising Standards) issue rules on labeling AI ads.
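To illustrate the content labeling point above in code, here is a minimal, hypothetical Python sketch that stamps a visible “AI-generated” disclosure onto an image and writes a small machine-readable provenance sidecar. It assumes the Pillow library is available; the label text, file names, and metadata fields are illustrative choices, not a mandated compliance format.

```python
# Minimal sketch: visibly label an AI-generated image and record provenance.
# Assumes Pillow is installed (pip install Pillow); file names and metadata
# fields are illustrative only, not a prescribed compliance format.
import json
from datetime import datetime, timezone

from PIL import Image, ImageDraw

def label_ai_image(src_path: str, dst_path: str, note: str = "AI-generated") -> None:
    """Add a visible disclosure in the image corner and save a JSON sidecar."""
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)

    # Draw a simple dark banner in the bottom-left corner with the disclosure text.
    draw.rectangle([0, img.height - 40, 220, img.height], fill=(0, 0, 0))
    draw.text((10, img.height - 30), note, fill=(255, 255, 255))
    img.save(dst_path)

    # Write a machine-readable provenance record next to the labeled file.
    sidecar = {
        "source_file": src_path,
        "labeled_file": dst_path,
        "disclosure": note,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(dst_path + ".provenance.json", "w", encoding="utf-8") as f:
        json.dump(sidecar, f, indent=2)

if __name__ == "__main__":
    label_ai_image("ad_visual.png", "ad_visual_labeled.png")
```

In practice, a visible label like this would typically be paired with platform-specific disclosure fields and emerging provenance standards such as C2PA content credentials.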

In essence, deepfake laws mean that ignorance is no excuse for businesses when it comes to AI content.

Legal and Ethical Gray Areas of Deepfake Laws

Like all regulations, deepfake laws raise serious legal and ethical questions regarding free expression, privacy, and technological limitations. The key “gray areas” are free expression vs. protection, proving intent and harm, jurisdiction issues, and the risk of over-correction.

  • Free speech vs. censorship. Essence: outlawing malicious deepfakes without violating constitutional guarantees of satire, parody, and political expression (the First Amendment in the US), which means laws must be narrowly tailored. Example: a satirical video of a politician singing poorly – could a broad election deepfake ban (like those in Texas or California) inadvertently sweep up this protected political commentary? Such laws have already faced legal challenges over free expression concerns.
  • Defining “harmful” deepfakes. Essence: the difficulty of proving malicious intent (“with intent to harm, defraud, or mislead”) and establishing a clear threshold of harm for non-sexual deepfakes. Example: a creator posts a fake video of a celebrity endorsing a scam product, claiming it was just a “joke”; authorities must infer intent by checking for profit or deceptive context.
  • The “Liar’s Dividend”. Essence: bad actors falsely claim authentic footage is “just a deepfake” to evade accountability and sow general distrust in true media. Example: a politician caught on a real, incriminating video denies its authenticity, leveraging public knowledge of deepfake technology to cast doubt on genuine evidence and evade scandal.
  • Jurisdiction and enforcement limits. Essence: enforcing local laws against a global internet phenomenon – a perpetrator in Country A posts a deepfake harming a victim in Country B, and authorities in B cannot easily reach or prosecute the individual. Example: a deepfake crime is traced to an anonymous server in a country with no equivalent AI law; this cross-border enforcement gap lets perpetrators “jurisdiction-hop” to escape punishment.
  • Attribution and evidence. Essence: courts need to verify the authenticity of audiovisual evidence in legal proceedings, and the technical difficulty lies in tracing an anonymized deepfake back to its creator. Example: a murder trial relies on a key video clip; the defense claims it’s a deepfake, requiring extensive, time-consuming digital forensics and expert testimony to verify the truth.
  • Overreach and innovation stifling. Essence: laws that are too broad could discourage or criminalize beneficial, non-malicious uses of AI, such as special effects, digital archiving, or assistive technologies. Example: film studios using deepfake tech (with consent) for special effects or to create a digital double of an actor could face excessive licensing burdens or liability risks, stalling innovation.

As you can see, these gray areas demonstrate that deepfake governance involves policy fine-tuning, advancing detection tech, and even public education on media literacy. The key ethical principle is clear: do no harm to fundamental rights while minimizing harm from such technologies. Because this balance is difficult to strike, we can expect court battles and adjustments in the future.

The Future of Deepfake Regulation

The future of AI content regulation (and deepfake regulation specifically) seems to lean towards a more coordinated global model built on new technological advancements. Let’s consider four trends that are likely to become prominent in the near future:

  1. First, countries appear set to push for closer collaboration on deepfakes through global agreements and universal standards. For example, the G7 and UNESCO are already discussing AI ethics principles, including content labeling. Future UN conventions or regional compacts could ensure that deepfakes illegal in one country are recognized as illegal elsewhere. International law enforcement cooperation will grow, with agencies like INTERPOL potentially creating dedicated synthetic media crime units that track deepfakes.
  2. The next tendency gaining momentum is digital watermarking of AI-generated content, i.e., embedding an invisible, hard-to-remove marker in audio-visual content that indicates its source or AI origin (a toy illustration follows this list). Future legislation is likely to make AI watermarking mandatory. The EU AI Act and China’s rules already push for this, and major US tech companies voluntarily committed to watermarking in 2023. Soon, distributing unwatermarked AI content may be illegal. Combined with detection algorithms and content authentication systems, watermarking will help identify AI-generated media, though it’s not foolproof.
  3. The third tendency involves a conscious move towards better fake detection technology. On the tech front, there’s already heavy investment in next-generation deepfake detection. Future regulations may require platforms to use approved detection algorithms and report deepfake prevalence regularly. Research is also developing ways to trace deepfakes back to their source AI models through unique “fingerprints”, so in the future regulators might require AI companies to share these fingerprints with law enforcement. Real-time verification could eventually be built into consumer devices like smartphones.
  4. The fourth trend concerns AI developers’ liability and responsibility. Future laws will likely hold AI tool developers liable if their products are frequently misused for illegal deepfakes. The EU is considering making it easier to sue AI developers for the harms their systems cause. Proposed measures include requiring AI generators to keep creation logs for tracing criminals, with liability for non-compliance. This shared accountability could drive AI-related harm insurance and risk assessment industries.
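As a toy illustration of the watermarking idea in trend 2, the sketch below hides a short ASCII marker in the least significant bits of an image’s pixel values and reads it back with NumPy. Real provenance watermarks (and standards such as C2PA) are far more robust to compression and editing; everything here, including the marker text, is an assumption made purely for illustration.

```python
# Toy illustration of invisible watermarking: hide a short ASCII marker in the
# least significant bit of each pixel value, then recover it. Real AI-provenance
# watermarks are far more robust to cropping, compression, and re-encoding.
import numpy as np

def embed_marker(pixels: np.ndarray, marker: str) -> np.ndarray:
    """Write the marker's bits into the least significant bits of the pixels."""
    data = pixels.copy().ravel()
    bits = np.unpackbits(np.frombuffer(marker.encode("ascii"), dtype=np.uint8))
    if bits.size > data.size:
        raise ValueError("Image too small to hold the marker")
    data[: bits.size] = (data[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return data.reshape(pixels.shape)

def read_marker(pixels: np.ndarray, length: int) -> str:
    """Recover a marker of `length` ASCII characters from the LSBs."""
    bits = pixels.ravel()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii")

if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_marker(image, "AI-GEN")
    print(read_marker(marked, len("AI-GEN")))  # prints "AI-GEN"
```

The point of the sketch is simply the embed-and-detect loop that watermarking mandates rely on: content carries a machine-readable origin signal, and verifiers can check for it downstream.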

The market for AI detection tools is growing at a compound annual rate of around 28-42%. Source

BONUS Trend: Countries may also agree to extradite and prosecute individuals for serious deepfake crimes across borders. This means that nations would not only assist each other in investigations but also require platforms to geofence illegal content in specific jurisdictions.

It seems we’re moving toward a world where we can trust digital content by default again, because it will be common to see labels like “AI-generated” on media and to know that strong laws back up that label.

To Sum Up: Accountability is Catching up With AI 

The era of “anything goes” for AI content is coming to an end. Transparency, consent and responsibility are again reclaiming their rightful legal status in the virtual world. 

All around the world, governments are stepping in to require clear labeling of AI-generated media and to prohibit the deceptive or harmful use of synthetic content and digital forgeries that have nearly destroyed people’s trust in online information. We’re gradually guarding ourselves against manipulated videos that could threaten everything from democratic elections to personal lives.

Of course, deepfakes won’t die. After all, they are a natural product of our expanding creativity and a great source of entertainment. But we can use them responsibly! Hopefully, deepfake laws will reshape how we engage with AI content, and words like truth, trust, and consent will regain their meaning online.

FAQ

Are deepfakes illegal?

Deepfakes are not illegal in every jurisdiction, but their misuse can breach privacy, defamation, or fraud laws. Many countries regulate AI-generated or technological representations that impersonate others in a public or commercial setting. For example, the EU’s AI Act and several US state laws, alongside one federal law – the Take It Down Act – specifically address synthetic video recordings or electronic images that could deceive viewers. Whether a deepfake is lawful depends on context and intent: artistic parodies or motion picture films may be protected, while deceptive or harmful uses can attract civil penalties or criminal sanctions.

What penalties apply to deepfake misuse?

Penalties vary widely. In the US, states impose fines or jail time for intimate visual depictions or election-related deepfakes. The EU’s AI Act introduces multimillion-euro fines for unlabeled AI-generated content. China requires identity verification and fast removal of such images. When covered platforms or online services host non-consensual authentic intimate visual depictions, they may face civil penalties for negligence. The severity of punishment often depends on whether there was intent to deceive or cause harm.

What are platforms and online services required to do?

Online services and covered platforms are now required to detect, label, and remove misleading technological representations of individuals. Under the EU Digital Services Act and similar US measures, companies must offer user reporting tools and ensure transparency around sound recordings, videos, or electronic images that could harm reputations. Non-compliance may result in substantial fines or temporary suspension of platform operations. These rules aim to hold intermediaries accountable for curbing deceptive synthetic media in public or commercial settings.

How do deepfake laws protect individuals?

Deepfake regulations aim to protect individuals from identity misuse, intimate visual depictions shared without consent, and misleading video recordings. Victims can request takedowns, seek damages, or pursue civil penalties against offenders. The laws recognize a person’s reasonable expectation of privacy, even if their likeness was voluntarily exposed online. Enhanced labeling and watermarking obligations seek to restore public confidence and digital dignity.

How can businesses stay compliant?

Businesses should implement AI labeling, digital watermarking, and content verification measures. Legal teams need to monitor AI regulation developments, especially where operations involve motion picture films, sound recordings, or electronic images. Updating consent clauses and privacy notices ensures compliance with deepfake and intimate visual depiction laws. Proactive governance and clear internal review processes help reduce litigation risk and bolster consumer trust.

What’s next for deepfake regulation?

Expect greater international coordination and harmonization by 2026. Governments are exploring AI liability and transparency frameworks, mandatory digital watermarking, and universal standards for labeling technological representations. New rules may also extend protections to authentic intimate visual depictions and clarify when sharing such images crosses a reasonable expectation of privacy. Future laws are likely to cover online services hosting synthetic video recordings or sound recordings used to exploit sexual desire, manipulate media, or influence elections.