Deepfakes vs AI content: How the world is trying to regulate a growing threat

As synthetic deception grows more convincing, India and the US offer sharply different models for regulating the threat

AI Deepfake and Cybersecurity - Shutterstock


“Is it AI?” has become one of the most frequently asked questions across comment sections and Twitter feeds today—a phrase that emerged naturally in the GenAI era of content consumption. But long before this wave of AI-generated material, another term had already entered public discourse: deepfakes. The words 'deepfake' and 'AI-generated content' are routinely used as synonyms. However, they are not the same thing, and the difference has direct consequences for how harm unfolds. 

What are deepfakes and what aren't?

AI-generated content is the broader category. It covers any media produced by an artificial intelligence system: a synthetic stock image, a rap video with sci-fi visuals, an animated rom-com between two vegetables. None of these depict a real person, and they serve entirely creative or entertainment purposes.

Deepfakes are a specific subset. The term is a portmanteau of 'deep learning', a form of machine learning that learns to recognise and reproduce patterns in the content it consumes, and 'fake', referring to output designed to impersonate a real, identifiable person or real event. Their defining feature is deceptive intent: the viewer is meant to believe they are seeing or hearing something authentic when they are not.

A generic AI 'doctor' promoting an unproven supplement is misleading. A deepfake video of a named cardiologist from AIIMS endorsing the same product - using her real face, a cloned version of her real voice, and her real institutional affiliation - is categorically more dangerous.

How deepfakes are harming health

Scammers are using deepfake videos of real doctors to promote unproven treatments and sell unapproved supplements, as was seen when a video of Dr Hilary Jones, a well-known general practitioner from the UK, went viral showing him promoting a health supplement.

This video, however, is a deepfake and was debunked by the BMJ. The case illustrates what makes someone a likely deepfake target. Dr Hilary Jones is popular for his TV appearances addressing viewers' medical concerns, which makes him a trustworthy figure in the eyes of the public; it is this trust that bad actors capitalise on. The problem became so common that the doctor had to post an advisory on his website warning viewers.

According to the McAfee Scam Report: Deepfake Surge 2025, people in India encounter, on average, 2.5 times more deepfakes per day than the global average.

The mental health toll of deepfakes is quieter but deeply damaging. A 2025 scoping review published by Springer Nature, analysing 28 empirical studies, found that victims of deepfake image-based sexual abuse - a category that overwhelmingly targets women - reported symptoms including depression, intrusive memories, psychosomatic illness, and inability to leave home, reflecting patterns consistent with PTSD. Psychology Today has documented what researchers call 'doppelgänger-phobia': lasting anxiety, paranoia, and loss of identity triggered by seeing one's face weaponised without consent. The Lancet Psychiatry has classified deepfake victimisation as a distinct form of 'digital trauma'.

NDAA vs IT Rules Amendment 2026: the US and Indian regulatory responses

The United States has approached deepfake regulation through a layered, incremental framework. The National Defense Authorization Acts of 2020 and 2021 were the first federal laws to address deepfakes, though primarily through a national security lens. These provisions, however, carried no penalties and imposed no duties on private individuals or platforms.

The DEEPFAKES Accountability Act, a more ambitious standalone bill, sought to go further by requiring creators of deepfakes to embed disclosure watermarks, obligating online platforms to detect and flag such content, and creating private rights of action for victims. However, it was never passed into law.

India, by contrast, has moved more directly to confront the problem, catalysed by high-profile incidents such as viral deepfakes of celebrities like Rashmika Mandanna. Through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 — most recently updated on February 10, 2026 — India has built one of the most structurally comprehensive deepfake regulatory frameworks in the world.

The rules define "synthetically generated information" as any audio, visual, or audio-visual content that is artificially or algorithmically created or altered "in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event." This is a considerably wider net than anything in US federal law, though the rules carve out sensible exceptions for routine editing, accessibility features, and good-faith educational or research use.

Platform obligations under the Indian framework are extensive. Intermediaries that allow users to create or share synthetic content must deploy automated technical tools to prevent the generation of unlawful material — including non-consensual intimate imagery, false documents, content related to weapons or explosives, and any material that falsely depicts a real person's identity, voice, actions, or statements. Any synthetic content that does not fall into a prohibited category must be prominently labelled and embedded with permanent metadata — including a unique identifier — so it can always be traced back to its origin. 

Significant social media platforms face additional duties. Before any synthetic content goes live, these platforms must require users to declare whether it is AI-generated, use technical tools to verify that declaration, and clearly label content confirmed to be synthetic.

Where the US framework remains largely reactive, fragmented across states, and anchored in the narrow contexts of intimate imagery and national security, India's approach is proactive and sweeping in scope: it imposes affirmative obligations on platforms, mandates content labelling, and provides enforceable penalties. Critics, however, have raised concerns that its broad definition of synthetic content risks undermining the fundamental right to free expression by collapsing the distinction between harmful deception and legitimate creativity such as digital art, parody, satire, and political commentary. Oscar Wilde wrote that "to define is to limit"; time will tell whether that holds true for regulations targeting synthetic content.

This story was produced in collaboration with First Check, the health journalism vertical of DataLEADS.