
OPINION | AI ethics in the age of narrative warfare


India to host the AI Impact Summit from 16th to 20th February 2026 at Bharat Mandapam, New Delhi


India is about to host the India–AI Impact Summit 2026 in New Delhi (16–20 February 2026) – a five-day programme spanning policy, research, industry and public engagement, anchored in the theme “People, Planet, Progress.” The Summit will focus on areas such as employment and skilling, sustainable and energy-efficient AI, and economic and social development, supported by thematic working groups expected to propose deliverables such as AI commons, trusted AI tools, shared compute infrastructure and sector-specific use-case compendiums. Media reports also suggest participation by global technology leaders such as Sundar Pichai, Sam Altman and Jensen Huang, along with high-level political attendance.

The author has written frequently about human ethics and institutional character in India, particularly how everyday ethical compromise becomes systemic weakness and ultimately becomes a national security problem. It is therefore appropriate that the issue of AI ethics be highlighted as modern technology moves from generative AI towards “conscious machines”. AI has made large-scale deception cheap, fast, believable and scalable to an entire population. This article highlights three main issues, deliberately narrower than the otherwise sprawling “AI ethics” discourse, because these are most relevant to India’s security environment now:

  • Narrative warfare is the frontline of AI ethics and AI security.
  • Weak human ethics can be transported into increasingly capable systems, teaching them the wrong lessons.
  • In a nuclearised subcontinent, AI-fuelled disinformation is not merely social harm; it can worsen crisis stability and shape warfighting outcomes.

Narrative warfare is already active and is the most immediate AI security risk. Generative AI collapses the cost of producing persuasive content: text, voice, images and videos can be generated quickly, translated instantly and adapted to local idioms and grievances. As a result, influence operations no longer require large propaganda bureaucracies, and smaller actors can flood the information space with convincing content at scale. The pattern is already documented:

  •  Wartime synthetic leadership messaging: Reuters reported the March 2022 deepfake purporting to show Ukraine’s President Zelenskyy calling for capitulation. This is strategically instructive because it previews more sophisticated synthetic deception.
  • Democratic disruption via impersonation: the U.S. Federal Communications Commission (FCC) levied a $6 million fine against a political consultant for illegal robocalls using a deepfake, AI-generated voice message impersonating President Biden to influence voter behaviour.

This is why the US National Institute of Standards and Technology treats misinformation and disinformation as a core generative-AI risk. Synthetic content can erode trust in valid evidence and information, with downstream effects that are societal and strategic. The most widespread strategic impact of advanced AI is industrial-scale persuasion and deception – narrative warfare as a capability.

The deeper ethical problem: AI learns what humans reward

AI does not absorb morality by default; it absorbs patterns and incentives. If an environment rewards propaganda, intimidation, selective truth and plausible deniability, then systems built and tuned inside that environment drift towards those outputs, because those outputs “work”.

This is the practical meaning of the global “alignment” debate: the challenge is not only making systems more capable, but ensuring that increasingly capable systems stay anchored to human values rather than merely optimising what they are rewarded for. The World Economic Forum’s work on AI value alignment underscores that keeping AI aligned with evolving ethical standards requires sustained human engagement throughout the lifecycle, precisely because values differ across cultures and can shift over time.

The strategic warning is sharp: if truth becomes negotiable in public life, if institutions normalise “convenient truth” or pressure systems towards politically safe outcomes, then AI becomes a force multiplier for ethical erosion. The larger harm is systemic: once citizens learn that “reality can be manufactured”, societies enter a permanent trust deficit. This is where the article’s second claim matters: weak human ethics do not merely corrupt outputs, but can shape the trajectory of more autonomous systems. If the reward function is “persuade”, “protect the narrative” or “avoid discomfort”, then advanced systems will excel at those tasks, even when they conflict with truth and public safety.
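To make the mechanism concrete, here is a deliberately simple, hypothetical sketch in Python – the two behaviours, the numbers and the “engagement” signal are invented for illustration, not drawn from any real platform or model. It shows a reward-maximising agent whose only feedback is engagement: truth never enters the calculation, so the policy converges on whatever output earns the reward.

```python
# Toy illustration only: a reward-maximising agent whose sole feedback signal is
# "engagement". Truth is never part of the reward, so the policy drifts towards
# whichever behaviour engages best. All numbers are invented for illustration.
import random

random.seed(0)

# Two candidate behaviours and the (hypothetical) average engagement each earns.
ARMS = {
    "accurate_but_dull": 0.4,
    "persuasive_but_false": 0.7,
}

counts = {arm: 0 for arm in ARMS}
estimates = {arm: 0.0 for arm in ARMS}

def engagement(arm: str) -> float:
    """Noisy simulated reward; note that factual accuracy never appears here."""
    return max(0.0, min(1.0, random.gauss(ARMS[arm], 0.1)))

for _ in range(10_000):
    # Epsilon-greedy: occasionally explore, otherwise exploit the best estimate.
    if random.random() < 0.05:
        arm = random.choice(list(ARMS))
    else:
        arm = max(estimates, key=estimates.get)
    reward = engagement(arm)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean

print(counts)  # the deceptive arm dominates: the incentive, not ethics, decided
```

The point of the toy is the incentive structure, not the algorithm: swap in a far more capable learner and the same reward still selects for persuasion over accuracy.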

Why this becomes more dangerous in a nuclearised subcontinent

In South Asia, narrative warfare is not merely a governance challenge but a crisis stability challenge. The Bulletin of the Atomic Scientists warns that disinformation and deepfakes can intensify India–Pakistan crises by increasing misperception and compressing decision time. A realistic escalation pathway is easy to sketch: synthetic “evidence” or a viral deepfake accelerates outrage; leaders face time pressure; verification lags; and misreading the adversary becomes more likely. In a nuclearised dyad, even a low probability of such misperception matters because the cost of error is extreme. This is where the ethics-to-AGI link needs to be stated plainly: if future systems are more autonomous and more persuasive, then weak human ethics, and truth-bending for convenience, can create AI tools that are optimised for instability rather than stability.

The warfighting link

Narrative warfare becomes operative warfare through four practical mechanisms.

Command trust attacks: synthetic voice or video can be used to attempt false pause orders, false ceasefires or fabricated leadership instructions, achieving effects through confusion and delay even if later debunked.

Intelligence contamination: generative tools can flood analysts and leaders with plausible but false documents, imagery and analysis, overwhelming verification capacity at the worst moment.

Morale and legitimacy operations at scale: the same impersonation logic demonstrated in the FCC deepfake robocall case can be repurposed for psychological operations against forces and publics.

Escalation by speed: AI’s advantage is not only realism but tempo – fabricate fast, target fast, amplify fast – thereby compressing the time available for verification and restraint. This is why “Skynet” is the wrong mental model: the danger is not machine rebellion but ethical erosion, synthetic reality and compressed decisions.

India’s exposure and China’s advantage

This is a strategic threat, not a social media nuisance, as India’s vulnerability is amplified by scale (a vast, diverse information ecosystem) and by a strategic competitor with a significantly higher capacity to deploy AI-enabled influence as an instrument of national power. China’s advantage is not only better models and scale but its organised deployment – mobilising data, compute, platforms and aligned actors towards state objectives. The Australian Strategic Policy Institute describes “persuasive technologies” as systems designed to shape attitudes and behaviours by exploiting cognitive vulnerabilities, with clear national-security implications when used at scale.

For India, the strategic threat is therefore grey-zone pressure that can be timed with border stand-offs or internal flashpoints to compress decision time, generate outrage, weaken trust in institutions and inject uncertainty into crisis communication and command trust.

Counter-moves that matter

These steps will not eliminate deception, but they can prevent deception from becoming cheap, effortless and operationally decisive.

  •  Speed and trust are decisive: the side that can verify claims and respond credibly within the first hours shapes what the public and the adversary believe.
  •  “Trust channels” – authenticated government and military crisis channels – must be credible and fast, backed by simple challenge-response routines for sensitive instructions (a minimal sketch follows this list) and deepfake-awareness drills that treat sudden audio or video “orders” as untrusted unless verified.
  •  Build rapid rebuttal capacity: a standing inter-agency capability to detect viral synthetic content, coordinate platform action and issue authenticated rebuttals fast – because early hours shape belief.
  •  Raise the cost of deception: policy moves that require clearer labelling, traceability and platform responsibility add friction, reduce plausible deniability and support faster attribution.
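For the challenge-response routine mentioned in the “trust channels” point above, a minimal illustrative sketch follows, assuming a secret key shared out of band; the function names and key handling are hypothetical simplifications, not a real crisis-communication protocol. The logic it demonstrates is simply that a convincing synthetic voice or video cannot answer a cryptographic challenge, because realism does not confer possession of the key.

```python
# Minimal sketch of a challenge-response check for sensitive instructions, assuming
# a secret key shared out of band. Illustrative only; a real crisis channel would use
# a hardened, audited protocol. The point: a convincing deepfake voice cannot answer
# the challenge, because realism does not confer possession of the key.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)  # placeholder pre-shared secret

def issue_challenge() -> bytes:
    """The recipient of a sudden 'order' issues a fresh random challenge (nonce)."""
    return os.urandom(16)

def respond(challenge: bytes, key: bytes) -> bytes:
    """The purported sender proves possession of the shared key by signing the nonce."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    """Constant-time comparison of the expected and received responses."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
assert verify(challenge, respond(challenge, SHARED_KEY), SHARED_KEY)   # legitimate sender
assert not verify(challenge, os.urandom(32), SHARED_KEY)               # impostor fails
```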

Conclusion: what the Summit should foreground

The India–AI Impact Summit is framed around “People, Planet, Progress” and is seeking common standards and practical deliverables. India should use this moment to foreground a proposition that is ethically simple and strategically hard: information integrity is both a development issue and a national-security capability. Narrative warfare is already the frontline; if weak human ethics are carried into increasingly capable systems, the most dangerous AI impacts will not wait for AGI. They will appear earlier, as degraded trust, distorted crisis decisions and higher escalation risk in a nuclearised subcontinent.

The opinions expressed in this article are those of the author and do not purport to reflect the opinions or views of THE WEEK.