The First War India Fought Twice: What Operation Sindoor Revealed About Communication in the Age of AI

Operation Sindoor was India's most decisive military action in decades. It was also the first South Asian conflict where AI-generated disinformation played a central role. India won the kinetic war in 88 hours. The information war is still being fought. Here is what that means for every government communicator.

Tags: Artificial Intelligence · Leadership · Trust · Crisis · Branding · Communication · Foreign Policy Communication · India · Governance · Information Warfare · Misinformation · Credibility

Tushar Panchal

27 February 2026 · 16 min read

Image depicting distortion of reality by AI and misinformation
India won the kinetic war in 88 hours. The information war took weeks. And even then, the outcome was contested. For a political communication practitioner, the lessons are not what you think.

I have written several times before about what I consider the most important structural weakness in the Modi government's otherwise formidable communication machinery. In August 2025, I argued that the government's crisis communication had shifted from its strongest asset to a significant liability. I proposed an institutional reform framework that included a Golden Hour Protocol and a National Crisis Communication Cell. In February 2026, I examined the announcement of the India-US trade deal. I argued that India's foreign policy communication playbook, designed for a pre-Trump, pre-social-media world, needed urgent updating. I identified a 90-minute window: the time India had to shape a narrative before it was shaped for them.

I was wrong. Not about the diagnosis. About the timeline.

Operation Sindoor proved that 90 minutes is a luxury. In the age of AI-generated disinformation, the window has shrunk to about nine minutes. And the adversary is no longer just the country across the border. It is a technology that can manufacture unlimited false content at virtually zero cost, distribute it at the speed of a share button, and overwhelm even well-prepared communication systems through sheer volume.

This piece is not about what India should have done differently during Operation Sindoor. It is about what Sindoor taught us about the nature of the communication battlefield itself, and why every recommendation I have previously made, while still necessary, is no longer sufficient.

What India Got Right

Credit where it is due. India's communication during Operation Sindoor was markedly better than in any previous military engagement with Pakistan.

The operation was launched after midnight on 7 May 2025. Within hours, Foreign Secretary Vikram Misri was before the cameras. Flanking him were Colonel Sofiya Qureshi of the Indian Army and Wing Commander Vyomika Singh of the Indian Air Force. The choice was deliberate and sophisticated on multiple levels.

The operation was named Sindoor, after the vermilion powder worn by married Hindu women, a direct invocation of the widows of Pahalgam. A Muslim woman officer and a Hindu woman officer stood together to brief the nation. The message was layered: grief, unity, precision, restraint. No Indian military communication has ever carried this much symbolic density in a single visual frame.

The armed forces released targeting footage from onboard systems and surveillance drones, showing direct hits on Lashkar-e-Taiba's headquarters in Muridke and Jaish-e-Mohammed sites across Pakistan and Pakistan-occupied Kashmir. The messaging was consistent: focused, measured, and non-escalatory. The Press Information Bureau stood up its fact-checking operation and debunked over 60 false claims during the crisis.

The initial briefing was proactive, not reactive. India spoke first. India showed evidence. India set the frame.

By every metric I had previously used to judge Indian crisis communication, this was a quantum leap. The Golden Hour was respected. The narrative was set from the top. The visual communication was masterful.

And it still was not enough.

The Nine-Minute Detonation

Within hours of Operation Sindoor, the information environment detonated. What followed was not a single disinformation campaign but an ecosystem-wide collapse of verifiability, with content flowing in every direction, from every actor, at speeds that overwhelmed every institution designed to manage it.

A deepfake video of Pakistan's Prime Minister Shehbaz Sharif appeared online, apparently conceding defeat and lamenting a lack of support from China and the UAE. The original video, from 7 May, showed Sharif commending the Pakistan Air Force. AI voice cloning and lip-sync technology had been used to fabricate an entirely different speech. ElevenLabs, the voice AI company, confirmed a 98 per cent probability that its platform had been used to generate the synthetic audio. Hive Moderation scored it 99.9 per cent likely to contain AI-generated content. The video went viral before any of this analysis was complete.

A deepfake video of US President Donald Trump appeared, apparently declaring he would "destroy Pakistan." Indian fact-checkers caught this one quickly. Its impact was contained. But the pace of production was the story: the conflict was hours old, and fabricated presidential statements were already circulating.

Deepfake videos of Prime Minister Modi and External Affairs Minister Jaishankar appeared, apparently admitting defeat and apologising to Pakistan. The Deepfakes Analysis Unit of the Misinformation Combat Alliance confirmed that synthetic audio had been spliced into real footage of Jaishankar. These were shared across X and Facebook by accounts that Blackbird.AI later identified as exhibiting high levels of anomalous activity.

An AI-generated satellite image of a "bombed Rawalpindi stadium" accumulated 9.6 million views. It was not labelled. The platform did not fact-check it. It simply existed in the information ecosystem as visual evidence of something that never happened.

Recycled footage compounded the chaos. A video of the 2020 Beirut port explosion was shared as an Indian airstrike on Pakistani targets. Footage from Israel's Iron Dome intercepting rockets was aired on Indian television as real-time footage from Jaisalmer. A Turkish military rescue photograph from 2016 was presented as evidence of a captured Pakistani pilot. A Chilean wildfire video was passed off as Pakistan bombing Amritsar. These were not AI-generated. They were old-fashioned disinformation. But they were amplified by the same ecosystem that distributed the deepfakes, and in the fog of a live conflict, distinguishing between real, recycled, and fabricated footage became nearly impossible.

Blackbird.AI's comprehensive analysis identified over 180,000 posts generating more than 3 million engagements across competing narrative clusters. The anomalous activity rate around certain narratives reached 33.9 per cent, indicating coordinated campaigns rather than organic conversation.

The Vivekananda International Foundation concluded that India initially experienced a "strategic communications setback" in which "Pakistan effectively captured the global narrative." RUSI was blunter: the Indian government and military, while attempting to counter the disinformation, "were often caught flat-footed. It was frequently left to individual fact-checkers and organisations to swiftly retort."

The Credibility Trap

Now, here is where I need to say something that will be unpopular with a section of my readers.

The nationalist instinct during Operation Sindoor was to celebrate every piece of content that showed India winning and Pakistan losing, regardless of whether it was real. The deepfake of Pakistan's DG ISPR apparently admitting the loss of two JF-17 jets? Shared nearly 700,000 times on X. The fake stadium image? 9.6 million views. These felt like victories. They were not.

They were the most dangerous thing that happened to India's communication credibility during the entire crisis.

Let me explain this with the logic of the craft. Bellingcat investigated the DG ISPR deepfake and found that several major Indian media outlets, including NDTV, Firstpost, The Free Press Journal, and The Statesman, had picked it up and run with it as genuine news. Professor Hany Farid of UC Berkeley confirmed it was a deepfake. NDTV and The Statesman later quietly deleted their reports without any clarification.

Think about what this means from a communication standpoint. On 7 May, Colonel Qureshi stood before the cameras and presented genuine targeting footage of Indian strikes on terrorist camps. That footage was real. That evidence was authentic. That briefing was one of the finest pieces of military communication in Indian history.

On 8 May, Indian media outlets ran a deepfake as news. They ran it because it told a story Indian audiences wanted to hear. And when it was exposed as fabricated, they quietly deleted it.

Now put yourself in the position of an international journalist, a foreign government analyst, or a neutral observer trying to assess what actually happened. On one hand, you have India's official briefing with targeting footage. On the other hand, you have the Indian media running fabricated videos. How do you distinguish the real evidence from the fake? You cannot, not without deep technical verification. And so the credibility of all Indian claims is diminished, including the true ones.

This is the credibility trap. When your own information ecosystem amplifies fabricated content that flatters your narrative, it does not strengthen your position. It contaminates your legitimate evidence. Every fake video of Pakistan "admitting defeat" that goes viral makes the real video of Indian strikes less believable to the people who matter most: international observers, neutral governments, and the foreign press corps that shapes global perception.

RUSI identified exactly this problem, noting that India's case was "further undermined by the Indian electronic media, whose coverage was far from exemplary, with several English and Hindi TV news channels broadcasting false claims about the Indian military campaign."

The patriot who shares a deepfake thinks he is helping India. He is, in fact, handing ammunition to everyone who wants to dismiss India's legitimate achievements. When everything in the information environment looks like propaganda, nothing looks like evidence. The genuine footage of BrahMos missiles hitting terrorist camps becomes, to the sceptical international eye, just another unverified claim in a sea of fabrications.

This is not a moral argument. It is a strategic one. And it is the argument that every government communicator needs to internalise before the next crisis.

The Asymmetry That Changed Everything

The deeper lesson of Sindoor is not about any particular deepfake. It is about the fundamental economics of the information battlefield.

In the old model, creating convincing false content required significant resources: production facilities, editing software, trained operators, and distribution networks. The cost was high enough that a well-resourced government communication operation could identify and debunk false claims at roughly the same pace they were being produced.

AI demolished that cost structure.

During Operation Sindoor, fabricated content could be produced by anyone with access to widely available tools. A deepfake video that would have required a professional studio five years ago could be generated in minutes. Someone with no technical training could create an AI-generated satellite image that would have required specialised cartographic knowledge. The marginal cost of producing the next piece of false content approached zero.

The marginal cost of debunking it did not.

Every false claim still required a human fact-checker to identify, verify, trace the source, produce a rebuttal, and distribute the correction. The PIB debunked over 60 claims during the crisis, an extraordinary institutional effort. But against 180,000 posts generating 3 million engagements, 60 debunked claims are a rounding error.

This is the asymmetry that should keep every government communicator awake at night. The attacker produces content at machine speed and zero marginal cost. The defender verifies at human speed and significant marginal cost. In any conflict where this asymmetry holds, the defender loses the volume war by default.

The nine-minute window I mentioned at the start is a consequence of this asymmetry. In my earlier piece, I identified a 90-minute window between the moment a claim is made and the moment the global headline is set. Operation Sindoor rendered that figure nearly irrelevant. Within nine minutes, a deepfake with high emotional charge can be shared across WhatsApp groups, reposted on X, picked up by television channels monitoring social media, and embedded in the information environment as something people have "seen." The correction, when it comes, competes not against the false claim but against the memory of having seen it. Behavioural research is unambiguous: corrections rarely travel as far or as fast as the original claim, and mere exposure creates a residual impression that persists even after debunking.

India did not lose the first phase of the information war because it was incompetent. It lost it because the economics of that war have been reshaped by technology, rendering traditional communication doctrine, even when well executed, structurally inadequate.

The Suppression Reflex

There are two possible responses to the structural asymmetry I have just described. One is to build the institutional capacity to compete in a saturated information environment. The other is to try to suppress what you cannot control.

The Modi government has, instinctively and consistently, chosen suppression.

During Operation Sindoor, the government ordered X to block over 8,000 accounts, including those of international news organisations and Pakistani media outlets such as Dawn and GeoNews. X complied reluctantly, stating it did so to avoid "significant fines and imprisonment of the company's local employees," but publicly protested the orders as censorship. X's Global Government Affairs account, before being itself blocked in India, published the blocking orders publicly. The internet in parts of Jammu and Kashmir was restricted. The Ministry of Information and Broadcasting directed OTT platforms to discontinue the streaming of Pakistani content.

This was not an aberration. It was doctrine.

In October 2024, the Ministry of Home Affairs launched the Sahyog portal, a centralised system for automating government takedown notices to social media platforms. Between its launch and October 2025, over 2,300 blocking orders were sent to 19 online platforms. The Wire, which accessed the portal's never-published user manual, reported that orders are unilateral, that there is no independent review process, and that the manual's definition of "stakeholders" excludes journalists and content creators entirely. In February 2026, the government compressed the takedown window from 36 hours to three hours. As the Internet Freedom Foundation's Apar Gupta noted, the timelines are now so tight that meaningful human review becomes structurally impossible at scale.

I want to be very precise about why I am critiquing this approach. I am not making a civil liberties argument. Others are doing that work, and doing it well. I am making a strategic communication argument. Suppression does not work, and in the specific context of AI-saturated information warfare, it actively undermines the government's own communication objectives.

Here is why.

First, suppression is a domestic instrument applied to an international problem. You can block 8,000 accounts on X within India. You cannot block the same accounts in Washington, London, Brussels, or Geneva. The narrative that matters during a military crisis, the one that determines whether the international community backs your position, is not shaped by what Indian users see on their phones. It is shaped by what the BBC, Reuters, the New York Times, and the Washington foreign policy establishment are reporting. Blocking Dawn inside India does not prevent Dawn's coverage from shaping the Pakistan narrative in every capital that matters. It simply means India's communicators do not know what narrative they are competing against.

Second, the blocking itself becomes the story. When X published the Indian government's blocking orders, the international headline shifted from "India strikes terrorist camps" to "India censors social media during military conflict." Press freedom organisations, Western media, and civil society groups framed the blocking as censorship. India's legitimate military communication, the precision strikes, the targeting evidence, and the measured briefing were now competing not just against Pakistani disinformation but against a self-generated narrative about Indian authoritarianism. The government created a second information front while trying to manage the first.

Third, and this is the point that should matter most to every government communicator reading this, suppression signals insecurity. A government that has just conducted the most successful military operation in decades, with video evidence, precision targeting data, and a briefing that set a new standard for Indian military communication, does not need to block 8,000 accounts. The evidence speaks for itself. When you suppress, you tell the world that you do not trust your own evidence to compete in an open information environment. You tell sceptical international observers that your claims cannot withstand scrutiny. You tell the global press corps that the story you do not want them to see is more interesting than the story you are telling.

The three-hour takedown window introduced in February 2026 intensifies this problem. The stated purpose is to combat AI-generated disinformation, and that purpose is legitimate. But the mechanism does not distinguish between a deepfake of the Prime Minister and a satirical post critical of government policy. When the takedown infrastructure designed for synthetic content is used, as it inevitably will be, against legitimate political commentary and unfavourable journalism, it does not protect the government's narrative. It confirms every critic's argument that India's information controls are about political management, not national security. And that confirmation erodes the very credibility that the government will need when the next crisis demands that the world believe its evidence.

The suppression reflex is not unique to this government. Most governments, when confronted with a hostile information environment, reach for the delete button before they reach for the microphone. But few governments have just demonstrated, as India did in the Sindoor briefing, that they are capable of world-class proactive communication. The tragedy of the suppression approach is that it coexists with genuine institutional capability. India proved in the first hours of Sindoor that it could compete. It then spent the following weeks demonstrating that it preferred to control.

This brings me to what I consider the central question for practitioners. The think tanks critique the suppression on strategic grounds. The rights organisations critique it on constitutional grounds. The fact-checkers focus on detection. The academics recommend adopting labelling standards and bilateral mechanisms. Each offers valuable analysis. No one asks the question that a political communication practitioner asks when sitting in a government war room: if 180,000 posts are being generated against you, your debunking capacity is 60, and your suppression tools only work within your own borders, what exactly is the doctrine?

What follows is an attempt to answer that question.

The Five Layers of Narrative Resilience

The old model of government crisis communication aimed at narrative control: set the story, maintain the story, correct deviations from the story. This model assumed a manageable information environment in which the government's voice was among a limited number of authoritative sources.

That model is dead. Operation Sindoor killed it.

The new model must aim for narrative resilience: the ability to maintain credibility and coherence even when the information environment is saturated with false content that cannot all be individually addressed. Narrative resilience is not about winning the information war. It is about ensuring that when the war is over, your legitimate claims are still believed.

I propose a five-layer framework.

Layer One: Pre-Crisis Credibility Architecture. The single most important investment in crisis communication happens before the crisis. Colonel Qureshi's briefing worked not because it was fast but because it was credible. The evidence footage had the texture of authenticity. The spokespersons had professional authority. The frame was consistent and measured. This kind of credibility cannot be manufactured in the moment. It must be built through sustained institutional behaviour: transparent communication in peacetime, consistent messaging across events, and a track record of accuracy that gives the government's voice an evidentiary advantage over fabricated content. The government that is trusted in peacetime earns the benefit of the doubt in crisis. The government that deploys propaganda in peacetime finds its legitimate evidence dismissed as more of the same.

Layer Two: Ecosystem Discipline. This is the hardest layer and the one that Operation Sindoor failed most visibly. Ecosystem discipline means ensuring that your own information allies, media outlets, social media amplifiers, party networks, and friendly commentators do not undermine your credibility by amplifying unverified content. When NDTV ran the DG ISPR deepfake, it was not a Pakistani victory. It was a self-inflicted wound on India's information credibility. Building ecosystem discipline requires pre-crisis protocols with major media partners on verification standards during conflict, real-time guidance from the government's communication cell on what has been verified and what has not, and the institutional courage to publicly disown false claims that favour your own side. This is the recommendation that no government wants to hear. It is also the one that matters most.

Layer Three: Real-Time Detection and Triage. When 180,000 posts are flooding the information environment, you cannot debunk them all. You should not try. The strategic question is not "how do we debunk this?" but "which false claims, if left uncorrected, will materially affect the strategic outcome?" India has begun building detection tools. Vastav AI, developed by the cybersecurity firm Zero Defend Security, represents an indigenous capability. The PIB's fact-checking operation demonstrated institutional commitment. But detection without triage is a resource trap. What is needed is a classification system that categorises incoming disinformation by strategic impact, not by volume or virality. Some viral claims are strategically harmless. Some low-volume claims, if unchallenged, can reshape the entire narrative arc. The war room must learn to distinguish between the two.

Layer Four: Counter-Narrative Positioning. The traditional approach to counter-narrative is reactive: a false claim appears, and you produce a rebuttal. In an AI-saturated environment, this approach fails because the volume of false claims always exceeds the capacity to rebut them. The alternative is to establish what I call "narrative anchors": a small number of clearly articulated, evidence-backed claims that serve as the reference points against which all competing narratives are evaluated. During Sindoor, India had an excellent narrative anchor in the initial briefing: nine terrorist camps, precision strikes, non-escalatory intent, and video evidence. The problem was that this anchor was not reinforced systematically as the crisis evolved. It was drowned out by the noise. Counter-narrative positioning means identifying three to five core claims at the start of a crisis, backing each with the strongest available evidence, and returning to them relentlessly in every subsequent communication, regardless of what the adversary or the information environment is producing. You do not chase every false claim. You make the truth louder than the noise.

Layer Five: Post-Crisis Narrative Consolidation. This is the layer that India has historically neglected entirely, and it may be the most consequential. The information war does not end when the ceasefire begins. During Sindoor, Pakistan's narrative continued to evolve for weeks after the fighting stopped. Trump's mediation claim gained traction after the crisis, not during the conflict itself. The opposition's critique crystallised months later. Post-crisis consolidation means treating the weeks after a crisis as a distinct communication phase with its own strategy, its own messaging, and its own resource allocation. It means commissioning and releasing definitive accounts, data, and evidence that establish the authoritative record. It means engaging international media, think tanks, and academic institutions to ensure the factual record is not shaped by those who acted fastest during the chaos.

The Bigger Problem

I want to be clear about why I am writing this now, nine months after the crisis.

Operation Sindoor was fought against Pakistan. The information warfare capabilities Pakistan deployed, while damaging, were relatively unsophisticated by global standards. The deepfakes were detectable. The recycled footage was traceable. The coordination patterns were identifiable. India's fact-checking ecosystem, while overwhelmed, managed to catch and debunk the most consequential fabrications within days.

Now consider what happens when the adversary is more capable.

Grok, X's integrated AI chatbot, became a dangerous amplifier during Sindoor, confidently misidentifying footage from Sudan's Khartoum airport as a missile strike on Nur Khan air base, labelling a building fire in Nepal as "likely" showing Pakistan's military response, and declaring a confirmed deepfake had "no evidence suggesting it is AI-generated." Users treated its responses as verification and shared them as proof. This was not a Pakistani information operation. It was a structural feature of the platform itself. And it will be present in every future crisis.

The implications extend beyond military conflict. Every major government communication challenge operates in the same information environment. If AI-generated disinformation can distort the narrative of a military conflict where India has clear evidence and decisive outcomes, it can certainly distort the narrative of a trade negotiation where the facts are ambiguous, an economic policy where the data is contested, or a state election where the stakes are local but the disinformation infrastructure is national.

Four states are voting this year. The BJP's message will focus on national security and Modi's record. The information environment in which that message must compete is the same one that nearly overwhelmed India's communication machinery during Sindoor. The tools are cheaper. The capabilities are more widely distributed. And the platforms have fewer, not more, safeguards than they did nine months ago.

The Question

I have spent nearly three decades advising governments on communication. I have seen playbooks that worked for years suddenly stop working because the environment changed. The Modi government's domestic communication playbook stopped working internationally when Trump changed the rules of engagement. Now, the traditional crisis communication playbook, even the upgraded version I have been advocating, has met its own moment of obsolescence.

The five-layer framework I have outlined above is not theoretical. Each layer requires specific institutional capacity, trained personnel, pre-established protocols, and technology infrastructure that must be built before the crisis, not during it. The governments that build this architecture now will be the ones that maintain narrative credibility in the next crisis. The ones that do not will find themselves fighting the next war with the tools of the last one.

Operation Sindoor was won on the battlefield in 88 hours. On the information battlefield, it is still being fought. The war India fought twice was the first of its kind in South Asia. It will not be the last.

The question is whether India will be ready.