We Have the Tools. We Have the Data. Now All We Need Is NAMO.
India transfers ₹7 lakh crore in welfare annually. Algorithms decide who receives it. When systems fail, widows starve, and nobody answers. We have built world-class digital infrastructure. We have the talent. What we lack is accountability. NAMO could change that.


Four years ago, I met with a Chief Minister who was concerned about his welfare scheme losing money. Officials blamed middlemen. The middlemen blamed the system. The system blamed no one. I suggested we use technology to find the answer.
I suggested using an algorithm to check beneficiary lists against property records, vehicle registrations, and tax returns. The goal was not to catch poor people cheating, but to see if the system was cheating itself. The Chief Minister agreed, but his officials did not. The idea was dropped in a conference room.
Last year, another state government tried this approach. In six months, they found 340,000 fake beneficiaries: dead people drawing pensions, government employees taking BPL rations, families with two cars getting subsidised gas. They recovered ₹890 crore.
The technology itself was not new. Any skilled data scientist can cross-check databases. What made the difference was the decision to actually use it. That is the central problem of AI in Indian governance: we have the tools, the data, and the talent, but no system that holds anyone responsible for using them properly.
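The cross-check itself is mundane. Here is a minimal sketch in Python with pandas; the column names and in-memory records are invented stand-ins for the real department databases:

```python
import pandas as pd

# Illustrative data; real inputs would be welfare, transport, and pension databases.
beneficiaries = pd.DataFrame({"beneficiary_id": ["B1", "B2", "B3"],
                              "district": ["D1", "D1", "D2"]})
vehicles = pd.DataFrame({"owner_id": ["B2", "X9"], "vehicle_class": ["LMV", "HMV"]})
pensions = pd.DataFrame({"beneficiary_id": ["B1", "B3"],
                         "last_verified": ["2019-03-01", "2025-02-10"]})

# Flag 1: BPL beneficiaries matched to four-wheeler ("LMV") owners.
vehicle_flags = beneficiaries.merge(
    vehicles[vehicles["vehicle_class"] == "LMV"],
    left_on="beneficiary_id", right_on="owner_id", how="inner",
)

# Flag 2: pensions with no proof-of-life verification in over two years.
pensions["last_verified"] = pd.to_datetime(pensions["last_verified"])
stale = pensions[pensions["last_verified"] < pd.Timestamp.now() - pd.DateOffset(years=2)]

# The output is a review queue for humans, never an automatic exclusion list.
print(f"{len(vehicle_flags)} vehicle match(es), {len(stale)} stale pension(s) to review")
```

The design choice in the last comment matters more than the joins: a match is a reason for a human to look, not a reason for a computer to exclude.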
This article is not about whether India needs AI in governance. That argument is over. This article is about how to build it, what to build first, and what mistakes to avoid. I write it as someone who has spent decades advising governments on communication and strategy, and the past several years watching AI transform what is possible.
Fifteen Ministries, Zero Accountability
Let me explain the current situation as clearly as possible.
The IndiaAI Mission is part of MeitY and has a ₹10,300-crore budget over five years. The government has set up 38,000 GPUs in five data centres, which researchers can use for ₹65 per hour. The infrastructure is impressive, but IndiaAI cannot control how ministries use AI in their citizen services. It builds the roads, but does not direct the traffic.
NITI Aayog produces strategy documents and coordinates across ministries, but coordination without implementation power is theatre. Its National Strategy for Artificial Intelligence, released in 2018 and updated periodically, remains advisory. Advisory documents do not feed the hungry or catch the corrupt.
Each ministry builds its own AI projects in isolation. Health has telemedicine, agriculture has crop advice systems, defence has DRDO projects, and the Ministry of Electronics has digital governance schemes. They do not share infrastructure, data standards, or what they have learned. Each one tries to solve the same problems on its own, often not very well.
State governments are also running their own experiments. Telangana built Samagra Vedika for welfare checks. Karnataka developed an AI system to handle complaints. Andhra Pradesh set up a real-time governance dashboard. Tamil Nadu has its own digital projects. These efforts are isolated, with no way to share successes across the country.
What does this fragmentation mean for real people? Bismillah Bee, a widow in Telangana, experienced it firsthand. Her husband, Syed Ali, was a rickshaw puller, but the Samagra Vedika system mixed him up with a car owner named Syed Hyder Ali. For three years, she could not get rations. The algorithm saw a car, but it did not see a widow with hungry children.
From 2014 to 2019, Samagra Vedika wrongly excluded 1.86 million people from food support. In Jharkhand, eleven-year-old Santoshi Kumari died of starvation in 2017 after Aadhaar authentication failures stopped her family from getting rations. Over the five years after Aadhaar became mandatory, the state recorded 27 starvation deaths. Justice Chandrachud, in a key judgment, wrote: "Dignity cannot be subjected to algorithms."
This was not a failure of technology; the system did what it was built to do. The failure was one of governance. No one was held responsible. No one faced consequences.
The Proof Already Exists
Before I suggest solutions, let me show what already works. These are not theories; they are running systems that save money and lives. Three domains make the case: catching thieves, saving lives, and guarding borders.
Brazil Did It. Ukraine Did It. Why Not Us?
Brazil created a system called Alice that monitors government purchases in real time. It looks for unusual contract patterns, flags suspicious bidders, and automatically pauses questionable deals for review. If Alice finds a tender with only one bidder, a winning bid very close to the reserve price, or the same company winning too often, the process stops before any money is spent.
Ukraine’s ProZorro system opened government purchasing to public scrutiny and uses AI to spot collusion among bidders. The European Court of Auditors uses machine learning to study lobbying patterns in EU institutions. Spain’s Social Security agency uses AI to catch benefit fraud as it happens.
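None of this needs exotic machinery; much of it is pattern-screening over public tender data. A minimal sketch of two such screens in Python; the records and thresholds are illustrative assumptions, not Alice's or ProZorro's actual rules:

```python
from collections import Counter

# Illustrative tender records: (tender_id, winner, n_bidders, win_bid, reserve_price)
tenders = [
    ("T-001", "Acme Infra", 1, 98.5, 100.0),
    ("T-002", "Acme Infra", 4, 72.0, 100.0),
    ("T-003", "Acme Infra", 3, 99.2, 100.0),
    ("T-004", "Bharat Roads", 5, 81.0, 100.0),
]

def red_flags(tender):
    _, _, n_bidders, win_bid, reserve = tender
    flags = []
    if n_bidders == 1:
        flags.append("single bidder")
    if win_bid / reserve > 0.98:  # winning bid suspiciously close to the reserve
        flags.append("bid near reserve price")
    return flags

# Screen 1: per-tender anomalies, raised before the contract is signed.
for t in tenders:
    found = red_flags(t)
    if found:
        print(t[0], "->", ", ".join(found))

# Screen 2: market concentration, one supplier winning too often.
wins = Counter(t[1] for t in tenders)
total = sum(wins.values())
for supplier, n in wins.items():
    if n / total > 0.5:  # illustrative threshold
        print(supplier, f"won {n} of {total} tenders: hold for review")
```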
India loses about 3% of its GDP to corruption each year, by widely cited estimates. We rank 93rd of 180 countries on Transparency International’s Corruption Perceptions Index. The World Bank finds that open contracting and digital monitoring can cut procurement losses by 30%. For India, where government procurement accounts for 20-25% of GDP, that means saving vast sums of public money in practice, not just in theory.
I have seen Indian officials brush this off, saying, "Our situation is different" or "Our procurement is too complex." What they really mean is that accountability would be uncomfortable.
The Doctor Who Never Arrives
India has about 160,000 Health and Wellness Centres in rural areas, but most do not have specialist doctors. A farmer in Vidarbha with chest pain cannot see a heart doctor. A tribal woman in Bastar with pregnancy problems cannot reach an obstetrician. Many die, travel far at great cost, or suffer in silence.
AI-powered diagnostic tools can help. They will not replace doctors, but will support paramedics and ASHA workers in the field. These systems can check symptoms, spot urgent cases, suggest treatments for common illnesses, and warn about possible disease outbreaks. The e-Sanjeevani platform has already done millions of telemedicine consultations. The next step is to use AI to sort patients before video calls, help with clinical decisions, and flag cases that need specialists before they become emergencies.
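To make "sort patients before video calls" concrete, here is a minimal rule-based triage sketch. Every symptom flag and threshold below is invented for illustration; a real tool would be designed with clinicians and validated before any deployment:

```python
URGENT, SPECIALIST, ROUTINE = "urgent", "refer to specialist", "routine teleconsult"

def triage(case: dict) -> str:
    """First-pass sorting of a case record. Rules are illustrative only."""
    # Hard red flags escalate immediately.
    if case.get("chest_pain") and case.get("age", 0) > 40:
        return URGENT
    if case.get("pregnancy") and case.get("bleeding"):
        return URGENT
    # Persistent or compounding symptoms go to a specialist queue.
    if case.get("fever_days", 0) > 7 or case.get("weight_loss"):
        return SPECIALIST
    return ROUTINE

print(triage({"age": 52, "chest_pain": True}))   # -> urgent
print(triage({"age": 30, "fever_days": 10}))     # -> refer to specialist
```

Even a simple first pass like this changes the economics of a video consultation queue: specialists see the flagged cases first instead of in arrival order.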
The technology is already here. Stanford researchers have shown that AI can match or beat specialists in spotting skin cancer, diabetic eye disease, and lung problems from images. Google DeepMind’s AlphaFold has changed how we predict protein structures. The real challenge is getting these tools into use: training health workers, protecting data privacy, and building trust in AI advice. These are issues of governance, not technology.
When Stakes Are High, We Act Fast
After the Pahalgam attack, border security has moved up the national agenda. AI is already helping. The Indian military uses over 140 AI-powered surveillance systems along the borders. DRDO and BEL have built computer vision tools that review drone footage in real time and spot patterns humans might miss. The Integrated Unmanned Roadmap 2024-2034 plans for full drone coverage of sensitive areas, with AI handling the enormous amounts of data these systems collect.
High-altitude drones now watch over the Line of Control. AI-powered anti-drone systems have stopped 80% of hostile drones in recent incidents. The defence budget for 2025-26 is ₹6.81 lakh crore, with about ₹100 crore each year for military AI projects. More than 1,000 defence-tech startups are working on local solutions.
This shows what AI can achieve when there is strong political will. In defence, the risks are high, so action happens quickly. The real question is whether we can act as urgently for citizens facing hunger, disease, or neglect.
NAMO: The Missing Institution
If the Modi government truly wants to use AI for better administration, here’s what I suggest.
Set up an independent National AI Management Office (NAMO) that reports directly to the Prime Minister’s Office. MeitY doesn’t have the authority to work across ministries, and NITI Aayog can coordinate but not implement. NAMO should have legal backing through an Act of Parliament.
Why NAMO? Branding is essential in politics, and a Prime Minister who shares the acronym may appreciate the opportunity to leave a lasting legacy. More importantly, “management” reflects what’s needed: not just innovation or strategy, but the practical work of making things run smoothly.
NAMO would focus on four main functions.
First, NAMO should set mandatory standards for the use of AI in government services: not just guidelines but enforceable rules. If a ministry uses an algorithm that affects people’s access to benefits, it must meet NAMO’s requirements for transparency, auditability, and human oversight. No exceptions.
Second, NAMO should regularly audit important systems, including welfare, law enforcement, taxation, and credit allocation, as well as any areas where algorithms affect people’s lives. These audits should be done by independent experts, not the original developers. Systems should be open source and open to review by global experts.
Third, NAMO should run a public registry of AI incidents. When systems fail, those failures need to be recorded, analysed, and shared openly. Cases like Bismillah Bee’s exclusion should be studied, not ignored. Santoshi Kumari’s death should lead to change, not be forgotten. Transparency prevents repeated mistakes; a sketch of what one registry record might contain appears after this list.
Fourth, NAMO should certify AI vendors for government contracts. Right now, ministries buy AI systems on their own, allowing unqualified or dishonest vendors to win contracts again and again. A central certification and blacklisting process would create real accountability.
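What would one registry record contain? A minimal sketch as a Python dataclass. Every field is my assumption about what would be useful to publish, not a proposed standard, and the example entry is modelled loosely on the Samagra Vedika mismatch described earlier:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIIncident:
    """One record in a public registry of government AI failures."""
    incident_id: str
    system_name: str                # e.g. a welfare eligibility engine
    operating_body: str             # ministry or state department
    date_reported: date
    harm_description: str           # what happened, to whom
    affected_estimate: int          # people wrongly excluded, flagged, etc.
    root_cause: str                 # data error, model error, process failure
    remediation: str                # what was fixed, and for whom
    appeal_outcome: str = "pending"

example = AIIncident(
    incident_id="2025-TS-0001",
    system_name="welfare eligibility cross-check",
    operating_body="State Food and Civil Supplies Department",
    date_reported=date(2025, 1, 15),
    harm_description="beneficiary wrongly matched to a vehicle owner with a similar name",
    affected_estimate=1,
    root_cause="fuzzy name matching without human verification",
    remediation="match rule tightened; ration restored with arrears",
)
print(example.incident_id, example.appeal_outcome)
```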
NAMO should be staffed with technologists who can read and write code, policy experts, and experienced civil servants. The chairperson should serve a fixed five-year term, removable only through a process similar to that for the CAG or the Chief Election Commissioner. Independence is essential.
The budget needed is about ₹1,000 crore a year. For a government that routes lakhs of crores of welfare through algorithm-mediated systems, this is a rounding error. The potential returns dwarf it.
What Ministries Must Do
NAMO gives central oversight, but that alone isn’t enough. Each major ministry should have its own AI Cell that reports directly to the Secretary.
This isn’t just more bureaucracy. The AI Cell translates ministry goals into technical requirements, ensures AI projects align with policy aims, monitors for issues such as bias or failures, and helps citizens when algorithms make mistakes.
Each AI Cell should have five to seven people: two technical specialists in coding and machine learning, two ministry experts in policy, and support staff for documentation and communication. Across twenty ministries, this would cost about ₹100 crore a year, the same as one mid-sized government ad campaign.
The AI Cell’s main job isn’t to innovate; consultants can do that. Its real job is to make sure new ideas actually work for citizens when put into practice.
What States Must Do
India is a federation, and states handle most services for citizens. Any national plan that leaves out states isn’t realistic.
Set up State AI Coordination Units under each Chief Secretary. These teams should adapt national plans to local needs but keep systems compatible. For example, a system in Tamil Nadu should work with one in Gujarat. Data standards must match, and states should share what they learn.
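What "data standards must match" means in practice is a shared record format that any state system can emit and validate. A minimal sketch with invented field names, not a proposed national standard:

```python
# Illustrative national record spec: field name -> expected type.
BENEFIT_RECORD_SPEC = {
    "beneficiary_id": str,
    "scheme_code": str,
    "state_code": str,     # e.g. "TN", "GJ"
    "status": str,         # "active" | "suspended" | "under_appeal"
    "last_reviewed": str,  # ISO 8601 date
}

def conformance_problems(record: dict) -> list[str]:
    """Empty list means the record can travel between state systems."""
    problems = [f"missing field: {k}" for k in BENEFIT_RECORD_SPEC if k not in record]
    problems += [
        f"wrong type for {k}"
        for k, expected in BENEFIT_RECORD_SPEC.items()
        if k in record and not isinstance(record[k], expected)
    ]
    return problems

tn_record = {"beneficiary_id": "TN-42", "scheme_code": "PDS", "state_code": "TN",
             "status": "active", "last_reviewed": "2025-06-01"}
print(conformance_problems(tn_record))  # [] -> a Gujarat system can consume it as-is
```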
The idea is to let states innovate while following common standards. Their new solutions should connect to the national system. If not, we’ll just repeat today’s problems on a bigger scale.
Some states are already leading. Telangana has shown ambition, even if Samagra Vedika had issues. Karnataka’s tech sector is a significant advantage, and Tamil Nadu has strong governance. These states can be testing grounds, but their lessons should be shared with others in a structured way.
Where to Start
Governments often try to change everything at once, but those plans usually fail. Here’s a step-by-step approach that considers impact, feasibility, and what’s politically possible.
Year One: Follow the Money
India sends over ₹7 lakh crore each year through DBT schemes. Even cautious estimates put leakage at 5-10%, lost to fake beneficiaries, duplicate accounts, and fraud. If AI cuts leakage by just 2 percentage points, that saves ₹14,000 crore a year, more than the entire five-year IndiaAI Mission budget.
We already have the technology and data. What’s missing is connecting databases, protecting privacy, and setting up a strong appeals process for mistakes. Start with three states that have different governance styles, run the program for a year, see what works, and then expand nationwide.
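"Connecting databases" and "protecting privacy" are not opposites. One common pattern is to compare keyed hashes of identifiers, so departments exchange opaque tokens rather than raw IDs. A minimal sketch; real deployments would use stronger private set intersection protocols and proper key management:

```python
import hmac
import hashlib

# Illustrative key; in practice held and rotated by a neutral matching authority.
SHARED_KEY = b"matching-authority-secret"

def blind(identifier: str) -> str:
    """Keyed hash, so departments share opaque tokens instead of raw IDs."""
    return hmac.new(SHARED_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Each department blinds its own identifiers before sharing anything.
welfare_tokens = {blind(i) for i in ["ID-1001", "ID-1002", "ID-1003"]}
vehicle_tokens = {blind(i) for i in ["ID-1002", "ID-9001"]}

# Only the overlap is learned: candidate matches for human review.
candidates = welfare_tokens & vehicle_tokens
print(len(candidates), "candidate match(es) for manual verification")
```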
The most crucial step is to set up the appeals process before launching the detection system. People like Bismillah Bee need a way to challenge decisions before an algorithm’s verdict becomes final.
Year Two: Catch Them Before They Sign
Use AI to monitor e-procurement portals and identify unusual patterns before contracts are issued. Look for factors such as single bidders, last-minute changes, repeat winners, or price clustering. Other countries, such as Brazil and Ukraine, have successfully done this. India can too.
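Price clustering deserves a note, since it is one of the better-known bid-rigging screens: when losing bids are cover bids, they tend to bunch unnaturally close to the winner. A minimal version in Python; the threshold is an illustrative assumption:

```python
from statistics import mean, stdev

def clustering_flag(bids: list[float], cv_threshold: float = 0.03) -> bool:
    """Flag tenders whose bids bunch together unnaturally (low coefficient of variation)."""
    if len(bids) < 3:
        return False  # too few bids for this statistic; other screens apply
    cv = stdev(bids) / mean(bids)
    return cv < cv_threshold

competitive = [72.0, 81.5, 94.0, 88.0]  # healthy spread of independent bids
suspicious = [98.0, 98.4, 98.9]         # cover bids hugging the winner
print(clustering_flag(competitive))  # False
print(clustering_flag(suspicious))   # True
```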
This approach is easier politically than welfare reform because it fights corruption, not beneficiaries. If you present it as an anti-corruption step, there’s less pushback. Even those involved in corruption rarely oppose such systems openly.
Year Three: From Efficiency to Foresight
After building the basics, start using AI for prediction. This means spotting infrastructure problems before they happen, finding districts at risk of malnutrition, identifying students likely to drop out, and predicting crop stress using satellite images.
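Underneath, "identifying students likely to drop out" is a ranking problem: train a classifier on past records and order current students by risk so counsellors visit the right households first. A minimal sketch with synthetic data and invented features; a real deployment would need careful feature selection, bias audits, and the appeals route argued for above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: [attendance_rate, grade_trend, distance_to_school_km]
X = rng.random((500, 3)) * np.array([1.0, 2.0, 15.0])
# Synthetic label: dropouts here skew toward low attendance plus long commutes.
y = ((X[:, 0] < 0.6) & (X[:, 2] > 8)).astype(int)

model = LogisticRegression().fit(X, y)

# Rank current students by predicted risk; output is a review queue, not a verdict.
current = rng.random((10, 3)) * np.array([1.0, 2.0, 15.0])
risk = model.predict_proba(current)[:, 1]
for i in np.argsort(risk)[::-1][:3]:
    print(f"student {i}: follow-up visit, risk={risk[i]:.2f}")
```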
This is when AI shifts from just making things efficient to truly transforming them. But it only works if the earlier groundwork is in place. If you skip steps, you end up like Telangana: advanced technology without proper accountability.
The Laxman Rekha That Must Be Drawn
Let me be clear about an issue that most policy documents and political commentators avoid.
AI surveillance is advancing faster than oversight. Facial recognition can track people across cities. Predictive policing can flag people as likely offenders before anything happens. Social media analysis can spot dissent before it grows. These tools are real, and many are already used in India.
The V-Dem Institute’s 2024 report on digital repression is troubling. India is repeatedly cited as an example of a democracy that deploys technology for surveillance without sufficient safeguards. The same government that builds inclusive payment systems also builds tools to track protestors.
I am not an absolutist about civil liberties. Surveillance technology has legitimate uses: catching terrorists, finding missing children, monitoring borders. But legitimacy requires transparency and oversight.
Every government AI surveillance system should be registered with NAMO, audited every year, and reported to a parliamentary committee. There should be clear rules for data retention and access. Intelligence agencies may resist, as they often do, but instead of exempting them, set up a separate oversight process, maybe through Parliament’s Intelligence Committee.
If a government uses AI to catch benefit fraud but avoids audits of its own spending, it shows where its priorities lie. Using facial recognition to find missing children is one thing, but using it to monitor protests crosses a line that needs to be clearly defined.
Regulate Harm, Not Speech
India doesn’t have a separate AI law yet. The government relies on existing rules, such as the Digital Personal Data Protection Act, the IT Act, and sector-specific regulations. This makes sense for now. The EU’s AI Act, with its detailed risk categories, might be too strict for such fast-changing technology.
However, the IT Rules changes proposed in October 2025, which make platforms responsible for synthetic content, could lead to more than just overregulation; they risk becoming censorship disguised as safety.
When the Bombay High Court struck down the government's fact-checking unit provisions as unconstitutional, it sent a clear message: content regulation cannot be captured by the executive. AI governance must learn from this. Deepfake regulation belongs in election law and criminal law, not in a framework governing AI in public administration.
Focus on regulating harm, not speech. Ask for disclosure, not approval. Build accountability, not permission systems. If AI governance starts controlling content, it becomes one more instrument for silencing inconvenient voices. We already have too many of those.
Four Failures I Have Seen Repeated
After years of advising political leaders, I’ve seen that technology projects usually fail because of people, not technical problems.
The pilot trap is the most common: pilot projects succeed but rarely expand nationwide. Pilots work because they get attention, dedicated staff, and secure funding. Scaling up needs standard procedures, trained people everywhere, and budgets that survive changes of government. Plan for scale from the start, or don’t start at all.
Vendor capture is a widespread issue. Big IT vendors make themselves essential by building systems only they can maintain and staffing projects with their own people. They write contracts that make switching expensive. The answer is open standards, required code escrow, and competitive bidding to avoid being locked in. If the government can’t replace its vendor, it’s no longer in control.
The dashboard delusion afflicts every Chief Minister I have known. They love dashboards. Real-time data. Colour-coded alerts. Impressive visualisations. But dashboards are not governance. Governance is what happens when the dashboard shows red. Who acts? With what authority? With what accountability? Build the response system before building the dashboard.
The announcement-implementation gap is the most persistent failure. A minister announces an AI initiative. The press covers it. The ministry issues a tender. A vendor is selected. A system is built. Then nothing. No training for frontline staff. No change in processes. No monitoring of outcomes. The system exists on paper while governance continues unchanged. Close this gap by making implementation milestones public and tying them to official performance reviews.
The Machines Are Ready. Are We?
India has built a remarkable digital public infrastructure. Aadhaar covers 1.4 billion people. UPI processes billions of transactions. DigiLocker stores hundreds of millions of documents. GSTN has transformed tax compliance. This infrastructure is admired globally and studied as a model.
But having infrastructure isn’t the same as good governance. Rails don’t move trains, and pipes don’t deliver water by themselves.
The next step is to build institutions that can use AI responsibly, spot problems before they hurt people, and hold systems accountable when things go wrong. That takes political will to invest in oversight, not just technology. It means treating algorithms that affect the vulnerable as policy instruments answerable for their outcomes, not neutral machines.
I’ve seen the Modi government build impressive digital systems, but also resist the checks that would make them safe. This tension is key right now. A strong state without limits isn’t strength; it’s a risk.
NAMO is a way forward. It has a brand that the Prime Minister may support, builds an institution that lasts beyond one government, and offers accountability and oversight without slowing things down.
Machines are starting to govern. The real question is whether we will stay in control of them.
Chanakya said a king who can’t protect his people doesn’t deserve to rule. Today, protection is more than armies or walls. It means making sure the algorithms that decide who gets food, who is free, and who is watched serve the people, not just the state.
We face that choice now. The tools are ready. The real question is whether we are.
