AI in health care needs direction, not just speed 

It is not enough to know how to drive a car; nor is how fast you can drive the most pertinent question. What matters is how responsibly you navigate to a desired destination. The same applies to AI in health care.


Technology has a role in advancing humanity: improving lives, reducing suffering and enhancing human well-being. Each major technological leap, from antibiotics to medical imaging to telemedicine, has transformed health care. India’s health care system is now navigating such a decisive journey with Artificial Intelligence (AI).

With over 1.4 billion people, rising Non-communicable Diseases (NCDs), persistent communicable disease risks, overstretched health care infrastructure and entrenched regional disparities, India’s health care system faces a complex mix of constraints. Yet within this complexity lies opportunity. By embracing AI and digital public infrastructure, India is on a pathway to transform health care.

Yet the real question is not whether AI will impact health care, but what kind of health care it will create.

India’s AI-in-health landscape: Policy momentum

Before exploring what kind of health care AI will ultimately shape, it is important to understand the policy momentum and market drivers already pushing adoption. According to NITI Aayog, the potential of AI to transform India's health care sector is immense. AI in the Indian health care market generated revenue of USD 974.6 million in 2024 and is projected to reach USD 8 billion by 2030, a compound annual growth rate of 42.1 per cent between 2025 and 2030.
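Using the 2024 base of USD 974.6 million and the 2030 projection of USD 8 billion, the implied growth rate is straightforward to sanity-check. The snippet below is a back-of-envelope sketch, not an independent market estimate.

start, end, years = 974.6, 8000.0, 6  # USD millions, 2024 to 2030
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 42.0%, in line with the reported 42.1 per cent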

Recognising this potential, the Government of India launched the IndiaAI Mission in March 2024, allocating Rs 10,300 crore over five years to strengthen national AI capabilities and accelerate responsible adoption across priority sectors, including health care. The government is also digitalising the health care system through the National Digital Health Mission (since renamed the Ayushman Bharat Digital Mission), which aims to create a unified health ID for every citizen, linking health records and enabling seamless sharing of health data.


The mission is working to digitise more than 500 million patient records, improving long-term care and enabling predictive health modelling. This initiative is expected to generate vast amounts of structured data, creating fertile ground for the growth of AI applications in health care.

Leveraging AI in health care  

With this policy momentum and these enabling drivers in place, it is important to examine how AI is reshaping different health care domains. Fundamentally, AI is a machine’s ability to perform human-like problem-solving tasks. AI systems are trained on large datasets, using machine learning, deep learning and large language models (LLMs) to decipher patterns.

In the health care ecosystem, it offers a promising transformation. AI models are fed large volumes of health data—such as medical images, public health data, lab results, prescriptions and clinical notes—and are trained to recognise patterns.  
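To make the pattern-learning idea concrete, here is a minimal sketch that trains a simple classifier on synthetic data. The features (age, blood glucose, body mass index) and the labelling rule are invented purely for illustration; real clinical models demand curated, representative data and regulatory validation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(50, 15, n),   # age in years (synthetic)
    rng.normal(100, 25, n),  # blood glucose in mg/dL (synthetic)
    rng.normal(26, 5, n),    # body mass index (synthetic)
])
# Invented labelling rule: flag "at risk" when glucose and BMI are both high.
y = ((X[:, 1] > 115) & (X[:, 2] > 27)).astype(int)

# Train on one portion of the data, evaluate on records the model never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

The point is the workflow, not the numbers: the model sees labelled examples, fits a decision rule, and is then judged on data it has never seen.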

Once trained, these systems can support prevention and disease surveillance, improve screening and diagnosis (especially in radiology and oncology), and aid treatment decisions. Their application also extends beyond hospital walls through telemedicine and remote monitoring, while streamlining documentation, triage, communication and hospital workflows so that health care professionals can focus more on patients.

Across the health care value chain, recent examples show how AI is moving from experimentation to practical, scalable use. Voice-first assistants such as Microsoft Dragon Copilot are being positioned to reduce administrative burden by streamlining clinical documentation, surfacing relevant information and automating routine tasks.  

Health systems are also deploying AI to improve operational efficiency: Apollo Hospitals, for example, is using AI as a lever to manage staff workload and optimise throughput in clinical care. AI is also increasingly applied to earlier and more accurate detection, illustrated by Google licensing its AI to support screening for diabetic blindness, and by broader research suggesting AI can flag early signals across a wide range of diseases, including cancer.

Firms such as Qure.ai are strengthening their capabilities through structured, regulated training of AI for lung cancer detection. Beyond diagnosis, machine learning can potentially accelerate drug discovery and translate insights into improved patient care pathways. In public health, Wadhwani AI’s “Cough Against TB” initiative demonstrates AI-enabled tuberculosis screening, trained on large cough-audio datasets, to support population-scale screening.

Finally, AI is widening access to care through telemedicine and language-enabled services: platforms such as Practo emphasise vernacular teleconsultation, while government-led services like eSanjeevani extend remote OPD-style consultations.

Anticipating the unintended consequences

The range of AI applications in health care makes its potential hard to ignore. Yet the key policy question before us is not only how AI is transforming health care, but for whom it is transforming it, and what kind of health system it will ultimately shape. This is where ethics and governance become pivotal: who designs and owns these technologies, whose data and lived realities they reflect, and whether we are being sufficiently thoughtful and critical about unintended consequences, potential risks and harms.  

A major concern is data bias. Technology is often assumed to be neutral, but it is human beliefs and (mis)understandings that introduce bias into it. AI models are trained on data created by humans, so they inherit human values and are subject to bias.

They can also sometimes produce inaccurate results. An AI system is only as good as the data it is trained on. Many widely used AI models rely on datasets that disproportionately represent certain countries, cultures and languages, which can skew outputs. Each data feature may carry associated risks and prejudices.  

For instance, AI without safeguards can amplify gender bias. Male-skewed data, women’s underrepresentation in trials and design, and biased algorithms can distort diagnosis and treatment. An LSE study found AI tools can underplay the health concerns of women, while simultaneously amplifying those same issues when they affect men. 
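The mechanism behind such skew is easy to demonstrate in miniature. In the sketch below, all data is synthetic and the single “biomarker” with group-specific thresholds is invented; it shows only how a model trained mostly on one group serves the other group worse.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, threshold):
    # One synthetic "biomarker"; the disease threshold differs by group.
    x = rng.normal(0, 1, (n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Skewed training set: 900 records from group A, only 100 from group B.
xa, ya = make_group(900, 0.0)
xb, yb = make_group(100, 0.8)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.hstack([ya, yb]))

# The learned cut-off tracks the majority group, so the minority
# group sees markedly more errors on fresh data.
xa_t, ya_t = make_group(2000, 0.0)
xb_t, yb_t = make_group(2000, 0.8)
print("Group A accuracy:", round(model.score(xa_t, ya_t), 2))
print("Group B accuracy:", round(model.score(xb_t, yb_t), 2))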

Such concerns come to the forefront when not just trained doctors but also patients turn to AI tools and chatbots for health-related concerns. Recently, OpenAI unveiled ChatGPT Health, saying 230 million users ask it about health each week, and Anthropic has introduced Claude for health. These chatbots offer a dedicated space for users to have conversations about their health, which has sparked concern and debate within the medical fraternity about using AI chatbots for medical advice.

Without guardrails, these platforms carry risk. They can create a situation where the power of consultation, and even diagnosis, rests not with clinicians or other health care professionals but in the hands of the public, who, confronted with health uncertainty, seek immediate guidance from AI chatbots.

LLMs like ChatGPT operate by predicting the most likely response to a prompt; they are prone to hallucinations and may lead to misdiagnosis. This also raises the larger question of “moral outsourcing”: the delegation of ethical decision-making to algorithms, artificial intelligence and automated systems, leading to a quiet erosion of human responsibility when we let machines decide.
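As a toy illustration of that next-word mechanism, consider the sketch below. The three-word vocabulary and the scores are entirely made up; it shows only how a model that emits the likeliest continuation can sound confident regardless of medical correctness.

import math

vocab = ["benign", "malignant", "unclear"]
logits = [2.0, 1.0, 0.5]  # invented model scores for the next word

# Softmax turns scores into probabilities; the model then emits the
# likeliest word, whether or not it is medically correct. Fluent but
# wrong "hallucinated" answers arise from this same mechanism.
exps = [math.exp(score) for score in logits]
probs = [e / sum(exps) for e in exps]
print(max(zip(probs, vocab)))  # approximately (0.63, 'benign')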

This raises a pressing concern: who will be accountable if people are misdiagnosed based on suggestions made by LLMs? It points to the need to reimagine health care governance so that clear accountability and liability sit at each decision point in the deployment of AI technologies in health care.

In health care, decisions carry a moral weight, where outcomes can affect human lives at every step. If an AI system is biased, poorly governed, or deployed without accountability, the consequences can be disproportionately high: missed diagnoses, delayed treatment, exclusion of vulnerable groups, or erosion of public trust in health care. Responsible adoption therefore requires robust governance frameworks—covering safety, fairness, transparency, privacy, human oversight and defined liability—so that innovation improves care without creating new risks or widening existing inequities.  

Navigating and reimagining AI in health care

The real challenge is not that AI brings risks and concerns—those are inevitable with any transformative technology—but whether our health system is prepared to anticipate and govern them. The risks and ethical concerns associated with AI should not be treated as barriers to innovation, but as essential reflections that help us anticipate and address unintended consequences. In other words, the question is not simply how quickly we can deploy AI, but whether we can deploy it safely, fairly, and in ways that strengthen trust in health care. 

The Indian health care ecosystem therefore needs a robust, contextual ethical framework that can guide design, deployment and accountability across the AI lifecycle. Such a framework should act as a torchbearer, ensuring that AI’s benefits are distributed fairly, that harms are identified early, and that vulnerable groups are not further marginalised by biased data or opaque decision-making. AI should not be treated as a standalone product but as part of a wider health ecosystem of data pipelines, clinical workflows, regulatory oversight and patient trust. To realise AI’s full potential, Indian health care must strengthen existing public health data infrastructure (data quality, interoperability and representativeness) along with robust ethical infrastructure.

It is not enough to know how to drive a car; nor is how fast you can drive the most pertinent question. What matters is how responsibly you navigate to a desired destination. The same applies to AI in health care. Speed without direction can lead to harm, but with clear goals, robust governance and human oversight, AI can be steered towards the public goods that matter: better diagnosis, improved access, lower costs and more equitable care.

Deepak Gautam is the Public Policy Lead at DataLEADS, where he works at the intersection of public policy, health, and emerging technologies, including AI and misinformation resilience

This story is done in collaboration with First Check, which is the health journalism vertical of DataLEADS.
