Should you trust AI for medical advice? The hidden risks in health care

There is no dedicated regulatory framework in India that governs AI-based medical tools


India is moving fast on artificial intelligence in health care. Some would say too fast. The Ayushman Bharat Digital Mission has built a nationwide digital backbone that links health IDs, records and patient data. It is a big shift. Within this system, AI is already finding its place in diagnostic tools, symptom checkers and patient apps.

Its presence is visible in everyday decisions now, from how people look up symptoms to how they act on that information. What remains unclear is whether the system is ready for it.

Speed without safeguards

There is no dedicated regulatory framework in India that governs AI-based medical tools. The Digital Personal Data Protection Act addresses data rights in general terms, but it does not say what happens when a large language model generates a plausible-sounding but clinically wrong answer to a health query.

The Central Drugs Standard Control Organisation (CDSCO), which oversees medical devices, has begun to acknowledge AI diagnostics under its ambit. But enforcement on the ground is uneven, and guidelines are still catching up with how quickly these tools are being used.


A 45-year-old man in New Delhi was recently hospitalised in critical condition after he administered HIV post-exposure prophylaxis medication to himself based on advice from an AI chatbot.

He developed Stevens-Johnson Syndrome, a severe and potentially fatal drug reaction marked by painful rashes, blistering, and peeling skin. He had purchased a full 28-day course over the counter and taken it for seven days following a high-risk sexual encounter. The incident is not an anomaly. It is a preview.

What AI cannot do

The clinical problem with patient-facing AI is that a chatbot does not know a patient's age, comorbidities, or medication history unless explicitly told. Even then, it cannot weigh that information the way a trained clinician would. It cannot examine a patient. It cannot order tests when a picture is unclear. It cannot flag the difference between a symptom that is benign in isolation and dangerous in combination.

The World Health Organisation (WHO) has explicitly warned that the data used to train AI can be biased, generating misleading or inaccurate information that poses risks to health, equity and inclusiveness.

Large language models generate responses that appear authoritative and plausible to an end user, even when those responses are entirely wrong, the WHO has cautioned.

A study published in Nature Medicine, led by researchers from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences, found that AI chatbots often provide a mix of good and bad information that users cannot reliably distinguish.

While these tools now perform well on standardised medical knowledge tests, their use as a real-world diagnostic aid poses measurable risks to people seeking help with actual symptoms.

A blog post published in November 2025 in BMJ's Journal of Medical Ethics points to another concern. AI-generated health summaries now appear by default in search results, and in many cases they are not reliable. The post notes that such systems can produce incorrect responses in up to 48 per cent of cases. At the same time, they reduce traffic to peer-reviewed sources by 40 to 60 per cent, which means fewer people reach verified medical information.

There are already examples of how this plays out. In one case, a major search engine’s AI recommended unproven dental practices by mixing them with standard oral hygiene advice. For a user, there is little to separate what is tested from what is not.

India's data reality is both the problem and the promise

India presents a particular challenge for AI training data. The country's disease burden is heterogeneous. A diabetic patient in rural Tamil Nadu and one in urban Punjab present differently, eat differently, and have access to different care.

Most AI models in clinical use globally were trained on data that does not reflect this diversity. The result is that models may produce recommendations calibrated to a different population entirely, and patients or even providers may never know.

This is also, however, where India's scale becomes an asset. The volume and variety of health data generated across this country, if collected ethically and with consent, represent a foundation that could make AI genuinely more accurate for underserved populations worldwide.

Startups building India-specific models have begun recognising this, though the absence of India-specific validation standards means there is no common benchmark against which these claims can be tested.

Where doctors fit in

None of this is an argument against AI in medicine. Used as a clinical support tool, it has real value. Physicians who understand its limits can use AI to flag drug interactions, review imaging patterns, or manage documentation.

The distinction that matters is between AI as an instrument in the hands of a trained clinician and AI as a first-opinion service for patients who may not know what they do not know.

The choices India makes over the next few years will shape how this plays out. The regulatory framework has to be clear on a few basics. AI tools should be properly tested before they reach patients. There must be firm rules on how health data is used and whether people have given consent. And there has to be accountability when AI-led medical guidance causes harm.

This cannot be designed by technologists and investors alone. Doctors, ethicists and patient groups need to be part of these decisions from the start.

The infrastructure of Indian digital health is extraordinary. What it needs now is not more speed. It needs a foundation that can hold the weight of what is being built on it.

(Dr Sabine Kapasi is the CEO of Enira Consulting Pvt Ltd and Founder of ROPAN Healthcare; Nipun Vaid Mehta is a Consultant at Enira Consulting Pvt Ltd)

The opinions expressed in this article are those of the authors and do not purport to reflect the opinions or views of THE WEEK.