Over the past few years, AI in healthcare has been a hot topic—no longer just a concept, but something people are actively trying out in their everyday lives.
On one side, there are reports highlighting the benefits - the ease of access, faster responses, and the possibility of personalised care at scale. On the other, there are equally strong concerns around limitations, accuracy, data privacy, and the loss of human judgment in something as critical as health. Between these two sides, the question becomes sharper and more urgent: are we really ready for an AI-enabled healthcare system?
A new study by Boston Consulting Group (BCG), a global consulting firm that advises leaders in business and society, attempts to unpack this debate. In what it describes as the first cross-national survey of consumer use of AI in health, BCG surveyed more than 13,000 internet-connected adults across 15 countries to understand how people are using AI for their health, how comfortable they are with it, and what they expect from it in the near future.
AI is becoming the first step in health journeys
The study reveals a clear behavioural shift: AI is increasingly becoming the first point of contact for health-related queries.
Globally, 57% of respondents said they have used AI tools for personal health, compared to 43% who have not. But this number rises sharply in emerging economies. In India, as many as 85% of respondents reported using AI tools, one of the highest among all countries surveyed. Similar trends were seen in Brazil (67%) and China (62%).
In contrast, usage in developed markets is significantly lower. Only 50% in the US, 44% in Germany, 43% in the UK, and just 34% in Japan reported using AI for health-related purposes.
The study also notes that “GenAI tools such as ChatGPT and Google Gemini are now the first destination for many patients who wish to access health advice quickly and conveniently.” This shift marks a move away from traditional web searches toward interactive, conversational health guidance.
The type of AI tools people use also reveals how deeply integrated these technologies have become. Around 33% of respondents use AI chatbots for general health advice, making them the most common entry point. This is followed by 19% using AI-enabled wearables such as smartwatches and fitness trackers. Other uses include AI nutrition and fitness coaching (13%), tools that explain medical results (12%), personal health dashboards (12%), sleep tracking (12%), health apps (11%), and even AI-based mental health tools (9%).
The appeal lies in accessibility and convenience. These tools are often free or low-cost, available 24/7, and capable of providing personalised responses by integrating data such as medical reports or wearable device inputs.
However, the study notes that current usage is still largely limited to preliminary stages of care - checking symptoms, understanding reports, and seeking general advice, rather than making critical clinical decisions.
Younger users lead adoption, but trust gaps persist
One of the most striking patterns in the study is the generational divide in AI adoption.
Younger users are significantly more comfortable using AI for health. Among Gen Z (18–27 years), a striking 78% reported using AI tools, compared to 71% of Millennials (28–43 years). Usage drops sharply among older groups - only 50% of Gen X, 31% of Baby Boomers, and just 16% among those aged 78 and above. Yet, despite this high adoption, trust remains a critical issue.
The study points out that “while more consumers are using GenAI tools to understand their health needs, many remain cautious.” The biggest concerns revolve around data privacy, security, and the reliability of AI-generated advice. Many users are unsure whether the recommendations are accurate or sufficiently personalised.
At the same time, the findings challenge the assumption that AI and doctors are competing alternatives. Instead, most users prefer a hybrid model, where AI supports but does not replace human clinicians.
Patients recognise the strengths of both. AI offers speed, convenience, and data-driven insights, while doctors provide context, empathy, and accountability.
This balance is already visible in clinical settings. Around 16% of respondents said they are aware that their doctors use AI, particularly for reviewing test results and suggesting diagnoses or treatments.
The study describes this as an ideal combination - “the speed and personalised information of AI agents and GenAI as well as the reassurance of human oversight and empathy.”
Expectations are rising faster than systems can adapt
If current usage reflects the present, expectations reveal the future, and they are rising rapidly.
The study shows that consumers expect AI to take on more advanced roles in healthcare within the next two years. The most in-demand function is administrative efficiency - 55% of respondents want AI to help book appointments and manage referrals.
Other expectations include flagging dangerous drug interactions (42%), recommending treatments based on personal data (40%), and diagnosing conditions with high accuracy (36%).
There is also growing interest in AI handling insurance-related tasks, with 30% expecting help in managing claims or interacting with insurers.
However, when it comes to critical care, trust drops significantly. Only 22% of respondents are comfortable with AI handling emergencies like triaging heart attacks, and just 18% believe AI could replace humans for basic care—highlighting that while adoption is growing, full trust is still far from established.
This gap between usage and trust presents both a challenge and an opportunity for healthcare systems.
The study emphasises that healthcare providers must act quickly to integrate AI responsibly. “There is a clear urgency for health care providers and systems to develop and deploy proprietary AI solutions to keep up with their customers’ expectations,” it states.
If healthcare systems fail to provide reliable AI tools, patients may continue relying on generic platforms that are not aligned with clinical guidelines, potentially leading to misinformation or inappropriate care.
At the same time, the potential benefits are immense. AI can improve access, reduce costs, enhance patient engagement, and streamline administrative processes.
But all of this hinges on one key factor - trust.
“If health system leaders can address concerns around trust, reliability, and personalisation, then AI offers the opportunity to fundamentally reshape how health systems and clinicians provide care,” the study notes.
Delays in adoption could mean missed opportunities, while premature implementation without safeguards could deepen mistrust. The path forward, therefore, lies in careful but decisive integration.
‘AI is here to assist, not replace clinical judgement’: Expert
Dr Rajiv Kovil, Head of Diabetology and Weight Loss Expert at Zandra Healthcare, believes that AI is already deeply embedded in healthcare, even if systems are still catching up. He pointed out that the pace of AI adoption has far exceeded regulatory readiness.
“AI is growing at the speed of pattern recognition and operational efficiency that the government is not able to keep up with,” he said, adding that the technology is “no longer futuristic, it is already embedded in pockets of care, and many times, in a large majority of care.”
However, he flagged a key challenge in the Indian context - data quality and interoperability. According to him, healthcare data in India remains fragmented and inconsistent, making it difficult to integrate AI systems effectively. This also raises concerns around privacy and security.
“The problem in India is that healthcare data is very messy. It is not interoperable, and that creates issues of security and reliability,” he explained.
Dr Kovil noted that patient behaviour is already shifting, with many individuals turning to AI tools before consulting doctors. “AI has now become the first consultation, and the doctor is the final confirmation,” he said.
He emphasised that AI has strong potential in managing chronic diseases, particularly through risk assessment and early detection. Tools that can analyse patterns and predict disease progression could significantly improve long-term care.
“In chronic care, AI can help with risk scoring, risk stratification, and even mapping the trajectory of diseases. It can give patients a more realistic expectation of what lies ahead,” he said.
At the same time, he cautioned against over-reliance on AI, stressing that it should be seen as a screening tool rather than a diagnostic authority.
“Patients need to understand that AI is a screening tool, not a diagnostic authority,” he said. “It can miss context, and sometimes it oversimplifies complex conditions.”
He gave the example of metabolic health, where AI may incorrectly classify pre-diabetes as normal, overlooking underlying risks. He also highlighted the growing issue of self-medication and misinterpretation, driven by AI-generated advice.
“The biggest problem with AI is self-medication and misinterpretation,” he said, adding that users often rely on incomplete or poorly framed queries. “People also need to learn how to ask the right questions. AI responds to the prompt you give; it can amplify bias if the question itself is biased.”
Dr Kovil also pointed out that while AI can assist with technical aspects such as drug interaction alerts, it cannot replace clinical judgement.
“A doctor cannot remember interactions of millions of drugs. If AI flags a strong drug interaction, it can be useful,” he said. “But it should not be trusted prematurely or used without context.”
He further stressed that AI is particularly effective in areas like diagnostics and pattern recognition. “In radiology, diabetic retinopathy, ECG interpretation, and even gait analysis, AI can significantly enhance accuracy and efficiency,” he noted, citing examples where AI tools are already improving patient outcomes.
However, when it comes to decision-making, empathy, and responsibility, he was unequivocal - these remain human domains.
“AI is very good at prediction, pattern recognition, and process efficiency. But clinical reasoning, patient communication, and decision-making must remain with doctors,” he said.
He added that healthcare cannot be reduced to data alone. “Clinical medicine is not just data. Without a human in the loop, AI is quite illiterate,” he remarked.
Ultimately, Dr Kovil believes AI will become an integral part of medical practice, but as a support system, not a substitute.
“AI will help us a lot in diagnosis and efficiency, but it can never replace responsibility, empathy, and the emotional understanding required in patient care,” he said.
This story was produced in collaboration with First Check, the health journalism vertical of DataLEADS.