Why India tops the charts in retracted health research papers | Expert Insights

From ethical lapses to systemic academic pressure, experts explain why retractions are increasing, and why correction alone isn’t enough

Representative image

Scientific retractions, once considered rare corrective measures, have quietly emerged as one of the strongest indicators of stress within global research ecosystems.

Today, they reflect deeper challenges around research integrity, publication pressure, and peer-review accountability. India has been increasingly in the spotlight for this trend. According to a study published in PLOS Biology, “The number of retracted papers per year is increasing, with more than 10,000 papers retracted in 2023. The countries with the highest retraction rates (per 10,000 papers) are Saudi Arabia (30.6), Pakistan (28.1), Russia (24.9), China (23.5), Egypt (18.8), Malaysia (17.2), Iran (16.7), and India (15.2).” 

This phenomenon is not limited to one domain; life sciences and healthcare research in India have also seen a marked increase in retractions. A study published in a Springer journal observed a “12–20% increase in retractions over decades in conference proceedings as well as journals,” noting that most authors of these studies were from China (39.42%), the United States (15.81%), and India (5.03%). Among the life sciences, interdisciplinary articles (20.26%), cell biology (19.08%), and cancer biology (13.61%) experienced the highest retraction rates. The study highlighted that the primary reason for retractions was “non-compliance with ethical and regulatory guidelines” (23%), followed by issues related to publication and data integrity. 

Such trends are concerning because life sciences research plays a pivotal role in understanding diseases, improving human health, and developing new therapeutic methods.  

Researchers from BHU further underscore the scope of the issue in India. The study ‘Assessing Retractions in Indian Science: An Analysis of Publications from the Past Three Decades Using the Web of Science Database’ found that “subjects like computer science (fields like internet of things, machine learning, deep learning), medical science (apoptosis, oxidative state, covid-19), material science (nanotechnology, nano-tubes, polymer science) are the major three disciplines where most of the retractions were noticed.”  

Plagiarism and compromised peer review were identified as the leading causes, highlighting the need for “stricter regulatory frameworks and better research practices.” In medicine alone, India ranks third globally, with 769 retractions out of over 23 million publications.  


Against this backdrop, we spoke to Dr Prashant Mishra, MD at BMJ, to understand what plagues scientific research in India and how such retractions can be reduced. 

Q/ India ranks among the top countries when it comes to research paper retractions, often appearing in the top 3 to 5 across various lists. Why do you think this is?

A/ Retractions should not be viewed only as failures. Instead, they can also reflect a scientific system that is actively correcting itself. People often see retractions as something negative. But I look at them as a sign that the system is trying to uphold scholarly integrity through correction. That, in many ways, is a healthy process. 

That said, it is critical not to conflate honest error with deliberate fraud or misconduct. Fabrication, falsification, plagiarism, and systematic manipulation of the publication process are serious violations that must be taken seriously at every level, by individual researchers, institutions, funders, and national systems. These are not acceptable by-products of pressure or scale; they represent failures of integrity and accountability. 

India’s position on global retraction lists is influenced by multiple factors. One major reason is scale. With a large population and a growing volume of research output, the absolute number of retractions is naturally higher. There is also intense academic pressure, especially in medicine, which can push researchers to accelerate publications.

Uneven training and limited infrastructure create disparities in research quality. While many retractions arise from poor training, weak methodology, or honest mistakes, it is equally important to acknowledge that a subset involves deliberate misconduct. These cases demand firm consequences, transparent investigation, and institutional accountability, not just correction of the record. 

Increased scrutiny and transparency in recent years have also contributed to higher detection of flawed or problematic studies. We are living in a time when monitoring mechanisms are stronger and more transparent. That’s actually a positive development. 

The focus should go beyond headline numbers. We should not be fixated only on how many papers are being retracted. What matters equally is how seriously the research ecosystem responds and learns from these cases. 

Q/ What are the main challenges associated with the retraction of papers?

A/ The problem is closely linked to career-driven pressures within academia. Publication requirements are deeply tied to academic milestones, particularly in the medical field. When you are doing your postgraduate studies, you are often required to publish papers within a fixed time frame, sometimes even before submitting your thesis. That naturally creates urgency and pressure. 

This pressure does not end with postgraduate training. For those working in teaching and academic institutions, promotions and career progression are also heavily linked to publication records. I am not debating whether this system is right or wrong, but from the perspective of workload and expectations, it can create significant pressure if it is not handled carefully. 

Broader systemic issues also play a role. Uneven research training, limited capacity building, and gaps in awareness about research ethics and integrity continue to affect the quality of scientific output. These are not just technical issues. They are part of the value system of how we train and mentor researchers. 

At the same time, it is important to look at retractions in proportion to research volume. As India’s research output continues to grow rapidly, the absolute number of retractions is also likely to increase. When publication numbers rise year after year, it is only logical that retractions will also go up. That’s why we should not look at raw numbers in isolation, but in relation to the total volume of research being produced. 

This is not an India-specific issue. This is very much a global phenomenon. Countries that are scaling up their research output are also witnessing similar patterns. 

At a fundamental level, our research ecosystem is still maturing. In many places, research is treated as a requirement to be completed rather than as a craft that needs to be developed over time. It’s not always about misconduct or manipulation. Sometimes it’s about lack of training, inadequate mentoring, and people taking shortcuts under pressure. 

Ultimately, safeguarding research integrity cannot rest on journals alone. Institutions must invest in training and oversight, funders must align incentives with quality rather than volume, and national systems must signal clearly that misconduct has consequences. Retractions are part of correction, but prevention and accountability are equally essential.

Q/ Why do retracted studies continue to influence public opinion, and how can the scientific community better counter misinformation?

A/ A well-known example of this is the 1998 Andrew Wakefield paper that falsely linked vaccines to autism. Although the study was formally retracted in 2010 and later exposed by the British Medical Journal (BMJ) in 2011 for what it described as “bogus data”, its claims continue to circulate widely. This case is a powerful example of why retractions and investigative journalism matter. But it also shows how misinformation often outlives the correction. Even after a paper is withdrawn, the false claim can linger in public memory and online spaces. 

Journals such as BMJ play a crucial role in rigorously examining evidence and exposing scientific flaws, but simply retracting a paper is not enough. The job is not just to remove flawed research from the record, but to make sure the corrected facts are clearly communicated and firmly established in the interest of public health. 

One of the biggest challenges is that retractions do not automatically erase misinformation from public discourse. A retraction corrects the scientific literature, but it does not automatically remove the claim from social media, news cycles, or popular conversations. That’s why people still link vaccines with autism, despite overwhelming evidence to the contrary. 

Countering such misinformation requires coordinated action. Publishers, journalists, clinicians, and public institutions each have a role, whether in correcting the scientific record or in communicating accurate information to the public. It’s a shared effort. We need to not only retract false claims but also ensure that reliable facts are communicated clearly and consistently.

Q/ Do you think AI has worsened the problem? If yes, how can we fix it? 

A/ AI has not created the problem of flawed research, but it has significantly altered the risk landscape. It’s a double-edged sword. When used responsibly, with transparency and human oversight, AI can support research and improve efficiency. But when misused, it can magnify existing weaknesses in the system. 

One of the key concerns is the speed and fluency with which generative AI can produce content. AI can generate very fluent text within seconds. But in science, fluency is not the same as scientific rigour. That distinction is extremely important. 

Unregulated use of AI can lead to serious problems, including fabricated references, inaccurate summaries, and the uncritical repetition of flawed or even retracted studies. If outputs are not independently verified, AI systems can keep pointing back to retracted papers. 

AI models are trained on existing scientific literature. If that data pool contains outdated, low-quality, or retracted studies, and if strong safeguards are not in place, AI-generated manuscripts can inadvertently resurface unreliable information. The concern is not only what AI learns from, but how quickly such outputs can enter the publishing stream without adequate human review. 

That’s why AI must be anchored to well-curated, governed, and regularly updated evidence sources, such as peer-reviewed research, authoritative databases, clinical guidelines, and evidence-based clinical decision support tools like BMJ Best Practice. 

The core issue is trust. And trust can only be built through proper governance, transparency, and robust checks and balances. 

Curated databases, high-quality research evidence, and internationally accepted clinical and scientific guidelines are essential safeguards if AI is to be used responsibly. Governance is critical, but it is not sufficient on its own. We also need robust evidence about which AI applications genuinely enhance research and publishing. Without that, even well-governed systems risk amplifying weaknesses rather than strengthening science. 

Across BMJ journals, dedicated content integrity teams work to prevent, identify, and address significant errors and instances of scientific misconduct or malpractice in published content. For authors and organisations, this means a fair and transparent publishing process, protection of their reputation, and greater confidence that their research will be respected, cited, and used appropriately within the scientific community. 

This story is done in collaboration with First Check, which is the health journalism vertical of DataLEADS.