Can AI identify you from medical images? Study raises privacy concerns

AI promises faster diagnosis but raises new questions about how patient data is protected


AI foundation models used in medical imaging could pose privacy risks by potentially enabling patient re-identification, even as they promise major advances in disease detection and diagnosis, according to a new study published in the journal npj Digital Medicine. 

The study warns that while these models offer “transformative healthcare potential”, they may also retain sensitive information that could threaten confidentiality if safeguards are not strengthened. 

The researchers write: “Foundation models for medical imaging offer transformative healthcare potential but risk patient re-identification through latent demographic and identity signals retained during training.” 

Re-identification risk highlighted

The study notes that some models demonstrated very high identification capability in research settings. 

It states: “A recent study… reports re-identification rates up to 94 per cent in retinal imaging using FMs.” However, the authors caution that this risk should not be overstated without proper context. 

They clarify: “It is critical to distinguish the prediction of broad demographics such as age, biological sex, and ethnicity from the actual re-identification of a specific individual.” 


The researchers further note that identifying a person typically requires much more data. They explain: “This typically requires the linking of these features to a much wider range of identifiers, leading to the singling out of a specific individual from the population.” 

Innovation versus privacy

The paper calls for a balanced approach between innovation and patient protection. It states: “Urgent technical safeguards and policy frameworks are needed to balance innovation with individual privacy and protect confidentiality.” 

The authors also warn against misinterpreting the findings in ways that could slow medical AI development. They write: “Such findings should not be misinterpreted as suggesting that demographics alone constitute identifiable information.” 

Instead, they advocate safeguards while continuing development. The study says: “This comment advocates for a balanced approach that advances technical safeguards… while ensuring that ethical oversight protects patients without unintentionally stifling innovation in medical AI.”

Major healthcare potential

The researchers highlight that foundation models represent a major technological shift. They describe them as: “artificial intelligence (AI) systems trained on massive datasets… adaptable across diverse tasks.” 

According to the study, these tools could significantly improve healthcare outcomes. It notes they offer: “unprecedented performance in clinical practice, from the detection of incipient disease to tailored treatment planning.” 

Privacy risks are not entirely new

The paper also argues that such privacy concerns are not unique to foundation models. It states: “It is important to recognize that such concerns are not entirely new.” The real risk, researchers say, comes from combining AI outputs with other data sources. They write: “The primary privacy risk lies not in demographic inference itself, but in the potential combination of such features with external data to reconstruct identity.”

Regulatory and ethical concerns

The study highlights growing regulatory implications as AI adoption increases in healthcare. It says policy frameworks should include: “pre-deployment risk audits, transparency in feature representations, and mandatory privacy disclosures.” 

The researchers also call for stronger governance structures. They write: “Cross-disciplinary collaboration among regulators, researchers, and clinicians is essential for establishing best practices in AI-driven health care.” 

Need for safeguards

The authors propose technical solutions, including privacy-preserving AI design. They recommend: “technical safeguards… including FM-tailored differential privacy mechanisms and federated pretraining frameworks.” They also highlight architectural solutions. The study notes: “Methods like feature disentanglement may be used to separate useful clinical patterns from identifying details.” 

Why it matters

The study comes as hospitals increasingly adopt AI tools and governments consider new regulations on health data and artificial intelligence. Researchers emphasise that trust remains essential. They conclude: “The future demands neither naïve adoption nor reflexive rejection, but rather cautious optimism grounded in clear-eyed risk assessment and management.” 

The paper adds that protecting privacy while enabling innovation will be key to the future of AI in medicine. It states the goal should be: “to foster transparent development of purposeful models that respect patient privacy without forgoing their profound socioeconomic benefits.” 

This story was produced in collaboration with First Check, the health journalism vertical of DataLEADS.