RUSH 2026: How can artificial intelligence be made safe for children?

Policymakers from India and France debated bold measures — from banning social media access for under-15s to redesigning AI algorithms around parental consent

From banning social media access for children under 15 to redesigning algorithms around parental consent, policymakers and AI experts at the Rencontres Universitaires et Scientifiques de Haut Niveau (RUSH) 2026 in New Delhi outlined competing but converging strategies to make artificial intelligence safer for children.

The high-level Indo-French academic and scientific forum, held at the All India Institute of Medical Sciences on February 18 and 19 and coordinated by the Embassy of France in India, brought together policymakers, scientists, philosophers and technology leaders to deliberate on artificial intelligence and its expanding influence across sectors, including health. 

Among the most closely watched discussions was a special session titled 'Children Protection and AI', moderated by Clara Chappaz, French Ambassador for AI at the Ministry of Europe and Foreign Affairs. The session examined how AI systems are reshaping childhood, from early cognitive development to adolescent mental health, and what governance frameworks are required to safeguard young minds. 

The keynote speakers included Anne Le Hénanff, French Minister for Digital and AI; Prof. Frédéric Worms, Director of the École normale supérieure (ENS-PSL); Gaurav Aggrawal, Chief AI Scientist at Reliance Jio; Adrien Abecassis, Political Affairs Director at the Paris Peace Forum; and Amit A. Shukla, Deputy Secretary, Ministry of External Affairs, India. The discussion moved beyond the conventional framing of AI as either purely beneficial or dangerous, and instead situated children at the centre of a deeper philosophical, scientific and regulatory debate.

‘The relationship between children and AI is emblematic’

Speaking at the session, Prof. Frédéric Worms, Director of ENS-PSL, described the relationship between children and artificial intelligence as deeply symbolic of broader societal tensions. “To me, the relationship between children and AI is emblematic,” he said, explaining that it is not merely 'a face-to-face' or a simple duality of risks versus opportunities. Instead, he framed it as a question of how human subjectivity is constructed over time, from infancy to adolescence, within family, school and democratic institutions.

While acknowledging the dangers, he also underlined the promise AI holds for child health. AI, he noted, “can be a progress for health in general, can give massive data to child health and mental health and physical health.” It could bring new tools to enhance child care and education, potentially improving early diagnosis, cognitive research and mental health interventions. Yet, he cautioned that this technological optimism must be grounded in a deeper understanding of human development. 

He broke down childhood into three stages: the baby, the child, and the adolescent. Beginning with infancy, he stressed that babies may not directly interact with AI systems, but they are indirectly affected through their environments. “The baby doesn’t have, fortunately, a social network activity,” he remarked, but parental care is deeply shaped by the digital ecosystem.

He argued that parents need a protected space, what psychologists call “primary relationships”, to nurture children’s subjectivity. AI, if used responsibly, could free up time and resources, enabling families to focus on care. “We have to build this space for the family, space for care,” he said, describing it as a fragile but foundational stronghold against future algorithmic pressures. 

In the second stage, the school-going child, he emphasised the relationship to knowledge. AI must not replace human teaching but be critically integrated. There should be a separation between school spaces and network spaces, he argued, while also teaching children how to engage with AI responsibly. “Teachers have to use AI critically to teach AI use to students,” he said, stressing the importance of building a distinct human-led learning environment that fosters critical thinking, scientific temper and resilience. 

Finally, in adolescence, he highlighted the construction of social and political identity. This is where exposure to unregulated digital environments can intersect with vulnerabilities related to depression, anxiety and even suicide. Just as families and schools need protective spaces, teenagers need “a democratic and ethical and universal space with values” that enables safe self-expression and identity formation. He concluded that societies must not only regulate technology but actively build safe spaces, within families, schools and democracies, to protect the developmental arc of childhood.

‘Adolescents’ brains are not for sale’

Speaking at the session, Guest of Honour Anne Le Hénanff, French Minister for Digital and AI, delivered a firm message on the commercial exploitation of children’s attention and emotions. Quoting French President Emmanuel Macron, she emphasised, “Adolescents’ brains are not for sale. Their emotions are not for sale.” 

She clarified that France’s evolving regulatory approach is not about restricting rights but about restoring them. As French lawmakers prepare to vote on a law banning social media access for children under 15, she emphasised that the measure “is not an authoritarian ban targeting young people.” Instead, “it is about defending them, protecting them and restoring their freedom, returning to them their real lives, the ones that exist beyond screens, conversational AI and the virtual world that surrounds us.” 

Her remarks focused strongly on the health implications of excessive and poorly regulated digital exposure. She acknowledged the benefits of generative AI but warned that its rapid and mass adoption represents “an unprecedented technological shift spreading faster than any previous innovation.” When models are not properly calibrated, she noted, they can expose children and adolescents to new dangers. 

France’s policy trajectory, she explained, has been evidence-based. In 2022, the country required parental controls on electronic devices. In 2024, it mandated age verification on pornographic websites to keep minors from accessing them. In 2026, France plans to require social media platforms to verify users’ ages, ensuring that children under 15 cannot access networks “that were never designed for them.”

As Minister for Digital and AI, she said she recognises AI’s immense potential in health, education and innovation. However, she is “acutely aware of the ethical challenges and risks it poses.” France will rely on scientific evidence and expert consensus, she added, applying the same rigorous approach to generative AI that it used for regulating social media.

Who decides what children see online? 

Amit A. Shukla, Deputy Secretary at India’s Ministry of External Affairs, framed AI primarily as a developmental tool, but cautioned that its design and deployment must prioritise safety, especially for children. 

He explained that for India, “digital technologies, including artificial intelligence,” are tools for governance and solving developmental challenges. Citing examples such as Aadhaar and digital payment systems, he noted how scalable technological architecture has triggered financial and social transformation. However, he stressed that such transformation is sustainable only when systems are “safe and trusted,” calling this principle foundational to all other AI initiatives. 

Turning specifically to children and health concerns, Shukla acknowledged that the impact of AI-driven platforms on the brain, particularly young minds, is a serious area of study. Referring to social media ecosystems, he pointed out that when users scroll through platforms, “what comes next is determined by an algorithm,” and that algorithm is often trained “to maximise commercial gains.” In the process, he warned, there is an impact “on the human being’s brain, primarily the children’s brain also.” 

While age verification offers a technological layer of protection, he questioned whether regulation alone is enough. With hundreds of millions of mobile users in India, implementation remains a challenge. Instead of relying solely on universalised solutions, Shukla proposed empowering users, especially parents. “In case of children, it is going to be their parents,” he said, advocating for consent-based systems and techno-legal solutions. 

Drawing from India’s consent-based financial data-sharing model, he suggested similar frameworks for social media and AI platforms. “Why can’t the user have the freedom of choosing the algorithm?” he asked. Rather than a single commercially optimised feed, parents could select algorithms aligned with educational or developmental priorities. This, he argued, would introduce a democratic layer into digital ecosystems, shifting control from profit-driven design to child-centred well-being. 

This story was produced in collaboration with First Check, the health journalism vertical of DataLEADS.