Social media ban for kids under 16: How Meta’s Instagram algorithms impact teen mental health

Recently, the Karnataka and Andhra Pradesh governments decided to ban social media for children under 16 and 13, respectively.

Representative images | Reuters, Shutterstock

On March 6, the Karnataka and Andhra Pradesh governments moved to ban social media for children under 16 and under 13, respectively. Meta responded immediately, stating that such bans are ineffective and could drive children toward unregulated, unsafe websites. There is truth to this concern: children frequently circumvent age restrictions by inflating their age or using their parents' credentials. While Australia was the first country to implement a social media ban for children under 16, conclusive studies verifying the effectiveness of such measures have yet to emerge.

The clamour for banning social media for children has been gathering momentum because of growing evidence that social media companies, including Meta, serve highly engaging and potentially addictive content. They do so because their algorithms are designed to maximise engagement.

A 2025 study titled “Social Media Algorithms and Teen Addiction” found that recommendation systems optimise content to maximise screen time, creating a feedback loop that stimulates the brain’s reward system. These addictive design features, which scholars often call “dark patterns”, include infinite scrolling feeds, frequent push notifications, variable reward systems, and algorithmic personalisation that tailors content to individual behaviour. Researchers argue the techniques are deliberately engineered to maximise user engagement and entrench platforms’ market dominance.
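As a rough illustration of that feedback loop, consider the toy ranker sketched below. It is a deliberate simplification with made-up names and numbers, not Meta's actual system: posts are ranked by a per-topic engagement score, and every second of dwell time raises the score of whatever the user lingered on, so the feed drifts toward more of the same.

```python
from collections import defaultdict

# A tiny catalogue of posts, tagged by topic. Purely hypothetical.
POSTS = [
    {"id": 1, "topic": "fitness"},
    {"id": 2, "topic": "diet"},
    {"id": 3, "topic": "memes"},
    {"id": 4, "topic": "news"},
]

# Learned per-topic engagement scores for one user; starts neutral.
topic_score = defaultdict(lambda: 1.0)

def rank_feed(posts):
    # The only optimisation target is predicted engagement (dwell
    # time), not the user's well-being.
    return sorted(posts, key=lambda p: topic_score[p["topic"]], reverse=True)

def record_dwell(post, seconds):
    # The feedback loop: lingering on a topic raises its score, so
    # the ranker serves more of it next time.
    topic_score[post["topic"]] += seconds / 10.0

# Simulate a user who lingers on diet-related content.
for _ in range(5):
    for post in rank_feed(POSTS):
        dwell = 8.0 if post["topic"] == "diet" else 1.0
        record_dwell(post, dwell)

# "diet" has climbed to the top of the feed.
print([p["topic"] for p in rank_feed(POSTS)])
```

In this toy version the drift is harmless; the internal studies described below suggest the same dynamic becomes dangerous when the topic a vulnerable teenager lingers on is body image or eating disorders.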

These studies are not specific to Meta but apply to social media platforms in general. However, a trove of internal documents leaked by whistleblower Frances Haugen in 2021 revealed that Meta’s own research had identified significant risks associated with its platforms, particularly Instagram.


Internal documents showed that Meta knew Instagram could worsen body image issues among teenage girls and contribute to broader mental health problems among young users. The studies also indicated that the platform’s algorithms could amplify harmful or polarising content. Critics said Meta did not fully disclose these findings to the public or policymakers. The revelations were widely reported and became known as the “Facebook Files”. 

A later internal Meta study, reported by Reuters, suggested that the platform’s recommendation systems could expose vulnerable teenagers to more harmful material. Teens showing body-image concerns were recommended nearly three times as much eating-disorder-related content.

That harmful material made up about 10.5 per cent of their feeds, compared with 3.3 per cent for other users, a roughly 3.2-fold difference. Risky or disturbing themes more broadly accounted for about 27 per cent of their feeds versus 13.6 per cent for others. The findings suggested the algorithm could detect vulnerability signals and keep recommending similar content. However, Meta is not the only player; similar patterns have been observed across social media platforms.

Getting back to the states’ ban, Meta’s statement also noted that the company does not allow children under 13 to access its platforms. Meta’s policies require parental consent for users below 13 years of age, mainly to comply with the US Children’s Online Privacy Protection Act (COPPA), which restricts companies from collecting personal data from children under 13 without parental consent. However, Meta’s platforms largely rely on self-declared date-of-birth information, making the system easy to bypass. When contacted by THE WEEK, Andhra Pradesh IT Minister Nara Lokesh countered Meta’s narrative. “How are they doing with age verification? There are no safeguards,” he said.
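To see why self-declaration is so weak, here is a minimal sketch of what such a check amounts to in practice. The function names and dates are hypothetical, not any platform's real code; the point is that the platform can only validate the date the user types, not the user.

```python
from datetime import date

MIN_AGE = 13  # the COPPA threshold

def age_on(dob: date, today: date) -> int:
    # Whole years elapsed, accounting for whether the birthday has
    # already passed this year.
    before_birthday = (today.month, today.day) < (dob.month, dob.day)
    return today.year - dob.year - before_birthday

def signup_allowed(claimed_dob: date) -> bool:
    # The platform never sees the user's real age, only the date
    # they chose to type into the form.
    return age_on(claimed_dob, date.today()) >= MIN_AGE

# A ten-year-old enters their real birthday and is blocked...
print(signup_allowed(date(2015, 6, 1)))   # False (as of writing)
# ...then simply retypes the year and sails through. Same child.
print(signup_allowed(date(2005, 6, 1)))   # True
```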

The important question is whether Meta really intends to implement effective age-based access controls. The answer seems to be largely no. If age limits must be enforced, platforms should adopt stronger verification mechanisms instead of relying solely on self-declared dates of birth. 

For instance, Lokesh suggested that in India, Facebook and Instagram could use facial recognition and Aadhaar authentication to verify users’ age, creating a form of two-factor age verification. However, this could slow down or reduce the enrolment of new users, which may explain why platforms rely on simpler self-declaration methods. Lokesh said there is a need to find an effective and enforceable solution to this problem. He is scheduled to meet with the compliance officers of social media firms soon.
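If such a scheme were built, its core logic might look something like the sketch below. This is a hypothetical outline, not a description of any real UIDAI or Meta interface: both helper functions are placeholders, and the five-year tolerance between the two age signals is an arbitrary illustrative choice.

```python
from datetime import date

MIN_AGE = 16  # the threshold proposed for Karnataka

def estimate_age_from_selfie(selfie: bytes) -> int:
    """Stand-in for a face-based age-estimation model."""
    raise NotImplementedError

def dob_from_aadhaar(aadhaar_number: str, otp: str) -> date:
    """Stand-in for consent-based Aadhaar verification returning a
    date of birth; UIDAI's real interfaces are not modelled here."""
    raise NotImplementedError

def two_factor_age_check(selfie: bytes, aadhaar_number: str, otp: str) -> bool:
    estimated = estimate_age_from_selfie(selfie)
    documented = (date.today() - dob_from_aadhaar(aadhaar_number, otp)).days // 365
    # Both factors must clear the threshold, and they must roughly
    # agree, which is what would catch a child borrowing a parent's
    # Aadhaar credentials.
    return (documented >= MIN_AGE
            and estimated >= MIN_AGE
            and abs(estimated - documented) <= 5)
```

The cross-check between the two factors is the part self-declaration lacks entirely, and it is also the part that adds friction at sign-up, which may explain platforms' reluctance.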

The real solution, therefore, lies in forcing platforms to implement robust age verification and safer design standards. Studies suggest that Meta and other social media companies could address the issue through a combination of measures: stricter age verification, mandatory parental authorisation for minors, age-appropriate content filtering, fewer addictive design features such as infinite scrolling and excessive notifications, more educational content, and safer platform architecture for young users. No single measure may be sufficient, but a regulatory framework that compels platforms to adopt these safeguards could make social media significantly safer for children without pushing them into unregulated corners of the internet.