Dr Madeline G Reinecke says any policy focused on protecting children must broaden its scope beyond traditional social media platforms, while Alexandra Cocksworth says real connections are crucial. Plus a letter from Ali Oliver

The government’s consultation on whether to ban social media for under-16s responds to widespread concern about digital harms (UK ministers launch consultation on whether to ban social media for under-16s, 19 January). We in the Neuroscience, Ethics and Society (Neurosec) team at the University of Oxford contend that such investigations should also extend to whether young people should have access to generative AI. In the case of social media, ministers and commentators have focused on features such as addictive feeds and age limits; our research team’s work with young people shows that we must also reckon with these considerations – among many others – in an era of AI-driven technologies.

Concerns about mental health, social comparison and addictive design certainly apply to young people’s experiences online, but the digital world of 2026 includes far more than Instagram and TikTok. AI-based chatbots are increasingly present in young people’s lives across a host of domains, from education to companionship. And adolescence is a formative stage for developing social understanding and one’s sense of identity. This raises urgent questions: at what age should young people have access to AIs simulating friendship or intimacy? What safeguards are needed to protect young minds from manipulation and dependency grounded in artificial “connection”?
