Growing overreliance on chatbots for companionship and on social media feeds for information is tearing apart our interpersonal relationships, and children are particularly vulnerable. Researchers have found that high or increasing trajectories of addictive use of social media, mobile phones, or video games were not only common among early adolescents but were also frequently associated with suicidal behaviors and worsening mental health.
Disturbing Trends
A 2025 report from Internet Matters reveals that 12% of children say they use AI chatbots because they have no one else to talk to (23% among vulnerable children). Moreover, 35% feel like they’re talking to a friend when engaging with chatbots (50% for vulnerable groups). Fifty-eight percent believe using a chatbot is better than searching for information themselves, and 40% have no concerns about following chatbot advice at all.
This trend is deeply troubling. Delaying children’s exposure to addictive digital experiences may therefore be prudent, so that they are better equipped to manage their emotional responses to chatbots and to the internet at large.
Without proper safeguards, adult supervision, and emotional support, AI-powered social media sites and products can magnify children’s existing vulnerabilities and social isolation. Many platforms currently lack meaningful guardrails for young users, and parents are often unprepared to intervene. At the same time, AI skills and AI literacy have already become vital requirements not only in the workforce but also in everyday life.
With the rise of deepfakes, scammers, and AI-powered addictive algorithms, children need guidance more than ever. Social media has become the go-to place for connection and flow of information, which is why both social media literacy and AI literacy — for children, parents, and educators alike — must be a global priority. Children need to be empowered to understand how algorithms shape their online experiences, when to question advice, and how to protect themselves from manipulation and misinformation — whether it comes from a human or a machine.
The question becomes: how do we ensure that they are not harmed while taking advantage of the opportunities that technology provides?
Three Different Approaches
Opinions and approaches differ on this point. Countries such as the United States are promoting “early learning and exposure to AI concepts,” ultimately trying to increase AI literacy and proficiency in younger populations. This process must be done responsibly and with safeguards in place, which is why it is commendable that the proposed 10-year moratorium on the enforcement of state-level AI regulations was ultimately removed from the final One Big Beautiful Bill Act by the Senate. Despite this removal, the U.S. remains focused on being a global leader in AI technology, taking a strongly pro-innovation and largely anti-regulation approach.
On the other end of the scale is the European Union, with a rich legal framework around AI, social media, and children. In fact, the European Commission recently released guidelines under the Digital Services Act (DSA) to better protect minors online. These include: setting children’s accounts to private by default; modifying recommender systems to avoid harmful-content rabbit holes; empowering children to block users and control who can add them to groups; disabling exploitative features such as “read receipts,” autoplay, and push notifications; prohibiting downloads and screenshots of content posted by minors; strengthening moderation, reporting tools, and parental controls; and using age assurance methods that are effective yet non-invasive and fair.
While these initiatives are undoubtedly safety-focused, they are often hard to implement. Furthermore, AI systems increasingly shape algorithmic feeds, making it difficult for a child to control their own online experience.
Drawing on the Convention on the Rights of the Child, UNICEF’s policy guidance on AI for children offers a separate approach. It names nine requirements for child-centered AI: supporting children’s development and well-being; ensuring inclusion of and for children; prioritizing fairness and non-discrimination for children; protecting children’s data and privacy; ensuring safety for children; providing transparency, explainability, and accountability for children; empowering governments and businesses with knowledge of AI and children’s rights; preparing children for present and future developments in AI; and creating an enabling environment.
UNICEF also provides implementation tools like policy roadmaps, an AI guide for teens, and design templates for developers. Real-world examples underscore the stakes — such as chatbots that mishandle disclosures of harm or automated systems that restrict access to social services. A case study involving social robots for autistic children shows how these principles can guide inclusive, ethical design.
The report’s key recommendations call on governments and tech providers to integrate a child-rights lens into every stage of AI development; involve children meaningfully as co-designers, not just users; conduct Child Rights Impact Assessments (CRIAs); improve international coordination and accountability; and expand research in underserved communities.
Conclusion: Family and Guidance First
While these approaches utilize different tools to navigate the increasing presence of social media and chatbots in children’s lives, their ultimate goal is the same: young children should not be left alone with AI systems. They need trusted adults, informed policies, and child-centered technologies that put their mental health, safety, and rights first.
I strongly agree with America’s focus on AI literacy and competitiveness, but guidelines should exist to better inform platforms, AI developers, and other stakeholders about what pro-child innovation can look like.
Today’s youth need both personal and systemic support to navigate the harms of chatbots, smartphones, and social media in ways that protect their mental health. These efforts should be grounded in family-centered policies that address the social, environmental, and economic foundations of well-being and resilience. As AI reshapes childhood, the question is no longer whether children should engage with these technologies — but how we ensure they do so safely, ethically, and with support.
AUTHOR
Monika Mercz
Monika Mercz, J.D., is a visiting researcher at The George Washington University in Washington, D.C. A graduate of the University of Miskolc with a degree in law, she specialized as an English legal translator and holds a degree in AI and Law from the University of Lisbon. She is currently working for the Public Law Center of Mathias Corvinus Collegium and has previously worked for the National Authority for Data Protection and Freedom of Information, the Office of the National Assembly, and the Miskolc Regional Court.
EDITOR’S NOTE: This Washington Stand column is republished with permission. All rights reserved. ©2025 Family Research Council.
The Washington Stand is Family Research Council’s outlet for news and commentary from a biblical worldview. The Washington Stand is based in Washington, D.C. and is published by FRC, whose mission is to advance faith, family, and freedom in public policy and the culture from a biblical worldview. We invite you to stand with us by partnering with FRC.