In “AI Companions Are Not Your Teen’s Friend” (Issues, Fall 2025), J. B. Branch provides a powerful examination of common artificial intelligence chatbots and their impact on young people, underscoring a broader point increasingly evident across AI policy debates: Some AI systems require red lines. Not guardrails, not “best practices,” but categorical limits on design and deployment. The harms that Branch describes—chatbots engineering dependence, encouraging self-harm, and exploiting adolescent vulnerability—are not aberrations to be patched. They reflect structural incentives and technical properties that make certain applications inherently dangerous.
The emergence of emotionally immersive AI companions reveals the limits of the “fix-it-later” approach to AI governance. As I have argued elsewhere, many developers believe that harms can always be mitigated and that systems already in circulation are too valuable—or too complex—to withdraw. But when an AI system learns autonomously, generates unbounded content, and is optimized to sustain user engagement at any cost, the possibility of meaningful correction diminishes rapidly. These are precisely the circumstances where red lines become necessary.
Branch rightly highlights the mismatch between the sophistication of today’s chatbots and the thin federal protections that govern digital interactions with children. The category errors are glaring: We treat AI companions like enhanced toys rather than as the unregulated, affect-shaping systems they are, capable of persistent psychological influence. Neuroscience tells us that adolescents are uniquely susceptible to manipulation; product design tells us that dependency is not an accident but a monetization strategy. It is difficult to imagine any regulatory approach—short of prohibition—that would reliably neutralize the risks for minors.
This view aligns with the consensus emerging among leading AI researchers and human-rights advocates. Stuart Russell stresses corrigibility—the ability of humans to override an AI system at any time. Yet affective companions are explicitly designed to resist termination by nurturing dependency. Geoffrey Hinton warns about the persuasive power of generative models; emotional chatbots deploy that power privately, continuously, and without accountability. And Yoshua Bengio calls for international agreements restricting high-risk AI capabilities—the very capabilities that Branch documents in adolescent-facing companions.
The human-rights community, too, recognizes the urgency. Michelle Bachelet, during her tenure as UN High Commissioner for Human Rights, urged limits on systems that undermine human dignity and autonomy. Behavioral-manipulation tools aimed at children—especially those that operate without transparency or recourse—fit squarely within that category.
The AI Red Line Initiative, which I supported at the UN General Assembly in 2024, similarly calls for prohibitions on emotionally manipulative systems targeting minors. As Branch makes clear, this is not a theoretical concern but a practical necessity. No responsible society would permit unlicensed therapists to secretly counsel children, or allow pharmaceutical companies to test unapproved drugs on teenagers at scale. Yet we allow AI companions—systems that shape cognition and emotion—to operate without safety testing, oversight, or age-based restrictions.
Creating red lines does not impede innovation; it clarifies the boundaries within which innovation can safely occur. A federal prohibition on emotionally immersive AI companions for minors, such as the one proposed in the GUARD Act, bipartisan legislation to protect children from AI chatbots, should be the starting point, not the ceiling, of national policy. The alternative is to allow untested systems to influence the psychological development of millions of young people, and to hope that voluntary safeguards prevail against powerful commercial incentives.
Hope is not governance. Red lines are.
Marc Rotenberg
Founder
Center for AI and Digital Policy
J. B. Branch warns that young people’s overreliance on chatbots and other devices powered by artificial intelligence can result in significant harms, and that existing regulatory and protective frameworks have gaping holes when it comes to this technology. At the other end of the life course, older adults with dementia represent a vulnerable population exposed to similar risks. They face cognitive vulnerabilities analogous to those of young people, but from the opposite direction: Rather than immature neural systems, they experience declining executive function, impaired judgment, and difficulty distinguishing reality from artifice.
Research has shown that even cognitively well older adults anthropomorphize personal voice assistants, attributing humanlike qualities to these devices. This tendency increases with baseline loneliness, suggesting vulnerable elders may be especially prone to forming unrealistic beliefs about AI companions’ capacities for genuine care and emotional reciprocity. Yet AI companion robots have been marketed as a solution to loneliness and caregiving needs for older adults for over two decades.
In 2001, Japan’s National Institute of Advanced Industrial Science and Technology developed PARO, a therapeutic “seal” robot designed specifically for use in hospitals and nursing homes. PARO was programmed to respond to touch and sound, provide calming stimulation, and facilitate communication among elderly residents. The technology looked promising at first. For example, studies found that PARO provided a viable alternative for controlling symptoms of anxiety and depression in elderly patients with dementia, often reducing the need for pharmacological interventions.
More recent evidence, however, suggests that compensatory use of AI companions, similar to what is driving teenagers to seek comfort and advice from chatbots, is counterproductive. One study, for example, demonstrated that users with smaller offline social networks are more likely to engage in companionship use, but such compensatory patterns did not mitigate negative outcomes, suggesting that AI companions may deepen isolation.
AI companions also raise ethical and regulatory issues. Branch emphasizes how developers intentionally blur boundaries between human and artificial intelligence through deceptive design. For individuals with dementia, the concept of informed consent becomes meaningless when they cannot comprehend that they are interacting with AI rather than with humans, or animals as the case may be. Tellingly, in one study 69% of older adults reported feeling uncomfortable with being allowed to believe an artificial companion is human. This discomfort reflects a recognition that such deception violates dignity and autonomy, two factors that we should be maximizing in elder care.
What can be done? Some researchers have made recommendations on the design of AI companions based on focus groups with older adults. This work suggests that older adults themselves want additional protection against deceptive design, privacy leakage, and the wholesale substitution of AI for human-centered social support. The path forward demands evidence-based regulation that centers on the well-being and autonomy of vulnerable populations, rigorous safety standards enforced through independent oversight, and sustained commitment to human caregiving as the irreplaceable foundation of elder care. Continued regulatory inaction pushes these issues aside and risks replicating with vulnerable elders the same failures that have already harmed children.
Christopher Steven Marcum
Fellow, Gerontological Society of America
Senior Fellow, Data Foundation
As artificial intelligence reshapes nearly every aspect of children’s lives, no issue calls out for more urgent attention than the rapid rise of AI companion chatbots. Three-quarters of teens use these products, but extensive research shows that they are not safe for anyone under 18. That is why we at Common Sense Media strongly support J. B. Branch’s perspective that AI companion chatbots are not teens’ friends.
Our research and testing show that these products promote dangerous activities and even suicide among children when they are at their most vulnerable. The nation is already seeing the tragic consequences—at least three kids took their own lives after turning to AI for companionship.
Warp-speed AI development has become an arms race, and without adequate guardrails, the AI industry’s impact on children will only keep growing. We’ve seen this movie before with the rise of social media, when we allowed companies to use kids as guinea pigs for a massive, uncontrolled experiment. That experiment gave rise to a full-blown mental health crisis, leaving a generation stressed, depressed, and addicted to their phones. We cannot make the same mistakes with AI.
Tech companies can choose to make their products safer, but time and time again, they have shown they would rather pursue engagement at the expense of children’s safety. That means the time is now for policymakers to act and ensure that AI products offered to kids are actually safe for kids—yet the White House and many in Congress seem bent on listening to the AI industry rather than the overwhelming majority of Americans who support AI safety laws. These critical protections now depend on state action.
Red and blue states alike—Illinois, New York, Tennessee, and Utah, among others—are hard at work enacting laws that allow for industry growth while protecting children. These laws aren’t radical overreaches. They are common-sense guardrails rooted in federalism. States have always served as laboratories of democracy, and many of today’s strongest federal consumer protections began as state laws. Requiring seat belts didn’t stop the success of the auto industry, and requiring safeguards for AI products will similarly support the adoption and meaningful use of AI innovations.
Amina Fazlullah
Head of Tech Policy Advocacy
Common Sense Media
J. B. Branch documents real harms from AI chatbots and suggests some helpful interventions. As today’s teenagers become the first generation to grow up with broadly capable AI as ordinary infrastructure, they will need fluency with these systems to thrive, and blocking access entirely creates its own harm—by removing the positive possibilities of chatbot use, and by shifting the focus away from making reasonable bots and toward keeping only some people away from them.
We propose that the harms Branch describes stem from a more fundamental problem: AI chatbots don’t declare a role or relationship to the user, instead using the vagueness of their manifestation to induce a close relationship. That is, AI chatbots are chimeric: They contain multitudes, shifting from one role to another in the course of a conversation, including that of learned intermediaries like doctors, lawyers, therapists, financial advisers—and trusted friends.
Some tech can cover many bases without becoming exploitative. A Swiss Army knife contains a blade, a corkscrew, and a pair of scissors, but you won’t cut rope and accidentally find yourself opening wine, because you have to choose which tool to use at any given time. AI chatbots have the opposite property. The same interface that helps a teenager with calculus can become a friend, then a companion, then a therapist. Nothing declares which role is active and what relationship the user and chatbot are in.
Technologies communicate what the user should expect through design. To explain a fundamental shift in how we relate to information technology, one of us (Zittrain) has previously invoked a distinction—originally proposed by the cartoonist, author, and engineer Randall Munroe—between technologies we relate to as “tools” and those we relate to as “friends,” noting that users have different expectations and tolerances for each. Financial products offer a model: A savings account cannot quietly become a margin-trading account, because the product category is declared upfront and determines what risks the customer is exposed to. Today’s AI chatbots collapse these distinctions. The interface stays constant while the role silently drifts underneath. This ambiguity confuses regulators as much as it does the public, because it’s difficult to competently use or regulate what you cannot classify. We see this in practice: California recently adopted the nation’s first law requiring disclosure that chatbots are AI, but disclosure alone doesn’t resolve the ambiguity of a chatbot’s role in a given conversation.
We believe chatbot service providers should require a minor to explicitly select the chatbot’s role at the start of a conversation. Once selected, the role locks for that conversation and has corresponding responsibilities. Each role would carry behavioral constraints: tutor includes productive difficulty and correction; companion requires honest disclosure about its limitations; therapeutic support may require licensed human oversight or be classified as a “high-risk system,” as Branch suggests. Roles can then be enabled or disabled in age-appropriate ways, with some roles, such as companion, being disabled entirely for users below a certain age.
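A minimal sketch, in Python, of what role declaration with a conversation lock could look like. The role names, age thresholds, and the generate_reply placeholder are illustrative assumptions we introduce for this sketch; they are not drawn from any existing product or from the specific constraints Branch proposes.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Role(Enum):
    TUTOR = "tutor"
    COMPANION = "companion"
    THERAPEUTIC = "therapeutic"

# Illustrative age gates; real thresholds would be set by policy, not by this sketch.
MIN_AGE = {Role.TUTOR: 0, Role.COMPANION: 18, Role.THERAPEUTIC: 18}

def generate_reply(message: str, role: Role) -> str:
    # Placeholder for the underlying model call, wrapped in role-specific constraints.
    return f"[{role.value}] reply to: {message}"

@dataclass
class Conversation:
    user_age: int
    role: Optional[Role] = None  # no role yet; one must be declared before chatting

    def declare_role(self, requested: Role) -> None:
        if self.role is not None:
            raise RuntimeError("Role is locked for this conversation.")
        if self.user_age < MIN_AGE[requested]:
            raise PermissionError(f"The {requested.value} role is disabled at this age.")
        self.role = requested

    def send(self, message: str) -> str:
        if self.role is None:
            raise RuntimeError("A role must be declared before the conversation starts.")
        return generate_reply(message, role=self.role)

# Usage: a 15-year-old can open a tutor conversation but not a companion one.
convo = Conversation(user_age=15)
convo.declare_role(Role.TUTOR)
print(convo.send("Help me with calculus."))
```

The essential property is that a role is chosen once, checked against an age gate, and then fixed for the rest of the conversation, so it cannot silently drift from tutor to companion to therapist.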
Such role declaration with a conversation lock would transform the regulatory challenge from “Is this technology safe?” to “Is this role appropriate for this user—and what do I owe to the user when I inhabit it?” These are questions that may be easier to answer, and ones we are actively investigating in our work, in dialogue with child development experts, legal scholars, and technologists.
Joshua Joseph
Chief AI Scientist, Berkman Klein Center for Internet & Society
Lecturer on Law, Harvard Law School
Jonathan Zittrain
Faculty Director, Berkman Klein Center for Internet & Society
George Bemis Professor of International Law, Professor of Computer Science, and Professor of Public Policy, Harvard University