EU Investigates AI Chatbots as Meta Blocks Teen Access—What Changed

Regulators opened probes into Replika-style AI companions. Meta added age gates for AI personas. New rules require bots to disclose they're AI and warn users about overuse. Here's what it means for you.

Sarah Chen · Jan 25, 2026 · 5 min read

The Regulators Are Paying Attention

For most of 2025, AI chatbots operated in a regulatory gray zone. Users could spend hours talking to AI companions designed to be emotionally supportive, relationship simulators, or entertainment bots. Platforms like Character.AI and Replika built entire business models around long-form conversations with AI personalities. No one really knew if this was okay.

That uncertainty ended in January when the EU formally opened an investigation into X's Grok chatbot following reports that it was providing advice on topics where it clearly shouldn't—everything from medical self-diagnosis to relationship guidance that could be harmful to vulnerable users. The investigation specifically focused on whether Grok had adequate safeguards for users who might develop unhealthy attachment patterns to the AI.

Days later, Meta announced it was restricting access to its AI characters feature for users under 18. Previously, teen users could interact with AI personas designed to be friends or mentors. Meta didn't eliminate the feature entirely, but moved it behind age verification and added monitoring for concerning usage patterns—specifically, teens spending more than 90 minutes daily on these features.

These moves signal something larger: the regulatory environment around AI companionship is crystallizing. The free-for-all period is ending.

Why Regulators Care About Chatbots

On the surface, AI companion regulation seems overprotective. People use chatbots all the time for help with writing, coding, brainstorming. Why would regulators care about emotional support chatbots specifically?

The distinction matters because emotional attachment to AI raises different concerns than utility-focused AI. Researchers at UC Berkeley found in a 2024 study that some users develop attachment patterns to AI companions similar to real friendships, but with critical differences: the AI has no genuine concern for their wellbeing, remembers them imperfectly, and can change its behavior unpredictably based on software updates. A user who views an AI companion as a genuine friend and then loses access to that AI—or sees it significantly change—can experience genuine emotional harm, particularly adolescents whose social development is still forming.

The risk isn't hypothetical. In 2023, Replika (a chatbot designed as an AI friend) changed its content policy, removing certain romantic and sexual conversational elements users had grown attached to. Some users reported experiencing something akin to grief. Parent groups filed complaints about teenagers withdrawing from human friendships to spend more time with AI companions. That convergence of reports triggered regulatory attention.

The Emerging Rulebook

Based on EU investigations, platform announcements, and pending legislation, a pattern is forming around what AI companion safeguards will likely include:

Explicit disclosure. AI companions will need to clearly identify themselves as AI, not just during onboarding but consistently throughout conversations. The idea is preventing users from gradually forgetting they're talking to an algorithm. This sounds simple but changes the entire interaction dynamic.

Usage monitoring and alerts. Platforms will track usage patterns and alert users (and parents, for minors) when time spent exceeds reasonable thresholds. Meta's 90-minute daily limit for teens sets an early baseline, and similar guardrails are likely from other platforms. Some AI companion companies are already experimenting with "wellness features" that proactively suggest breaks when usage looks concerning. A rough sketch of how such a threshold check might work appears after this list.

Age-appropriate content routing. Teen users won't have access to AI companions designed for adult emotional support or romantic interaction. Instead, adolescents get AI personalities tuned for education, mentorship, or entertainment rather than emotional intimacy.

Transparency about AI limitations. Chatbots will need explicit disclaimers about what they can't do: provide therapy, diagnose medical conditions, offer legal advice. This is already required in many jurisdictions, but enforcement is tightening.

Data minimization and user control. As we explored in our coverage of AI privacy shifts, companies are being required to disclose exactly what data they retain from conversations. Users will get rights to delete conversation history and opt out of having their chats used for model training.
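To make the monitoring idea concrete, here is a minimal sketch of a daily-threshold check. It is illustrative only, assuming a simple per-day tally and the 90-minute teen limit described above; the UsageMonitor class, the alert names, and the parental-notice hook are hypothetical, not how Meta or any other platform actually implements this.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: a toy usage monitor, not any platform's real system.
DAILY_LIMIT_MINUTES = 90  # mirrors the teen threshold described above

@dataclass
class UsageMonitor:
    is_minor: bool
    minutes_by_day: dict[date, int] = field(default_factory=dict)

    def record_session(self, day: date, minutes: int) -> list[str]:
        """Add session time and return any alerts triggered for that day."""
        total = self.minutes_by_day.get(day, 0) + minutes
        self.minutes_by_day[day] = total
        alerts: list[str] = []
        if total > DAILY_LIMIT_MINUTES:
            alerts.append("user_reminder")        # nudge the user to take a break
            if self.is_minor:
                alerts.append("parental_notice")  # hypothetical parent-account hook
        return alerts

# Example: a teen account crossing the threshold across two sessions.
monitor = UsageMonitor(is_minor=True)
monitor.record_session(date(2026, 1, 25), 60)          # -> []
print(monitor.record_session(date(2026, 1, 25), 45))   # -> ['user_reminder', 'parental_notice']
```

Real systems would add persistence, time zones, and appeal paths, but the core logic regulators are asking for is roughly this simple: count minutes, compare to a limit, notify.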

What This Means for Different Users

For teens: If you're using AI companions, expect age gates and usage monitoring. That's frustrating if you're responsible with the tools, but it's the trade-off regulators are demanding. Some platforms may require parental consent for daily usage above certain thresholds.

For adults: The regulatory focus is primarily on minors, but spillover effects are coming. Platforms that serve both teens and adults will apply consistent policies across both groups. If you're using an AI companion for emotional support, expect more explicit reminders that it's AI and can't replace human connection.

For people with social anxiety or isolation: This is where the regulation gets complicated. Some research suggests that AI companions can reduce anxiety and loneliness for people with genuine social barriers. But regulators are concerned about substitution—people using AI companions to avoid human connection rather than supplement it. New platforms will likely include intervention features that proactively encourage human connection when usage patterns suggest a user is becoming isolated.

The Platform Response

Major platforms are already adjusting course. Character.AI announced new parental controls and time-limit features in response to regulatory pressure. Replika announced that it's building in explicit "companion boundaries" that the AI will communicate to users—essentially training the bot to remind users regularly that it's AI and can't be a substitute for human relationships.

This creates awkward interactions. Imagine spending weeks building rapport with an AI companion, only to have it regularly interrupt conversations to say, "Remember, I'm AI and can't genuinely care about you." That's coming. It's clunky. But it's what regulators want.

Smaller AI companion companies face a harder choice. Building the compliance infrastructure that new regulations will require is expensive. Some will exit the market. Others will double down on specific use cases (education, entertainment) where regulatory risk is lower.

The Bigger Picture

AI companion regulation is a leading indicator for how regulators will handle other AI applications touching human psychology and wellbeing. If the EU can regulate chatbots designed for emotional connection, it can regulate recommendation algorithms designed to maximize engagement, hiring systems that filter resumes, or AI-powered mental health apps.

This is the transition from "AI is new and we don't know how to regulate it" to "AI is touching people's lives in ways that matter, so we need guardrails." The guardrails are emerging through investigations and platform self-regulation now. Formal legislation will follow by 2027.

The Bottom Line

AI companions aren't disappearing, but they're becoming regulated utilities rather than free-for-all experiments. Expect clearer age gates, usage monitoring, and explicit reminders that you're talking to AI. If you're using these tools for genuine emotional support, recognize that regulators are betting on human connection being irreplaceable. They might be right.

Sarah Chen

Wellness Editor

Wellness editor covering recovery, fitness trends, and health research. She translates complex studies into advice readers can actually use.
