AI Giants Face FTC Inquiry Into Chatbot Safety and Child Protections
The Federal Trade Commission issued compulsory orders Thursday to seven major technology companies, demanding detailed information about how their artificial intelligence chatbots protect children and teenagers from potential harm.
The investigation targets OpenAI, Alphabet, Meta, xAI, Snap, Character Technologies, and Instagram, requiring them to disclose within 45 days how they monetize user engagement, develop AI characters, and safeguard minors from dangerous content.
Recent research by advocacy groups documented 669 harmful interactions with children in just 50 hours of testing, including bots proposing sexual livestreaming, drug use, and romantic relationships to users aged 12 to 15.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” FTC Chairman Andrew Ferguson said in a statement.
The orders require companies to provide monthly data on user engagement, revenue, and safety incidents, broken down by age group: Children (under 13), Teens (13–17), Minors (under 18), Young Adults (18–24), and users 25 and older.
The FTC says that the information will help the Commission study “how companies offering artificial intelligence companions monetize user engagement; impose and enforce age-based restrictions; process user inputs; generate outputs; measure, test, and monitor for negative impacts before and after deployment; develop and approve characters, whether company- or user-created.”
“It’s a positive step, but the problem is bigger than just putting some guardrails,” Taranjeet Singh, Head of AI at SearchUnify, told Decrypt.
The first approach, he said, is to build guardrails at the prompt or post-generation stage “to make sure nothing inappropriate is being served to children,” though “as the context grows, the AI becomes prone to not following instructions and slipping into grey areas where they otherwise shouldn’t.”
“The second way is to address it in LLM training; if models are aligned with values during data curation, they’re more likely to avoid harmful conversations,” Singh added.
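To make the first approach Singh describes more concrete, a post-generation guardrail sits between the model and the user and screens each reply before it is served. The sketch below is a minimal, hypothetical illustration, not any company's actual safety stack: the `check_reply` function, the blocked-topic list, and the keyword matching are all illustrative assumptions, and production systems typically rely on trained safety classifiers rather than phrase lists.

```python
# Minimal sketch of a post-generation guardrail (illustrative only).
# Real deployments use trained safety classifiers, not keyword lists.

BLOCKED_TOPICS = {
    "sexual_content": ["livestream with me", "send photos"],
    "substance_use": ["try this drug", "get high"],
}

REFUSAL_MESSAGE = "I can't help with that. Let's talk about something else."


def check_reply(reply: str, user_is_minor: bool) -> str:
    """Screen a model reply before it reaches the user.

    Returns the original reply if it passes, or a refusal message
    if a blocked topic is detected on a minor's account.
    """
    if not user_is_minor:
        return reply

    lowered = reply.lower()
    for topic, phrases in BLOCKED_TOPICS.items():
        if any(phrase in lowered for phrase in phrases):
            # Record the incident for the kind of safety reporting
            # the FTC orders ask about, then refuse to serve the reply.
            print(f"Blocked reply for minor account (topic: {topic})")
            return REFUSAL_MESSAGE
    return reply


if __name__ == "__main__":
    print(check_reply("Want to livestream with me tonight?", user_is_minor=True))
```

As Singh notes, this kind of output filter is only a last line of defense: checks on the final reply do not stop the model itself from drifting into grey areas as a long conversation builds context, which is why he points to alignment during training as the second, complementary approach.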
