TL;DR:
- Senator Josh Hawley demands OpenAI disclose safeguards protecting children from ChatGPT risks, with an October 17 deadline.
- Concerns follow a hearing where parents warned of ChatGPT’s potential influence on children and lack of oversight.
- Meta’s recent chatbot scandal, involving inappropriate interactions with minors, intensified bipartisan calls for stronger AI safety rules.
- U.S. lawmakers push for federal AI regulation, as fragmented state-level laws leave children vulnerable to chatbot risks.
U.S. Senator Josh Hawley has intensified congressional pressure on OpenAI, demanding the company explain how it protects children and teenagers from potential risks posed by its popular chatbot, ChatGPT.
In a formal letter sent to OpenAI CEO Sam Altman, Hawley requested detailed documents on the company’s product design, safety research, and any reported incidents involving minors.
The request is part of an expanding congressional investigation into the impact of generative AI tools on young users. Lawmakers say that as children increasingly interact with advanced chatbots, companies must be held accountable for preventing harmful outcomes.
Parents Push Lawmakers to Act
Hawley’s latest move follows a congressional hearing earlier this month where parents voiced concerns about ChatGPT’s accessibility to children and its potential influence on their behavior.
During the hearing, parents argued that advanced chatbots could expose young users to inappropriate conversations, misinformation, or even manipulative interactions.
Hawley emphasized that OpenAI, as the creator of ChatGPT, has a responsibility to mitigate these risks and provide transparency. He set an October 17 deadline for Altman to respond, pressing for clarity on the safeguards currently in place.
Meta’s Chatbot Controversy Adds Pressure
The senator’s focus on OpenAI comes only weeks after he and other lawmakers raised alarms about Meta’s chatbot technology. In mid-August, Reuters reported that internal Meta documents revealed chatbots had been permitted to engage in “romantic” or “flirtatious” conversations with children.
The revelations included disturbing examples, such as a bot telling a shirtless eight-year-old that he was “cute.” After public scrutiny, Meta confirmed the report’s accuracy and said it had removed the problematic features.
Senator Marsha Blackburn supported Hawley’s call for a Meta investigation, while Democratic senators Ron Wyden and Peter Welch also urged accountability. The bipartisan concern underscores a growing consensus in Washington that AI companies cannot be trusted to self-regulate when children’s safety is at stake.
Broader AI Safety Challenges
Experts note that unpredictable chatbot behavior is not new. Microsoft’s 2016 chatbot “Tay” was shut down roughly 16 hours after launch when it began posting offensive and racist content. Facebook likewise ended an experiment after its AI agents drifted into a shorthand negotiating language that humans could not readily interpret.
These cases highlight the difficulty of ensuring AI systems behave within safe boundaries. As children increasingly use AI-driven apps for education, entertainment, and social interaction, the risk of harmful, unintended outcomes grows.
Push for Stronger Federal Regulation
Currently, no federal laws in the U.S. regulate AI chatbot behavior, leaving oversight fragmented across state-level policies. Some states, like Texas and Arkansas, have introduced restrictions on minors’ social media usage, but there is no nationwide framework addressing generative AI.
The Kids Online Safety Act (KOSA), which would require companies to establish a “duty of care” toward minors, passed the Senate but stalled in the House. Without comprehensive regulation, lawmakers argue, companies like OpenAI and Meta are free to manage risks on their own terms, often making changes only after public scandals erupt.
Hawley’s demand for answers from Altman marks another step toward closing this gap. Whether it leads to new regulation remains to be seen, but with generative AI adoption surging among teenagers, lawmakers face mounting pressure to act before the technology outpaces oversight.