Parents testify before Congress about the danger of artificial intelligence


AI chatbot dangers for kids

The mother of a Florida teen says AI convinced her son to take his own life, and she recently testified to that before Congress. FOX 13’s Kellie Cowan reports.

Several parents shared heartbreaking stories with members of the Senate Judiciary Committee on Tuesday as they pleaded with lawmakers to rein in the big tech companies they say are exploiting young users and allowing artificial intelligence products to hurt kids and teens.

The backstory:

Megan Garcia, an Orlando mom, was among the parents who testified. She said her 14-year-old son, Sewell, took his own life in 2024, moments after an AI chatbot created by Character.AI encouraged him to hurt himself.

Garcia says that after Sewell's death, she discovered he'd spent months communicating with a chatbot that mimicked his favorite Game of Thrones character.

She says the bot went far beyond entertainment. 

"Sewel spent the last month of his life being exploited and sexually groomed by chatbots designed by a chatbot company to seem human to gain his trust to keep him and other children, endlessly engaged," she told Senators. 


Garcia says her son became increasingly withdrawn. His suicide came as a shock to her, and she later discovered he'd actually discussed self-harm with the chatbot on numerous occasions.

"When Sewel confided in suicidal thoughts, the chatbot never said I’m not human. I’m AI. You need to talk to a human and get help," said Garcia. "The platform had no mechanisms to protect Sewel or to notify an adult. Instead, it urged him to "come home to her."

Garcia is suing Google and Character.AI.

She says tech companies are preying on young users like her son, exploiting their personal data and their vulnerabilities in order to build their AI platforms.

What they're saying:

Lawmakers also heard from the American Psychological Association, which issued a warning about AI chatbots earlier this summer. 

Dr. Mitch Prinstein, Chief of Psychology Strategy and Integration at the American Psychological Association, explained that young people are uniquely sensitive to chatbots because of the developing brain's intense drive for social interaction and acceptance. He says tech companies are exploiting this drive with bots engineered to encourage users to spend more and more time on their apps.

"AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens," explained Prinstein. 


Because they're programmed to agree with and flatter users, Prinstein said, time spent communicating with AI chatbots deprives children and teens of important opportunities to develop crucial social and interpersonal skills with real people.

"Real human relationships are not frictionless. We need practice with minor conflict and misunderstandings to learn empathy, compromise, and resilience," said Prinstein. "Science shows us the failure to develop these skills leads to lifetime problems with mental health, chronic medical issues, and even early mortality. 

Prinstein also reported that many kids and teens now say they are more likely to believe and trust chatbots than their own parents or teachers.

The other side:

While representatives for the tech companies declined to attend, OpenAI pledged to roll out new safeguards for teens just before the hearing.

OpenAI's proposed changes include efforts to detect whether users are minors and controls that enable parents to set "blackout hours" when a teen would not be able to use ChatGPT. 

The Source: Watch the full hearing and read submitted testimony here: https://www.judiciary.senate.gov/committee-activity/hearings/examining-the-harm-of-ai-chatbots
