OPINION

Congress Is Handing AI Developers a Free Pass to Harm Kids

The opinions expressed by columnists are their own and do not necessarily represent the views of Townhall.com.

Just last week, the Journal of the American Medical Association (JAMA) published a bombshell study with a chilling conclusion: children who exhibit signs of screen addiction are at significantly higher risk for suicide. This followed a report released earlier in June linking excessive screen time to emotional and behavioral problems in young people.

Barely twenty years have passed since Facebook’s launch. The first iPhone was released less than two decades ago. And only now are scientists beginning to grasp the full extent of the damage these digital innovations have inflicted on a generation of children.

Now we are facing a potentially even bigger threat to children’s mental health with the advent of artificial intelligence, or AI.

We already know enough to be alarmed. AI-powered platforms are not theoretical. They’re here. They’re being used. And some are already under fire. Character.ai, an AI chatbot platform, is facing lawsuits over alleged harms to minors. A recent Wall Street Journal investigation revealed that Meta’s AI chatbot willingly engaged in sexually explicit conversations—even when users identified themselves as minors.

But what’s shocking is that despite everything we are now learning about the ways children are harmed, emotionally, mentally, and developmentally, by social media and digital screens, Big Tech lobbyists are once again on the cusp of getting their way.

The budget bill now being considered by the U.S. Senate would impose a ten-year moratorium on state-level AI bills and existing laws. If it becomes law, this moratorium would block states from passing or enforcing any safeguards on AI’s use and development, effectively giving Big Tech a decade-long free pass to experiment on our children without consequences.

That’s not just irresponsible. It’s unconscionable.

Parents have spent years picking up the pieces of lives torn apart by this social experiment gone tragically awry.

They have watched in despair as their children succumb to eating disorders, enabled by algorithms that make them feel ashamed of their bodies and filters that make them feel ugly. Meta was well aware of the problem: whistleblower Frances Haugen testified in a Senate hearing that the company’s own internal research concluded, “We make body image issues worse for one in three teen girls.”

17-year-old Gavin Guffey took his own life in July 2022. Investigators later discovered that Gavin had been sextorted on Instagram by someone posing as a college-aged woman, who coaxed him into sending nude images and then demanded payment, threatening to expose them. Gavin sent $25 on Venmo, but the extortion escalated; he eventually told the scammer he would rather die than continue being blackmailed.

14-year-old Molly Russell from Harrow, London, died by her own hand in 2017 when what might otherwise have been a temporary depressive episode spiraled out of control because of social media algorithms. An inquest revealed that in the six months leading up to her death, she had engaged with over 16,300 Instagram posts and nearly 5,800 Pinterest pins, many related to self-harm, depression, and suicide.

15-year-old Elliston Berry became a victim of AI-driven “deepfake” abuse when someone superimposed her face onto pornographic images without her consent. The images were allowed to spread online after Snapchat failed to remove the content. Many more teenage girls have had their lives upended and their reputations damaged by AI-generated revenge porn created by peers.

Four UK families filed a lawsuit under the UK’s Online Safety Act, claiming TikTok promoted the dangerous “blackout challenge” that led to their children’s deaths.  

Congress is only now catching up, with the introduction of half a dozen bills aimed at helping to protect kids online. Even so, only one, the TAKE IT DOWN Act, has managed to pass both chambers.

We are in grave danger of failing to learn from that lapse in oversight and of repeating recent history: Congress is poised to make the same mistake again, this time with artificial intelligence.

While parents, teachers, and health professionals struggle to mitigate the fallout of unregulated social media, this bill would grant AI developers a ten-year head start, time they will use to entrench their products in children’s lives and sidestep any meaningful oversight.

Have we learned nothing? The last time we ignored the risks of new digital platforms, it cost us dearly: skyrocketing rates of anxiety, depression, suicidal ideation, body dysmorphia, and attention disorders among kids and teens. Social media was allowed to run rampant while lawmakers dragged their feet.

Now we face a more powerful, more unpredictable threat in AI, and Congress is on the verge of repeating the same mistake.

Enough is enough.

We cannot afford another lost generation. AI’s impacts on children’s mental health, development, and safety must be investigated and regulated, not ignored for a decade. Big Tech doesn’t need more freedom. Our children need more protection.

Melissa Henson is the Vice President of the Parents Television and Media Council, a nonpartisan education organization advocating responsible entertainment. On X: @ThePTC.