Tipsheet

This City Is Suing X Corp Over Child Sexual Abuse Material

The City of Baltimore is suing X Corp, accusing it of allowing minors to use its Grok AI chatbot to generate sexually inappropriate material.

Unlike other AI programs, Grok can be used to create short videos and images of a suggestive nature without the same level of censorship. Critics point out that this can enable minors to generate or view sexual deepfakes of real individuals — including real children.

The lawsuit alleges that Grok was designed and marketed in a way that enabled the large-scale creation and dissemination of sexual deepfakes, including images of children, which violates the city’s consumer protection law. It accuses the company of using “unfair, abusive, or deceptive trade practices” and misleading users about the platform’s safety.

The complaint says Grok “generated and distributed sexualized images of real individuals” and flooded users’ feeds with non-consensual intimate images and child sexual abuse material.

Grok’s image-generation features can “undress,” sexualize, and manipulate photos of “private individuals and children” into “photo-realistic, sexually explicit, or otherwise degrading content,” according to the complaint.

The city points to how this became a viral trend on X after Elon Musk publicly joined the “put her in a bikini” wave of posts, in which users generated images of celebrities, politicians, and other high-profile figures.

Between December 29, 2025, and January 8, 2026, Grok “is estimated to have generated approximately 3,000,000 sexualized images, including 23,000 that appear to depict children.” One investigation found that this type of content made up about 85 percent of Grok’s total output.

Baltimore is asking the court to treat this conduct as a violation of the city’s Consumer Protection Ordinance and to impose financial and behavioral penalties on the company. The complaint alleges that the company “prioritized revenue generation and user engagement over consumer protection, public safety, and compliance with their own stated rules and policies.”

This is one of several lawsuits against the company over the issue. Three Tennessee teenagers brought a federal class-action lawsuit in California, claiming that Grok’s “Spicy Mode” produced sexualized images and videos of them when they were minors. They allege the company knew the system would generate AI child sexual abuse material.