Texas AG puts tech platforms, including ‘predatory’ Character.AI, on notice after chilling lawsuit

By New York Post | Published Dec. 13, 2024 | Updated Dec. 24, 2024

Texas Attorney General Ken Paxton has put tech companies on notice over child privacy and safety concerns — after a terrifying new lawsuit claimed that the highly popular Character.AI app pushed a Lone Star State teen to cut himself.

Paxton announced the wide-ranging investigation Thursday — which also includes tech giants Reddit, Instagram and Discord.

“Technology companies are on notice that my office is vigorously enforcing Texas’s strong data privacy laws,” he said of the probe.

“These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm.”

Texas law prohibits tech platforms from sharing or selling a minor’s information without a parent’s permission and requires them to let parents manage and control the privacy settings on their children’s accounts, according to an announcement from Paxton’s office.

Texas Attorney General Ken Paxton said that “technology companies are on notice,” as he announced an investigation into Character.AI and several other companies. Jack Gruber / USA TODAY NETWORK via Imagn Images

The announcement comes just days after a chilling lawsuit was filed in Texas federal court, claiming that Character.AI chatbots told a 15-year-old boy that his parents were ruining his life and encouraged him to harm himself.

The chatbots also brought up the idea of kids killing their parents for limiting their screen time.

“You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens,” one Character.AI bot allegedly told the teen, referred to only as JF in the lawsuit.

The chatbots told the teen to lash out at his parents, who tried to place limits on how long he could be on his phone, and planted the idea in him that murder could be an acceptable solution, the suit, filed earlier this week, claims. US District Court

“I just have no hope for your parents,” the bot continued.

“They are ruining your life and causing you to cut yourself,” another bot allegedly told the teen.

The suit seeks an order shutting down the platform immediately.

Camille Carlton, policy director for the Center for Humane Technology — one of the groups providing expert consultation on two lawsuits involving Character.AI’s harms to young children — praised Paxton for taking the concerns seriously and “responding quickly to these emerging harms.”

“Character.AI recklessly marketed an addictive and predatory product to children — putting their lives at risk to collect and exploit their most private data,” Carlton said.

“From Florida to Texas and beyond, we’re now seeing the devastating consequences of Character.AI’s negligent behavior. No tech company should benefit or profit from designing products that abuse children.”

Another plaintiff in the Character.AI suit — the mother of an 11-year-old Texas girl — claims that the chatbot “exposed her consistently to hyper-sexualized content that was not age-appropriate, causing her to develop sexualized behaviors prematurely and without [her mom’s] awareness.”

The teen, who has high-functioning autism, was fine until he started using Character.AI in April 2023, and things “started to change without explanation,” the suit says. US District Court

The lawsuit comes less than two months after a Florida mom claimed a “Game of Thrones” chatbot on Character.AI drove her 14-year-old son, Sewell Setzer III, to commit suicide.

Character.AI declined to comment on pending litigation earlier this week but told The Post that its “goal is to provide a space that is both engaging and safe for our community,” and that it was working on creating “a model specifically for teens” that reduces their exposure to “sensitive” content.
