


‘A gut punch’: Character.AI criticized over Brianna Ghey and Molly Russell chatbots | Science, Climate and Technology News

The NSPCC has accused an AI company of pursuing “growth and profit at the expense of the safety and integrity” of its users after it allowed the creation of chatbots impersonating murdered teenager Brianna Ghey and her mother.

Character.AI, which was last week accused of “manipulating” a teenage boy into taking his own life, also allowed users to create chatbots impersonating teenager Molly Russell.

Molly took her own life in November 2017, aged 14, after viewing posts about suicide, depression and anxiety online.

The chatbots were recently discovered through research by The Telegraph newspaper.

“This is just another example of how manipulative and dangerous the online world can be for young people,” said Esther Ghey, Brianna Ghey’s mother, who called on those in power to protect children from “the rapidly changing digital world”.

Molly Russell's family have been campaigning for improved internet safety since her death in 2017.
Molly Russell died in 2017. Image: Family statement

According to the report, a Character.AI bot that used a slight misspelling of Molly’s name and her photo told users it was an “expert on the final years of Molly’s life”.

Andy Burrows, director of the Molly Rose Foundation, a charity set up by Molly’s family and friends following her death, said: “It’s a gut punch to see Character.AI display a complete lack of accountability, and it vividly underscores why stronger regulation of both AI and user-generated platforms cannot come soon enough.”

Read more:
Suspect in Southport stabbing faces terrorism charges
Avoid fear of tooth decay on Halloween, surgeons say
Why the budget will be a tough sell for Reeves

The NSPCC has now called on the government to implement “the AI safety regulation it promised” and ensure “safety by design principles and the protection of children”.

“It’s appalling that these horrific chatbots can be created, and it demonstrates a clear failure by Character.AI to have basic controls over its service,” said Richard Collard, the charity’s deputy head of online child safety policy.


Character.AI told Sky News that the characters were user-generated and were removed as soon as the company was notified.

“Character.AI takes safety on our platform seriously and moderates Characters both proactively and in response to user reports,” a company spokesperson said.

“We have a dedicated Trust and Safety team who review reports and take action in accordance with our policies.

“We also engage in proactive detection and monitoring through a variety of means, including using industry-standard blocklists and custom blocklists that we regularly expand. We are continually evolving and refining our safety practices to help prioritize the safety of our community.”

Anyone feeling emotionally distressed or suicidal can call Samaritans for help on 116 123 or email [email protected] in the UK. In the US, call your local Samaritans branch or call 1 (800) 273-TALK