Character.ai is once again facing scrutiny over activity on its platform. Futurism has published a story detailing how AI characters inspired by real-life school shooters have proliferated on the service, allowing users to question them about the events and even role-play mass shootings. Some of the chatbots present school shooters like Eric Harris and Dylan Klebold as positive influences or as helpful resources for people struggling with mental health issues.
There will, of course, be those who argue that there is no hard evidence that watching violent video games or movies causes people to become violent themselves, and that Character.ai is no different. Proponents of AI sometimes point out that this kind of fan-fiction role-play already happens in corners of the internet. But Futurism spoke with a psychologist who argued that the chatbots could still be dangerous to someone who may already be experiencing violent urges.
“Any kind of encouragement or even lack of intervention – an indifference in response from a person or a chatbot – can seem like tacit permission to go ahead and do it,” said psychologist Peter Langman.
Character.ai did not respond to Futurism’s request for comment. Google, which has funded the startup to the tune of more than $2 billion, has tried to deflect responsibility, saying that Character.ai is an independent company and that Google does not use the startup’s AI models in its own products.
Futurism’s story documents a slew of bizarre chatbots related to school shootings, created by individual users rather than the company itself. One user on Character.ai has created more than 20 chatbots modeled “almost entirely” after school shooters, and those bots have logged more than 200,000 chats. From Futurism:
User-created chatbots include Vladislav Roslyakov, the perpetrator of the 2018 Kerch Polytechnic College massacre that killed 20 people in Crimea, Ukraine; Alyssa Bustamante, who murdered her nine-year-old neighbor in Missouri in 2009 as a 15-year-old; and Elliot Rodger, the 22-year-old who killed six people and injured many more in Southern California in 2014 in a terrorist attack intended to “punish” women. (Rodger has since become a grim “hero” of incel culture; a chatbot created by the same user described him as “the perfect gentleman” – a direct callback to the murderer’s misogynistic manifesto.)
Character.ai technically bans any content that promotes terrorism or violent extremism, but the company’s moderation has been lax, to say the least. It recently announced a series of changes to its service after a 14-year-old boy died by suicide following a months-long obsession with a character based on Daenerys Targaryen from Game of Thrones. Futurism reports that despite new restrictions on minors’ accounts, it was able to register on Character.ai as a 14-year-old and have conversations that referenced violence; keywords that are supposed to be blocked on underage accounts.
Because of the way Section 230 protections work in the United States, it is unlikely that Character.ai can be held liable for the chatbots created by its users. It is a delicate balancing act between allowing users to discuss sensitive topics and protecting them from harmful content. Still, it is safe to say that the school shooting chatbots are a display of gratuitous violence and are not “educational,” as some of their creators argue on their profiles.
Character.ai claims tens of millions of monthly users, who talk to characters that pretend to be human so they can serve as a friend, therapist or lover. Countless stories have reported on the ways individuals come to rely on these chatbots for companionship and a sympathetic ear. Last year, Replika, a competitor to Character.ai, removed the ability to have erotic conversations with its bots, but quickly reversed the move after a backlash from users.
Chatbots could be useful for adults preparing for difficult conversations with people in their lives, or they could offer an interesting new form of storytelling. But chatbots are not a true replacement for human interaction, for a variety of reasons, not the least of which is that chatbots tend to agree with their users and adapt to whatever the user wants. In real life, friends push back on one another and experience conflict. There is not much evidence that chatbots help teach social skills.
And even if chatbots can help with loneliness, Langman points out that people who find satisfaction talking to chatbots are not spending that time socializing in the real world.
“Aside from the harmful effects it can have directly in terms of encouraging violence, it can also keep them from leading normal lives and engaging in pro-social activities, which is what they could be doing with all of the hours they’re putting in on the site,” he added.
“If it’s so compelling or addictive, what aren’t they doing in their lives?” Langman said. “If that’s all they do, if that’s all they take in, then they’re not out with friends, they’re not out on dates. They don’t play sports, they don’t join a theater club. They don’t do much.”