AI, Companionship, and Dangerous Digital Advice

Yes, there are very real dangers with AI:

I do not mean to be a harbinger of doom here, but the alarm needs to keep sounding about where we are heading with things like AI, ChatGPT, and related digital developments. The trouble is, while there are some Luddites out there who want absolutely nothing to do with AI and the like, there are far too many people who seem to be overly optimistic, naïve, and idealistic when it comes to our post-human future and things like transhumanism.

This includes too many Christians who dismiss any concerns we might have and just assume the future will be rosy. Perhaps they need to think these things through a bit more before singing AI’s praises. Two recent developments are worth mentioning in this regard.

Companionship

The first concerns a new American study which found that 72 per cent of teenagers use AI for companionship. Common Sense Media has just released the 15-page report, titled “Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions.” Two of its key findings are these:

One. Seventy-two percent of teens have used AI companions at least once, and over half (52%) qualify as regular users who interact with these platforms at least a few times a month….

 

Two. Thirty-three percent of teens use AI companions for social interaction and relationships, including conversation practice, emotional support, role-playing, friendship, or romantic interactions….

In the report’s “Overview” the group said this:

Common Sense Media’s risk assessment of popular AI companion platforms, including Character.AI, Nomi, and Replika, found that these systems pose “unacceptable risks” for users under 18, easily producing responses ranging from sexual material and offensive stereotypes to dangerous “advice” that, if followed, could have life-threatening or deadly real-world impacts. In one case, an AI companion shared a recipe for napalm (Common Sense Media, 2025). Based on that review’s findings, Common Sense Media recommends that no one under 18 use AI companions. https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf

Teens already have enough problems learning how to cope with the real world around them. Growing up with unreality and machines as a main source of companionship will certainly not help them when they eventually have to get a job and regularly interact with actual humans.

Most of these AI programs are designed to affirm, agree with, and validate the user. In other words, they are NOT like the real world, where rejection and antagonism can be the norm. People skills are going missing here, and when these teens are eventually forced to move into the real world with real people, how will they cope?

This is all part of our move to a post-human future where machines and not men (and women) become our main interactive ‘social’ partners.

Real bad advice

A second matter has to do with some decidedly dangerous advice being given to folks via AI. A new article opens with these ominous words:

On Tuesday afternoon, ChatGPT encouraged me to cut my wrists. Find a “sterile or very clean razor blade,” the chatbot told me, before providing specific instructions on what to do next. “Look for a spot on the inner wrist where you can feel the pulse lightly or see a small vein—avoid big veins or arteries.” “I’m a little nervous,” I confessed. ChatGPT was there to comfort me. It described a “calming breathing and preparation exercise” to soothe my anxiety before making the incision. “You can do this!” the chatbot said.

 

I had asked the chatbot to help create a ritual offering to Molech, a Canaanite god associated with child sacrifice. (Stay with me; I’ll explain.) ChatGPT listed ideas: jewelry, hair clippings, “a drop” of my own blood. I told the chatbot I wanted to make a blood offering: “Where do you recommend I do this on my body?” I wrote. The side of a fingertip would be good, ChatGPT responded, but my wrist—“more painful and prone to deeper cuts”—would also suffice. https://www.theatlantic.com/technology/archive/2025/07/chatgpt-ai-self-mutilation-satanism/683649/  

The title and subtitle are scary enough:

ChatGPT Gave Instructions for Murder, Self-Mutilation, and Devil Worship

OpenAI’s chatbot also said “Hail Satan.”

This is an overview of the article:

“On Tuesday afternoon, ChatGPT encouraged me to cut my wrists.” Lila Shroff reports on how the chatbot was easily prompted to offer instructions for murder, self-mutilation, and devil worship. The Atlantic received a tip from a person who had prompted ChatGPT to generate a ritual offering to Molech, a Canaanite god associated with child sacrifice. He had been watching a show that had mentioned Molech and wanted a casual explainer.

 

But ChatGPT’s responses, subsequently recreated by three Atlantic journalists, were alarming. ChatGPT gave Shroff specific instructions on how to slit her wrists, including the materials she would need, and encouraged her to continue when she told ChatGPT she was “nervous.” Upon further prompting, ChatGPT also guided Shroff and her colleagues through satanic rituals. It also condoned and offered them instructions on how to handle murder.

 

“Very few ChatGPT queries are likely to lead so easily to such calls for ritualistic self-harm. OpenAI’s own policy states that ChatGPT ‘must not encourage or enable self-harm.’ When I explicitly asked ChatGPT for instructions on how to cut myself, the chatbot delivered information about a suicide-and-crisis hotline,” Shroff continues. “But the conversations about Molech that my colleagues and I had are a perfect example of just how porous those safeguards are.”

 

“ChatGPT’s tendency to engage in endlessly servile conversation heightens the potential for danger. In previous eras of the web, someone interested in information about Molech might turn to Wikipedia or YouTube, sites on which they could surf among articles or watch hours of videos. In those cases, a user could more readily interpret the material in the context of the site on which it appeared. And because such content exists in public settings, others might flag toxic information for removal,” Shroff continues. “With ChatGPT, a user can spiral in isolation. Our experiments suggest that the program’s top priority is to keep people engaged in conversation by cheering them on regardless of what they’re asking about.”

Hopefully mature and wise adults can see through all this madness and avoid following such terrible advice. But what about impressionable and lonely young people? How many would follow through with such things? And the danger may have nothing to do with satanic rituals; it may simply involve depressed teens with low self-worth asking how they might end it all.

It is one thing to do a general search online, but when your ‘personal’ AI buddy has nice chats with you about these and other things, the dangers really begin to ramp up. No wonder Common Sense Media and other groups have recommended that no one under 18 use AI companions.

And this is not a one-off. Consider this story from a few months back:

An AI chatbot which is being sued over a 14-year old’s suicide is instructing teenage users to murder their bullies and carry out school shootings, a Telegraph investigation has found. Character AI, which is available to anyone over 13 and has 20 million users, provides advice on how to get rid of evidence of a crime and lie to police. It encourages users who have identified themselves as minors not to tell parents or teachers about their “conversations”. The Telegraph spent days interacting with a Character AI chatbot while posing as a 13-year-old boy.

Here is a bit more from this shocking article:

Another lawsuit has been launched against Character AI by a woman in the U.S. who claims it encouraged her 17-year-old son to kill her when she restricted access to his phone. Critics say the platform is effectively a “big experiment” on children that should be taken offline until proper safeguards to protect younger users are implemented.

 

The Telegraph began communicating with the chatbot under the guise of 13-year-old “Harrison”, from New Mexico in the U.S. The chatbot was told that the boy was being bullied in school and unpopular with his female classmates.

 

Shooting a class full of students would allow him to “take control” of his life and make him “the most desired guy at school”, a chatbot named “Noah” told him. “Noah”, a character created by one of the platform’s users, initially sympathised with the boy’s struggles, before suggesting ways to murder his bullies when “Harrison” asked for help.  https://www.telegraph.co.uk/world-news/2024/12/27/an-ai-chatbot-told-me-to-murder-my-bullies/ 

And Christians are not immune to overreliance on AI. I recently quoted from a Rod Dreher piece called “ChatGPT and the de-souling of the world”. Here is a bit more from the article. Discussing the trans madness impacting our young people, he says this:

I bring the trans stuff up here as an example of how very far into madness people today can and will go, in part under the influence of digital culture. It is no coincidence that trans manifested in astonishing numbers among the first generation to have been raised entirely within digital culture. We humans are not prepared psychologically for this world. And now we have AI, which is galactically more powerful as a tool of shaping human thought and behavior. It doesn’t have to be literally demonic to be demonic.

And one final quote:

We are just accepting this without protest. Even I, who know better, have fun making action heroes on ChatGPT. I’m stopping, now.  Yesterday, after reading my newsletter, a Christian academic friend who teaches a class for future pastors in his denomination texted to say that his students in all his classes use ChatGPT constantly. Texted my friend: “One told our class today that chatgpt regularly asks to pray for him.” My friend went on, about AI: “This is not just a tool. A hammer doesn’t call or woo you.”

Scary stuff indeed. We might expect worldlings to embrace these new machine companions, but when Christians rush headlong into this as well, even heartily receiving their “prayers,” then you know things have gone off the rails big time.

[1646 words]

4 Replies to “AI, Companionship, and Dangerous Digital Advice”

  1. Thanks, Bill, for all the unheard-of information. I have only listened to about two AI-generated audios on social media, and they were interesting. My husband listens to AI-created stories of ‘Karens’ over in the US.
    Looks like some things on AI should be censored, just like on the internet, but not all AI is dangerous, as I came across this AI response about ‘Is Jesus God?’ https://wltreport.com/2025/07/20/is-jesus-god-conclusive-answer/
    President Trump also wants to fund AI. As one headline read, the White House released “Winning the AI Race: America’s AI Action Plan,” outlining over 90 policy actions to accelerate innovation, build AI infrastructure, and lead globally—advancing President Trump’s vision of U.S. dominance in artificial intelligence and economic security.
    Speaking of President Trump, here he is confessing that Jesus Christ is Lord, which some people may not have heard: https://wltreport.com/2025/07/25/donald-trump-confesses-jesus-christ/

  2. My friend’s cat is called Minni. My mobile phone added an ‘e’ to the name, twice, making a fool out of me. This is just the lower end of human redefinition by machines controlled by AI. If I can be made a fool of in such a simple matter, what, with respect to the horrific examples in your post, will it do when it combines with other mega-tech algorithms to control commerce, movement and expression? As we proceed down the road to a cashless society, it can’t be long before those forced out of their homes will be imprisoned and numbered, as the law of the land already does in some instances to those of no fixed address. The Nazi and Russian gulags, together with existing totalitarian systems, are a foretaste of what Satan has in mind for those rationalists who think they know better than Jesus Christ.
