
I Am Still Not Sold on All Things AI
On AI, ChatGPT, and human creativity:
I am old school when it comes to all things AI, including digital voice assistants such as Alexa, Siri and Cortana, and chatbots such as ChatGPT, and various programs that are supposed to make our lives easier and better. And being old school means – at least for me – being old as well.
Being an old reader, writer and teacher means that I am not overly impressed with all the new digital technologies overtaking us like a flood, especially the ones involved in learning, teaching, writing, the arts, and so on. For example, one great concern for every lecturer is plagiarism. It was always a problem, but it certainly ramped up big time in the internet age, and it seems that AI will simply compound things even further.
And it is not just learning and writing in general: consider the Christian world in this regard. Are we now at the place where, say, a pastor can get most of his sermons created by AI, or even his written prayers? The temptation to bypass the Holy Spirit and rely on machines is an ever-present danger. I have written about these matters before, as in this piece: https://billmuehlenberg.com/2025/05/11/men-and-machines-god-and-gadgets/
And then worse yet are artificial relationships. Relying on a chatbot or a sexbot or interactive porn instead of having real relationships with real human beings is another problem area. See this piece for example: https://billmuehlenberg.com/2025/07/11/ai-and-the-end-of-relationships/
One need not be an expert in these new technologies to understand the various downsides or problems with them. The other day I saw a few things posted online tied in with all this. American Christian philosopher and lecturer Douglas Groothuis had said this:
AI and Writing: Many Questions
How many in the upcoming generation will learn how to write as genuine authors? Will they learn grammar, punctuation, vocabulary, and rhetoric? Will they receive wisdom from exemplary authors of both substance and style, such as C. S. Lewis? Will they master the apt turn of phrase, the proper word choice, the art of sentence construction and paragraphing? Will they know the subtle difference between a semicolon and a comma, between a semicolon and a period? Will they know how to document quotations and ideas? Will the footnote survive? Will they know how to self-edit and edit others’ work? Or will their personality expressed through writing, their authorship, be outsourced to AI? If so, it is literary suicide (with a happy AI face).
And I reposted a quote that has gone viral by the author Joanna Maciejewska: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” My friend Kerry responded:
Yep – you are right. Given that AI is programmed, it’s a clever way of manipulating broader opinion on issues if people just rely on AI to produce a piece of work for them rather than learn the skills needed to produce work themselves. I’m sure AI has the capability to just ‘gather’ certain viewpoints in answer to a query and filter others out. A clever way to make sure any ‘wrong’ or ‘incorrect’ opinions are filtered out of research.
So many issues arise here. I STILL have not used ChatGPT or anything like it. I still have not even used Copilot on MS Word. I want to write what I as a person can write. I do not want machines to take over. Sure, spellchecks and the like can be of value. And often other folks might point out a typo or mistake in articles I have penned.
So in a sense I am the product of others, including what and how I write. But there are so many legitimate areas of concern here. In the article I linked to above on God and gadgets, a number of good comments came in. Two can be shared here. My friend Kerry again had wise words:
Many of these AI programs just do a broad sweep over the internet to gather data for the answer to a question, which is essentially gathering from someone else’s writing. I have seen this recently in my own research while trying to improve my sourdough bread baking skills. If I have a general query about something to do with sourdough bread baking, the AI program built into the search engine does a search for me and spits out its answer at the top of the search results, and underneath that is the usual list of search results to work through. It’s when I read through some of the original blogs in the general results that I find passages that have been lifted and copied into the AI answer at the top of the page, word for word. And of course not everything on the internet is correct, so AI may well gather incorrect information in its answers.
One of the other concerns I have with these programs is how they compound the current problem of people coming up through the school system who cannot read properly, spell, or think on their own. Now they are also losing the capacity to research things for themselves – they can just ask a question and pass off an AI ‘mish-mash’ of data as their own.
Yes quite right. And my friend Ed said this:
Good article, Bill. A couple of other concerns:
1. AI makes things up, so if you use it for any kind of research you need to check whether the sources cited actually exist. The most notorious example I know of involved an American lawyer who used it to do research for a legal brief. Some of the cases and quotes it supplied simply did not exist (though they were properly numbered and dated). He didn’t check them and submitted the brief to the court, and of course the case was thrown out. I don’t know of any case where it has happened, but I imagine something similar could occur with biblical or theological citation – it could make up a chapter and/or verse (maybe a whole new book of the Bible), or cite a non-existent work from a well-known theologian. According to my programmer son, the technical term for this is ‘hallucination’ – a real term for a real phenomenon.
2. You mention the concern about plagiarism. Authors and artists are very concerned about this area and their intellectual property. AI shamelessly adapts existing material without credit – well of course, a machine has no shame.
Thanks for continuing to bring important issues into the light!
So many books have now been written about these matters. Let me share from just one of them. The 2024 volume Human Rights, Robot Wrongs: Being Human in the AI World by Susie Alegre has a chapter on this issue. Ch. 6 discusses “Robot Writers and Robot Art”. It begins:
Generative AI like ChatGPT and Dall-E grabbed the public imagination in 2023 in a way that no other AI innovation had before. In addition to the dangers around security, data, confidentiality and intellectual property, headlines tell us that it is the end for human writers and for independently written student essays. Whether you hail it as a wonder tool to supercharge human productivity or dread its potential to destroy the creative industries, generative AI is impossible to ignore, and easy access to text, image and video generators gives us all the chance to see what it might mean for us.
Things like plagiarism, copyright issues, and intellectual property rights are among the many issues that arise here. Alegre, an international human rights lawyer, reminds us that Article 27 of the UN Declaration of Human Rights “gives artists and writers both economic and moral rights in their work.”
While folks can now pinch anything that has been written, this just gets worse for all forms of creativity with generative AI. As she says about art:
Recognising the fundamental difference between images created by machines and art is vital for the future of human creativity. Without this kind of differentiation, there won’t even be a two-tier system and the artistic pipeline will flatline. Skills and inspiration honed over millennia will dry up. Art will become ever more elitist, the preserve of tech bros as the only people able to afford it, even if they do not appreciate it. Art is about emotion and connection. Do we really want to let AI suck the joy out of our lives?…
Generative AI is a threat to the arts in general. Image generators like Dall-E and Midjourney have popped up winning art and photography prizes, fooling people around the world. But tech-generated content is not art, and it can be as bad for our minds and cultures as highly processed fast food is for our bodies and the environment.
She ends the chapter this way:
As ever, AI is not the problem; it is the corporations behind it and the ways we use it. If we allow AI-generated content to put our creative industries out of business, there will be no cultural heritage to protect before too long. . . . Generative AI risks destroying our ability to create and develop cultural heritage, the creative cultures that make us human, and our ability to understand and care about each other and the world around us. If we do not take radical steps now to protect and respect the space for human culture, creativity and the creators of the future, we may lose what it means to be human entirely.
Numerous articles can also be cited. One recent piece begins this way:
The big story making rounds today is New York Magazine’s “Everyone is cheating their way through college,” using GenAI, which concludes, inter alia, and probably correctly that
“Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate…Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.” https://garymarcus.substack.com/p/everyone-is-cheating-their-way-through
Again, for the Christian, this is also an area that we must think through carefully. Pinching the work of others without attribution is already quite common. On many occasions when I have looked up a quote online, I have found a pastor or Christian leader who has pinched partial or even entire large passages from others, passing them off as his own work. That is intellectual dishonesty at best, and the sin of theft at worst.
And as mentioned, how many pastors and teachers today are getting much, if not most, of their sermons or devotionals or teachings straight out of AI? Yes, we can make use of things like online Bible programs and other resources. But no church leader should ever neglect hard work and study, and especially regular reliance on the Holy Spirit, going instead for a quick AI-generated substitute.
As is usually the case, there are some who think AI and the new technologies are going to bring in a new heaven and a new earth. Others are Luddites who want absolutely nothing to do with it. Perhaps most folks are somewhere in the middle.
I tend to be rather cautious and skeptical about where all this is heading, especially given how much of this is tied in with the pseudo-religion of transhumanism. The warnings of earlier writers like C. S. Lewis and George Orwell need to be heeded. If not, our brave new future will be very interesting, to say the least.
[1725 words]

Patrick Wood has been warning us about AI and the rise of technocracy for years, but here comes AI and people are all “yippee!” Don’t know something or have a question? “Go ask ChatGPT.” I’m like you. I’m not touching it, and I have encouraged others not to do so. I hate it when I try to type a post on FB (to the 100 people I truly know) and it corrects what I’m writing to something I did not say, or tries to add to it when I’m happy with what I am writing. People keep marching in lockstep to enslavement. We don’t have the jab now (although I know some getting the newest “booster”), so we had better have something else we can jump on the bandwagon about.
Thanks Susan. Yes, I mention his 2022 book The Evil Twins of Technocracy and Transhumanism here: https://billmuehlenberg.com/2025/01/17/what-to-read-on-ai-transhumanism-and-the-new-digital-technologies/
I am with you.
Thanks Suzanne.
All new technology carries risks.
Cars are awesome, but in the wrong hands, they can crash into crowds.
AI like Grok or ChatGPT is no different—it’s powerful, but its impact depends on the wisdom, or ignorance, of the people using it.
Wise users can harness AI to simplify tasks, gain quick insights, or explore biblical truths with clarity, making life richer and work faster.
For example, Grok can help study Scripture or solve practical problems with ease. But ignorance or misunderstanding of AI can lead to trouble.
It’s like using the wrong end of a claw hammer to turn a screw—ineffective and damaging.
People might blindly trust AI’s answers, spread false information, or misuse it to deceive others.
Without a proper grasp of its limits, AI can amplify errors or confusion, and in the wrong hands it can be extremely damaging.
The solution lies in users seeking wisdom, grounding their use in the truth of the Bible, and learning how AI works.
With clear understanding, AI’s benefits shine, but it can also be used for evil purposes.
Thank you Marcus.