The Promises and Perils of AI and Our Posthuman Future

Key thoughts on where we are heading:

As science and technology march ever further on, what we find is always a mixed bag. New developments, discoveries and inventions can be a real godsend, making life better, easier and more efficient. Of course, many of these same things can be used for great evil as well, and it is always a balancing act: pursuing the good while restraining the bad.

Christians are not to be Luddites when it comes to new technologies, but neither are they to be gullible and unaware. In a fallen world almost everything can be used for good or ill. And given how AI is not some stand-alone thing, but is too often part of much bigger and scarier agendas, such as those of the transhumanist and posthumanist activists, great care is needed.

Artificial intelligence and related fields, such as robotics, genetic engineering and the new digital technologies, are developing far more rapidly than our ability to properly assess them morally, socially and spiritually. The many benefits and goods of all this can easily be outweighed by the many dangers and risks.

So Christians especially need to think carefully and prayerfully about our posthuman future. If some believers are far too critical, others can be far too gullible and unaware of the brave-new-world implications found here. One social media friend, for example, made this comment when I was discussing these matters:

“Should we fear AI like Christian leaders have in the past? I think it will be a race to take advantage of its potential. With it we can translate the Bible with little effort into all the languages of the world. Communist and Muslim nations will not be able to stop the flow of information to their people. This is great potential to spark a global Christian Great Awakening.”

I replied to him as follows:

AI is about FAR more than Bible translation, of course. The Christian is called to be a biblical realist, fully aware of sin, power and corruption. Sure, some technologies can be used for good, but we dare not be naïve here. The transhumanists and posthumanists are fully committed to their dystopian vision. Go back and reread The Abolition of Man by Lewis, or any of the 40 books I discuss in my annotated reading list.

That annotated reading list is found here: https://billmuehlenberg.com/2025/01/17/what-to-read-on-ai-transhumanism-and-the-new-digital-technologies/

In this article I want to quote from just five of those volumes, demonstrating that some of those most involved in these areas are very much concerned about where things are heading. Refer back to my reading list for full bibliographic details of these books.

One volume, The Coming Wave, is penned by someone with a long history in this field. Mustafa Suleyman is currently the CEO of Microsoft AI. Early on in this important book he says this:

AI has been climbing the ladder of cognitive abilities for decades. And it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I’m even close to right, the implications are truly profound. What had, when we founded DeepMind, felt quixotic has become not just plausible but seemingly inevitable.

From the start, it was clear to me that AI would be a powerful tool for extraordinary good but, like most forms of power, one fraught with immense dangers and ethical dilemmas, too. I have long worried about not just the consequences of advancing AI but where the entire technological ecosystem was heading. Beyond AI, a wider revolution was underway, with AI feeding a powerful, emerging generation of genetic technologies and robotics. Further progress in one area accelerates the others in a chaotic and cross-catalyzing process beyond anyone’s direct control. It was clear that if we or others were successful in replicating human intelligence, this wasn’t just profitable business as usual but a seismic shift for humanity, inaugurating an era when unprecedented opportunities would be matched by unprecedented risks.

As the technology has progressed over the years, my concerns have grown. What if the wave is a tsunami? (p. 9)

For three decades Stuart Russell has been a leading figure in AI science. In Human Compatible: AI and the Problem of Control he asks a number of hard but crucial questions. In the book’s Afterword he writes:

Meeting a criterion such as generating “true and accurate” content does not, of course, guarantee that the system is completely safe. For example, a sufficiently capable system could be entirely truthful about its ineluctable plan to take control of the world. What we really need, of course, are systems that are provably safe and beneficial to humans, as outlined in this book. Unfortunately, the AI safety research community (which includes my own research group) has not moved nearly fast enough to develop an alternative technology path that is both safe and highly capable.

There is now broad recognition among governments that AI safety research is a high priority, and some observers have suggested the creation of an international research organization, comparable to CERN in particle physics, to focus resources and talent on this problem. This organization would be a natural complement to the international regulatory body suggested by British prime minister Rishi Sunak.

Despite the torrent of activity around AI regulation, almost no attention has been paid to the Dr. Evil problem mentioned in Chapter 10—the possibility that bad actors will deliberately deploy highly capable but unsafe AI systems for their own ends, leading to a potential loss of human control on a global scale. The prevalence of open-source AI technology will make this increasingly likely; moreover, policing the spread of software seems to be essentially impossible. (p. 320)

Mo Gawdat, the former chief business officer of Google [X], said this in Scary Smart:

It is predicted that by the year 2029, which is relatively just around the corner, machine intelligence will break out of specific tasks and into general intelligence. By then, there will be machines that are smarter than humans, full stop. Those machines will not only become smarter, they will know more (as they have access to the entire internet as their memory pool) and they will communicate between each other better, thus enhancing their knowledge. Think about it: when you or I have an accident driving a car, you or I learn, but when a self-driving car makes a mistake, all self-driving cars learn. Every single one of them, including the ones that have not yet been ‘born’.

By 2049, probably in our lifetimes and surely in those of the next generation, AI is predicted to be a billion times smarter (in everything) than the smartest human. To put this into perspective, your intelligence, in comparison to that machine, will be comparable to the intelligence of a fly in comparison to Einstein. We call that moment singularity. Singularity is the moment beyond which we can no longer see, we can no longer forecast. It is the moment beyond which we cannot predict how AI will behave because our current perception and trajectories will no longer apply.

Now the question becomes: how do you convince this superbeing that there is actually no point squashing a fly? I mean, we humans, collectively or individually, so far seem to have failed to grasp that simple concept, using our abundant intelligence. When our artificially intelligent (currently infant) supermachines become teenagers, will they become superheroes or supervillains? Good question, huh?

When such superpower is unleashed, anything can happen…. (pp. 7-8)

Scientist Jeremy Peckham has been involved in AI for some thirty years, and he offers this warning in Masters or Slaves? AI and the Future of Humanity:

While there’s a push towards creating ‘trustworthy AI’, even going as far as having product markings and standards approvals, I believe that this is dangerous because it doesn’t address the core effects on humanity. It focuses on important but subsidiary issues such as data bias and transparency. In essence many AI applications are just opaque algorithms, trained on a vast amount of data. As we’ve seen, this data could be skewed, and how the probability of input data matching this database was reached cannot be known. We cannot think of AI in the same way that we might think about constructing a safe or trustworthy bridge for traffic to cross, because in bridge design the engineering principles are well understood, verifiable and transparent.

The issue that we face as a civilization isn’t whether AI is or can ever be made trustworthy, but how we can use it wisely, given its limitations in the way it shapes us. (p. 214)

Finally, James Barrat in Our Final Invention makes this rather ominous remark:

In writing this book I spoke with scientists who create artificial intelligence for robotics, Internet search, data mining, face recognition, and other applications. I spoke with scientists trying to create human-level artificial intelligence, which will have countless applications, and will fundamentally alter our existence (if it doesn’t end it first). I spoke with chief technology officers of AI companies and the technical advisors for classified Department of Defense initiatives. Every one of these people was convinced that in the future all the important decisions governing the lives of humans will be made by machines or humans whose intelligence is augmented by machines. When? Many think this will take place within their lifetimes….

But artificial intelligence brings computers to life and turns them into something else. If it’s inevitable that machines will make our decisions, then when will the machines get this power, and will they get it with our compliance? How will they gain control, and how quickly? These are questions I’ve addressed in this book….

I’m not the first to propose that we’re on a collision course. Our species is going to mortally struggle with this problem. This book explores the plausibility of losing control of our future to machines that won’t necessarily hate us, but that will develop unexpected behaviors as they attain high levels of the most unpredictable and powerful force in the universe, levels that we cannot ourselves reach, and behaviors that probably won’t be compatible with our survival. A force so unstable and mysterious, nature achieved it in full just once—intelligence. (pp. 3-5)

The words of these experts need to be carefully considered. And lest some claim that I am just quoting religious worry-warts: as far as I know, only Peckham of the five authors considered here is a Christian. So plenty of non-Christian or non-religious thinkers and players in this field are sharing very real concerns about our posthuman future.

We need to heed their warnings.

[1783 words]

4 Replies to “The Promises and Perils of AI and Our Posthuman Future”

  1. Isaac Asimov and successors wrote much on this – the Three Laws of Robotics etc.

  2. Yes John, some of the books presented here discuss that. The Barrat book for example says this (although I left it out of the quote that I shared above):

    And how will the machines take over? Is the best, most realistic scenario threatening to us or not? When posed with this question some of the most accomplished scientists I spoke with cited science-fiction writer Isaac Asimov’s Three Laws of Robotics. These rules, they blithely replied, would be “built in” to the AIs, so we have nothing to fear. They spoke as if this were settled science. We’ll discuss the three laws in chapter 1, but it’s enough to say for now that when someone proposes Asimov’s laws as the solution to the dilemma of superintelligent machines, it means they’ve spent little time thinking or exchanging ideas about the problem. How to make friendly intelligent machines and what to fear from superintelligent machines has moved beyond Asimov’s tropes. Being highly capable and accomplished in AI doesn’t inoculate you from naïveté about its perils. (pp. 4-5)

  3. Hi Bill,

    Thank you for raising this important topic. As a Christian, I share the concerns you and the quoted authors raise. I also read the two articles previous to this one, as well as your article on John Lennox’s updated AI book.

    I have been thinking about this problem, which can indeed become a challenge, especially if AI technologies start to build robots that play a physical role in our world because people come to believe they can deliver better outcomes than people can. In one sense, there are examples where an AI – like ChatGPT – can know more than a person. And if we believe that ChatGPT can know more, and allow this belief to flow over into the physical world (besides the physical impact AI is already having through massive data centers, the use of energy and real-estate space), then we are at risk if such systems become armed, or gain control over military systems and their related energy needs.

    As a Christian, I currently lead a company that uses aspects of machine learning to process massive amounts of data for infrastructure planning – urban and transport – to solve problems related to traffic congestion, pollution and unaffordable housing. We follow the laws of physics to position events, and in my professional experience I know that the laws of physics can be used in an AI algorithm such as a regression formula that models a lot of data (for example, predicting congestion when modelling cars queuing for a limited resource like a motorway on-ramp). As we know, the laws of physics were created by God himself, so all my ‘AI’ did was comply with those laws in trying to model reality. That model will not be perfect, but as long as it is subject to people making decisions with what the regression outputs, it is OK. In fact, almost 20 years ago I took a long-standing graduate physics paper called “Inverse Problems”, which is simply about finding the mathematical function that governs what happens in the physical world – e.g. how photons move – based on observed data. Machine learning and deep learning are a reinvention of what has been around for decades – inverse problems – but with larger amounts of data to process.
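
    To make this concrete, here is a minimal sketch of the kind of physics-informed regression I have in mind. Everything in it is hypothetical – the traffic numbers, the assumed ramp capacity and the variable names – and a real model would involve far more data and validation:

        # Minimal sketch (hypothetical data): a regression built around a
        # physical relationship, in the spirit of the on-ramp example above.
        import numpy as np

        # Synthetic observations: vehicles arriving per minute at an on-ramp,
        # and the average queueing delay (minutes) measured at each rate.
        arrival_rate = np.array([4, 6, 8, 10, 12, 14, 16, 18])
        observed_delay = np.array([0.5, 0.8, 1.3, 2.1, 3.4, 5.6, 9.2, 15.0])

        RAMP_CAPACITY = 20.0  # vehicles/min the ramp can discharge (assumed)

        # Physics-based feature: delay grows with utilisation
        # rho = arrivals / capacity, and blows up as rho approaches 1,
        # as queueing theory predicts.
        rho = arrival_rate / RAMP_CAPACITY
        feature = rho / (1.0 - rho)

        # Ordinary least-squares fit of observed delay against that feature.
        slope, intercept = np.polyfit(feature, observed_delay, 1)

        # Predict the delay at 85% utilisation of the ramp.
        predicted = slope * (0.85 / (1.0 - 0.85)) + intercept
        print(f"Predicted delay at 85% utilisation: {predicted:.1f} minutes")

    The fitted model can only interpolate what the physics and the data allow; a person still decides what to do with its output, which is the safeguard I describe above.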

    Will people believe that AI has more knowledge than a person, or a team of people? And if so, will people start to believe that AI can make better decisions than we can?

    As I pondered these challenges, I arrived at two contributions to this topic.

    Firstly, I’d suggest it is important for Christians to be involved in AI-related technologies, as long as the work is not morally wrong, so that our witness can help prevent evil outcomes from AI. I hope the example I gave above of what we do as a company fits this.

    Secondly, I believe that God, and how He uses His church, still determines ultimate reality in the eternal sense. Imagine this scenario (which I hope never happens, because it is evil, but which even so would not determine eternal reality). In this scenario, AI comes to be seen as superior to all human decision making, and we give it exclusive power to physically re-make the world in a ‘better’ way, including deciding how many people are to inhabit this new world, with the mass extinction of everyone else. Such an outcome is of course horrendous.

    The Bible clearly teaches (2 Peter 3:10-13) that the earth and the works that are in it will be burned up (“laid bare”, or “found”) – and the works include everything that humanity ever did. This means that ultimately AI will cease to exist. The Bible also teaches (Revelation 20:11 – 21:8, 24-27; Matthew 25:31-46) that human beings will one day be sent to one of two places for eternity – the new heavens and new earth, or the lake of fire (the second death). Even in the terrible outcome that AI kills people, people themselves do not cease to exist. In fact, even if AI robots were to kill everyone, AI would be left to its own devices, subject to God’s ultimate authority and timetable, which is clearly outlined in 2 Peter 3:10-13. AI cannot experience what we experience in this life, and certainly not after this life; AI cannot go where we go when we die. AI is, ultimately, powerless and will end up on the scrap heap of human history when one day God brings this world to an end. AI is not like a person and has no immortality.

    As a simple experiment, we can ask ChatGPT about these things. I did this, and its response was that people have infinitely more knowledge and should retain ultimate control. I asked it extensively about military AI applications and it claimed that human beings are still part of the decision-making process. Nonetheless, ChatGPT’s responses are not enough for me to stop being vigilant about this new technology.
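
    For anyone wanting to repeat the experiment in code rather than in the chat window, here is a minimal sketch using the OpenAI Python SDK. It assumes the SDK is installed and an API key is configured, and the model name is purely illustrative:

        # Sketch only: putting the same question to ChatGPT programmatically.
        # Assumes the OPENAI_API_KEY environment variable is set; the model
        # name below is an illustrative assumption, not a recommendation.
        from openai import OpenAI

        client = OpenAI()

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    "Should humans retain ultimate control over "
                    "military decision making, and why?"
                ),
            }],
        )
        print(response.choices[0].message.content)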

    When we are discussing these matters, I do believe that we should be ready to give a defense of our beliefs (1 Peter 3:15) in this area by outlining, amongst other things, the fact that people are eternal – something that will never be true of AI – because we are created by God.

    Thanks for the articles Bill.
