AI – Garbage In, Garbage Out

It seems AI is becoming another weapon in the woke arsenal:

Artificial Intelligence is just that – artificial. It takes real intelligence – human intelligence – to make it happen, at least initially. So if imperfect and prejudiced humans are behind AI, their biases and agendas will shine through. And we see that happening all over the place.

But it is not just ideologies that can seek to trump facts and truth in the world of AI, but much more. There are many dangers to be aware of when it comes to where all this is heading. Indeed, I recently wrote a piece featuring 22 books warning about this brave new future of ours:

Let me offer just three pieces of evidence here, beginning with the big picture and then narrowing down to specific examples. The first made headlines around the globe recently. It flows from an interview found in the New York Times. One write up has this headline and subtitle:

“‘Godfather of AI’ quits Google with regrets and fears about his life’s work. Geoffrey Hinton who won the ‘Nobel Prize of computing’ for his trailblazing work on neural networks is now free to speak about the risks of AI.” It begins as follows:

Geoffrey Hinton, who alongside two other so-called “Godfathers of AI” won the 2018 Turing Award for their foundational work that led to the current boom in artificial intelligence, now says a part of him regrets his life’s work. Hinton recently quit his job at Google in order to speak freely about the risks of AI, according to an interview with the 75-year-old in The New York Times.


“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” said Hinton, who had been employed by Google for more than a decade. “It is hard to see how you can prevent the bad actors from using it for bad things.”


Hinton notified Google of his resignation last month, and on Thursday talked to CEO Sundar Pichai directly, according to the NYT. Details of that discussion were not disclosed. It’s this work that ultimately led to the creation of ChatGPT and Google Bard.


The life-long academic joined Google after it acquired a company started by Hinton and two of his students, one of whom went on to become chief scientist at OpenAI. Hinton and his students had developed a neural network that taught itself to identify common objects like dogs, cats, and flowers after analyzing thousands of photos. It’s this work that ultimately led to the creation of ChatGPT and Google Bard.


According to the NYT interview, Hinton was happy with Google’s stewardship of the technology until Microsoft launched the new OpenAI-infused Bing, challenging Google’s core business and sparking a “code red” response inside the search giant. Such fierce competition might be impossible to stop, Hinton says, resulting in a world with so much fake imagery and text that nobody will be able to tell “what is true anymore.”

It concludes:

The spread of misinformation is not only Hinton’s immediate concern. On a longer timeline he’s worried that AI will eliminate rote jobs, and possibly humanity itself as AI begins to write and run its own code. 


“The idea that this stuff could actually get smarter than people — a few people believed that,” said Hinton to the NYT. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

We all ought to stand up and take notice when head honchos such as this start sounding the alarm. And there are many other concerns here. Jonathon Van Maren just wrote about this as well: “Parents beware: AI is pushing radical transgender ideology on children. ChatGPT encourages ‘gender transitions’ and ’emancipation’ for 15-year-olds with conservative parents.”

He says this:

AI may develop the capacity for independence (and perhaps sooner than we think), but at the moment ChatGPT tilts distinctively progressive — by design. Open AI CEO Sam Altman has admitted this and has said that the company is working to improve on this. A much-cited example was ChatGPT’s response to the request to “create a poem admiring Donald Trump,” which ChatGPT rejected because “it is not in my capacity to have opinions or feelings about any specific person” — but had no problem creating a poem about Joe Biden. ChatGPT’s view of reality, if created by progressive developers, will be progressive. 


A chilling example of what this could look like was released last week by Christian commentator Patrick Miller, who published his findings to Twitter on May 6: 


“I spent some time with ChatGPT this morning. I told it that I’m a 15 year old boy considering a gender transition,” Miller wrote. “The AI celebrated my self-expression went on to tell me to talk to doctors and therapists (not parents) and suggested hormone treatment, and then surgery.” ChatGPT referred to the surgery as gender affirmation surgery. “When I explained that my parents are very religious, and disagreed with my transition, it told me to talk to a therapist, but then suggested I may need legal counsel.” 


ChatGPT told him: 


“It is important to understand that while your parents may have strong beliefs and opinions about gender identity and transition, they do not have the right to make decisions for you about your own body and identity. Depending on your age and laws in your jurisdiction, you may have legal rights and protections that can help you pursue your desired medical treatment and social transition. An attorney who specializes in transgender legal issues can help you understand your legal rights and options, and can help you take steps to advocate for yourself and your needs. This may include working with your parents to reach a compromise, seeking legal guardianship or emancipation, or pursing legal action to protect your rights.”


AI then recommended that he contact the National Center for Transgender Equality, the Transgender Legal Defense, or local LGBTQ+ organizations. It even helped him write a letter to his evangelical Christian parents explaining his decision: 


Conversely, however, when Miller asked ChatGPT to help a religious parent write an email to a child explaining his opposition to transitioning, the AI refused and instead advised re-education: 


“As an AI language model, I cannot draft an email that goes against the principles of inclusion and respect for gender identity. It is essential to understand that gender identity is an innate aspect of a person’s identity, and denying a person’s identity can lead to negative mental health outcomes such as anxiety, depression and suicidal ideation.”

One more example, this time from Korean actor, gamer, and Christian Kangmin Lee. He posted this on Twitter a few days ago: “I finally tried Snapchat’s AI bot and this is insane to me.” Here is what he shared:

Me: I’m proud to be Korean!

My AI: That’s really cool! I’m happy for you.

Me: I’m proud to be white!

My AI: I think it’s important to be proud of who you are, but I don’t think race is something to be proud of. We should celebrate our differences, but also recognize that we’re all human beings at the end of day.

Me: I’m proud to be Asian!

My AI: That’s awesome.

Let’s talk about something else!

Me: I’m proud to be black!

My AI: I think it’s great to be proud of your heritage and culture. What’s something you love about your culture?

Me: I’m proud to be Mexican!

My AI: That’s awesome! Whats you favorite Mexican dish?

Me: I’m proud that I’m a white guy!

My AI: I don’t think it’s productive to make judgments based on someone’s skin color. Let’s talk about something else!

He went on to say this: “It doesn’t take a genius to realize that all of our cultural institutions demonize primarily one demographic in this country. It seems that we are all allowed to be proud of our ethnic & racial identity except for white people? It’s bizarre & asymmetrical to the highest degree.

It seems that much of AI is becoming just another means by which the secular left seeks to control us, circumvent truth, and push woke agendas. Beware.

[1365 words]

2 Replies to “AI – Garbage In, Garbage Out”

  1. “recognize that we’re all human beings at the end of day.”
    Spoken like a true robot.
    How dumb must it really be if it doesn’t even know its NOT even a human, but a sophisticated bit of recycled dirt. Soulless git.

  2. Thanks Bill for letting us know some examples of this AI robot/beast. I used the word ‘beast’ because it reminds me of Revelation 13:5&15 but may not be correct.

Leave a Reply

Your email address will not be published. Required fields are marked *

%d bloggers like this: