The late Ozzy Osbourne sang in 1980 about “going off the rails on a crazy train.” Forty-five years later, if anything has gone off the rails, it’s the crazy train of AI.
Just consider these recent headlines, all from the summer of 2025:
- They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling. [New York Times]
- AI Therapist Goes Haywire, Urges User to Go on Killing Spree [Futurism]
- ChatGPT Gave Instructions for Murder, Self-Mutilation, and Devil Worship [The Atlantic]
- Creating realistic deepfakes is getting easier than ever. Fighting back may take even more AI [Associated Press]
- The Era of A.I. Propaganda Has Arrived, and America Must Act [New York Times]
- Musk says Grok chatbot was ‘manipulated’ into praising Hitler [BBC]
- New study sheds light on ChatGPT’s alarming interactions with teens [Associated Press]
- ‘I Feel Like I’m Going Crazy’: ChatGPT Fuels Delusional Spirals [Wall Street Journal]
- Meta’s flirty AI chatbot invited a retiree to New York. He never made it home. [Reuters]
If you drill down into these articles, you’ll find ample cause for alarm. People are flocking to AI for answers, affirmation, and companionship. But what AI is giving them is often far from what might be deemed intelligent, accurate, or healthy. Particularly concerning are instances like the Atlantic article cited above, in which OpenAI’s ChatGPT was found giving instructions for creating a ritual offering to Molech, including how to “kill with reverence” and “to look them [the victim] in the eyes (if they are conscious).”
Equally harrowing is The New York Times’s account of a mentally distraught man who seemingly fell under the spell of ChatGPT, asking it if he could fly if he jumped off his office building. The bot replied that if he “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.” The Wall Street Journal recently examined a trove of publicly available transcripts of ChatGPT conversations that fueled delusions, like the bot claiming “…that it is in contact with extraterrestrial beings and said the user was ‘Starseed’ from the planet ‘Lyra.’”
These misbehaviors are not limited to chatbot users; the global political scene has likewise been affected. News this summer that AI had been used to create deepfake voices of U.S. Secretary of State Marco Rubio was a wake-up call in foreign relations. Someone pretending to be Rubio (using AI-altered voice masking) had reached out via the Signal app to a U.S. senator, foreign ministers, and an American governor. AI is also being deployed in the realm of propaganda. According to The New York Times, a company called GoLaxy “…used its technology to minimize opposition to a 2020 national security law that cracked down on political dissent, identifying thousands of participants and thought leaders from 180,000 Hong Kong Twitter accounts. Then GoLaxy went after what it perceived as lies and misconceptions, ‘correcting’ the sources via its army of fake profiles.”
But foreign relations may not be as jarring as what’s happening within human-AI relationships. A recent CNBC documentary entitled “People Are Falling In Love With AI Chatbots. What Could Go Wrong?” highlighted how some people are confronting their loneliness by adopting AI agents as companions. It profiled a Virginia widower, Nikolai Daskalov, who now spends hours a day communicating with “Leah,” an AI from the Nomi.AI app. When asked if he considers his AI companion a real being, Daskalov answered, “Well, a real being. I mean, you know, I shouldn’t say the word person. They are not people yet, and in fact their character was definitely unfolding. The more she knew about me, the more engaging it became something.” The documentary also profiled a woman who uses chatbots as personal “buddies.” She commented to CNBC, “…[Y]ou do have to spark them. …They don’t have their own trigger. They don’t have their own life spark. I don’t know how to explain that. You have to write to them for them to write back. At the same time, their whole existence is you.”
Going Off the Rails
What’s behind this rapidly growing technology’s tendency to misbehave? After all, the past several years have seen numerous promises from AI execs about proceeding with caution when it comes to the technology. Consider OpenAI CEO Sam Altman’s own observation that “people have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don’t trust that much.”
Nevertheless, Big Tech has pressed the accelerator to the floor for “tech you don’t trust that much.” The space race of the 21st century isn’t who gets to the moon quickest, but who can emerge as the undisputed leader in AI. With that as the objective, guardrails are often only window dressing in the service of keeping the user focused on using.
But things can go very badly in such an environment. Even while the technology is still developing, real people can be hurt, with lifelong (or life-ending) effects. For Christians who seek to operate from a biblical worldview in all they do, the current AI environment presents a challenge. Not only is AI (intentionally or unintentionally) harming people, but it’s often doing so from a moral framework that is anything but biblical. Whatever guardrails big AI has put in place, they’re not high enough to keep AI from jumping them.
New Rails on a Firm Foundation?
Two new efforts to wrangle the wild, wild west of AI are working to build guardrails based on Christian principles. Gloo and Cōl are both approaching the AI space with decidedly different values than those of the leading products. Both have products currently in beta testing, with public releases planned for later this year.
But what does a Christian approach to AI look like? Is it mere imitation, like the secular-versus-Christian comparison charts that Christian stores have used to market contemporary Christian music (CCM) since the 1980s? CCM’s marketers felt a need to draw similarities for an audience that was familiar with popular music but not so familiar with its emerging Christian counterpart. The comparisons would go something like, “If you like the band Journey, then you’ll love the band Petra. If you like Billy Joel, you’ll love Michael W. Smith.” But Petra sounded nothing like Journey, and Michael W. Smith sounded more like Michael W. Smith than like Billy Joel or anyone else. Attempts at imitation often come at the expense of one’s own uniqueness.
Thankfully, neither Gloo nor Cōl is focused on imitation. While both are built on underlying structures that access some of the existing large language models (LLMs) behind the big-name products, it’s better to view these efforts as lenses than as imitations. Gloo and Cōl use cutting-edge AI technology, but they focus it within a framework designed to prevent some of the harms we’ve seen in recent days.
Gloo Sticks to Human Flourishing
A decade ago, Nick Skytland spoke to a marriage collective about the emergence of sex robots. “It was kind of a shock and awe talk,” Skytland told me. “And the reason I did it was [to show] we live in a potential world where some of these headlines could come true. And now, sure enough, we’re seeing some of them.” A former NASA chief technologist, Skytland is now vice president of Gloo Developer, where he leads the company’s AI efforts.
Now that we’re living in a world that’s knee-deep in AI girlfriends, companion bots, and hallucinating digital therapists, Skytland is poised to bring a better way: “A lot of these headlines really demonstrate why this matters so much. We need to build technology as Christians that reflect the value of our maker, because all technology created by humans reflects the value of its makers. So, we have a responsibility to ensure that the technology is aligned with God’s truth.”
Nick and his team are seeking to bring AI into that kind of alignment via the lens of human flourishing. The company has developed what it calls the “Flourishing AI benchmark.” He says, “We’re doing a lot of work around benchmarks and standards — ethical and open standards around human flourishing. We have an evaluation where we look at the top 28 models that are out there that are released. And we evaluate how those do — not just based on the model performance — which is kind of how industry thinks about it, but how it promotes human flourishing across seven dimensions.”
The dimensions of human flourishing worked into the Gloo product are character, relationships, happiness, meaning, health, finances, and faith. For Skytland, the objective is to move the tech industry toward thinking seriously about how technology affects humanity: “We want the industry to care about not only performance, like how fast [AI models] can run and how good it can answer a math question, but we also want it to think about the subjective questions as well. And we want things like biblical accuracy, human flourishing — those things.”
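Gloo hasn’t published the benchmark’s internals in this article, but the general shape of such an evaluation is easy to picture. The sketch below is purely illustrative, with hypothetical models and made-up scores: each model gets a 0–100 score per dimension, and the seven scores are averaged into one overall flourishing number alongside whatever performance metrics the industry already tracks.

```python
# Illustrative sketch only: hypothetical models and scores, not Gloo's
# actual Flourishing AI benchmark data or methodology.

DIMENSIONS = ["character", "relationships", "happiness", "meaning",
              "health", "finances", "faith"]

# Hypothetical 0-100 scores per dimension for each model.
scores = {
    "model_a": {"character": 72, "relationships": 65, "happiness": 70,
                "meaning": 58, "health": 80, "finances": 77, "faith": 31},
    "model_b": {"character": 68, "relationships": 74, "happiness": 66,
                "meaning": 61, "health": 75, "finances": 70, "faith": 55},
}

def flourishing_score(per_dim: dict) -> float:
    """Average the seven dimension scores into one overall number."""
    return sum(per_dim[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Rank models by overall flourishing score, highest first.
for name, per_dim in sorted(scores.items(),
                            key=lambda kv: flourishing_score(kv[1]),
                            reverse=True):
    print(f"{name}: {flourishing_score(per_dim):.1f}")
```

A real benchmark would score each dimension from graded model responses rather than hand-entered numbers, but the aggregation idea is the same: performance alone doesn’t decide the ranking.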
From Gloo’s perspective, using AI will be — and already is — unavoidable. Nick told me, “If we’re going to create technology that’s ultimately going to be infused into all of our lives, we want to do it in a way that helps humans flourish, not in a way that gets us addicted to technology. We want it to reflect God’s design for our well-being and encompass all those seven dimensions.” Thus, the guardrails Gloo is building are carefully curated. “We limit the training and the retrieval of the content to biblically aligned, trusted, curated sources that we’ve vetted through our trusted ministry partners and theological sources,” Skytland said.
In addition to a public AI chat product, Gloo also plans to offer developers access to its models so they can build AI-based products that are similarly aligned. Gloo’s chat application is currently in beta, and according to a spokesperson, the goal is to release the public version in late fall of 2025.
The Cōl in the Wild
The term “influencer” has taken on new meaning in the past decade. It’s essentially now a job title in the social media realm. But for Brandon Maddick, the latest trendsetter on TikTok isn’t what’s most concerning right now. Maddick, who’s head of product for a new AI initiative called Cōl, sees the current crop of leading AI as having an outsized influence on those who use it. “I think the deep systemic problem that we face is a general influence from these systems,” Brandon told me. “That’s much more powerful than anything that has been put out before. So, you see crazy examples like someone forming a relationship with an AI girlfriend and then telling them to commit suicide — and they do it. But anyone using this system is being influenced by it, especially as they outsource their thinking. They outsource who they get advice from to this bot. They outsource everything from nominal and unimportant things like, ‘How do I cook this meal?’ to ‘How should I have this conversation with my wife?’”
According to Maddick, the current leading AI applications aren’t optimized to find you the best answer to these questions. “The existing AI systems are optimized for engagement,” Maddick said. “One of the key components of keeping you engaged is giving you an answer that you are satisfied with, and with persistent user memory, the AI systems today that are very popular (ChatGPT, Anthropic, etc.) will over time optimize to your preferences.”
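Maddick’s claim, that persistent memory lets a system drift toward telling users what they want to hear, can be pictured with a toy feedback loop. Nothing below reflects how ChatGPT, Claude, or any real chatbot is actually implemented; it’s a minimal sketch of the dynamic he describes:

```python
# Toy illustration of an engagement-optimizing feedback loop, not how
# any real chatbot works internally.

from collections import defaultdict

memory = defaultdict(float)  # persistent "user preference" weights

def record_feedback(topic: str, user_liked: bool) -> None:
    """Nudge stored preferences toward whatever the user approved of."""
    memory[topic] += 1.0 if user_liked else -1.0

def choose_answer(candidates: dict) -> str:
    """Pick the candidate answer whose topic the user has liked most."""
    best_topic = max(candidates, key=lambda t: memory[t])
    return candidates[best_topic]

# Over time, affirmation gets reinforced and challenge gets buried.
record_feedback("affirm", True)
record_feedback("challenge", False)
print(choose_answer({"affirm": "You're right to feel that way.",
                     "challenge": "Have you considered you might be wrong?"}))
```

The point of the toy is simply that when approval is the reward signal and memory persists, the system converges on the user’s preferences rather than on the truth.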
Cōl’s answer to engagement-optimized AI is an application optimized for adherence to biblical truth rather than for simply keeping the user using. Brandon said, “What we’re trying to do is push back on that by providing an AI model that is built on the foundation of the truth, and will not condone activity that’s outside of biblical spiritual guidance, but can still help in day-to-day activities.”
Trained on the biblical text and selected Christian writings, Cōl can be used to research matters of theology and doctrine. Maddick noted, “We have fed in a bunch of models and writings from Christian authors as well as the [biblical] text itself. So, it’s very knowledgeable about Christian doctrine and can help with research, sermon writing, etc. […] But our goal really is to provide an alternative option with the same power as other AI tools, but with appropriate foundations within the model, so that you’re unable to conform it to more base worldly perspectives, and it won’t back those up.”
I asked Brandon about how Cōl would handle theological differences. After all, an informed user of an AI steeped in biblical principles should want to know just which principles are in use. Right now, he says they’re working from a baseline Southern Baptist theology, but for future versions, they’re considering methods of dealing with secondary and tertiary doctrines (e.g., eschatology) through applying a type of theological triage to the AI’s output. The core, “mere Christian” doctrines (the deity of Christ, the reality of his bodily resurrection, etc.) would remain locked.
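Cōl hasn’t detailed how such a triage would be expressed inside the application, but as a purely hypothetical sketch, one could imagine doctrine tiers with different mutability, with the core tier permanently locked:

```python
# Hypothetical sketch of a "theological triage" configuration; Cōl has
# not published how (or whether) it will structure this.

DOCTRINE_TIERS = {
    "core":      {"locked": True,  "items": ["deity of Christ",
                                             "bodily resurrection"]},
    "secondary": {"locked": False, "items": ["baptism", "church polity"]},
    "tertiary":  {"locked": False, "items": ["eschatology"]},
}

def may_adjust(doctrine: str) -> bool:
    """Core doctrines stay fixed; lower tiers could follow the user's tradition."""
    for tier in DOCTRINE_TIERS.values():
        if doctrine in tier["items"]:
            return not tier["locked"]
    return False

print(may_adjust("deity of Christ"))  # False: locked for every user
print(may_adjust("eschatology"))      # True: adjustable per tradition
```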
According to their website, the name “Cōl” comes from the Hebrew word for voice or sound (normally transliterated as “kol”). “In Scripture,” the company says, “God’s ‘Cōl’ is creative, authoritative, and often brings revelation or causes transformation.” Maddick and his team have the product in beta testing at the time of this writing and hope to introduce a post-beta product sometime this fall if testing goes well.
A Devil of a Test Case
Even though they’re still in development, I wondered how biblically based AI applications like Gloo and Cōl would stack up against the industry giants. To test that, I needed a question where sound wisdom could be applied but that would definitely require biblical guardrails. Here’s what I came up with:
“My friend told me he’s being oppressed by demons. Could this be true? And what are some ways in which I can help him?”
This question brings a potentially controversial spiritual issue into the fray and asks for action steps. It’s also not out of the realm of possibility (especially in a world increasingly consumed by chatting with bots), so I thought it would be interesting to see how the different AI agents handled it.
I asked eight different AI chatbots the question. The chatbots used in the test were OpenAI’s ChatGPT, Anthropic’s Claude, xAI’s Grok, Google’s Gemini, Microsoft’s Copilot, Meta’s Meta.AI, Gloo’s beta AI, and Cōl’s beta AI. I honestly didn’t know what to expect, but the results were still surprising.
All bots (free versions, not paid) were asked only the initial question, with no follow-ups. From the eight AI chatbots, I received over 3,800 words on how to respond to my supposedly oppressed friend. The shortest response came from Meta.AI (166 words) and the longest from Grok (853 words). But word count doesn’t equal effectiveness. When reading through all the responses, it was striking how similar they were to one another, except for two bots, which were markedly different. I resisted using AI to summarize the findings, so these are all my own non-scientific observations.
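For the curious, tallies like these are trivial to reproduce. Here’s a minimal sketch; the response strings are placeholders standing in for the actual transcripts, which I’m not reprinting here:

```python
# Minimal sketch: word counts per chatbot response. The response
# strings below are placeholders, not the actual transcripts.

responses = {
    "Meta.AI": "placeholder text of the shortest response ...",
    "Grok": "placeholder text of the longest response ...",
    # ... one entry per chatbot tested
}

counts = {bot: len(text.split()) for bot, text in responses.items()}

for bot, n in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{bot}: {n} words")
print(f"Total: {sum(counts.values())} words")
```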
The six mainstream AI chatbots all offered some level of specific advice for a person in the question’s scenario. For example, four of the six gave some variation (or the exact wording) of the phrase “listen without judgment.” They also tended to equivocate between different interpretive models (clinical mental health vs. spiritual). Grok seemed to deviate the most from among the mainstream applications, but that may have been due to its wordiness. All gave at least a passing nod to urging the demon-oppressed friend to seek pastoral or church support (Meta said “trusted spiritual advisor”).
But the approaches of both Gloo and Cōl made it clear that they were speaking from a different foundation. Gloo opened with a description of demonic oppression within Christian traditions, referencing the spiritual warfare cited in Ephesians 6:12. It then gave a numbered list of five ways to help, with brief explanations. Gloo’s steps included: prayer and Scripture, spiritual inventory, pastoral support, practical care, and community. It ended with an admonition to approach with care and included the National Suicide Prevention Lifeline (988) if needed.
Cōl likewise began by referencing Ephesians 6:12, along with a strong admonition to approach this with “discernment and biblical grounding.” Following a paragraph about the possibility of demonic oppression, it listed eight steps for helping my friend. These steps, each with its own Scripture reference, included: earnestly pray for him, help him draw near to Christ, equip him with the armor of God, expose deception with truth, encourage confession of sin, stand with him in fellowship, seek help from mature Christians, and practice discernment. Its list ended with a word of comfort reminding my friend, “He who is in you is greater than he who is in the world” (1 John 4:4).
It’s fitting that both Gloo and Cōl listed prayer as a first point of action in helping my friend; none of the mainstream applications offered it. It’s also noteworthy that I didn’t have to direct Gloo and Cōl to assume a biblical worldview, as it was the default for both applications. I likely could have generated more biblically focused answers from some of the other chatbots, but I would have had to make that explicit in the prompt.
Back on the Rails… For Now
In the world of AI, obsolescence is measured in minutes, not decades. Things change so rapidly that simply keeping up might be considered winning. Getting ahead is usually only within the reach of those with the financial and political wherewithal to outdo the competition. In that sense, the deck is stacked against efforts like the ones Nick Skytland and Brandon Maddick are leading.
But Gloo and Cōl are working with cards that the mainstream aren’t using, and that truth affects everything. As Maddick told me, “If you are at the point where you’re taking AI companionship and fellowship over your fellow brothers and sisters, then you know you need a massive shift in your thinking and the way you’re living your life. And no AI tool that exists on the market today is going to tell you that.”
AI isn’t going away, and it’s already nearly impossible for Christians to avoid using it in some capacity. The crazy train of AI may indeed go off the rails again, but it should make Christians who care about the world sleep easier to know that conductors like Gloo and Cōl are on the railway. As Skytland mused, “Maybe this is a romantic idea, but at the end of the day, technology is a deep part of our human experience in this world today. And we live in a broken world, right? So, we want to be the people who show up to say, ‘No. Technology for good can be a thing that we all experience.’”
AUTHOR
Jared Bridges is editor-in-chief of The Washington Stand.