As companies continue to push the boundaries of artificial intelligence, predictions regarding the technology’s capabilities also evolve. Artificial general intelligence, or AI as intelligent as humans, was once thought to be decades away. Now, even the “godfather of AI,” Geoffrey Hinton, believes it may arrive in just five years.
As these timelines condense, a disconnect appears: While a growing number of people use AI daily for work or fun, many remain unaware of its full potential. Sudhanshu Kasewa, an adviser at the nonprofit research group 80,000 Hours, told Straight Arrow News that he sees a wide swing in people’s perception of how the technology could affect the human race.
“The biggest misconception is that [most people don’t realize] there’s a pretty big set of negative outcomes ranging from a few people losing their jobs, a bunch of misinformation and extinction,” he told SAN.
While most experts agree AI is likely to lead to a major workforce shakeup, some believe it could end civilization altogether. In April, the AI Futures Project published "AI 2027," a fictional case study that spells out this fear in detail. But researchers say there are steps humanity can take now to prevent the worst outcomes and usher in a new era for society.
Raising the alarms now
Experts who spoke with SAN stressed a sense of urgency in understanding AI’s effects. Aleksandra Przegalińska, a Harvard researcher specializing in human-AI interaction, noted that AI chatbots from just two years ago would appear primitive to most people today.
“If we were to look back to 2023 and talk to these early bots, we would think that they’re very limited,” she told SAN.
In 2022, ChatGPT struggled to solve basic math problems or write coherent paragraphs. Now, AI systems are solving International Mathematical Olympiad problems and earning nearly perfect scores on the SAT.
Adam Dorr from RethinkX, a technology-focused think tank, told SAN that if companies and researchers don’t pick up the pace, technology will leave humanity behind.
“We either solve this by design, or it gets solved without us by default. That’s a recipe for chaos and disaster,” he said. “These are the least capable these systems are ever going to be today. Tomorrow, they’re going to be more capable.”
As AI grows in power, the industry must increase oversight and hire people focused on AI safety, according to Josh Landes of the AI safety training organization BlueDot Impact. And that needs to happen quickly, he said.
“There’s just not enough people working on what we think is the most important problem of our lifetime, maybe of humanity overall,” Landes told SAN.
A system under stress
According to the Stanford Institute for Human-Centered Artificial Intelligence, in 2024, “AI-related incidents” – such as security issues, system failures and research misconduct – jumped 56.4% from the previous year. Since 2013, AI incidents have increased twentyfold. For example, in 2024, explicit AI-generated “deepfake” images of Taylor Swift “flooded” X despite the company’s policies banning manipulated media and nonconsensual nudity. Following the incident, hundreds of AI researchers and activists signed an open letter urging lawmakers to criminalize such images.
As these incidents increase, the number of people dedicated to preventing them remains low.
“The community of people thinking about AI safety numbers in the low tens of thousands,” Kasewa told SAN. “In comparison to that, there’s 100,000 people working as machine learning engineers and researchers … trying to build really, really powerful AI systems that could be quite dangerous.”
Investment patterns offer insight into companies’ priorities.
“We’re on track to see maybe like $500 billion in AI investments in this year alone,” Landes told SAN. “Maybe for every $100 that go into capabilities, maybe $1 is spent on safety. I think that ratio is maybe not so good.”
The regulatory framework also lags behind AI’s technological advancement. The U.S. still lacks a comprehensive federal law regulating AI.
“AI in the U.S. is currently less regulated than a taco truck, which sounds insane to me,” Kasewa told SAN.
The alignment challenge
Despite being trained by Meta to be honest, the Diplomacy-playing AI, CICERO, developed strategies that involved premeditated deception to win the game.
Alignment is at the heart of AI safety. In simple terms, alignment means ensuring that AI systems behave according to human values, goals and intentions.
An AI that becomes misaligned could produce unintended, even harmful, consequences. In 2010, two trading algorithms designed to maximize profits triggered a market collapse, wiping out nearly $1 trillion in market value.
Challenges with alignment are complicated by a fundamental problem: Today's most advanced AIs work in ways that sometimes even their creators don't comprehend. These systems are often called "black boxes," meaning researchers feed in a prompt and receive an answer without knowing how the model arrived at it.
“We don’t have transparency. Most of the models, unfortunately, are characterized by opacity,” Przegalińska told SAN. “We don’t understand what’s happening inside, and it’s very hard to achieve a goal of full transparency of a model.”
That opacity becomes more dangerous as AI’s capabilities increase.
“If AI develops certain research capabilities that are sort of not perceivable for us, we might not even understand what AI has discovered, and we won’t be able to make products out of it,” Przegalińska said. “There will be a sort of a parallel society of AI researchers and human researchers, and these will not intersect.”
Is it too late?

Governments have begun responding to such concerns in recent years.
In the United Kingdom, lawmakers created the AI Security Institute, the first state-backed organization dedicated to ensuring advanced AI is safe and helpful.
“I’m quite impressed by the UK AI Security Institute,” Kasewa told SAN. “I don’t know of any other government body anywhere in the world that actually has that kind of mandate.”
Still, broader government comprehension lags behind AI’s reach, contributing to what Landes called a consistent gap in understanding.
“I do think in many parts of the government, people are not thinking about AI being like the super big cross-cutting thing that it’s very likely going to be,” Landes told SAN.
Industry’s safety deficit
As AI companies race to be the first to develop AI as capable as a human, another misalignment is taking shape: With little external pressure to prioritize safety, companies can choose to pursue whatever research generates the most revenue.
“This is the peril of being in arms race conditions … you throw caution to the wind,” Dorr, from RethinkX, told SAN.
This creates what Kasewa describes as a fundamental coordination concern.
“There’s just such few local incentives pushing in these directions,” he said. “No one is noticing that their own actions don’t align with those global incentives.”
Even Hinton, the “godfather of AI,” has criticized the industry’s attempt to limit regulation.
What needs to change
AI safety challenges will require a multipronged approach, from technical research to international policy.
On the technical side, researchers told SAN that AI needs to undergo more rigorous testing before it is deployed.
“One of the biggest priorities is adding capacity. Making sure that the best people working on this are not working only on the capability side but also on the safety side,” Landes told SAN.
That capacity could grow if universities, governments and organizations created clear career pathways into AI safety work. More funding for safety and alignment training could also boost the number of people working in the field.
The global nature of AI development calls for international cooperation, but only a handful of countries lead in developing and researching AI. Kasewa said those countries need to reach safety agreements with one another.
“We need leaders of these countries to come together and agree on all kinds of safety,” he said.
Those changes should be made now — before companies reach artificial general intelligence, according to Kasewa.
“Even if it’s happening in the next 15 to 20 years, that’s still a really, really small time. The window for trying to do useful stuff here is going to rapidly shrink,” he told SAN.
The choice ahead

AI’s path forward depends on choices made now.
“Let’s assume AGI, we get to it in 2030. That’s 1,600 days right now. You have that many days. Figure out how to make it count,” Kasewa told SAN.
Despite the challenges, experts like Kasewa remain hopeful.
“I think catastrophe is not the default outcome,” he told SAN. “I think this is a solvable problem. I think if we have enough people working on this, if we have enough spending on this, we can make this work.”
Author: Brent Jabbour