
'I'm pretty terrified': Ex-OpenAI researcher quits, issues dire warning

Story highlights

World news: Ex-OpenAI researcher criticised AI labs for engaging in what he called a risky race toward AGI, arguing that the lack of alignment solutions increases the likelihood of unintended consequences

Former OpenAI researcher Steven Adler has outlined his concerns about the rapid advancement of artificial intelligence, stating that the industry's pursuit of artificial general intelligence (AGI) poses significant risks.

In a series of posts made on X, Adler, who has worked in AI safety for four years, explained his decision to leave OpenAI in November.

"Honestly, I'm pretty terrified by the pace of AI development these days," Adler wrote. "When I think about where I'll raise a future family or how much to save for retirement, I can't help but wonder: Will humanity even make it to that point?"


He criticised AI labs for engaging in what he called a risky race toward AGI, arguing that the lack of alignment solutions increases the likelihood of unintended consequences. According to Adler, no research lab has successfully addressed the issue of AI alignment, and the accelerating pace of development reduces the chances of finding a solution.

"An AGI race is a very risky gamble, with a huge downside. No lab has a solution to AI alignment today. And the faster we race, the less likely it is that anyone finds one in time," he stated.

Adler also noted the competitive pressure among AI companies, which he believes forces labs to cut corners to stay ahead. He called for greater transparency and regulatory measures to prevent unsafe practices.

"Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes everyone to speed up," he wrote. "I hope labs can be candid about the real safety regulations needed to stop this."

For now, Adler is taking a break but remains interested in exploring AI safety topics, including control methods, scheming detection, and safety cases.

Meanwhile, OpenAI has lost its top position in Apple's App Store free-app rankings, with DeepSeek overtaking it as the leading AI application.

About the Author

Tarun Mishra

Tarun Mishra is a Sub-Editor at WION. He has worked with leading outlets, covering business, global affairs, technology, space exploration and culture.