By Selwyn Duke
AI Researcher Quits; Says He’s “Pretty Terrified”


It’s easy to call warnings about artificial intelligence (AI) “sensationalism.” (So many things today are, after all.) In fact, when I told a “terrified” ex-AI researcher on X yesterday that I was writing a story about the alarm he sounded and had questions for him, another respondent — an author, it turns out — wrote of me, “Typical journo… If it bleeds, it leads.” Perhaps there’s no worse insult than “typical journo,” a status below used-car salesman and personal-injury lawyer and just above politician. (Maybe.) But I think my psyche will survive. Will we, however, survive the coming AI revolution?

No, that question isn’t sensationalistic, at least not according to the man I approached, ex-OpenAI safety researcher Steven Adler. Explaining Monday why he quit his OpenAI position in November, he posted the following quite alarming message on X:

Honestly I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point? [Tweet below.]

The “AGI” acronym Adler uses stands for “artificial general intelligence.” This is, IBM’s website explains,

a hypothetical stage in the development of machine learning (ML) in which an artificial intelligence (AI) system can match or exceed the cognitive abilities of human beings across any task. It represents the fundamental, abstract goal of AI development: the artificial replication of human intelligence in a machine or software.

In other words, this sounds much like the thinking machines so often portrayed, ominously, in science fiction. “HAL” from 2001: A Space Odyssey comes to mind.

Just Science Fiction?

As for the “alignment” Adler spoke of, that concerns preventing the ominousness. As TechTarget informs:

AI alignment is a field of AI safety research that aims to ensure artificial intelligence systems achieve desired outcomes. AI alignment research keeps AI systems working for humans, no matter how powerful the technology becomes.

But how effective will this be? As Adler said, no “lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.” He’s not the first expert to warn of an AI-authored apocalypse, either.

For example, legendary late physicist Stephen Hawking sounded an alarm in 2014. “The development of full artificial intelligence,” he bluntly told the BBC, “could spell the end of the human race.”

Echoing this is Stuart Russell, a professor of computer science at the University of California, Berkeley. He told the Financial Times, Newsweek reported Tuesday, that the

AGI race is a race towards the edge of a cliff…. Even the CEOs who are engaging in the race have stated that whoever wins has a significant probability of causing human extinction in the process because we have no idea how to control systems more intelligent than ourselves.

Then there were the comments made by former Google chief business officer Mo Gawdat in 2021. He likened “this digital [AGI] creature to an ‘alien being, endowed with superpowers,’ which has already arrived on Earth in larval form,” related Joe Allen at Substack at the time.

Finishing the thought, Gawdat said unabashedly, “The reality is — we’re creating God.”

Deity — or Devil?

Creating God is impossible, of course. Yet Gawdat isn’t alone in harboring these pseudo-deific AI visions. For example, Elon Musk revealed in 2023 that Google co-founder Larry Page aspires to create an AGI “digital god.”

Also on Page’s page is former Google engineer Anthony Levandowski. As commentator Matt Rowe related in 2023, Levandowski predicts that AGI will

have the entire internet at its disposal, much like our own nervous system, and all the phones, tablets, PCs, and other connected devices would, in effect, be its sense organs. The AI will see and hear everything and be everywhere at all times. Levandowski argues that the only rational word to describe that is “god” — and he believes that humans will basically have to worship and pray to AI to influence its decision-making.

Impossible — or Probable?

On the other side, however, are those insisting AI will always just be a “correlation machine.” It will only ever be as good as the programming humans provide and ever subject to having its plug pulled. But is this short-sighted?

Consider: If something occurs in the physical world, we know it’s possible. Moreover, if we can observe it long enough, we can come to understand it. And can we not then also, eventually, replicate it?

Alright, well, humans — these beings with intellect and free will — “occur” in our world. So as technology advances and understanding of the brain increases, could we eventually create an artificial entity that also possesses intellect and free will? I suspect so.

This theory could be tenable, too, to both secularists and theists. For the former, the implied belief is that man is a mere organic robot, so many pounds of chemicals and water. If so, the idea that we could create inorganic robots mirroring us shouldn’t seem far-fetched.

As for theists, remember that we’re not talking about replicating what cannot be observed in the material world: the soul. At issue is only replicating what can be. So unless one believes that the “mind,” which philosophers often consider a non-material entity separate from the brain, is required for intellect and free will, there may be little if any reason to doubt what’s at issue here: the sentient-machine thesis.

Inevitable?

Returning to Adler, I don’t know if he believes AGI can become “sentient.” He didn’t respond to my questions by publication time. (In fairness to him, he said on X that he was taking a break and could be on vacation.) But he apparently does worry that it can go rogue. And while he emphasizes the need for effective regulations, will this realistically happen? As Adler puts it, a given lab may truly want to develop AGI “responsibly.” Yet, “others can still cut corners to catch up, maybe disastrously.”

Moreover, it’s as with nuclear weapons. The United States could’ve unilaterally refused to develop them. But this wouldn’t have stopped the Soviets and, later, the Chinese, from doing so. And, really, having the soulless Beijing communists as AGI’s sole owners might be as ominous as the machine “owning” itself. So what can we conclude? Is there hope?

Well, sensationalistic or not, perhaps it’s only God who can save us from artificial super-intelligence enabled by human stupidity.

                    This article was originally published at The New American.

Selwyn Duke (http://www.selwynduke.com) is a writer, columnist and public speaker whose work has been published widely online and in print, on both the local and national levels. He has been featured on the Rush Limbaugh Show and has been a featured guest more than 50 times on the award-winning Michael Savage Show. His work has appeared in Pat Buchanan’s magazine The American Conservative and at WorldNetDaily.com, and he writes regularly for The New American.


Source: https://www.selwynduke.com/2025/01/ai-researcher-quits-says-hes-pretty-terrified.html

