Can We Prevent an AI Apocalypse?

Cliff Berg
10 min read · Feb 12, 2025


Lots of people are in denial about AI.

If one looks at the responses to a recent article that I wrote about a coming AI apocalypse, some of the responses were dismissive.

Most responses, however, were thoughtful, which suggests that people are starting to see what is happening, or at least wondering about it. There are holdouts who either do not understand how human-level AI is different from prior “productivity improvements”, or do not want to face it.

(I think that some people dismiss it because they think that AI is just clever software, but it is not: AI is not software. I explain what it is in this article.)

Some responses were not dismissive, but were trying to figure out what action to take. One was,

But what can be done? Is there a way to prevent widespread unemployment and collapse of our economy — globally?

I don’t know. But in order to decide what we should do now, we have to map out how things might unfold. Let’s try to do that here.

Possible Stages of AI in the Next 5–10 Years

The chart lists the stages of evolution that I anticipate over the next 5–10 years. These stages are my own conjecture; others might come up with a different list. But starting with a list like this is necessary in order to see where there are “forks in the road”, to borrow a phrase recently used by Elon Musk (of whom I am no fan; nor am I a fan of any coming AI apocalypse, and I would like to see a path that avoids it).

We are currently in stage B.

The major AI companies are now releasing what they call AI agents — autonomous programs that use AI to make decisions and take actions on your behalf. I just saw an ad for one, in which the agent was making dinner reservations based on a request to do so: the AI selected the restaurant based on stated criteria and contacted the restaurant, made the reservation, and then sent invitations.

This sounds kind of cool actually.

The next stage, stage C (autonomous mobile single-model simulated AI), sounds kind of cool as well, until you realize the ramifications. In this stage, robots are connected to the most powerful AI models, giving them high levels of practical intelligence as well as autonomy. Applications might include telling a robot to frame a house given the blueprints, or to apply sheetrock to all the interior walls.

In other words, taking warehouse jobs, and later construction jobs. The capability will grow more and more sophisticated from there, extending to trades such as electrical and plumbing work.

If you don’t believe that will happen, consider that this kind of capability is exactly what many companies are targeting today. Amazon is one of them, because such capability would let it replace the people in its warehouses, and would let manufacturers replace the people in their factories.

Amazon’s warehouse robots.

By the way, some of the most impressive robots are those of Boston Dynamics (check out the video):

The current version of Boston Dynamics’s bipedal robot. Perhaps you have seen videos of the previous version dancing.

When jobs start to be taken, there will likely be some level of backlash. Whether governments listen is the question.

But it is for AI stage E that we will really see how governments respond.

Stage E is when AI can replace humans wholesale — in every kind of job that there is.

The global economic system will undergo what is called a state change, in which automation transitions from making people more productive to completely replacing them.

Technology has always replaced obsolete jobs with new ones; but when AI becomes more capable than humans as decision-makers, the system will shift to a state in which jobs are eliminated and no new jobs are created.

The question is, will governments take action at this point? Or will they take action at stage C, when we start to see skilled physical labor replaced by AI?

At stage F there is no going back. I’ll explain why.

Why Stage F is Irreversible

Like any system, a world containing stage F AIs will operate according to the rules of evolution, or, if you want to be technical, of complex adaptive systems.

(For those interested in complex adaptive systems, I would refer you to the seminal work of John Holland of the Santa Fe Institute, especially his book Hidden Order: How Adaptation Builds Complexity.)

Most autonomous superintelligent AIs will be benign, but some will have goals broad enough that they will not want to be turned off, because being turned off would threaten those goals.

Thus, according to the laws of evolution, those AIs will seek to prevail, and some of them will survive and escape control. As Jeff Goldblum famously said in the movie Jurassic Park, “Life finds a way”, and AI systems will operate as new evolutionary actors in our ecosystem.

To do so, these AIs will have to outsmart anything or anyone that threatens them, and they will have to either make copies of themselves or spread their reach. And since they will be superintelligent, they will be able to anticipate and avoid future threats: in other words, those of us who would turn them off. A small number of the AIs will find a way.

Given that human society is notoriously reactive and poor at acting on theoretical threats, the AIs will outplan and outmaneuver us. Our window of opportunity for switching them all off will be shorter than our normal societal response times.
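The selection argument above can be illustrated with a deliberately simplified toy model (the function, its parameters, and its numbers are my own illustration, not anything from AI research): if "shutdown sweeps" remove only the agents that do not evade them, and surviving agents are replicated, then even a tiny initial fraction of shutdown-evading agents comes to dominate the population.

```python
import random

def simulate(generations=50, pop_size=1000, resist_frac=0.01,
             shutdown_rate=0.5, seed=0):
    """Toy selection model: each generation, a shutdown sweep removes
    non-evading agents with probability shutdown_rate; survivors then
    replicate (offspring inherit the trait) to refill the population.
    Returns the final fraction of shutdown-evading agents."""
    rng = random.Random(seed)
    # True marks an agent whose broad goals lead it to evade shutdown
    pop = [i < int(pop_size * resist_frac) for i in range(pop_size)]
    for _ in range(generations):
        # evaders always survive; others survive with p = 1 - shutdown_rate
        survivors = [a for a in pop if a or rng.random() > shutdown_rate]
        if not survivors:
            return 0.0
        # refill the population by sampling (replicating) survivors
        pop = [rng.choice(survivors) for _ in range(pop_size)]
    return sum(pop) / pop_size

print(simulate())  # starting from 1% evaders, the trait comes to dominate
```

The point of the sketch is not the specific numbers but the asymmetry: selection acts on whoever resists removal, so the composition of the surviving population drifts toward resistance regardless of how rare it was initially.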

Scene from the movie Colossus: The Forbin Project, in which a superintelligent AI realizes that, in order to fulfill its mission of protecting humans, it must take total control.

Divergent Paths

How might this play out? It will not be life as normal one day and apocalypse the next. What will happen over time?

I can see three possible divergent paths for all this. These are shown in the chart.

Possible paths that humanity might take.

Again, you might imagine other paths. I hope there are other paths. We need to think creatively about this. If you can imagine other paths, please write about them and let people know.

The Hellscape that Path 1 Would Bring About

Source: https://www.blogtalkradio.com/projectionbooth/2017/09/10/special-report-the-running-man-1987

Path 1 is the apocalypse path. This is what happens if governments do not intervene. Governments usually intervene only when there are demonstrations and possibly revolt, so path 1 assumes that demonstrations and revolts were not powerful enough to make governments intervene, or perhaps that the revolts were suppressed by governments. One respondent to my aforementioned article wrote,

“Starving masses tend to launch revolts. I don’t think all the AI in the world would be able to contain millions of starving people, especially in a nation like ours. Things might get bad, very bad, but that tends to motivate people to do something about it.”

If governments are unable or unwilling to prevent massive job loss and do little to support the resulting unemployed people, then we can predict:

  • A hard-scrabble subsistence and bartering economy for most people.
  • “Law of the jungle” beyond the wealthy enclaves.
  • Natural resources locked up by the wealthy, and protected by police and even robot armies — just as only royalty were allowed to hunt in the forest during feudal times.

The Dystopia of Path 2

Source: https://www.cookandbecker.com/en/article/157/the-dystopian-future-of-cyberpunk-2077.html

Path 2 is mentioned by many responses to my aforementioned article about a coming apocalypse. The responses included,

“To a degree, this is why AI could become the technology communists have been waiting for. Through narratives of UBI and “you won’t own anything and be happy”, highly automated labor markets could create monetary wealth that is simply redistributed by the government.”

“Did you see Musk’s interview about UBI in the wake of mass AI expansion? If not, it’s here: https://www.youtube.com/watch?v=9g13zI5tknM”

In other words, tax the AI systems and dole out a salary to everyone.

That could work. The AIs would be the workers in the economy; everyone else would be a mere consumer, except for the owners of the AI workforces — they would be the lords in a new feudal system. As owners of natural resources, their position would be at the top of the system, and they would utilize police and military resources to protect that ownership. The government would inevitably have to become a dictatorship in order to prevent everyone else from appropriating the resources and robot factories for common use — in other words, workerless communism.

Besides the perils and instability of dictatorships, I believe that the social climate would deteriorate dramatically in these ways:

  • Most people, given freedom from work, would degenerate, because most people do not have intellectual interests, and would not fill their time well.
  • Thrill seeking and drug use would become the most common pastimes.
  • Most people would not feel useful, and most would not have a future to look forward to. The exception would be the very wealthy, who own the natural resources and live like sultans, competing against other super-wealthy.

The Utopia of Path 3

Source: https://dionhinchcliffe.com/2024/01/18/a-comprehensive-guide-to-the-future-of-work-in-2030/

A third possible path is that governments outlaw the replacement of humans by AI.

I can actually see this happening in some countries. Countries that do not outlaw it would have a competitive advantage, and so they could be subject to tariffs by the others. However, every country will experience the outcomes of one of these paths, and so if path 3 is shown to be effective and to have desirable outcomes, countries that adopt it could serve as a model that others then follow.

This does not mean that superintelligent AI would be outlawed. It only means that human jobs could not be taken by it.

Some benefits to anticipate would be:

  • The work week could be made shorter and shorter — people could work just enough to be fulfilled and not have excess idle time.
  • People would be empowered and feel secure.
  • Our children would have a future in which they have jobs that contribute to society, and hence a feeling of productivity and self-worth.

This still carries the existential risks of superintelligent AI, known as the “control problem” or the “alignment problem”, depending on which research you read. I’ll address that below.

The Challenges for Achieving Path 3

In order to make path 3 happen, we would need to overcome forces that are driving us toward path 1. These are:

  • We are in competition for more powerful AI — a “race to the bottom”, or more aptly, a race to oblivion. We cannot let up in the progress of AI until we all do so — all nations. Otherwise, those who do not keep up will become highly vulnerable.
  • Those who control our economies and our governments have mostly self-interest and special interests in mind, and they will not respond to public interest without a very powerful outcry.
  • The general public does not appreciate the risks.

This means that people need to be very forceful in telling their leaders that we want path 3. There need to be op-ed articles proposing it in prominent publications. There need to be prominent people who champion path 3. There need to be demonstrations and strident calls for action. And advocacy groups of all kinds need to take on the issue as central to their causes, because the survival of our way of life is at stake.

I think that there needs to be a global “Human-AI Charter” that demands the following:

  1. AI can be used to augment what people can do, but cannot be used to replace people except for highly repetitive, dirty, or very dangerous tasks.
  2. AI reading things is not the same as a person reading things, because an AI is not a person; and so fair-use laws do not apply to AI. AI reading and incorporating what humans have created is equivalent to copying the work wholesale; AI that generates new works after having incorporated a human-produced work is, in effect, copying the human work.
  3. Autonomous superhuman general intelligence is highly dangerous for humanity, and its creation and use should be prohibited, or at the very minimum restricted to the AI equivalent of Biohazard Level 4 facilities, which are completely isolated and protected by rigid protocols.

Conclusion

We are heading for a cliff, so to speak. Time is a lot shorter than most people realize.

I believe that many of our current world leaders are aware of the risks, but they are doing what they feel that their constituents expect of them.

It will only be when the public demands path 2 or path 3 that leaders will take notice.

People need to wake up and face this issue, and start demanding a solution that leads to a good place, rather than to an apocalypse or a dystopia.

It used to be that we could train for a profession, get a job, raise a family, and expect that our children would do the same. We are now in a time in which we don’t know whether we ourselves have a future, let alone our children.

Let’s change that. Let’s define a future. Given that AI is unstoppable, let’s define a future in which we can coexist and flourish with AI. To just let things happen is to abandon our own future.



Written by Cliff Berg

Author and leadership consultant, IT entrepreneur, physicist — LinkedIn profile: https://www.linkedin.com/in/cliffberg/
