AGI Is Upon Us At Last — and What That Will Mean

Cliff Berg
9 min read · Nov 24, 2023


Image source: https://www.blogtalkradio.com/projectionbooth/2017/09/10/special-report-the-running-man-1987

A lot of people underestimate AI. Not everyone, though. It depends on how you think. If your thought processes use what Whole Foods founder John Mackey calls an analytically intelligent approach, you will conclude that AI is nothing to worry about. But if your thought processes use what Mackey calls a systems intelligence approach [ref 1], you will be extremely concerned about AI.

[Ref 1: Mackey, John; Sisodia, Rajendra. Conscious Capitalism, With a New Preface by the Authors: Liberating the Heroic Spirit of Business (p. 184). Harvard Business Review Press. Kindle Edition.]

Over the past year I have listened to people, and to podcasts by people, who have used ChatGPT (the more powerful GPT-4 version) extensively. They have drawn conclusions about what it can do, usually along the lines of:

This is a new capability. We will have to incorporate it into our (schools, businesses, etc). It will make people more effective, but it won’t replace them — at least not all of them.

The critical flaw in this highly analytical thought process is that it presumes that AI has reached a plateau. Consider the classic “S curve” of innovation:

Source: https://www.futurebusinesstech.com/blog/the-s-curve-pattern-of-innovation-a-full-analysis

This curve shows how innovation tends to progress over time. Each technology first has a steep acceleration of improvement, but that rate slows, and eventually the improvement plateaus. At some point there is a breakthrough and the technology is either radically changed or something entirely new replaces it. There is then another steep period of improvement followed by a plateau.

The analytical view of today’s AI assumes that we are now approaching a plateau: that ChatGPT represents a curve that will slow, leaving us in a “new normal”, and that the next breakthrough is far off and can be dealt with when it comes. Thus, analytical thinkers place today’s AI on the dashed blue line below:

In contrast, a systems view of the progress of AI will look far back, at where it was decades ago, and where it is today, and see the long term trend. The systems view will perceive the trend shown by the dashed red line below:
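The difference between the two views can be caricatured with a toy model (my illustration, not from the article; all numbers are made up): if overall capability is really a staircase of stacked S curves, then looking only at the tail of the current curve shows growth slowing, while looking across decades shows each plateau followed by a bigger jump.

```python
import math

def s_curve(t, height, midpoint, rate=1.5):
    """One logistic S curve rising from 0 to `height`, steepest at `midpoint`."""
    return height / (1.0 + math.exp(-rate * (t - midpoint)))

def capability(t):
    """Three stacked S curves; each breakthrough is bigger than the last."""
    return sum(s_curve(t, height, midpoint)
               for height, midpoint in [(1.0, 5.0), (3.0, 15.0), (9.0, 25.0)])

# "Analytical" view: examine only the tail of the current curve.
recent = [capability(t) for t in range(16, 21)]
recent_deltas = [b - a for a, b in zip(recent, recent[1:])]
print("recent year-over-year gains:", [round(d, 3) for d in recent_deltas])
# Gains shrink each year -> looks like a plateau.

# "Systems" view: examine the long-run trend across decades.
decades = [capability(t) for t in (0, 10, 20, 30)]
print("capability each decade:", [round(c, 2) for c in decades])
# Each decade's jump is larger than the last -> looks like acceleration.
```

Both views summarize the same underlying data; they differ only in the window they examine, which is the essence of the analytical-versus-systems distinction above.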

This trend is very worrisome. It projects an apocalyptic future that is not far off. This is what has some people worried.

Today Reuters published an article claiming that,

“Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.”

According to the article, the new OpenAI technology is called “Q*”, and “Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI)”.

Altman knew of this technology. The day before his four-day ouster began, he said at the Asia-Pacific Economic Cooperation summit,

“Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime”.

So something is up. They have something that is more powerful than ChatGPT. It will be a new S curve, not a continuation of the one that ChatGPT started.

And all the people who studied what ChatGPT can do over the past year will now have to reassess.

And I will not be surprised if yet another new S curve is not far behind. And another.

Because the red dashed line is the real trend.

Here’s What It Will Mean — This Is Not Like Other Tech

In the past, new technology empowered people. Jobs were created instead of being eliminated. But this time it will not be like that. We are not there yet, but at some point soon, AI will be able to replace human thinking for most if not all things.

When that happens, people who have power will not need laborers or servants anymore.

In the past, the powers-that-be relied on a working class to do the work: the engineering, running the factories, serving the food, discovering new medicines, and so on. The rich needed everyone else — the middle class and the poor.

This is what is known in physics as a “state change”, also called a “phase change”: the interactions within a system change such that the overall behavior of the system suddenly changes. For example, water freezes when its molecules slow enough that they suddenly arrange themselves into a rigid structure. At that point, the overall behavior of the water changes from fluid to solid.

To this point, advancing technology has empowered humans. At some point, the technology will cross a threshold, and humans will become irrelevant. There will be a state change: instead of the past repeating itself to create more jobs, suddenly jobs will rapidly decline.

How This Will Play Out

Companies will begin to replace humans with AI-driven labor. Instead of empowering humans to do more and bigger things, tech will wholesale replace humans, and the new opportunities will be filled by still more machines — not by empowered humans.

Construction workers will be replaced by robots. But so will doctors. And lawyers. And engineers. And even scientists. Restaurants will be staffed by robots. Even therapists will be machines: today lots of people find conversational AI systems to be just like talking to people. People will get used to the idea, just as they got used to meeting online.

No profession will be safe. Unemployment will soar, and the global economy will collapse: the restaurants will be empty because no one can afford to eat out when they have no job. At some point unemployment will reach 50%, and at that time the world’s wealthy will become nervous about revolt.

The government will have to become a massive handout system. But anyone who has relied on US unemployment insurance or Social Security or the British National Health Service knows what that looks like. In “democratic” countries, the government is actually controlled by special interests. Our votes are always for those who are presented as choices to vote for. The choices are curated by who gets funding to campaign for votes.

Since the world’s wealthy either own everything or have a controlling interest in all of the world’s resources, they will seek to shield themselves from the rest of humanity. They will build protected enclaves. They will enlarge police forces to protect what they call “our property”. Police forces will increasingly look like armies, with military-like capabilities. People’s AR-15s will be no match for Bradleys, AI-based smart weapons, and massive surveillance. The idea that a population can rise up against the government is a fantasy today: just look what happened at Tiananmen Square or the Arab Spring, or to Alexei Navalny in “democratic” Russia.

The world will begin to look more and more like the one depicted in the 1987 sci-fi movie The Running Man: totalitarianism, economic collapse, and rich people in enclaves, with a totalitarian government serving their interests and theirs alone.

What Should We Do?

In reality, I don’t know that there is much that we can do. Solving this would require unprecedented collective action at the global scale. The fact that the US, EU, and China are interested in the issue is a positive sign, but action is what is needed. Of course, Russia views the matter in purely competitive terms, but Russia will ultimately follow China’s lead, and it is currently living beyond its means, making its decline and irrelevance inevitable.

Here is what it would take to prevent the destruction of jobs by AI:

  • The US, EU, and China would have to create a durable accord on AI safety, and implement aggressive controls that they are serious about adhering to. In particular, they would have to subordinate the military advantages of AI to concerns about the future. (Good luck with that.)

But if we assume that job destruction occurs, then here is what is required to prevent the apocalyptic “Running Man” follow-on scenario that I have described:

  • Natural resources of all kinds would have to be seized by governments, so that private interests no longer control them. That makes sense anyway: why should special interests own things like mineral deposits? Private ownership of natural resources originates from a time when those resources seemed unlimited.
  • Governments would have to change their systems in ways that amplify the interests of citizens over wealthy special interests. For example, in the US, treating corporations as “people” (e.g. via the Citizens United decision) would have to be disallowed by law. Also, money would have to be removed from elections. Elections should be about votes, not ad spending, but that’s a hard one for so many reasons, especially given the increasing role of social media and how easily it is manipulated by special interests. Outside the “Western” world, I don’t know enough about the Chinese Communist Party to have a confident opinion on what would be needed in their case, but it would have to prioritize taking care of its people over protecting the elites. Again, good luck.

I think that a lot of people are worrying about this, in the undercurrents of their awareness. It might explain part of the general unease that people feel nowadays — an unease that has no obvious explanation from among the usual sets of things. We see trends like where AI is going, and we feel helpless.

People generally feel like the world of the past, which they cherished, is slipping away, with an unpredictable and scary future. Technology is the major factor in that change, which has been accelerating. It feels like being on a train that is headed for oblivion.

Our leaders do not really understand the problem either, because we tend to elect business people and lawyers — people “like us” — and technology is largely beyond their grasp. They frankly are in over their heads. And many of the “analytical” thinkers, to use John Mackey’s term, who think generative AI is just another benign trend, are in positions of influence, including being venture capitalists who fund these companies and expect a quick financial return. And they are among the ones who will be in the protected enclaves anyway.

It is tragic, because the future could have been so incredible. Imagine all diseases cured. The elimination of poverty. Plentiful energy that does not produce CO2 or pollute the air. Flying cars. Travel between the planets. These things are all coming, along with the will to limit population growth and all the pollution and environmental effects that result from too many people. We could have had a world of leisure, excitement, and security. But alas, it is not to be.

All we can really do is make AI safety a voting priority, and hope for the best.

Although, in closing, I would like to share that I sometimes wonder about the incredible improbability of being an apex life form, from among the almost uncountable life forms, and alive at this watershed time — at literally the endpoint of civilization and possibly the end of humanity. Perhaps that improbability is a hint that this situation that we find ourselves in is not what it seems.


Written by Cliff Berg

Author and leadership consultant, IT entrepreneur, physicist — LinkedIn profile: https://www.linkedin.com/in/cliffberg/
