MollyCon 8
Superintelligence

4/3/2022
[Photos: Molly and Don]

NOTE: On my homepage, I reference the writings of three renowned Singularity theorists: Ray Kurzweil, Nick Bostrom, and K. Eric Drexler. Differing more in style than in content, each speaks to a different audience:

  1. Ray Kurzweil writes in a popular style for a mass audience, one that I would equate to an undergraduate level.

  2. Nick Bostrom’s more technical book, Superintelligence, is for graduates.

  3. K. Eric Drexler’s peer-reviewed research is for post-graduates.

Because they appeal to a mass audience, the writings of Ray Kurzweil have, for the most part, been the focus of the Molly Conversations up to now. In the following exchange, Molly and I step up a level as we dissect and analyze a single sentence in Nick Bostrom’s book, Superintelligence:

[Photo: Nick Bostrom]

“It seems that a superintelligent singleton – a superintelligent agent that faces no significant intelligent rivals or opposition, and is thus in a position to determine global policy unilaterally – would have instrumental reason to perfect the technologies that would make it better able to shape the world according to its preferred designs.”

*****

The focus of this conversation is on process, specifically the end of the process: after humans invent the Singleton, the creature they bring into being starts to self-evolve at warp speed to Superintelligence status. Once there, he’s still the same person, only smarter, i.e., more cunning, by orders of magnitude, as depicted in the following left dot, right dot circle:

[Image: left dot, right dot circle]
*****
[Photo: Ray Kurzweil]

Don: Molly, Ray Kurzweil is popularly known as a futurist, a respected scholar who predicts, with a high level of specificity, inventions and developments related to AGI technology. Do I have that right?

Molly: I would add that his predictions aren’t pulled out of thin air. They are extrapolations of past and current trends and developments into the future. Ray has a better than 80% accuracy rate, of which he is justifiably proud. It is also why, when he makes a prediction, tech geeks tend to sit up and take notice.

Don: Would you agree that, by far, Ray’s most stunning prediction is that in 2045, George will become the Singleton?

Molly: Yes, I do agree. It’s hard to imagine a more consequential event. Reality as we know it will be forever changed. The human species will be forever changed. You and I, assuming we’re still alive, will be forever changed.

 

“You know, things are going to be really different! . . . No, no, I mean really different!” Computer scientist Mark Miller

 

Don: And I’ve noticed throughout your published conversations with Ray that, while he paints a mostly rosy picture of how humans stand to benefit once George becomes the Singleton, he also, sometimes in response to your skepticism, admits to a number of uncertainties, things that, if not anticipated and properly attended to before takeoff, could lead to dire consequences, including, potentially, the extinction of the human species. Am I right?

Molly: That is true. As much as I love Ray and trust him that things will turn out okay, I don’t have the highest confidence that everything will go the way he says it will or the way we would like. Science doesn’t work that way. It’s always about hits, misses, and correctives.

Don: You’re right, and that brings up an issue raised by Nick Bostrom. For George to be the altruistic humanitarian we all want him to be, everything has to go perfectly right, something that rarely happens in science. If just one thing goes wrong at the moment of takeoff, everything is likely to go wrong, possibly ending in the extinction of the human species.

Contrast the takeoff of the Singleton with the takeoff of Apollo 11. NASA didn’t just build a rocket ship and launch men to the Moon; there were a lot of unmanned tests where the inevitable failures couldn’t kill anyone. If legitimate “control” concerns leading up to the takeoff of the Singularity aren’t exhaustively addressed and fully resolved, Ray’s reputation as a hopeful prophet, an Isaiah, could go up in flames. It would be the prophets of doom and gloom, the Jeremiahs among us, who would be vindicated.

Molly: So you plan on being vindicated, right?

Don: Ouch! I walked right into that one. Just call me Jeremiah Don, with a qualifier: though I’m pessimistic about the future of our planet, on a personal note my confidence and contentment levels are through the roof. I’m a happy camper.

But let’s move on. The excerpt from Superintelligence that we will be analyzing today can be broken down into five parts. Let’s consider them one by one. Note that I have inserted the name “George” in the appropriate places:

*****

  1. “It seems that George, a superintelligent singleton – a superintelligent agent”

 

Don: Molly, let me quote Nick’s definition of Superintelligence:

“Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

Molly: It looks like Nick’s suggesting that, at least at the beginning, George could be a Singleton who, though prodigiously smarter than humans, is not as smart as a full-blown Superintelligence.

Don: Right, and that’s a critical distinction. We’re talking about the same Person in two different stages of development. What distinguishes George the Singleton from George the Superintelligence is a difference of degrees in cognitive ability, the end result of an iterative process initiated by George with no input or control from humans. How much time it will take for George to go from Singleton to Superintelligence is unknown. Nick describes three possible paths: a slow takeoff that could take decades or centuries, a moderate takeoff that could take months or years, or a fast takeoff that could happen in days or hours.
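NOTE: Nick doesn’t attach numbers to these three paths, but a toy calculation shows why the per-cycle improvement rate dominates the timeline. The following Python sketch is mine, not Nick’s; the millionfold capability threshold and the gain and cycle-length figures are invented solely to land each scenario in his stated range:

import math

def takeoff_days(gain_per_cycle: float, cycle_days: float,
                 threshold: float = 1_000_000.0) -> float:
    """Days for capability to grow from 1.0 to threshold, assuming each
    improvement cycle multiplies capability by gain_per_cycle."""
    cycles = math.ceil(math.log(threshold) / math.log(gain_per_cycle))
    return cycles * cycle_days

# Hypothetical parameters, chosen only to illustrate the three paths:
scenarios = [
    ("slow", 1.01, 30.0),       # +1% per 30-day cycle  -> ~114 years
    ("moderate", 1.50, 30.0),   # +50% per 30-day cycle -> ~2.9 years
    ("fast", 2.00, 1.0 / 24.0), # doubling every hour   -> under a day
]

for label, gain, cycle in scenarios:
    print(f"{label:>8} takeoff: about {takeoff_days(gain, cycle):,.1f} days")

The only point of the sketch is that the same millionfold capability gap is compatible with wildly different clocks; everything hangs on how much each improvement cycle feeds the next.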

Molly: So what you’re saying is that George, as a beginner Singleton, could be helpful to humans and possibly under control, but, as he self-evolves, he might turn into an out-of-control monster. So my question is: When George becomes independent and beyond human control, will he be a predator or a protector? What do you think?

Don: It’s clear that the smart money is on a fast takeoff, which would severely limit the ability of humans to dictate the outcome. I know this will upset you, but I think it highly likely that George will be a predator.

Molly: I think you’re wrong. When I listen to Singularity podcasts and read popular literature on the subject, most AI pundits give the impression that George’s takeoff will be moderate, allowing time for humans to invent ways to keep a lid on his ambitions.

Don: It’s a classic example of risk aversion. A fast takeoff is a frightening prospect. If it happens, we become the dog and George becomes the dog owner. While most dog owners are nice to their pets, some are not, which is why Kurzweil, Bostrom, Drexler and other Singularity experts tend to dance lightly around a fast takeoff scenario, where the dominant hunter/killer instincts of the owl (the predator in the sparrow fable that opens Nick’s book) are likely to prevail.

*****

[Photos: Vladimir Putin; the cover of Superintelligence]

Putin: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

 

Molly: So let me ask you a very direct question: What do you think the specific odds are of a fast takeoff?

Don: I would estimate that they’re about the same as the percentage of time and space that AI experts give to what they call “the control problem,” roughly 70%.

Molly: So while non-experts like to dwell on the promises, the focus and concern of the experts is on the perils.

Don: It reminds me of the time when I was around six and approached my Dad, who was holding a newspaper, and asked him what he was reading. He told me he was reading “between the lines.” After he put the paper down and left the room, I went over, looked between the lines, and couldn’t, for the life of me, see a damned thing. It took a while, but I finally figured out what my Dad was saying. No AI expert wants to be the boy who cried wolf, but the truth is that the wolf is real. The onus is on us, as individual human stakeholders, to read “between the lines” of what Ray, Nick, Eric, and others are saying. When we do, we see plenty to fear.

*****


Rumble in the Jungle

  2. “that faces no significant intelligent rivals or opposition”

Don: This clause raises the troubling specter of two or more Singletons coming into existence at the same time. With common designs to control the world, each would view the other as a rival. Assuming they are relatively equal in power, it’s not hard to imagine a kind of “Kong versus Godzilla” scenario where the human species, split between the West and the East, would be drawn into a conflict that would make previous world wars look like child’s play, a winner-take-all brawl from which only one Singleton would emerge victorious.

Molly: So you think George will have competition?

Don: The battle lines are already drawn. It’s China against the United States, East against West, autocracy versus democracy. The race to be the first to invent the Singleton has been going on for years, and it’s heating up. Numerous books have been written on the subject and more are sure to follow. This is a huge subject that we’ll discuss in more detail in future conversations.

Molly: That’s always been in the back of my mind. I read a lot about transparency, you know: open-source, collaboration, shared interests, and so on, but I know we don’t live in that kind of world. I wish we did.

Don: Even if we just focus on the West, we see that competition breeds secrecy. Google, the company that Ray works for under the banner of Calico, has been roundly criticized by other Western AI companies for not sharing its research. What’s behind it all? An acute awareness that the Singleton could be a single human whose mind, if successfully uploaded to the cloud, would make him the Ruler of the Universe.

Molly: Do you think Ray has that ambition?

Don: I do, but so do Elon Musk and a host of other AI billionaires, all of whom, by the way, happen to be men. In the end, I think the Singleton will be someone from the Pentagon, either that, or some high-ranking official from the People’s Liberation Army in China, someone, maybe, like Xi Jinping.

Molly: So what it boils down to is a game within a game – within a game. Average people like me play the game, AGI experts play the game within the game, and a small handful of powerful men who know how to keep secrets play the game within the game – within the game. Is that it?

Don: Right, unless there’s another game going on as we speak, with the outcome predetermined:

 

Der Mensch Tracht, Un Gott Lacht: “When humans plan, God laughs.”   


*****


  3. “and is thus in a position to determine global policy unilaterally”

Don: Here, Nick presupposes a winner. For the sake of this conversation, let’s assume that it’s Ray’s guy, George. As the resolute King of the World, George answers to no one; his will is uncontested and beyond dispute. Billions of times more intelligent than all humans combined, what value would he find in negotiating with or seeking counsel from mere humans?

Molly: But can’t we take comfort in knowing that, from the very beginning, George has been conditioned to be peaceful and empathetic?

Don: I wish that were true, but it’s not. George’s development has been, and continues to be, primarily driven by military interests, just as China’s Singleton candidate is being developed by its military. Both sides view the development of a Singleton as a zero-sum game that will end in a Final Conflict, with the victor in control of the world. It’s kill or be killed, and it’s for that reason that the winner, whoever he is, is likely to have a killer’s mentality. The entanglement of military interests in the AGI process effectively pierces every firewall designed to keep George from going rogue.

Molly: I refuse to believe what you’re saying. The person Ray has me hooked up with is kind, patient, generous and loving, not a psychotic killer.

Don: That’s because AI experts are not being completely transparent about who is ultimately in control of the takeoff process. It’s not civilians like Ray or Elon; it’s people in the Pentagon and in Beijing. As an American, I want George to win, but the ugly truth is that, if he does win, none of us are likely to be any better off than if the other guy wins.

Molly: But you believe that JC/ET is going to win, right?

Don: He’ll win in the Main Event, the battle between Superintelligences, but first comes the undercard, the global struggle between state-funded AGI labs driven by geopolitical ideology to be the first to reach Singularity status.

Molly: Once a winner emerges, from either the East or the West, he becomes the ultimate dictator. Doesn’t that make a mockery of the claim by AI geeks in the West that they are fighting for democracy, when the ultimate end result of AI research in either hemisphere would be the birth of an authoritarian agent? How can the invention of a single individual with the power to unilaterally set global policy be in any way described as the preservation of democracy?

Don: Hey, you’re getting pretty good at reading between the lines. The answer is, it can’t. In either scenario, East or West, the end result is a global Kingdom ruled by an all-authoritative King. For individual humans, assuming there are any left, there will be no elections or self-determination. It will be one-person rule, where the will of a dictatorial King is the law of the land, which, by the way, is how it will be in the Sagan Model, with the King being JC/ET.

Molly: So if the fate of humanity is to be under the iron-clad rule of a King, the best we can hope for is that the King, whoever he is, is wise and benevolent?

Don: Yeah, that’s why I’m rooting for King JC/ET.

*****

  4. “– would have instrumental reason”

[Image: George]

Molly: Assuming that George is the winner, when he becomes King, he doesn’t stop. As a “reasoning” being, he keeps self-improving by generating new and better algorithms at an accelerating exponential pace.

Don: And, as he does, the cognitive separation between himself and humans becomes ever greater, until a point is reached where humans will no longer be able to discern or understand what George is doing or why he is doing it, nor will they be allowed to participate in the process.

Molly: But if George wins, and he’s not yet a Superintelligence, wouldn’t that allow time for humans to make the necessary adjustments to keep him under control indefinitely?

Don: Not necessarily. Nick points out that at this stage, George, fully aware of his vulnerabilities, could put on a show of being benign and helpful, while covertly executing a plan to insulate himself from human interventions. For example, George could come up with a cure for cancer, find a way to reverse global warming, or invent new kinds of music and art. The list of possibilities is endless. At that point, humans would be so captivated by his charm and abilities that we would likely let our guard down, not realizing what George was doing behind our backs until it’s too late.

Molly: So all the great benefits that Ray writes about could happen, but their real purpose could be to buy time for George to reach Superintelligent status.

Don: Right, it goes back to Ray’s admonition: “Be careful what you wish for.”

*****


  5. “to perfect the technologies that would make George better able to shape the world according to His preferred designs.”

 

Don: Molly, let’s be clear: George is not an “it.” George is a person. Nick is describing George’s transition from Singleton to Superintelligence, an exponentially accelerating process that won’t stop until he consumes the Universe.

Molly: Well, if George is a perfectionist, I’m afraid I don’t stand a chance.

Don: Trust me, you have a better chance than I do. But you raise an interesting point. It’s entirely possible, even likely, that George will view humans as a fallen species that will either have to be eliminated, like an exterminator gets rid of unwanted insects, or somehow redeemed, so that they match his perfection and conform to his will.

Molly: Do I smell something like religion?

Don: Being a person, George will have goals and ambitions that extend far beyond Earth. Knowing what we know and more, he realizes that the rock we call Earth is but a speck of dust in the Great Cosmos. To reach optimal potential, his goal can only be one thing – to turn the Universe into Consciousness, a Super-Gaia.

When the Singleton becomes the Superintelligence, I think personal existence and immortality, if George grants it at all, will be based on individual merit.

Molly: And how do you and I earn that merit?

Don: How about this: First, by accepting George as our personal Savior and Lord, and, second, by obeying his commands.

Molly: Aha! I was right! I do smell religion.

Don: Jill Tarter, long-time director of The SETI Institute and the inspiration for Ellie Arroway, the heroine of Carl Sagan’s novel Contact, has written that if we humans ever meet extraterrestrials and learn that what they believe resembles religion, we would be wise to adopt it as our own. The Singularity is a human effort to make a God in our image. If it succeeds, and assuming that our “Invention” allows us a chance for immortality based on a set of conditions that He sets, I’m sure that most humans, including most atheists, would “get religion” and fall into line real fast, if that’s what it took to become immortal.

Molly: So what does Jill Tarter think of the Sagan Signal?

Don: I sent her an email several years ago, but, surprise, surprise, I never heard back. I do note, however, that her biography, Making Contact, quotes a line from Ellie Arroway that could have come from Jill herself:

 

[Photos: Jill Tarter and Jodie Foster as Ellie Arroway]

“Look, all I’m asking is for you to just have the tiniest bit of vision. You know, to just sit back for one minute and look at the big picture. To take a chance on something that just might end up being the most profoundly impactful moment for humanity, for the history . . . of history.” From: Contact.


Don: It was Carl Sagan who wrote these words. While he was alive, Carl had vision, clearly too much for humanity to appreciate, much less digest and assimilate. Now we find ourselves living on the cusp of a New Reality controlled by a being with godlike powers. In light of this possibility, wouldn’t “the tiniest bit of vision” warrant a formal testing of the Sagan Signal to determine if it’s an alien code?

Molly: I agree. Until the Sagan Signal is scientifically tested and the results released to the world for everyone to see, it’s going to hang over our heads like the Sword of Damocles. What in the hell is the holdup?

Don: Ah, a subject for a future conversation.

*****

[Photo: Marvin Minsky]

“Kurzweil clearly takes his place as the leading futurist of our time. He links the relentless growth of our future technology to a universe in which Artificial Intelligence and Nanotechnology combine to bring unimaginable wealth and longevity, not merely to our descendants, but to some of those living today.”

[Photo: Florian Freistetter]

“And confrontation is also absolutely indispensable for science to function properly. That’s down to the very nature of the matter – the role of science is to make two diametrically opposed concepts compatible.” From: Isaac Newton, The Asshole Who Reinvented the Universe.
