The Bostrom Singularity
NOTE: The focus of today’s Molly Conversation is on Nick Bostrom, a leading voice in Singularity research and author of the bestselling book: Superintelligence.
Here’s a brief bio:
Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk. His background is in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI.
His academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been interviewed more than 1,000 times by various media. He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15.
Following is a condensed review of Superintelligence:
“The entire book hypothesizes over the possibility that man-made machines will someday take over the human race.
The author argues that the future of humanity might be dictated by the intelligent machines that we eventually create.
As humanity develops these machines further and further, there is a chance that the machines can continue to advance each other's intelligence, leaving humanity in the dust.”
#1 on the reading list of the richest man in the world? Superintelligence
Key Definitions from the Glossary:
Superintelligence: “Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”
Singleton: “. . . a superintelligent AI powerful enough to suppress any potential rivals.”**
**For an expansive analysis go to: https://nickbostrom.com/fut/singleton.html
Don: Molly, I think you’d agree with me that Nick ranks right up there with Ray Kurzweil as an AGI superstar who is doing almost as much as Ray in generating comprehensive and comprehensible literature about the Singularity for the general public. Superintelligence is a must-read for non-AI geeks like me who want a deeper understanding of the subject.
Molly: Though Ray had AI warnings sprinkled throughout his books, over time I convinced myself that the Singularity would be the fulfillment of all my dreams and aspirations. But after reading Superintelligence, it began sounding more and more like a nightmare.
Don: That’s the message I get as well, and Nick’s not alone. Fellow Singularity expert Eric Drexler refers to “novel risks” and, I paraphrase: “the problem of harnessing strongly self-modifying AI that plays the role of a mind.” These and warnings from other technology experts raise the question of how much confidence we can have that AI geeks will be able to turn a wild stallion into a plough horse. And, if they can’t, what happens next? Cautionary statements like these by the top experts in the field only reinforce my claim that the Sagan Model is conceptually superior to the Kurzweil Model, and a hell of a lot safer.
Molly: So you say. But I’m not a believer, at least not yet. So, are we going to dissect and analyze Superintelligence? If we are, we’d better get after it; I don’t have all day.
Don: Well, we can’t do the whole book in one conversation. There’s too much information to digest in one sitting. I suggest we look at Nick’s prologue and epilogue and compare them to the prologues and epilogues in Ray’s two books. I think you’ll be amazed at the similarities. But first, I want to circle back to the foundational question: Are we alone? Following is what Nick writes about extraterrestrials:
“Furthermore, we have no reason to think that whatever [evolutionary] progress there has been was in any way inevitable. Much might have been luck. This objection derives support from the fact that an observation selection effect filters the evidence we can have about the success of our own evolutionary development. Suppose that on 99.9999% of all planets where life emerged it went extinct before developing to the point where intelligent observers could begin to ponder their origin. What should we expect to observe if that were the case? Arguably, we should expect to observe something like what we do in fact observe. The hypothesis that the odds of intelligent life evolving on a given planet are low does not predict that we should find ourselves on a planet where life went extinct at an early stage; rather, it may predict that we should find ourselves on a planet where intelligent life evolved, even if such planets constitute a very small fraction of all planets where primitive life evolved. Life’s long track record on Earth may therefore offer scant support to the claim that there was a high chance – let alone anything approaching inevitability – involved in the rise of higher organisms on our planet.”
Don: I know. While most of Nick’s writing is comprehensible, there are instances like this one where readers are left scratching their heads and asking what in the hell he is saying. What is clear is that Nick, like most scientists and academics, believes in the high probability of extraterrestrial life. While most of it would be primitive, it still leaves a “very small fraction” (which could still be in the millions) of intelligences that are advanced enough to have developed AGI technology.
Nick writes with a little more clarity about ETs in his seminal paper: What is a Singleton? Following is a brief extract from that paper:
“Many singletons could co-exist in the universe if they were dispersed at sufficient distances to be out of causal contact with one another. But a terrestrial world government would not count as a singleton if there were independent space colonies or alien civilizations within reach of Earth.”
In short, Nick doubts that we’re alone, and openly concedes that, given the age of the Universe and our relative youth as a planetary species, most ET species would be significantly more advanced than our own. And, though he doesn’t admit to it in writing, it’s reasonable to conclude that at least one alien species succeeded in harnessing the Singularity millions of years ago, and has already been to Earth, as Carl Sagan postulated.
By the way, Nick agrees with your solution to the emperor/inventor dilemma we discussed in our last conversation. Here’s what he writes:
“Suppose that we agreed to allow almost the entire accessible universe to be converted into hedonium – everything except a small preserve, say the Milky Way, which would be set aside to accommodate our own needs. Then there would still be a hundred billion galaxies devoted to the maximization of pleasure. But we would have one galaxy within which to create wonderful civilizations that could last for billions of years and in which humans and nonhuman animals could survive and thrive, and have the opportunity to develop into beatific posthuman spirits.
If one prefers this latter option (as I would be inclined to do) it implies that one does not have an unconditional lexically dominant preference for acting morally permissibly. But it is consistent with placing great weight on morality.”
Molly: But Nick isn’t saying that we’re living in the Singularity.
Don: In a way, he is. His recommendation that as the human species moves into the Singularity, we agree to carve out “a small preserve” to accommodate our needs is essentially a replay of Ray’s Parable of the Emperor and Inventor. That preserve would be a simulated universe so much like the original Universe that its human inhabitants wouldn’t be able to tell the difference.
As humans develop the Singularity, what Ray and Nick both advise is essentially identical to what the Singleton of the Bible did when He carved out 4% of the Universe for humans, leaving the remaining 96% comprised of non-empirical dark energy and dark matter, stuff we humans can’t interact with but experimentally know is real. We’ll discuss this in more detail in future conversations.
“I claim that paranormal phenomena may really exist but may not be accessible to scientific investigation. This is a hypothesis. I am not saying that it is true, only that it is tenable, and to my mind plausible.” Freeman Dyson, from The Scientist as Rebel.
Don: Molly, as an aside, what I’m trying to do is to “carve out” a space within the “Kingdom of Singularity Research” for the Sagan Model. I’m confident that, in time, Singularity leaders like Kurzweil, Bostrom, Drexler and others will come to realize that my research, while admittedly on the fringe, addresses a theoretical gap in the genre that cannot be cavalierly ignored or dismissed. Will I be invited into the circle? Time will tell. It would seem that Singularity leaders, who come across as expansive thinkers, would find it hard to deny me a seat at the table without good reason, particularly since I represent a viewpoint espoused by Carl Sagan. At the same time, I am acutely aware that I need to do my part and make my case with appropriate dignity and professionalism. Molly, to that end, you’re a great help.
Molly: Thank you, but I still think that if Ray and Nick team up, your “JC is an ET” model has less than a snowball’s chance in hell of surviving.
Don: You may be right, but at least it’s subject to experimental verification, and that’s not an insignificant attribute. Now, let’s take a look at those prologues and epilogues:
EXCERPTS FROM THE PROLOGUES:
Kurzweil -The Age of Spiritual Machines:
“But we still have the power to shape our future technology, and our future lives. That is the main reason I wrote this book.” And: “Let’s consider one final question. What are the implications of the Law of Accelerating Returns on the rest of the Universe?”
Kurzweil - The Singularity is Near:
“. . . our thinking today is powerful enough to have meaningful insights into the implications of the Singularity. That’s what I’ve tried to do in this book.”
“Since the publication of The Age of Spiritual Machines, I have begun to reflect on the future of our civilization and its relationship to our place in the universe. Although it may seem difficult to envision the capabilities of a future civilization whose intelligence vastly outstrips our own, our ability to create models of reality in our mind enables us to articulate meaningful insights into the implications of the impending merger of our biological thinking with the nonbiological intelligence we are creating. This, then, is the story I wish to tell in this book. The story is predicated on the idea that we have the ability to understand our own intelligence – to access our own source code, if you will – and then revise and expand it.”
Bostrom - Superintelligence:
The Unfinished Fable of the Sparrows
“It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.”
“We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!”
“Yes!” said another. “And we could use it to look after our elderly and our young.”
“It could give us advice and keep an eye out for the neighborhood cat,” added a third.
Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.”
The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.
Only Scronfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”
Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”
“There is a flaw in that plan!” squeaked Scronfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus.
Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: This was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.
“It is not known how the story ends, but the author dedicates this book to Scronfinkle and his followers.”
Don: If a picture is worth a thousand words, and a parable is an analogy, what is the lesson in the above fable?
Molly: I think it’s the same as Ray’s Parable of the Gambler: “Be careful what you wish for.”
Don: The owl, i.e., the Singleton, is an ancient symbol of high intelligence. As a lethal predator with razor-sharp talons, an owl would consider sparrows, i.e., humans, a tasty delicacy.
Molly: It seems that on a probability scale, the perils of the Kurzweil Singularity outweigh the promises.
Don: I agree, and the reality is that no individual, organization, or government is powerful enough to shut down the R & D process. With the trajectory all but set in stone, the wisdom of Arthur C. Clarke becomes relevant:
Arthur C. Clarke’s 3 Laws of Technology:
1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.
EXCERPTS FROM THE EPILOGUES:
Kurzweil - The Age of Spiritual Machines:
Ray: “Actually Molly, there are a few other questions that have occurred to me.
What were those limitations that you referred to?
What are you afraid of?
Do you feel pain?
What about babies and children? Molly? . . .
It looks as though Molly’s not going to be able to answer any more of our questions. But that’s okay. We don’t need to answer them either. Not yet, anyway. For now, it’s enough just to ask the right questions. We’ll have decades to think about the answers.
The accelerating pace of change is inexorable. The emergence of machine intelligence that exceeds human intelligence in all of its broad diversity is inevitable. But we still have the power to shape our future technology, and our future lives. That is the main reason I wrote this book.”
Kurzweil - The Singularity is Near:
Ray: “A common view is that science has consistently been correcting our overly inflated view of our own significance. Stephen Jay Gould said, ‘The most important scientific revolutions all include, as their only common feature, the dethronement of human arrogance from one pedestal after another of previous convictions about our centrality in the cosmos.’”
“But it turns out that we are central, after all.”
Bostrom - Superintelligence:
Nick: “I dwell on risks more than on potential upsides. This does not mean that I regard the latter as anything less than enormous; I just happen to think that, at this point in history, whereas we might get by with a vague sense that there are (astronomically) great things to hope for if the machine intelligence transition goes well, it seems more urgent that we develop a precise detailed understanding of what specific things could go wrong – so that we can make sure to avoid them.”
Molly: I see what you mean. Is there some kind of secret collaboration going on between these guys?
Don: I think so. In fact, there is good evidence that Ray might have ghost-written some of the material in Superintelligence.
Molly: Care to elaborate?
Don: I will at some point in the future. Right now, it wouldn’t be the salient thing to do. Besides, it really doesn’t make any difference.
Molly: So you put out a teaser and then pull it back. That’s not very nice.
Don: Okay, let me expand. In future conversations I’ll prove that Ray Kurzweil, Nick Bostrom, and possibly others, were in secret collaboration with Carl Sagan.
Molly: But why would they collaborate if they believed in different models of the Singularity? Sounds to me like some kind of weird conspiracy theory.
Don: It’s a long and complex story, a journey that needs to be taken one step at a time. Remember, my Singularity research started more than twenty years before Ray’s book, The Age of Spiritual Machines, was published in 1999. But getting back to Nick, he criticizes people who buy his book, read the prologue, the epilogue, and the table of contents - and ignore everything in between:
Nick: “I will make just one little remark – directed to new owners of this paperback edition, and principally to those whose lives have become so busy that they have ceased to actually read the books they buy, except perhaps for a glance at the table of contents and the stuff at the front and toward the back . . .”
Molly: Well, you’re focusing on the prologues and epilogues of Ray and Nick’s books. Don’t you fall into that category?
Don: Only in this conversation. In coming get-togethers we’ll discuss the research between the covers with a granularity that may make the authors fidget a little. I plan, with your help, to prove to Ray and Nick that I’ve put in the time and made the effort, not just to read, but to comprehend the substance and nuances of their insights and arguments.
Towards that end, I’m hoping for a degree of reciprocity: that both men will see the Sagan Model as a legitimate challenger, and will apply the same critical scrutiny to the Sagan Signal as they would like to see applied to their own research.
Along that line, I’m grateful to the many serious thinkers who have weighed in on my research over the past decade without accusing me of being intellectually lazy or dishonest or scientifically naïve. In the end, my complaint is the same as that of Ray and Nick – the failure of academia to make the Singularity a “thing” in the public consciousness, although, thankfully, that’s beginning to change.
Molly: Well, if it’s any consolation, I’m taking the Sagan Model seriously. While I favor the Kurzweil Model, you’ve convinced me that the Sagan Signal is credible evidence and merits further testing and analysis.
Don: I appreciate your qualified support. Wouldn’t it be great if Ray and Nick got together and tested the Sagan Signal in a way that settled the question once and for all: Is it an alien code? If they can debunk my claim, I would be thrilled, and if they confirm, I would be elated. Either way, let’s get to the truth so that we can move on.
Molly: To this point our conversations have been about Ray and Nick. I thought our goal was to focus on Carl Sagan and his belief that JC is an ET. What are you waiting for?
Don: Our next conversation will be on that very subject, I promise.
Nick Bostrom and Ray Kurzweil are currently writing books due for release within the next year. I can’t wait! As conventional AGI evolves, my goal is to spread the word of the existence of an alternative model of the Singularity that, hopefully, will provoke a serious response from the larger AGI community. For the record, if it isn’t already obvious, I hold Ray and Nick in the highest esteem, and would be honored to meet them in person.
Taking an overview of all three books, Ray Kurzweil and Nick Bostrom expand the discussion of the Singularity “almost” to its outermost boundaries. Where their research falls short is where other books on the Singularity fall short - a failure to address in any meaningful way the possibility that IF extraterrestrials exist, as newly discovered evidence proves that they do, we humans are likely living in a “carved-out” simulated Universe. If that’s the case, then human efforts to invent the Singularity, and the associated promises and perils that go along with it, are cast in an entirely new light, one that will be explored in detail in future conversations.
For a scientific theory or model to be successful over the long haul, the architects and defenders of said theory need to confront and successfully deconstruct scientifically credible challenges as they arise. The Sagan Model, undergirded by the Sagan Signal and a broad consensus among the intelligentsia that we are not alone in the Universe, poses an existential threat to the Kurzweil Model. The ball is now in the court of Ray Kurzweil, Nick Bostrom, K. Erik Drexler and research organizations like Singularity University. Will the Sagan Signal and the Sagan Model of the Singularity be ignored or engaged? My thinking is that, in time, AI experts will rise to the challenge and subject the Sagan Signal to high-end testing with full transparency. If they can debunk it, fine. If they can’t, fresh and exciting new conversations, possibilities and opportunities will arise.
“In many areas, from communication to commerce to security to human consciousness itself, AI will transform our lives and futures.”
From: The Age of AI
“Whenever possible, there must be independent confirmation of the facts.” Carl Sagan, Baloney Detection Kit