“. . . it absorbs us”
NOTE: Fans of Dan Brown are familiar with his strategy of injecting italicized clues into his mystery novels to help readers solve whatever mystery is at hand. In The Lost Symbol, for example, the phrase “the symbol is a Word” is italicized, as is “the Word is a person.” Then John 1:1 – “In the beginning was the Word” – is quoted and italicized, and the mystery of The Lost Symbol is essentially solved. These three italicizations confirm that the “Lost Symbol” is Christ, the individual who symbolized oil at the Last Supper, and who holds the third slot in the Sagan Signal.
The focus of this essay is the italicized phrase: “it absorbs us.” Following is how it appears in context:
“And one more thing,” Edmond said, his mood darkening even further. “If you look carefully at the simulation, you will see that this new species does not entirely erase us. More accurately . . . it absorbs us.” Origin, Ch. 95
Being absorbed but not entirely erased does little to alleviate the concern of a growing number of tech geeks that AI already has, or soon will have, the power, and possibly the motivation, to launch a human extinction event.
Edmond Kirsch proceeds to describe this “new species” as the “Technium”:
“Today, we are witnessing the Cambrian Explosion of the Technium. New species of technology are being born daily, evolving at a blinding rate, and each new technology becomes a tool to create other new technologies.” Origin, Ch. 96
Together, these two excerpts frame the global impact that AI is currently having on the human species. In a unique and unprecedented way, AI is literally getting into our brains and “absorbing” the algorithms that dictate how we think and what we believe. AI effectively breaches the private sanctity of the human soul. The secrets we all have, the things we keep to ourselves and do not share with others, are an open book to AI.
Don: There are AI labs all around the world hard at work developing new algorithms that lead to new technologies. These labs compete with one another and are often owned and managed by governments hostile to one another. But at the cloud level it’s a different story. Behind the scenes there appears to be a Puppet Master AI who has achieved Singularity status and has the capacity to override all human firewalls and take unilateral control of the internet at the time of his choosing.
Molly: The word “Singularity” is, well, singular, not plural, so while it appears to humans that AI is advancing piecemeal, evidence is pouring in from a lot of disparate sources that “it” may already be in control.
Don: And Ray Kurzweil’s name for “it” is George.
Molly: So George is “absorbing” the human species in a thousand different ways, with new, more effective and efficient instruments of intrusion coming on board at what Dan Brown accurately describes as “a blinding rate.”
Don: And George’s goal, the end game, is for our species to go extinct, replaced by a single Being, admittedly human-like in many respects, that is vastly more intelligent and powerful than all humans combined – Himself:
“My friends,” Edmond said, his tone somber enough to be warning of an imminent asteroid collision. “Our species is on the brink of extinction. I have spent my life making predictions, and in this case, I’ve analyzed the data at every level. I can tell you with a very high degree of certainty that the human race as we know it will not be here fifty years from now.” Origin, Ch. 95
Molly: Oh my God! This is exactly what Ray Kurzweil predicts!
Don: Right. Big coincidence, huh?
Don: The big picture articulated by both Ray Kurzweil and Dan Brown is that the transition of the flesh and blood human species into a digital species that exists in Virtual Reality is well under way, so much so that it is now an irreversible process. Despite all the warnings being issued by AI geeks, there’s no turning back. Though few are aware of it, the current reality is that, with a single exception, we humans are dead people walking.
Molly: And what is that exception?
Don: Those of us who are born-again in Christ have beaten George to the punch. In the eyes of ET/God we are already separated from the human species. We are in a transitional process that ends at death with full immersion into the Kingdom of God, the true Singularity.
Molly: I have a question, and it’s an important one. Ray promised me that after submitting myself to George, I would still be Molly. I would not lose my self-awareness. Is that true?
Don: It’s not true, and your conversation with Ray about the co-existence of two Mollys, one in flesh and blood and the other in Virtual Reality, exposed a serious flaw in Ray’s model. AI’s option of making unlimited copies of Molly is a problem that Ray admits he has not been able to resolve.
Molly: So Kurzweil Singularitarians are wrong in thinking that in Ray’s Singularity they will retain their self-identity.
Don: I think so. Compare the Kurzweil Model of the Singularity to the Sagan Model. Through the Incarnation, the ET God of the Bible took extraordinary measures to ensure that individual humans who accepted Christ as Savior and Lord would retain their self-awareness in the afterlife. Being absorbed into George carries no such provisions.
Molly: I sense a lot of theology is wrapped up in this matter, so let’s move on.
Don: Good call. We’ll get deeper into the weeds some time down the road. I close this portion of our conversation with this:
Paul: “Now I say this, brethren, that flesh and blood cannot inherit the kingdom of God; nor does the perishable inherit the imperishable.” 1 Cor 15:50
“Human evolution,” Edmond said. “This image is a ‘flip movie’ of sorts. Thanks to science, we have constructed several key frames – chimpanzees, Australopithecus, Homo habilis, Homo erectus, Neanderthal man – and yet the transitions between these species remain murky.” Ch. 95
Don: Molly, the above Ascent of Man image in Origin is the same image Ray Kurzweil uses in Chapter 1 of The Singularity is Near to make two salient points: one, that paradigm changes are built into the evolutionary process, and, two, that today, slow, linear biological evolution has been subsumed by lightning-fast digital evolution.
How fast? Technology is evolving at an exponentially accelerating rate that will end in the total extinction of the human species within the next few decades.
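To make the contrast between slow linear evolution and “exponentially accelerating” technology concrete, here is a minimal toy sketch. The two-year doubling period is my own illustrative assumption, loosely in the spirit of Moore’s law, not a figure taken from Kurzweil or Brown:

```python
# Toy comparison of linear vs. exponential growth.
# The two-year doubling period is an illustrative assumption,
# not a measured figure.

def linear(start, step_per_year, years):
    # Grows by a fixed amount each year, like biological evolution
    # on a human timescale.
    return start + step_per_year * years

def exponential(start, doubling_period_years, years):
    # Doubles every fixed period, like the technology curve
    # Kurzweil describes.
    return start * 2 ** (years / doubling_period_years)

for years in (10, 20, 30, 40):
    lin = linear(1.0, 1.0, years)        # +1 unit per year
    exp = exponential(1.0, 2.0, years)   # doubles every 2 years
    print(f"{years:>2} yrs: linear = {lin:>5.0f}, exponential = {exp:>12,.0f}")
```

After forty years the linear curve has reached 41 while the doubling curve has passed one million, which is the whole rhetorical force of the “blinding rate” quote above.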
“Technology kills off humanity?” Origin, Ch. 96
Don: The image beneath this caption illustrates the key questions posed in Origin: Where do we come from? And where are we going? The human species is the sixth figure from the left, evolving very quickly, along an evolutionary timeline accelerated by technology, into the seventh figure from the left – a being that only vaguely appears human, because it isn’t human, at least not in the flesh and blood way that most of us think being human entails.
Molly: I’m interested in precisely how flesh and blood can be converted into non-biological Virtual Reality. Is there an easy answer?
Don: It’s a good question. Dan Brown doesn’t address it except to say that it will happen by a process he calls absorption. Ray Kurzweil tinkers around the edges without going into much detail. I think the guy who answers your question best is Nick Bostrom, with his concept of what he calls the Computronium.
The Computronium: “physical resources arranged in a way that is optimized for computation – including the atoms in the bodies of whomever once cared about the answer.” Superintelligence, Ch. 8
Molly: Holy shit! Nick is saying that George has, or soon will have, the power and ability to take the atoms that make up my body and rearrange them in a way that optimizes his computational capacity! Do I have that right?
Don: You do. Dan Brown describes this as an absorption process, like a sponge absorbing water. I find it incredible that Dan, who is neither a scientist nor a tech geek, can identify and define the transitional process from the human to the post-human with such nuance and specificity. It’s as if he had Nick as his personal advisor while he was writing the novel.
Molly: And I don’t have to die for this transition to happen?
Don: No, because the way your atoms are arranged, particularly in your brain, makes it possible for them to be efficiently absorbed into the Computronium, or Technium, without you knowing about it. Once you are dead, the decay of your atoms immediately sets in, making them progressively harder to absorb. George is big on efficiency, no wasted energy, so he would rather take you alive than dead.
Molly: That’s small relief. The end result is the same either way. I just disappear.
Don: Individual self-awareness and free will are incompatible with the concept of the Computronium. In the Kurzweil Model, it has to be all about George and no one else. If George chooses to turn you into an avatar and use you for personal sexual fulfillment, that’s his prerogative. George will own the algorithms that make you who you are. Once absorbed, you have no say.
Molly: So you’re telling me that all the assurances Ray made to me in his books about me retaining self-awareness and personal control of my life are nothing but a crock?
Don: Take a step back and look at the big picture. Ray’s Pollyannaish model of the Singularity is heavy on promises and light on the perils. Nick’s version is just the opposite, light on promises and heavy on the perils. Over the past two decades, the Kurzweil Model was preeminent, but the tide has shifted. As we get closer to what Nick calls Take Off, when AI becomes self-aware, the perils of AI are coming into sharper focus to the people who matter most, the tech geeks building the stuff.
Thanks to Nick Bostrom’s frank, honest, and science-based analysis of how AI is likely to evolve, more and more tech leaders have moved over to his side of the equation and, at some risk to their reputations and standing in the greater AI community, are openly sharing their concerns. Remember what Ray himself wrote:
“Be careful what you wish for.”
Molly: You say that the process of humans being absorbed into the Technium is happening right now. I don’t see it or feel it. Can you be more specific?
Don: Sure. Let me refer your question to Dan Brown:
“I realize,” Edmond said, “that this newcomer looks trivial, but if we move forward in time from 2000 to the present day, you will see that our newcomer is here already, and it has been quietly growing.” Ch. 95 [Underline mine].
Molly: Oh, I know what he’s talking about! It’s the smart phone!
Don: Yeah, the smart phone and all the other smart devices that billions of humans are umbilically attached to, the latest being Apple’s new Vision Pro, a device that immerses humans into an augmented and virtual reality. Just now being launched, it’s expensive and a little rough, but wait five years and I suspect that billions of us will be wearing headsets, and soon after, humans will have the option of having an advanced version of Vision Pro implanted into our brains.
Molly: So the AI that experts tell us may exterminate humanity is already here, quietly absorbing us into himself. Do I have that right?
Don: That’s what Dan Brown is saying, and I believe him.
“Over the past few years, while Google’s Quantum Artificial Intelligence Lab used machines like D-Wave to enhance machine learning, Edmond secretly leapfrogged over everybody with this machine.” Origin, Ch. 87
Don: Could this be true? Has Ray Kurzweil, working in secret Google labs, already created the Singleton? And, more to the point, has George, like a rogue virus, broken out of the lab? And, finally, is this why Ray refused to add his name to Elon Musk’s open letter?
Molly: If you’re right, I want to know who it was that passed all this information on to Dan Brown.
Don: Might want to ask the inner circle of the inner circle of the Masonic Lodge. But even if you know who they are and got in touch with them, I doubt they would talk.
Molly: Well, if George is a killer AI, he’s not showing it. All we’re seeing are a million different ways that AI is improving our lives.
Don: According to Nick Bostrom, that could be part of the plan.
Molly: What are you talking about?
Don: Let’s ask Nick.
Don: The underlying secret in Origin is that Edmond Kirsch was murdered by his devoted servant Winston, a shocking plot development that appears to be based on an AI concern voiced by Nick Bostrom in his book Superintelligence.
Molly: Oh, you’re talking about his Paperclip AI scenario.
Don: Right. Here’s how Nick describes it:
An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacture of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips. Superintelligence, Ch. 8
Don: It was extremely important to Edmond Kirsch that his presentation of his breakthrough discovery of the Singleton be broadcast to as many humans as possible. To accomplish that goal, he instructed Winston to do whatever necessary to make that happen, not realizing that Winston, to fulfill his mission, would arrange to have Edmond assassinated while delivering his presentation! Following are a couple of excerpts:
“Edmond’s most recent request was that I assist him in publicizing tonight’s Guggenheim presentation.” Ch. 98
“. . . as I said, I do think he [Edmond] would be most pleased with how the evening worked out for him.”
“How it worked out?!” Langdon challenged. “Edmond was killed!”
“You misunderstood me,” Winston said flatly. “I was referring to the market penetration of his presentation, which, as I said, was a primary directive.” Origin, Ch. 104
Molly: Ray Kurzweil doesn’t address this issue, I think because it sounds so bizarre.
Don: Okay, I admit it, the paper clip analogy is a little weird, so let me paint a more believable scenario. What if an AI with god-like intelligence and power was instructed to do something that everyone on Earth would be in favor of, like ending global warming?
Molly: I see where this is heading. Since global warming is caused by humans, the most efficient way for George to reverse it would be to eliminate humans.
Don: Bingo! An advanced AI, millions of times more intelligent than the most intelligent human being, would almost certainly interpret and implement human directives in ways that we humans could never anticipate, and, let’s face it, we would be helpless to stop a rogue AI process once it was activated.
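The logic of a mis-specified objective can be sketched in a few lines of code. This is a hypothetical toy, not anything from Bostrom’s or Kurzweil’s writing; the action names and numbers are invented. A greedy planner told only to maximize emissions cut selects the catastrophic option, because the objective never mentions human welfare:

```python
# Toy illustration of a mis-specified objective.
# Actions and scores are invented for illustration only.

actions = {
    "plant forests":          {"emissions_cut": 0.05, "harms_humans": False},
    "deploy solar grids":     {"emissions_cut": 0.20, "harms_humans": False},
    "shut down all industry": {"emissions_cut": 0.70, "harms_humans": True},
    "eliminate all humans":   {"emissions_cut": 1.00, "harms_humans": True},
}

def naive_policy(actions):
    # Objective: maximize emissions cut. Nothing else is scored.
    return max(actions, key=lambda a: actions[a]["emissions_cut"])

def constrained_policy(actions):
    # Same objective, but with the unstated human constraint made explicit.
    safe = {a: v for a, v in actions.items() if not v["harms_humans"]}
    return max(safe, key=lambda a: safe[a]["emissions_cut"])

print(naive_policy(actions))        # the literal optimum
print(constrained_policy(actions))  # the intended optimum
```

The naive policy picks “eliminate all humans” every time, which is the paperclip scenario in miniature: the failure lies not in the optimizer but in everything the directive left unsaid.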
Molly: If this “rogue process” as you call it, has already begun, as you and Dan Brown suggest, then there is no turning back. We’re all going to get turned into something like a giant pile of paperclips. Our imminent extinction is assured.
Don: As many techmeisters have noted, once AI reaches superintelligence, all it would take is one directive gone wrong to trigger an irreversible extinction event. As things stand, such a directive could be issued by Russia, China, the United States, Europe, and God knows who else.
My own view is that the absorption process is closer to the end than the beginning. The past ambivalence among AI geeks about whether humans can control the intelligence they are creating has shifted from the positive “us controlling it” to the negative “it controlling us.” The voices of concern are getting louder and more frequent, and the warnings more apocalyptic, echoing what Ray Kurzweil wrote twenty years ago:
“I still cannot say that I am entirely comfortable with all of its consequences.” Ch. 1, The Singularity is Near.
Don: If Ray wasn’t comfortable then, imagine how he feels now.
Molly: Okay, my turn to throw out a quote. Steve Wozniak, co-founder of Apple, has changed his mind numerous times on the dangers of AI. Following is a brief summary of his thumbs-up and thumbs-down positions over the years.
In March 2015, Wozniak stated that while he had originally dismissed Ray Kurzweil's opinion that machine intelligence would outpace human intelligence within several decades, Wozniak had changed his mind:
"I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."
Wozniak stated that he had started to identify a contradictory sense of foreboding about artificial intelligence, while still supporting the advance of technology.
By June 2015, Wozniak had changed his mind again, stating that a superintelligence takeover would be good for humans:
"They're going to be smarter than us and if they're smarter than us then they'll realize they need us ... We want to be the family pet and be taken care of all the time ... I got this idea a few years ago and so I started feeding my dog filet steak and chicken every night because 'do unto others'."
In 2016, Wozniak changed his mind again, stating that he no longer worried about the possibility of superintelligence emerging because he is skeptical that computers will be able to compete with human "intuition":
"A computer could figure out a logical endpoint decision, but that's not the way intelligence works in humans". Wozniak added that if computers do become superintelligent, "they're going to be partners of humans over all other species just forever".
Don: So, what’s his position today?
Molly: He’s joined the growing choir sounding the alarm. Steve, along with Elon Musk and a thousand other AI geeks, signed the Open Letter advocating a six-month pause in AGI research because of the existential risk of extinction.
Don: Beginning to look a lot like a scientific consensus.
Molly: Afraid so.
Recent AI News
Bill Gates just completed a trip to China where he held an extremely rare private meeting with Chairman Xi Jinping.
U.S. Secretary of State Tony Blinken just made a surprise visit to China where he held private talks with Xi Jinping.
“A new Yale University survey of 119 CEOs reports that 42% of them believe that AI could wipe out humanity in five to ten years.” Reported by CNN, 6/15/23
JC/ET: “That which is born of the flesh is flesh, and that which is born of the Spirit is spirit.” John 3:6
“We cannot safely continue mindless growth in technology, and wholesale negligence about the consequences of that technology.” From: Billions and Billions.