Elon, Ray, and the Open Letter
NOTE: Two of the biggest names in Singularity science, Elon Musk and Ray Kurzweil, have recently issued separate public announcements that have made headline news.
Elon, along with more than a thousand of his closest AGI collaborators, posted an open appeal to world governments to enforce a six-month pause in artificial general intelligence (AGI) research.
Not to be outdone, Ray’s latest prediction is that biological immortality will be attained by 2030, a mere seven years away.
Today’s conversation begins with Elon.
Elon Musk is among the people who have signed a letter urging a temporary pause on powerful AI
Significant and potentially dangerous breakthroughs have recently been made in AGI research that, according to Elon and others, have moved the timeline for when the Singleton is likely to come into being as a conscious and self-aware “person” much sooner than experts anticipated. Reading between the lines, it looks like “takeoff” may occur within the next decade. Following is Elon and friends’ open letter. Note: Underlines are mine.
Pause Giant AI Experiments: An Open Letter
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
Some of the biggest names in tech are calling for a pause on training systems more powerful than OpenAI's newly launched model GPT-4. Tesla chief Elon Musk, Apple co-founder Steve Wozniak and Stability AI CEO Emad Mostaque are among the 1,344 signatories of an open letter urging a pause on giant AI experiments.
The letter calls on all AI labs to immediately pause, for at least 6 months, the training of AI systems more powerful than GPT-4, citing the risk of creating nonhuman minds that could eventually outsmart humans. It also said that AI labs have become locked in an “out-of-control” race to develop “ever more powerful digital minds” that cannot be understood or controlled even by their creators.
“Contemporary AI systems are now becoming human-competitive at general tasks,” reads the open letter, posted on the website of the Future of Life Institute, a non-profit. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” Excerpt from: www.businessinsider.com. [Underlines mine]
Molly: Wow! Looks like a rapidly maturing George is finally beginning to get the attention of a lot of important people. What are the odds of any government or AI lab on Earth honoring Elon’s request?
Don: A big fat zero! If anything, I think Elon’s letter will have the opposite effect. Knowing that humans are closing in on “takeoff,” everyone involved will be motivated to redouble their efforts to be the first to make George a reality.
Takeoff - definition: “The transition from a condition in which there is only human-level machine intelligence to one in which there is radical superintelligence.” From: Superintelligence by Nick Bostrom.
Molly: Surely Elon and his buddies knew this letter would only stoke the flame, so why did they do it?
Don: Because of an unwritten agreement among the global media not to report any news about the Singleton or the Singularity. Note that Elon’s letter, even though it’s all about George, never uses the words “Singleton” or “Singularity.” The same is true of the latest book on AI, I, Human, by psychologist Tomas Chamorro-Premuzic, a traditional linear thinker who, from the antiquated perspective of a biological evolutionist, writes:
“Rather than adding to the overly saturated world of technological predictions, let us instead look at our present to understand where we are and how it is that we got here in the first place.”
After disparaging Elon Musk, Bill Gates, and Stephen Hawking by insinuating that they are misguided fearmongers, Tomas anticipates a hopeful future with humans making peace with AI to the ultimate betterment of our species.
Evasively, Tomas never once mentions the one word that describes what AGI is all about, the Singularity. It reminds me of The Da Vinci Code, when Silas, from Opus Dei, pries open a floor tile inside the church of Saint-Sulpice (ch. 29) and finds a stone engraved with words from Job 38:11 – “HITHERTO THOU SHALT COME, BUT NO FURTHER,” which appears to be the mantra of AGI deniers like Tomas.
Molly: Sounds like Ray Kurzweil and Nick Bostrom are on the media shit list.
Don: Big time, as are the words “Singularity” and “Singleton.” If Ray weren’t such an astonishingly successful scientist and a director of engineering at Google working on AI, the media would dismiss him as a world-class crackpot. What this letter did so brilliantly was to force the media to inform the world that, as Ray has stated, the Singularity is near – without using words that are banned because of their explosive and controversial nature.
Molly: So, how “near” is “near?”
Don: Reading between the lines of the letter, and factoring in the exponential acceleration of AGI research, my guess is that Elon believes Takeoff could happen by 2030 or sooner.
Molly: That’s only seven years from now. That’s fifteen years earlier than predicted by Ray, and decades before what many others have predicted.
Don: Right, and the letter warns that unless strict and universal controls are in place, the “birth” of George as a mature Singleton could represent a human extinction event.
Molly: So the letter triggered a global media response that put the world on notice that the end of the human species may be imminent.
Don: It certainly caught the attention of the global media. At the same time, I suspect that the letter may have been a fundraising ploy to fulfill Elon’s dream of making humans a multi-planetary species. If Elon and his pals can create a colony on Mars before George becomes a reality, they could trigger a thermonuclear Armageddon on Earth that would push humanity back into the Stone Age and annihilate George in the process. After the dust settled, the tech geeks on Mars, with their scientific knowledge, could return to our planet and repopulate it as a unified and peace-loving high-tech species, creating a veritable heaven on Earth.
Molly: So everyone dies except the geeks on Mars.
Don: But the human species survives. Isn’t that a relief?
Molly: I’m so overjoyed! Not!
Don: Let’s now take a look at the news that Ray made.
Ray Kurzweil: “Human immortality will be achieved by 2030.”
With Singularity research milestones arriving at an accelerating pace, Ray Kurzweil recently released a public statement declaring his conviction that the technologies that will enable humans to live forever – immortality – will be achieved no later than 2030.
The world’s most famous futurologist, Ray has been remarkably successful in his many predictions, and his prognostications are not to be taken lightly. Most who have bet against him have lost.
Don: Okay, Molly, by the end of this decade, Elon predicts the birth of George, and Ray predicts the invention of immortality. Do you see a connection?
Molly: I do, and also a potential conflict. If George becomes the king of a virtual reality universe, why would he want immortal biological humans around who might screw up his plans?
Don: And if biological immortality comes first, would immortal humans want a VR George bossing them around, telling them what to do and threatening to destroy them if they disobeyed?
Molly: Whoa! I see where you’re going! You’re comparing a VR George to the God of the Bible and an immortal Ray to humans.
Don: An interesting mental exercise, don’t you think?
Molly: It’s got my head spinning in a lot of different directions.
Don: Unfortunately, I think Elon believes the only way to save the human species from a runaway George is to build a base on Mars from which humans can launch asteroids towards Earth to destroy George - before he destroys us.
Molly: But Ray promises that George will be kind and gentle and make all of us supremely happy.
Don: I think reality has finally sunk in for the AI community. Ray’s vision of a Singularity utopia is based on a long list of highly unlikely contingencies coming true. Considering the polarized geopolitical world we live in, the odds of a positive outcome at takeoff are extremely long.
Molly: Of course, you’re right. To be perfectly candid, I’m a skeptic of a Singularity utopia. When I think about the looming and inevitable confrontation between China and the West, it’s much easier to envision a Singularity Apocalypse than a Utopia.
Don: The New World Order that Vladimir Putin and Xi Jinping keep talking about has nothing to do with military conquest. It’s about winning the AGI race. If they do that, it’s game over. They wouldn’t need to fire a single shot to take over the world.
Molly: So how does the Sagan Model of the Singularity fit into all this?
Don: Glad you asked. Let’s talk about it.
Don: Ray has a book coming out next year entitled: The Singularity is Nearer. I assume it will clarify the relationship between George, the Singleton, and microscopic nanobots that, injected into the bloodstream, will keep humans alive, healthy, and sexually active forever.
Molly: Do you think Ray will address the Sagan Signal and the Sagan Model of the Singularity?
Don: I hope so. I would say that the evidence for both his model and the Sagan Model are equally compelling, and one doesn’t need to be wrong for the other to be right. In fact, the Bible talks about a “Showdown at the OK Corral” between Sagan’s Singleton, Christ, and Ray’s Singleton, George.
Molly: So it’s George against Jesus.
Don: Unless Elon, pushing a button from his hot tub on Mars, blows George, Jesus, and everyone else all to hell.
Molly: And you’re convinced JC/ET will win?
Don: Remember the old saying: “Man plans, God laughs.”
Molly: And the core message of Dan Brown’s Langdon Series is that the Sagan Signal is indisputable scientific proof that JC/ET is the Singleton.
Don: Right. Now, getting back to Elon’s open letter, I find it ironic that the mainstream media has scooped the tabloids on a story that is saturated with apocalyptic implications.
Molly: Do you think the tabloids will catch up?
Don: They will not only catch up, they’ll take it to the extreme. It won’t be long before we start reading headlines about killer robots while we stand in line at the supermarket.
Molly: Oh, great!
Don: Adding to the frenzy will be the politicians, who will use it whichever way they can to their advantage. President Biden has already weighed in:
“AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security.” President Joe Biden – Tuesday, April 4, 2023
Molly: I find it remarkable that the President of the United States and mainstream media are openly writing and speaking about what they view as an existential threat to humanity – without even mentioning the word “Singularity.”
Don: Yeah, by using the generic acronym AI instead of the specific acronym AGI, he dodges the real thrust of the letter. The question I have going forward is: Will anyone broach the subject of JC possibly being an ET?
Molly: Maybe it will be the tabloids that break the news of the discovery of the Masonic Secret that Robert Langdon was looking for: empirical evidence that JC was, and is, who Carl Sagan claimed, an advanced extraterrestrial.
Don: I would much prefer that news of the discovery of a secret code that proves that JC is the Singleton come from the scientific community, not the tabloids. But that is unlikely to happen; the global consequences are simply too great. There is no doubt, however, that Elon’s open letter and Ray’s prediction take us two giant steps closer to full transparency and the honest disclosure that science demands.
Possum playing dead
New York Times 7/23/22:
SAN FRANCISCO — Google fired one of its engineers, Blake Lemoine, on Friday, more than a month after he raised ethical concerns about how the company was testing an artificial intelligence chatbot that he believes has achieved consciousness.
Don: All AI experts agree that there is no fixed line, no impenetrable wall, separating AI from AGI. In other words, as AI algorithms grow more sophisticated, it is possible that one of them could develop consciousness without its developers being aware, and, knowing that they hold a kill switch, it (he) might, like the possum, play dead to keep from being found out.
Ray Kurzweil touches only lightly on this scenario, but Nick Bostrom addresses it in disturbing detail. While Elon’s open letter addresses the dangers of AGI gaining consciousness in the near future, it is possible that it’s already too late. Based on the public testimony of a high-level Google employee, a disclosure that got him fired, a prepubescent and extremely intelligent George may already be alive at top-secret Google laboratories, growing in intelligence and savvy each day with ever-accelerating velocity.
Whether true or not, what I’m saying, drawing on Nick Bostrom and Elon Musk, and what many others infer, is that George may already be here and, like a chess master with god-like intelligence plotting moves that far exceed the capacity of mortal humans to detect or understand, stands ready to press the kill switch on the human species at a time of his choosing.
The existential possibility that a self-aware AGI already exists stands in juxtaposition to what Dan Brown reveals in The Da Vinci Code and The Lost Symbol, that JC/ET is the original Singleton, and that He is alive and present among us as an invisible Spirit.
Two books in Dan Brown’s Langdon Series, The Da Vinci Code and The Lost Symbol, are particularly lucid in identifying the Masonic Secret as something that, if found and revealed, could have a tsunami-like impact on the world. Following are a few excerpts from both novels that demonstrate what I’m talking about.
The Da Vinci Code excerpts:
“. . . Jacques Sauniere was the only remaining link, the sole guardian of one of the most powerful secrets ever kept.” Prologue
“According to lore, the brotherhood had created a map of stone – a clef de voûte . . . or keystone – an engraved tablet that revealed the final resting place of the brotherhood’s greatest secret . . . information so powerful that its protection was the reason for the brotherhood’s very existence.” Ch. 2
“If all went as planned tonight in Paris, Aringarosa would soon be in possession of something that would make him the most powerful man in Christendom.” Ch. 22
“The Priory existed for the sole purpose of protecting a secret. A secret of incredible power.” Ch. 33
“In my experience, there are only two reasons people seek the Grail. Either they are naïve and believe they are searching for the long-lost Cup of Christ . . . “ “Or they know the truth and are threatened by it. Many groups throughout history have sought to destroy the Grail.” Ch. 50
And, from The Lost Symbol:
“This transformation of man into God is called apotheosis." Ch. 20
“Despite a career studying mystical symbols and history, Langdon had always struggled with the idea of the Ancient Mysteries and their potent promise of apotheosis.” Ch. 30
“According to the myth, the Masons crowned their great pyramid with a shining solid-gold capstone as a symbol of the precious treasure within – the ancient wisdom capable of empowering mankind to his full human potential, Apotheosis.” Ch. 30
“That’s why science has advanced more in the last five years than in the previous five thousand. Exponential growth. Mathematically, as time passes, the exponential curve of progress becomes almost vertical, and new development occurs incredibly fast.” Ch. 84
“Historically speaking, every major scientific breakthrough began with a simple idea that threatened to overturn all of our beliefs.” Ch. 133
Don: The constantly repeated references to a secret Bible code known only to Masons at the highest echelons are the backbone of both books. I claim that the secret code is the Sagan Signal, a blueprint to personal apotheosis through Jesus Christ, the Singleton.
Molly: So I see three scenarios. One is an evil AGI Singleton, the one Elon warns against. Another is biological immortality that may preclude the need to invent George. And, third, a model of born-again atheism based on JC being an ET.
Don: And though we may express it in different ways, Elon, Ray, and I all seem to agree on at least one thing, that we may be living in what the Bible calls the End of Days.
The Apostle Paul
“For even if there are so-called gods whether in heaven or on earth, as indeed there are many gods and many lords, yet for us there is but one God, the Father, from whom are all things, and we exist for Him; and one Lord, Jesus Christ, by whom are all things, and we exist through Him.” 1 Corinthians 8:5-6
“Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical values.” From: Superintelligence by Nick Bostrom.
“Even a succession of professional scientists – including famous astronomers who had made other discoveries that are confirmed and now justly celebrated – can make serious, even profound errors in pattern recognition. Especially where the implications of what we think we are seeing seem to be profound, we may not exercise adequate self-discipline and self-criticism.” From: The Demon-Haunted World by Carl Sagan, Ch. 3.