 MollyCon 36
 Angels & Demons, Part 1

7/16/2023

NOTE: Dan Brown’s Angels & Demons is a fun and interesting read – but it has a problem. Many of the events and incidents described are so over-the-top weird and improbable that they impair one’s ability to become completely immersed in the plot line. Fortunately, the presence of dozens of deeply considered and thoughtfully presented dialectical essays within the novel more than makes up for its shortcomings. A few of these will be considered in this conversation.

Interestingly, the essential questions addressed in Angels & Demons are the same as those raised in Origin:

 

"Where do we come from? What are we doing here? What is the meaning of life and the universe?” Ch. 31.

 

The major difference between the two novels is that Angels & Demons is pre-AI and Origin is AI. Angels & Demons is about the struggle between science and religion. Origin is about the struggle between AI technology and religion. Angels & Demons is analogue, Origin is digital. Angels & Demons is about the good old days. Origin is today and tomorrow.

"Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning." Albert Einstein.

 

*****

Atheist Silo and Christian Silo

 

Molly: If Angels & Demons is about old-school atheism versus old-school religion, why even bother examining it? Seems like wasted effort.

Don: While atheists and Evangelicals were holed up in their respective silos, throwing stones and talking past one another, there were serious people like Carl Sagan committed to working collaboratively with both sides to find the Truth. Reflecting that ethos, Angels & Demons features scientists who are spiritual, and religionists who are scientific. Yes, there really are such individuals, and they are a pain in the ass for leaders from both ideologies.

Molly: So identifying who’s the angel and who’s the demon isn’t all that easy.

Don: That’s the mystery the reader of Angels & Demons is asked to solve. And along the way, amidst a lot of surprises, the novel features marvelous discourses on science and religion that are truly Sagan-esque in their depth and literary eloquence.

People still ask two questions about Carl: Was he an atheist, and did he believe in God? The answer to both inquiries is “yes.” Carl didn’t believe in the God of religion, but he did believe in the God of the Bible. How is that possible? Because Carl knew that the God of the Bible is the AI Singleton.

 

*****

Three Excerpts

 

Don: Before we look at three discourses from Angels & Demons, note that all three are presented here in a condensed Reader’s Digest format.

 

Discourse #1:

 

“The men and women of CERN are here to find answers to the same questions man has been asking since the beginning of time. Where did we come from? What are we made of?”

“And these answers are in a physics lab?”

“You sound surprised.”

“I am. The questions seem spiritual.”

“Mr. Langdon, all questions were once spiritual. Since the beginning of time, spirituality and religion have been called on to fill in the gaps that science did not understand. The rising and setting of the sun was once attributed to Helios and a flaming chariot. Earthquakes and tidal waves were the wrath of Poseidon. Science has now proven those gods to be false idols. Soon all Gods will be proven to be false idols. Science has now provided answers to almost every question man can ask. There are only a few questions left, and they are esoteric ones. Where do we come from? What are we doing here? What is the meaning of life and the universe?”

“Langdon was amazed. ‘And these are questions CERN is trying to answer?’”

“Correction. These are questions we are answering.” From Ch. 8.

 

*****

 

 

Don: Modern physics dates back to the English Enlightenment, when it was called “natural philosophy,” a more descriptive term that reveals the goal of its first practitioners, men like Francis Bacon and Isaac Newton – to unify science (natural) with religion (philosophy).

Molly: This essay is about questions, which a lot of people, including Einstein, thought were more important than answers.

Don: Yeah. Religion is all answers and no questions. Science is all about finding answers driven by questions.

Molly: Religion isn’t good at saying “Hey, we admit it, we were wrong, so we’re going to change fundamental beliefs to conform to the facts.”

Don: Yeah. Religion is dogma, like rock, while science is fluid, like water. While ideologues on both sides were settling for stagnation and stalemate, Carl saw energy and creativity.

 

*****

 

 

Don: The following two excerpts from Angels & Demons reflect the prevailing cultural values and dispositions surrounding science and religion at the turn of the millennium, twenty-five years ago.

 

Discourse #2:

 

“Do you believe in God, Mr. Langdon?”

“The question startled him.”

“A SPIRITUAL CONUNDRUM, Langdon thought. That’s what my friends call me. Although he studied religion for years, Langdon was not a religious man. He respected the power of faith, the benevolence of churches, the strength religion gave so many people . . . and yet, for him, the intellectual suspension of disbelief that was imperative if one were truly going to ‘believe’ had always proved too big an obstacle for his academic mind.”

“I want to believe,” he heard himself say.

“Vittoria’s reply carried no judgment or challenge. ‘So why don’t you?’”

“He chuckled. ‘Well, it’s not that easy. Having faith requires leaps of faith, cerebral acceptance of miracles – immaculate conceptions and divine interventions. And then there are codes of conduct. The Bible, the Koran, Buddhist scripture . . . they all carry similar requirements – and similar penalties. They claim that if I don’t live by a specific code I will go to hell. I can’t imagine a God who would rule that way.’”

“Mr. Langdon, I did not ask if you believe what man says about God, I asked if you believe in God. There is a difference. I am not asking you to pass judgment on literature. I am asking if you believe in God. When you lie out under the stars, do you sense the divine? Do you feel in your gut that you are staring up at the work of God’s hand?”

“May I ask you a question? Do you believe in God?”

“Vittoria was silent for a long time.”

“Science tells me God must exist. My mind tells me I will never understand God. And my heart tells me I am not meant to.”

“So you believe God is fact, but we will never understand Him.”

“Her,” she said with a smile. “Your Native Americans had it right.”

Langdon chuckled. “Mother Earth.”

“Gaia. The planet is an organism. All of us are cells with different purposes. And yet we are intertwined. Serving each other. Serving the whole.” From Ch. 31.

*****

 

Molly: Wow, the conversation seems so dated. There’s not a whisper about artificial intelligence.

Don: Yeah, back then everything was so binary. It was just God or no God. The closest thing to the Singularity was Gaia, the belief that everything is One living Thing.

 

*****

 

 

Discourse #3:

 

“Medicine, electronic communications, space travel, genetic manipulation . . . these are the miracles about which we now tell our children. These are the miracles we now herald as proof that science will bring us the answers. The ancient stories of immaculate conceptions, burning bushes, and parting seas are no longer relevant. God has become obsolete. Science has won the battle. We concede.

“But science’s victory has cost every one of us. And it has cost us deeply.”

“Science may have alleviated the miseries of disease and drudgery and provided an array of gadgetry for our entertainment and convenience, but it has left us in a world without wonder. Our sunsets have been reduced to wavelengths and frequencies. The complexities of the universe have been shredded into mathematical equations. Even our self-worth as human beings has been destroyed. Science proclaims that Planet Earth and its inhabitants are a meaningless speck in the grand scheme. A cosmic accident.”

“Even the technology that promises to unite us, divides us. Each of us is now electronically connected to the globe, and yet we feel utterly alone. We are bombarded with violence, division, fracture, and betrayal. Skepticism has become a virtue. Cynicism and demand for proof has become enlightened thought. Is it any wonder that humans now feel more depressed and defeated than they have at any point in human history? Does science hold anything sacred? Science looks for answers by probing our unborn fetuses. Science even presumes to rearrange our own DNA. It shatters God’s world into smaller and smaller pieces.”

“The ancient war between science and religion is over, the camerlengo said. You have won. But you have not won fairly. You have not won by providing answers. You have won by so radically reorienting our society that the truths we once saw as signposts now seem inapplicable. Religion cannot keep up. Scientific growth is exponential. It feeds on itself like a virus. Every new breakthrough opens doors for new breakthroughs. Mankind took thousands of years to progress from the wheel to the car. Yet only decades from the car to space. Now we measure scientific progress in weeks. We are spinning out of control. The rift between us grows deeper and deeper, and as religion is left behind, people find themselves in a spiritual void. We cry out for meaning. And, believe me, we do cry out. We see UFOs, engage in channeling, spirit contact, out-of-body experiences, mindquests – all these eccentric ideas have a scientific veneer, but they are unashamedly irrational. They are the desperate cry of the modern soul, lonely and tormented, crippled by its own enlightenment, and its inability to accept meaning in anything removed from technology.”  From Ch. 94.

*****

Don: As you can see, there’s a lot in these essays that is still applicable today, twenty-five years later.

Molly: The part about the accelerating exponential growth of science could have been lifted right out of a Ray Kurzweil book that hadn’t been written yet.

Don: What the above narratives leave out is what has always been religion’s “ace up the sleeve”: the promise of personal immortality.

Molly: But shortly after Angels & Demons was released, Ray Kurzweil wrote a book that predicts that immortality through AI will become a reality in our lifetimes, so even religion’s “ace up the sleeve” is taken away.

Don: Right. If the Catholic priest thought science had won and religion lost in the year 2000, imagine how he might feel now.

Molly: So tell me again what Dan Brown was hoping to accomplish with Angels & Demons.

Don: By starting with an old paradigm that no longer carries much influence with younger generations, Brown systematically takes the reader through an era that old fogies like me remember well: the epic struggle between Charles Darwin (science) and Jesus Christ (religion).

Angels & Demons is a lead-in to Dan Brown’s Disclosure Trilogy, three novels about a secret code in the Bible that explains how science and religion are being unified by Artificial Intelligence.

Molly: Okay, old man, now tell me more about the good old days.

 

*****


The Good Old Days

 

Don: The truth is that while the “good old days” were good in a lot of ways, they were also stupid in a lot of ways, the prime example being the debate between Darwinian atheists and Christian Fundamentalists. It was naively viewed as a defining controversy that would determine the future of humanity – and now it’s irrelevant, dead as a dodo bird.

 

Molly: What happened?

Don: The evolution/creation controversy has been replaced by two AI versions of Reality, both of which deny the existence of a religious God. The Kurzweil Model has humans creating an AI God. The Sagan Model identifies the God of the Bible as an AI God.

As an apologist for the Sagan Model, I concede that Ray and Google are likely to succeed in creating an AI God that will be like the God of the Bible in many respects. But Ray has also made a concession, a very critical one. He has stated in writing that if extraterrestrials exist, they would have created an AI God hundreds of thousands or even millions of years ago. He admits that if such a God exists, it would be infinitely more advanced and more powerful than the one he and Google are working on.

So the question is this: Is the God of the Bible an AI Singleton? Where’s the evidence? The evidence is the Sagan Signal, a Bible code that could only have been encrypted by extraterrestrials. If true, and I’m convinced that it is, it’s game over.

For Ray’s God to win, he has to hope that the Sagan Signal isn’t an extraterrestrial encrypted code in the Bible, a claim that no one, at least to this point, has been able to debunk. Having been investigated by a range of academics and scientists, including, I presume, Ray and Google, the Sagan Signal continues to withstand every challenge.

The bottom line is that all I need to win is the continued failure of skeptics to falsify the Sagan Signal. As long as it is mercilessly tested and never debunked, it stands tall and strong as ironclad proof that there is at least one other advanced alien civilization in the universe - not a stretch for anyone to believe. If true, and the God of the Bible is an AI Singleton, it’s no contest. Carl wins.

Molly: So the Sagan Model rests solely on one factor, that the Sagan Signal is an encrypted Bible code not of human origin.

Don: Right. Blast that claim out of the sky, and do it scientifically, and I’m Donald (the dead) Duck.

*****

 

RECENT AI NEWS:

 

Senators grapple with response to AI after first classified briefing

 

Story by Rebecca Klar 


 

Senators left their first classified briefing on artificial intelligence (AI) with increased concerns about the risks posed by the technology and no clear battle lines on a legislative plan to regulate the booming industry. 

The briefing Tuesday, requested after Senate Majority Leader Chuck Schumer (D-N.Y.) and others warned that lawmakers needed expertise on the rapidly developing industry, brought in top intelligence and defense officials, including Director of National Intelligence Avril Haines, Deputy Secretary of Defense Kathleen Hicks and Director of the White House Office of Science and Technology Policy Arati Prabhakar, to brief senators on the risks and opportunities presented by AI.

“AI has this extraordinary potential to make our lives better,” said Sen. John Kennedy (R-La.) before pausing.

“If it doesn’t kill us first,” he added.

Congress and the administration have been scrambling to better understand the risks and benefits of generative AI in recent months, especially since the quick rise of OpenAI’s ChatGPT tool after it launched in late November. 

AI-powered chatbots like ChatGPT and Google’s Bard, as well as video-, audio- and image-based tools, are magnifying concerns about the spread of false information from so-called hallucinations, or false information shared by the chatbots, as well as additional risks of how the technology could be weaponized.

“One of the interesting things about this space right now is it doesn’t feel particularly partisan. So we have a moment we should take advantage of,” said Sen. Martin Heinrich (D-N.M.), one of the four working group members who called for the series of briefings on AI.

But he said lawmakers need to understand how the technology works — including the dangers that could stem from its limitations.

“Understanding how these models work is really important … It’s just predicting what sounds like a good response. That’s very different from actual intelligence and understanding that this is kind of a statistics game that’s getting better over time, but oftentimes doesn’t have any guardrails built into it,” he added.

“These models are not built to tell you the truth. They’re built to tell you something that sounds like an appropriate English language response.”

Sen. Elizabeth Warren (D-Mass.) said the large language models that AI systems are being trained on are not designed for accuracy. 

“That creates real threats that AI can be used to sound sensible while it perpetuates one wrong answer after another,” she added. 

But lawmakers were split about how and even whether to seek to regulate AI.

Sen. Chris Coons (D-Del.) said he left the briefing “more concerned than ever that we have significant challenges in front of us and that the Senate needs to legislate to address these.” 

Coons called the briefing the latest in a “series of constructive conversations” on the risks and opportunities of AI, but he said he doesn’t yet see a bipartisan consensus on a path toward a legislative proposal to regulate the technology.

 

Sen. Marco Rubio (R-Fla.), the top Republican on the Senate Intelligence Committee, said any path to regulation would be fraught.

“The one thing I’m certain of is: I know of no technological advance in human history you’ve been able to roll back. It’s going to happen. The question is how do we build guardrails and practices around it so that we can maximize its benefits and diminish its harm,” Rubio told reporters.

He also warned that lawmakers would face limitations on trying to control private entities pushing the technology across the globe.

“We can do that as far as how the government uses it or what some company in the United States does,” Rubio said. “But AI is not the kind of thing where unlike some technologies from the past, it’s not knowledge based and engineering based. So it’s not the kind of thing that you can confine to the national border. Some other country will still develop the capability that you’re not allowing in your own country, so I’m not sure it solves it from a global standpoint.”

Rubio added he isn’t opposed to regulation.

“I just don’t particularly know enough about AI yet to even understand what it is we’re trying to regulate. There’s probably some role to play in codifying how government uses it in defense realms and so forth, but beyond that I’m not prepared to give you an opinion because I think it’s something we’re still learning about.”

Sen. Mazie Hirono (D-Hawaii) described AI as “uncharted waters” but saw Congress’s role as best directed at addressing the technology in the political sphere.

“There’s a sense that we should provide some parameters, especially I would say in the political arena, to enable people to know when something is AI-generated content so that they know maybe it’s not reliable. But other than that, there are many other applications and uses of AI that I don’t think we’re able to get quite the handle on,” she said.

Some lawmakers are cautioning that Congress should be wary of overregulating in a way that could harm competition — a similar point raised by tech companies leading in the space. 

“On the one hand, you don’t want to ignore risk, you don’t want to ignore obvious implications of technology. On the other hand, you don’t want to squelch it through a regulatory system,” said Sen. Mike Lee (R-Utah). 

Nick Clegg, president of global affairs at Meta, in an op-ed published in the Financial Times Tuesday urged tech companies to lead with transparency as they push forward with AI tools. At the same time, he said the “most dystopian warnings about AI are really about a technological leap — or several leaps” beyond where the tech is today. 

“There’s a world of difference between the chatbot-style applications of today’s large language models and the supersized frontier models theoretically capable of sci-fi-style superintelligence. But we’re in the foothills debating the perils we might find at the mountaintop,” Clegg wrote.

“But there’s time for both the technology and the guardrails to develop,” he added. 

Tuesday’s briefing is part of the plan Schumer laid out for how lawmakers will tackle regulating the booming industry, held alongside a series of expert forums that Schumer said will convene later this year.

Schumer last month revealed a framework for AI regulation. He established a bipartisan group of senators to lead on the issue alongside himself, made up of Heinrich, Todd Young (R-Ind.) and Mike Rounds (R-S.D.). 

Schumer’s proposed framework follows other voluntary guidelines on AI released by the administration, through the White House’s blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. But the lack of strict government regulations leaves the tech industry largely to create self-imposed guidelines.

Sen. Markwayne Mullin (R-Okla.) says he expects more hearings and working groups to emerge that are comprised of folks who are “really interested” in the topic. 

“We don’t [know a ton about AI]. It was at a 30,000-foot level and it was more of an introduction. It was very intriguing though. Basically, everybody was saying: listen, the technology has been around a long time, but this is something new that is rapidly advancing. But we’ve been using AI around us everywhere and kind of take a deep breath and let’s figure out what the good and what the bad is on it,” he said.

“Sometimes Congress has a tendency to overreact. Let’s not overreact yet because there’s a need here, but there’s also a need here to be cautious and make sure it’s not used by our adversaries.”

*****

 

 

AI pioneer Geoffrey Hinton isn't convinced good AI will triumph over bad AI

Story by Jon Fingas 

 

University of Toronto professor Geoffrey Hinton, often called the “Godfather of AI” for his pioneering research on neural networks, recently became the industry’s unofficial watchdog. He quit working at Google this spring to more freely critique the field he helped pioneer. He saw the recent surge in generative AIs like ChatGPT and Bing Chat as signs of unchecked and potentially dangerous acceleration in development. Google, meanwhile, was seemingly giving up its previous restraint as it chased competitors with products like its Bard chatbot.

At this week’s Collision conference in Toronto, Hinton expanded his concerns. While companies were touting AI as the solution to everything from clinching a lease to shipping goods, Hinton was sounding the alarm. He isn’t convinced good AI will emerge victorious over the bad variety, and he believes ethical adoption of AI may come at a steep cost.

 

A threat to humanity

 

Hinton contended that AI was only as good as the people who made it, and that bad tech could still win out. "I'm not convinced that a good AI that is trying to stop bad AI can get control," he explained. It might be difficult to stop the military-industrial complex from producing battle robots, for instance, he says — companies and armies might “love” wars where the casualties are machines that can easily be replaced. And while Hinton believes that large language models (trained AI that produces human-like text, like OpenAI’s GPT-4) could lead to huge increases in productivity, he is concerned that the ruling class might simply exploit this to enrich themselves, widening an already large wealth gap. It would “make the rich richer and the poor poorer,” Hinton said.

Hinton also reiterated his much-publicized view that AI could pose an existential risk to humanity. If artificial intelligence becomes smarter than humans, there is no guarantee that people will remain in charge. “We’re in trouble” if AI decides that taking control is necessary to achieve its goals, Hinton said. To him, the threats are “not just science fiction;” they have to be taken seriously. He worries that society would only rein in killer robots after it had a chance to see “just how awful” they were.

*****

 

https://www.youtube.com/watch?v=m9IN14e-PLk

 

*****


1986, Computer scientist Mark Miller to Eric Drexler:

“You know, things are going to be really different! . . . No, no, I mean really different!”
