Will AI Make Us Safer?

Imagine a computer glitch that sends a signal from the space-based missile launch warning system that an ICBM attack has begun. An automated battle management center, driven by Artificial Intelligence (AI) data analysis and decision making, responds in seconds. The result is the beginning of an exchange of thousands of nuclear-tipped missiles that destroys life on this planet.

Do we want high-consequence decisions under time-urgent, confusing conditions to be made by AI without a human in the loop? Will we trust the judgment of the non-human decisionmaker if it is contrary to the subjective feelings of the humans in charge? In the sci-fi movie 2001: A Space Odyssey, the computer HAL responded to an instruction with the famous line, “I’m sorry, Dave. I’m afraid I can’t do that.” In the future, HAL may send the message, “I’m sorry, Dave, that I destroyed the world, but I had no choice.”

High-consequence decision making is a challenge for humans under the best of conditions, but the situation is often further constrained by sleep deprivation, anxiety, stress, miscommunication, misunderstanding, superstition, irrational fear, and other pressures. Under such conditions, decisions might be better carried out by HAL. Maybe a decision devoid of feelings of regret would be superior to the judgments of irrational, overwrought human beings.

Throughout history, humans have made mistakes with vital decisions, and significant negative outcomes have been the result. In this post I describe several instances where, in my opinion, the key decision makers would have been more successful with real-time, extensive data collection and AI analysis coupled with the decision system provided by one or more high-performance 20-watt computers, namely their brains. On the other hand, in some cases, humans may not always be the best qualified to make the final decisions due to their deeply held beliefs and biases.

One widely recognized decision (one that some historians describe as the decision that “saved the world”) took place in September 1983. Stanislav Petrov, the officer in charge of monitoring data from the Soviets’ satellite early warning system, detected signals indicating an ICBM attack. He decided it was a false alarm and did not initiate the chain of events that could have resulted in nuclear war.

Contrast this with the 1988 incident when the USS Vincennes mistakenly shot down an Iranian passenger jet. The commander faced a mix of accurate and misleading information, resulting in a tragic error. In such a scenario, AI could potentially offer real-time analysis and reduce human error—provided it’s designed to support rather than replace human judgment.

AI has the potential to enhance decision-making, especially when combined with human expertise. There are situations where human biases, deeply ingrained beliefs, and emotional responses may cloud judgment, making AI’s objectivity an asset.

The importance of human decision making played a major role during the 1980s, when I was involved in developing the battle management plans for Ronald Reagan’s proposed missile defense program, labeled in the press as Star Wars. When faced with the detection of a Soviet missile attack, Reagan decided it would be better to defend rather than to retaliate. Defending against a highly capable ICBM attack would require deciding in minutes to initiate the launch of a defensive system. AI wasn’t available then, but had it been, would we have turned to it for data handling, analysis, and decision support to help determine whether an attack was real and to help plan our response? Perhaps such split-second decisions in the face of overwhelming complexity would have been better handled by a combination of AI and human judgment than by humans alone.

Another example from history illustrates how life-and-death decisions can be made incorrectly. The question is, would AI assistance have made any difference in the outcome? In my role as the Strategic Defense Initiative chief scientist, I was assigned to negotiate deals to advance the SDI concept, so I met with Karl Beckurts, the research director of Siemens, one of the largest electronics companies in the world. My assignment was to persuade him to have his company join us. We had a rather fancy lunch, and our discussion was quite pleasant, but he made it clear that he, speaking for his corporate leadership, had decided that they had no interest in working with the United States on the SDI program. That lunch may have cost him his life.

On July 9, 1986, near Munich, West Germany, Beckurts was killed when a remote-controlled bomb detonated as his limousine passed a tree on his established route to work. A seven-page letter from the Red Army Faction (RAF), undoubtedly under the direction of the KGB, was found near the location of the bombing. The letter said that he had been killed because he was preparing to support Ronald Reagan’s program to develop a space-based missile defense system called the Strategic Defense Initiative.


The Soviet decision to take Beckurts’ life is an example of flawed human decision making based on false perceptions and misunderstandings. But, even if AI support had been available to the Soviets during the 1980s, I doubt they would have turned to it. You see, they had already made up their minds about the threat from SDI.

Much like the Germans, both the French and the British decided not to work with the United States on developing missile defense. One nation made a different decision. This country was interested in engaging in a joint program to develop a long-range missile defense system, and they were willing to share the cost with us. Their goal was not to help us but to defend themselves. Their decision seemed to me straightforward, requiring no complex deliberation: they were surrounded by enemies who posed an existential missile threat. The cooperative program was called Arrow, and we established a development program that has been continually advanced over the years. The most advanced version, Arrow 3, a Boeing/Israel Aerospace Industries program, is reported to have successfully intercepted a ballistic missile attack from Iran, and the defense system is now being upgraded. It appears that the Israeli decision paid off.

I want to close by emphasizing my belief that human decision-making processes are the best way to deal with the ambiguity of complex issues, but that in some cases people could also benefit from technical assistance provided by AI. Ten years ago, I patented a concept for human brain cognitive enhancement. I explore this idea in my latest science fiction novel, The Dragon’s Brain, which will be released in October 2024. I am working on the final book in this series, in which I consider whether our decision makers have the wisdom, perspective, and values to make the right decisions to preserve what seems to be an increasingly unstable world. Today, Iran is threatening to launch a retaliatory attack on Israel, and President Biden is trying to persuade Netanyahu to agree to an end to the war in Gaza. By the time you read this, the decisions of people based on their own brains, without any help from computers, could impact world peace.

The Russian Nuclear Space Weapon

After Mike Turner, the chair of the House Intelligence Committee, warned the public about what he called a “serious national security threat,” and the White House confirmed that the Russians are developing a “troubling anti-satellite weapon,” I was motivated to add some historical perspective based on my study of directed energy weapons.

Garin Death Ray

H.G. Wells was the first to invent the concept of directed energy weapons, also known as death rays, in his 1898 sci-fi novel “The War of the Worlds.” He was far ahead of his time. The Soviets were next to join the death ray fiction world. In the 1920s, Aleksey Tolstoy published a sci-fi novel, “Garin’s Death Ray,” that focused not just on the weapon but also on the psychology of the inventor. The novel attracted the attention of the Russian military, and its non-technical themes proved prophetic. The hero of the book, Garin, invented a beam weapon that was incredibly destructive, but it also drew many detractors who claimed, “This invention smells of higher politics.”

After the recent revelations, CNN published an “exclusive account” reporting that Russia is attempting to develop a nuclear space weapon that would destroy satellites with a massive energy wave when detonated, potentially crippling a vast swath of commercial and government satellites, crossing a dangerous Rubicon in the history of nuclear weapons, and disrupting everyday life. The story was picked up in the European “Pravda,” which reported, “The weapon has the potential to destroy entire groups of small satellites such as Space X’s Starlink used by Ukraine in the war with Russia…and Moscow perceives the U.S. statements as an attempt to persuade Republicans in Congress to approve assistance for Ukraine.”

This revelation had similarities to my 2023 science fiction techno-thriller “The Dragon’s CLAW,” which describes a test of a secret low energy nuclear (cold fusion) weapon, initially discovered at Los Alamos, stolen by the Chinese, and tested for the first time on a remote island in the Pacific with surprising results. I wrote: “The results of the first trial of the Dragon’s CLAW had exceeded all of the researchers’ expectations, but not in a good way. The energy output was 10 times higher than anticipated, and it had created a giant electromagnetic pulse…revealed the existence of a new, tremendously energetic and very dangerous device…could destroy any nation’s electric grid infrastructure and all space-based communication, along with GPS.”

I wondered whether Tolstoy’s tale, the CNN account, and my own fictional invention were all just exercises in imagination. Then I remembered something about a secret Soviet program that I had learned from Sidney Drell, the Stanford physics professor and arms control expert, when I was researching the impact of Reagan’s Star Wars program on the end of the Cold War. I documented this in my autobiographical SDI story, “Death Rays and Delusions.” From information published in 2007 by Drell and George Shultz, the former secretary of state who was directly involved in the 1986 Reykjavik summit between Reagan and Gorbachev, I learned that the Russians were very aware of and concerned about the development of nuclear-driven electromagnetic weapons.

Gorbachev was told that the United States was developing nuclear-driven directed energy weapons. He was informed that the “design concept for directed nuclear weapons, work on which began in the U.S. in the 1970s…weapons consist in transforming part of the energy from a nuclear explosion into powerful streams of directed x-rays or electromagnetic radiation or a stream of high energy particles. No less than three tests were conducted towards the creation of directed electromagnetic radiation weapons.”

He was also told, “Full scale development of these weapons is expected to occur in the second half of the 1990s.” Gorbachev was encouraged to negotiate a “ban on nuclear testing to prevent full scale development of directed energy weapons” and to prevent “military technical superiority of the U.S. in the development of munitions of the new generation for strategic weapons.” The Soviets’ concern about the possible development of nuclear directed energy weapons continued into the 1990s, as evidenced by the writings of the head of their nuclear weapon program, who in 1996 called for an end to the development of such weapons, which he called “an evil Jinn.”

In my novel, “The Dragon’s CLAW,” the fictional Los Alamos lab director successfully argued, “This is the ideal moment to admit that the competition over space weapons would ultimately be mutually destructive for both countries… We must draft an agreement to end all space weapons development and cooperate with energy research. The future of humanity depends on us.”

I was surprised when my fiction seemed to match some of the recent CNN revelations, but I admit that my creativity can hardly keep up with the thinking of energy weapon advocates who continue to be intrigued by new ideas for powerful laser death rays and microwave weapons. Some claim that such energy beams are the cause of the Havana Syndrome, which has afflicted the minds of some 2,000 U.S. officials in diplomatic posts worldwide; a CIA panel of experts concluded that “directed pulsed radio frequency energy” is a plausible explanation for some of these cases. A microwave expert, James Benford, stated that the syndrome “certainly fits with a microwave beam as the attacking element,” but neither he nor the CIA suggested who the supposed foreign adversary might be.

I will leave it to the readers of my series of books (see projectzbooks.com) to find out what happens in the world of fictional electromagnetic brain weapons, but the technology development and its consequences continue in my next book, “The Dragon’s Brain,” to be published in September. My novel describes how electromagnetic energy weapons can attack not just space satellites but also the minds of diplomats, and even the entire population of the world. Who knows where fiction ends and reality begins, but the work of H.G. Wells told us to be prepared for inventions we can hardly imagine.

Truth versus Fiction

Truth may be stranger than fiction, but fiction is more fun.

At the end of 2022, when Lawrence Livermore National Laboratory achieved a major fusion breakthrough, my novel, The Dragon’s C.L.A.W., was already at the printer. This struck me as amusing, since the book tells the story of a fictional clean energy breakthrough. In the novel, scientists at Los Alamos National Laboratory create a compact, clean, low-cost energy source using electron beams to trigger a Low Energy Nuclear Reaction that generates electromagnetic energy, which is then converted directly into electricity.

Russia’s 1975 electron beam fusion reactor

The fictional breakthrough discovery is an accident that generates a thousand times greater energy output. Beyond the intended entertainment, my book’s first basic message is that surprises happen in research, where imagination, creativity, and enthusiasm are as important as careful, well-founded analysis. My second theme is that a discovery of new science is like a knife: a knife can be used to butter your bread or slit someone’s throat. Technology is a double-edged sword. I believe that there will always be applications of scientific achievements that are both civilian and military, that can be used for peaceful innovation or for weapons of war. I also believe that there will always be people who can invent and stimulate ideas, as well as people who know how to stand in the way of progress. The path to scientific innovation often involves the sort of characters that appear in the pages of The Dragon’s C.L.A.W.

I spent much of my career striving for a breakthrough that could lead to clean, unlimited energy. Now, as an author, I have created a fictional breakthrough that reaches that goal. So naturally that raises the question: will scientists achieve that fusion goal in real life? When it comes to recent fusion breakthroughs, the rhetoric is exciting and invigorating. Examples of recent, not-too-specific government statements on fusion include “a game changer for efforts to achieve President Biden’s goal of a net-zero carbon economy” and “new ways to power our homes and offices in future decades.” When I read such announcements, I cannot help but remember Reagan’s 1983 Star Wars speech, which declared that the goal of his missile defense program was to make “nuclear weapons obsolete.” The outcome of the Reagan initiative was not technical but strategic and political, an event that took place at Reykjavik, Iceland, in 1986, as told in my Potomac Institute article, It’s Laboratory or Goodbye.

Another famous presidential call to action was Kennedy’s 1962 challenge to “land a man on the moon” by 1970. In my first year as a grad student, I listened to a detailed Caltech colloquium following the Kennedy speech and came away convinced that the technology was already well developed, that the achievement was not far off, and that a race with the Soviets would provide plenty of political support for the program. Kennedy’s words shaped public enthusiasm for the space program. Words can change the way people think about science. Words can change the way governments fund science.

This approach to imagining and planning for a very distant future suggested to me a story that begins with “It was a dark and stormy night.” The story is about two cavemen who sat in the cold, dark dampness of their cave when a bolt of lightning struck and, for the first time in the history of human development, ignited a pile of wet branches at the mouth of their cave. The wet wood blazed into a growing fire rather than the thin wisp of smoke they had previously experienced. One caveman could hardly believe that a lightning bolt could create a roaring fire in wet wood. He was astonished, warm, and happy, and started to roast a small rodent on a stick. But the other, probably one of the first human engineers, spoke up: “What if the lightning bolt ignited a new reaction that transformed the wood into new materials and created a way to make cheap, clean, inexhaustible energy?”

If you want to spend more time thinking about the scientific process, the quest for inexhaustible energy, and the unavoidable connection between peaceful innovation and military applications, pick up a copy of The Dragon’s C.L.A.W. at your local bookstore or order it online.

Fiction may be the answer

In 1985, the magazine “Science Digest” featured a debate between me and Hans Bethe, the 1967 Nobel Prize winner in physics and my former quantum mechanics professor at Cornell University. The question was whether President Reagan’s Strategic Defense Initiative (SDI) could be effective against nuclear-tipped Soviet missiles. Bethe’s answer was a definite “No.”

Bethe’s most persuasive argument was, “The entire system could never be tested under circumstances that were remotely realistic.” He did not wish to tackle the psychology of deterrence. He focused on the technical issues instead.

The United States was already living with the concept of mutually assured destruction, which I knew could not be tested either. I argued it was too soon to discuss the effectiveness of any hypothetical defense system. I believed a research program was justified and would be needed in order to influence the perception of a new and safer approach to strategic stability.

There was one area of technology development that concerned me: the requirement that the split-second events in a war would have to be managed by computer software. Back then I was basically Reagan’s Ray Gun Guy, and I did not know anything about testing software. Today, it looks like Bethe was right about the importance of testing. But there’s still something he missed.

Here’s where I think Bethe went astray: testing is all about technology, but deterrence is far more complicated. The vital issues in creating a credible deterrent are not just technology, but economics, social issues, political arrangements and psychology. I learned over the years that such problems really have no final solution, and continuing to pursue the answer often leads to alternating periods of hopeful optimism and depressing pessimism… and sometimes, but not always, real progress. My published opinion was that the outcome of the SDI program would “depend not only on the technology itself, but also on the extent to which the Soviet Union either agrees to mutual defense agreements and offense limitations…no definitive predictions of the outcome can be made.”

My feelings were ambivalent. I struggled to communicate the complexity of the issue to my scientific and political colleagues. I found it even more difficult to explain the questions surrounding SDI to the news media. But one person got it. He was a cartoonist.

In the 1980s, Berkeley Breathed, the cartoonist behind the series Bloom County, created a cartoon about me, the Chief Scientist of Reagan’s SDI, aka the Star Wars program. He depicted me as a chubby penguin named Opus, who claimed that enormous sums of money would be needed to develop a “space defense gizmo.” When Opus learned that the unlimited money was not forthcoming, he screamed, “Physicists need Porsches too,” and then mused that maybe “the days of wines and roses are over.” Breathed understood the reality of my job.

I had been challenged with helping to put together a $25 billion, five-year plan for a research program to accomplish Reagan’s goal of “rendering nuclear weapons obsolete.” After the plan was finished and delivered to the Secretary of Defense, I wrote that even if the research was wildly successful, any workable missile defense would have to go along with a comprehensive arms control treaty that greatly reduced our own offensive capabilities as well as the threat. In spite of my published doubts, the following year I was asked by the program’s newly chosen manager, General James Abrahamson, to be his deputy and chief scientist. We brought together a distinguished advisory group including Edward Teller, the “father of the H-bomb”; Bernard Schriever, the retired four-star general and father of our nation’s first ballistic missiles, which responded to the Soviet threat posed by Sputnik in 1957; Simon Ramo, the father of the engineering behind that first ballistic missile technology; Fred Seitz, former head of the National Academy of Sciences; and me.

During my two years in the Pentagon, I was faced not only with many serious detractors, but also with many incidents that could have been the source of high anxiety. I realized the contradictions, irony and exaggeration in the program were inescapable. I managed to approach the many stressful moments with humor that I often expressed in satirical memos and comments that were not always appreciated by my boss. But when dealing with complicated issues, there are no simple solutions. The best you can do is hang on to your sense of humor and keep trying to help other people understand your point of view.

As a cartoonist, Breathed understood that. His fictionalized depiction of the Star Wars dilemma summed up the situation succinctly. Reflecting on his cartoons years later, I wondered if perhaps Breathed had the answer to explaining the ambivalence I faced during my time in the SDI program. In fact, the contradictory issues related to nuclear deterrence are something all scientists working in national defense face.

So, taking my inspiration from Breathed’s penguin, I have decided to try my hand at writing fiction. This spring, I will launch the first in a series of novels about the complex interaction between science and politics. Stay tuned for more information in future posts.