Can Trump’s Golden Dome Make Us Safe?

Last year Donald Trump announced that “we must be able to defend our homeland, our allies, and our military assets around the world from the threat of hypersonic missiles, no matter where they are launched from.” After his election, he called for a program labeled the Golden Dome, and he requested a plan with no limit on cost to achieve his goal. This brought back many memories from 40 years ago.

Although I had been involved for many years with, and frustrated by, the rather slowly advancing R&D related to space-based missile defense, I became intrigued by new ideas after a lunch conversation with the brilliant and creative physicist Freeman Dyson. I had become convinced that the tactics and technology needed to counter a massive missile attack would always fail; I was sure that the offense would always have the advantage. Dyson introduced me to a more interesting way of looking at this complex issue. He told me about his concept of a quest that would “allow us to protect our national interests without committing us to threaten the wholesale massacre of innocent people.” He argued on moral grounds for “a defensive world as our long-range objective … and the objective will sooner or later be found, whether the means are treaties and doctrines or radars and lasers.”

This quest became my full-time occupation after President Reagan’s March 23, 1983 speech, in which he called for a program “to make nuclear weapons impotent and obsolete.” As a result, I was asked by Harold Agnew, the former head of Los Alamos Lab, to help put together a plan to implement the President’s challenge. The plan delivered to the President in September 1983 consisted of a collection of poorly defined technologies and called for a five-year, $25 billion investment to answer the question of whether there could someday be a defense. Because I had helped to create the plan, I was asked in 1984 to become the chief scientist for Reagan’s Strategic Defense Initiative. I found my assignment was primarily public relations, as the actual research work was dominated by the question “will it make us safe,” and I spent many days and weeks trying to explain to the detractors what “it” was.

During this time, I often found myself in debates with notable opponents. I vividly remember my debate with Hans Bethe, Nobel Laureate in physics, who also happened to have been my quantum mechanics professor in 1961 at Cornell University. Our debate was published in Science Digest in an article entitled “Can Star Wars make us safe?” Bethe answered no, and he was joined in his opinion by practically all of the academic scientists at the time. They argued that we had no plausible way to accomplish what they thought was Reagan’s goal: to protect all of us from the threat of nuclear-tipped ballistic missiles. I argued that the answer was yes, but I changed the definition of the goal to be more in keeping with my understanding of what Reagan really wanted, and in keeping with the wisdom of Dyson. Today, the demands for protection against the threat are much more complex, including hypersonic missiles, cruise missiles, anti-satellite weapons, and cyber-attacks. In fact, one of the scariest threats would be contagious bioweapons spread by swarms of crop sprayers launched from submarines near our coasts. But my answer to the question, will “it” make us safe, is still in the affirmative.

As before, the arms control experts have spoken out to explain that “it” just cannot be done. They repeat the same old arguments: it won’t work, it is too expensive, and it will create entirely new strategic instabilities. The question I asked at the time was “what is it,” and I think that is the right question to be considered now.

So, what about now? Are we still arguing about “it” without understanding what it is? In my view, “it” is not about how to win the ultimate global war using space-based weapons; it is about how to prevent war. Maybe, with recent advances in technology, we can find new ways to accomplish that through a new approach to deterrence involving a shared approach to a stable combination of defense and offense technology development. We will first need a breakthrough in vastly improved trusted communication and decision making in the face of confusion, chaos, threats, and fundamental disagreements. With the proliferation of advanced offensive weapon technology, we need to find a new, more hopeful path. Maybe there could be a stable system to prevent war through technology-enhanced information sharing, reduced offensive threats, and deterrence that prevents the initial steps toward war.

But I recall that Bethe’s final argument in our debate was that any defense could not be trusted, since it could not be tested under realistic conditions. I argued that we have already learned to live with deterrence that cannot be realistically tested, since deterrence is ultimately a question of psychology involving human decision making. It is conceivable that complex reasoning-based information management and decision making could be assisted by AI that carries out simulated tests of a semi-infinite number of complex combinations of events and human decisions.

I remember when I was asked by Harold Agnew to lead the group that would deliver a plan for the beam weapons component of the SDI. He said in a hushed tone that I had to take very seriously his warning that my job would be “very, very dangerous.” He said I could easily be trampled by the stampede of contractors going after funding. He was not encouraging, to say the least, and within a matter of weeks he walked away from involvement. He never understood the Reagan goal of the program and was definitely opposed to any thought of nuclear weapons abolition. His concept of safety was the threat of destruction.

The “it” is still hard to define and has not become easier, but President Trump says there should be a way to protect us, and there should not be any limit to the amount of investment.  Maybe the “it” is a safe future world, and then the question is… can the Golden Dome make us safe? Let’s see what “it” is in the plan soon to be delivered to the President.

Time flies, but technology has its own schedule.

The Zenith Star mockup was marketed in 1987 to Ronald Reagan as a near-term deployable space-based laser.

The purpose of this blog post is to consider the evolution of military technology and explore what has happened in the last 80 years and what the future might hold. Before World War II, technology was rather primitive by today’s standards, but it was on the verge of dramatic changes driven by the necessities of the war. Nuclear weapons, long-distance rockets, and computers were about to appear, and global conflict was the catalyst for dramatic advances. But then the deployment of new weapons followed a slower schedule, with decreased military requirements and funding competition from non-military investments. Even though technology budgets were somewhat restrained, Eisenhower still warned against the establishment of a “military-industrial complex,” and he worried about the size of the defense industry that had grown in the 1950s.

Within a few decades, however, new technologies began to appear, driven by the threats emerging from the Cold War and by the emergence of new weapon concepts, new aircraft, advanced materials, directed energy weapons, and the ever-increasing speed and memory of computers. By the 1980s, there were many new weapons investments: deployment of intercontinental ballistic missiles carrying multiple warheads, a reusable space shuttle, directed nuclear weapons, particle beam weapons, high-power chemical lasers, and space-based sensors. Many enthusiastic aerospace engineers in the United States and the Soviet Union prowled the halls of their military organizations marketing new missiles as well as space and laser weapons, to no avail. In the United States, budgets were tight, and national debt was feared more than the Soviet Union. But then the announcement by Reagan in 1983 of his goal to eliminate the threat from nuclear weapons rapidly accelerated a new nationwide technology development program that started to receive slowly increasing funding in 1985.

In 1983, Ronald Reagan announced the plan to develop the Strategic Defense Initiative.

Technology advocates soon had some political and technical setbacks. Congress passed the 1985 Balanced Budget Act, and the Pentagon faced many competing investments. Even though the Pentagon had been investing in many new concepts such as space-based lasers, Congress was not enthusiastic about giant programs. Even with the likely suspects, such as Edward Teller, selling his nuclear-pumped x-ray lasers, the majority of the scientific community was skeptical. Then the Shuttle Challenger, which the SDI program planned to use for early deployment of space weapons, exploded shortly after liftoff. Nevertheless, many new programs were slowly getting started, and Reagan was reelected with support for a continued emphasis on increasing defense budgets. U.S. technology advances were just beginning, with very slow progress in high-power lasers but real progress, primarily in the evolution of ground-launched defensive missile systems, when the Soviet Union began to fall apart.

One year later, Reagan and Gorbachev signed an arms control agreement banning intermediate-range nuclear missiles, which represented a serious strategic stability problem because of the short flight times of such missiles to and from Europe. The treaty remained in force for more than 30 years, until the United States withdrew after Russia violated it. Recently, the technology competition heated up again when Russia attacked Ukraine with such a missile, carrying six non-nuclear warheads, each with six submunitions, a missile fully capable of delivering nuclear weapons and attacking Europe in minutes. Now the need for missile defense has been made obvious to even the most dedicated advocates of arms control. Even more troublesome is the fact that the Russians have a well-developed capability to deploy methods, called penetration aids, to defeat any ground-based missile defense.

In April 1986, a reactor at the Chernobyl nuclear power plant exploded, contaminating Europe with radioactive material. That event was bad enough, but for the Soviet Union, the bad news just got worse and then worse still. They were already in a rather bad mood with their crumbling economy and had to deal with our active secret programs to blow up their pipelines and tamper with their computers and software. At the same time, Gorbachev feared that, with the United States increasing funding for missile defense research, a space arms race would be the final blow to his attempts to save the Soviet economy. Then one of their ballistic-missile-carrying submarines sank, and a Soviet cruise ship carrying dignitaries collided in clear weather with a freighter in the Black Sea.

Gorbachev’s military experts convinced him that the United States would deploy space weapons in only a few years. Meanwhile, his own military industrial complex was preparing their own giant space laser for initial deployment the next year. As it turned out, there were many feverish aerospace engineers in both countries ready to be rewarded with unlimited funds. They were disappointed when not only was the technology not ready, but Reagan and Gorbachev decided to get along.

I believe that the strategic political environment and technology have evolved to the point that we need to reconsider one of our old ideas for defense against nuclear-tipped ballistic missiles: lasers in space. The most advanced laser weapon system at that time was called Zenith Star, and Reagan thought we were only a few years from a real defense. In reality, the technology was not even close. The laser was one problem, but our computers for battle management were inadequate, and the shuttle was much too expensive for a giant space deployment. Today times have changed, and so has technology.

Twenty years ago, practical high-power lasers were just concepts on viewgraphs, but DARPA formulated an ambitious program to create multiple combined fiber lasers with “tens of kilowatts,” capable of being scaled to hundreds of kilowatts. A 500-kilowatt laser will exist soon, and we are on the way to 1,000 kilowatts. Not only are electrically pumped fiber lasers real today, but they have moved from the research labs into manufacturing operations. They are also being deployed on ground and sea vehicles to intercept slow missiles and swarms of drones. These are only some of the advances that indicate we need to take another look at missile defense using high-power lasers.

Back in the 1980s, in spite of Reagan’s enthusiasm, the technology for large-scale deployment of weapons into space was not a realistic possibility. Even if we had not suffered the loss of the Shuttle, we had no way to afford deployment with the cost of lift close to tens of thousands of dollars per pound. But now SpaceX is realistically offering lift at thousands of dollars per pound, and predicting 100 times less if everything but the rocket fuel is reusable. In addition, SpaceX has launched thousands of Starlink satellites and changed the way people work and play. Elon Musk has certainly revolutionized access to space, and the entire missile defense system concept needs to be rethought without restrictions on deployment cost.
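The scale of this change is easy to see with a back-of-envelope calculation. The dollar-per-pound figures and constellation mass below are rough illustrative assumptions of my own, not quotes or program numbers:

```python
# Back-of-envelope launch-cost comparison. All figures are illustrative
# assumptions: ~$30,000/lb for the 1980s shuttle era ("tens of thousands"),
# ~$1,500/lb for today's market ("thousands"), and a predicted 100x further
# reduction with full reusability. The 2-million-pound constellation is a
# purely hypothetical space-based defense deployment.
SHUTTLE_ERA_COST_PER_LB = 30_000
CURRENT_COST_PER_LB = 1_500
FULL_REUSE_FACTOR = 100

constellation_mass_lb = 2_000_000  # hypothetical total mass to orbit

scenarios = [
    ("1980s shuttle era", SHUTTLE_ERA_COST_PER_LB),
    ("Current launch market", CURRENT_COST_PER_LB),
    ("Fully reusable (predicted)", CURRENT_COST_PER_LB / FULL_REUSE_FACTOR),
]

for label, cost_per_lb in scenarios:
    total = cost_per_lb * constellation_mass_lb
    print(f"{label}: ${total / 1e9:.2f}B")
# 1980s shuttle era: $60.00B
# Current launch market: $3.00B
# Fully reusable (predicted): $0.03B
```

Under these assumed numbers, lift for the same hypothetical constellation drops from tens of billions of dollars to tens of millions, which is why the affordability question looks so different today than it did in 1986.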

But what about the problem I thought would be the real showstopper: the trusted computer hardware and software needed to allow the decision maker to respond instantly to a warning? When I worked at Sandia National Labs, our fastest computer had a computational speed of one trillion operations per second. Today the fastest computers are a million times faster, and the practical applications of modern computers and decision-support software are so real that industry investment has driven the leading chip supplier to a market capitalization of more than three trillion dollars.
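The million-fold figure is worth making concrete. A short sketch, using round order-of-magnitude assumptions (a ~1 teraflop machine then, a ~1 exaflop machine now) rather than exact benchmark numbers:

```python
# Order-of-magnitude comparison of compute speeds. The two rates are
# assumed round figures: ~10^12 ops/sec for a late-1990s leading
# supercomputer and ~10^18 ops/sec for a current exascale machine.
OPS_THEN = 1e12   # one trillion operations per second
OPS_NOW = 1e18    # one quintillion operations per second

speedup = OPS_NOW / OPS_THEN
print(f"Speedup factor: {speedup:,.0f}x")  # Speedup factor: 1,000,000x

# A simulation that once needed a full year of compute time:
seconds_per_year = 365 * 24 * 3600
print(f"Now takes about {seconds_per_year / speedup:.0f} seconds")
# Now takes about 32 seconds
```

That compression, a year of battle-management simulation shrinking to about half a minute, is what makes the idea of AI-assisted testing of vast numbers of scenarios plausible in a way it was not in the 1980s.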

So now what?  With many extremely impressive advances in technology, I believe now is the time to ask again if Ronald Reagan’s dream of March 23, 1983 can become a reality. And as usual, even if the technology is enormously successful, there will be unintended strategic consequences that should be carefully explored.

Will AI Make Us Safer?

Imagine a computer glitch that sends a signal from the space-based missile launch warning system that an ICBM attack has begun. An automated Artificial Intelligence (AI) data analysis and decision battle management center responds in seconds. The result is the beginning of an exchange of thousands of nuclear tipped missiles that destroys life on this planet.

Do we want high-consequence decision making under time-urgent, confusing conditions to be made by AI without a human in the loop? Will we trust the judgment of the non-human decisionmaker if it is contrary to the subjective feelings of the humans in charge? In the sci-fi movie 2001: A Space Odyssey, HAL the computer responded to an instruction with the famous line, “I’m sorry, Dave. I’m afraid I can’t do that.” In the future, HAL may send the message, “I’m sorry, Dave, that I destroyed the world, but I had no choice.”

High-consequence decision making is a challenge for humans under the best of conditions, but often the situation is further constrained by sleep deprivation, anxiety, stress, miscommunication, misunderstandings, superstitions, irrational fear, and other demands, meaning decisions might sometimes be better carried out by HAL. Maybe a decision devoid of feelings of regret would be superior to the judgments of irrational, overwrought human beings.

Throughout history, humans have made mistakes with vital decisions, and significant negative outcomes have been the result. In this post I describe several instances where, in my opinion, the key decision makers would have been more successful with real-time extensive data collection and AI analysis coupled with the decision system provided by one or more high-performance 20-watt computers, namely their brains. On the other hand, in some cases, humans may not be the best qualified to make the final decisions, due to their deeply held beliefs and biases.

One highly recognized important decision (that some historians describe as the decision that “saved the world”) took place in September 1983. Stanislav Petrov, the person in charge of monitoring data from the Soviet satellite early warning system, detected signals indicating an ICBM attack. He decided it was a false alarm and did not initiate the events that could have resulted in nuclear war.

Contrast this with the 1988 incident when the USS Vincennes mistakenly shot down an Iranian passenger jet. The commander faced a mix of accurate and misleading information, resulting in a tragic error. In such a scenario, AI could potentially offer real-time analysis and reduce human error—provided it’s designed to support rather than replace human judgment.

AI has the potential to enhance decision-making, especially when combined with human expertise. There are situations where human biases, deeply ingrained beliefs, and emotional responses may cloud judgment, making AI’s objectivity an asset.

The importance of human decision making played a major role during the 1980s when I was involved in developing the battle management plans for Ronald Reagan’s proposed missile defense program, labeled in the press as Star Wars. When faced with the detection of a Soviet missile attack, Reagan decided it would be better to defend rather than to retaliate. The defense against a highly capable ICBM attack would require deciding in minutes to initiate the launch of a defensive system. AI wasn’t available then, but had it been, would we have turned to it for a data handling, analysis, and decision process to help determine if an attack was real and help plan our response? Perhaps such split-second decisions in the face of overwhelming complexity would have been better handled by a combination of AI and human judgment than by humans alone.

Another example from history illustrates how life and death decisions can be made incorrectly. The question is, would having AI assistance have made any difference in the outcome? In my role as the Strategic Defense Initiative chief scientist, I was assigned to negotiate deals to advance the SDI concept, so I met with Karl Beckurts, the research director of Siemens, one of the largest electronics companies in the world. My assignment was to persuade him to have his company join us. We had a rather fancy lunch, and our discussion was quite pleasant, but he made it clear that he, speaking for his corporate leadership, had decided that they had no interest in working with the United States on the SDI program. That lunch may have cost him his life.

On July 9, 1986, in Bonn, West Germany, Beckurts was killed. A remote-controlled bomb detonated as his limo passed by a tree on his established route to work. A seven-page letter from the Red Army Faction (RAF), undoubtedly under the direction of the KGB, was found near the location of the bombing. The letter said that he had been killed because he was preparing to support Ronald Reagan’s program to develop a space-based missile defense program called the Strategic Defense Initiative.


The Soviet decision to take Beckurts’ life is an example of flawed human decision making based on false perceptions and misunderstandings. But, even if AI support had been available to the Soviets during the 1980s, I doubt they would have turned to it. You see, they had already made up their minds about the threat from SDI.

Much like the Germans, both the French and the British decided not to work with the United States on developing missile defense. One nation made a different decision. This country was interested in a joint program to develop a long-range missile defense system, and they were willing to share the cost with us. Their goal was not to help us but to defend themselves. Their decision seemed to me to be straightforward, not requiring any complex decision process: they were surrounded by enemies who posed an existential missile threat. The cooperative program was called Arrow, and we established a development program that has been continually advanced over the years. The most advanced version, Arrow 3, a Boeing/Israel Aerospace Industries program, is reported to have successfully intercepted a ballistic missile attack from Iran, and the defense system is now being upgraded. It appears that the Israeli decision paid off.

I want to close by emphasizing that I believe human decision-making processes are the best way to deal with the ambiguity of complex issues, but in some cases people could also benefit from technical assistance provided by AI. Ten years ago, I patented a concept for human brain cognitive enhancement. I explore this idea in my latest science fiction novel, The Dragon’s Brain, which will be released in October 2024. I am working on the final book in this series, in which I consider whether our decision makers have the wisdom, perspective, and values to make the right decisions to preserve what seems to be an increasingly unstable world. Today Iran is threatening to launch a retaliatory attack on Israel, and President Biden is trying to persuade Netanyahu to agree to an end to the war in Gaza. By the time you read this, the decisions of people based on their own brains, without any help from computers, could impact world peace.

The Russian Nuclear Space Weapon

After Mike Turner, the chair of the House Intelligence Committee, warned the public about what he called a “serious national security threat,” and the White House confirmed that the Russians are developing a “troubling anti-satellite weapon,” I was motivated to add some historical perspective based on my study of directed energy weapons.

Garin Death Ray

H.G. Wells was the first to invent the concept of directed energy weapons, also known as death rays, in his 1898 sci-fi novel “The War of the Worlds.” He was far ahead of his time. The Soviets were next to join the death-ray fiction world. In the 1920s, Aleksey Tolstoy published a sci-fi novel, “Garin’s Death Ray,” that focused not just on the weapon but also on the psychology of the inventor. The novel attracted the attention of the Russian military, and its non-technical issues were prophetic. The hero of the book, Garin, described his invention of an incredibly destructive beam weapon, but he also had many detractors who claimed, “This invention smells of higher politics.”

After the recent revelations, CNN published their “exclusive account” that “Russia is attempting to develop a nuclear space weapon to destroy satellites with a massive energy wave when detonated, potentially crippling a vast swath of the commercial and government satellites … that would cross a dangerous Rubicon in the history of nuclear weapons … disruptions to everyday life.” The story was picked up in the European “Pravda,” which reported, “The weapon has the potential to destroy entire groups of small satellites such as SpaceX’s Starlink used by Ukraine in the war with Russia … and Moscow perceives the U.S. statements as an attempt to persuade Republicans in Congress to approve assistance for Ukraine.”

This revelation had similarities to my 2023 science fiction techno-thriller “The Dragon’s CLAW,” which describes a test of a secret low-energy nuclear (cold fusion) weapon, initially discovered at Los Alamos, stolen by the Chinese, and tested for the first time on a remote island in the Pacific with surprising results. I wrote: “The results of the first trial of the Dragon’s CLAW had exceeded all of the researchers’ expectations, but not in a good way. The energy output was 10 times higher than anticipated, and it had created a giant electromagnetic pulse … revealed the existence of a new, tremendously energetic and very dangerous device … could destroy any nation’s electric grid infrastructure and all space-based communication, along with GPS.”

I wondered whether not just Tolstoy’s story, but also CNN’s account and my own fictional invention, were simply examples of imagination. Then I remembered something about a secret Soviet program that I had learned from Sidney Drell, Stanford physics professor and arms control expert, when I was researching the impact of Reagan’s Star Wars program on the end of the Cold War. I documented this in my autobiographical SDI story, “Death Rays and Delusions.” I learned from information published in 2007 by Drell and George Shultz, the former secretary of state who was directly involved in the 1986 Reykjavik Reagan/Gorbachev summit, that the Russians were very aware of and concerned about the development of nuclear-driven electromagnetic weapons.

Gorbachev was told that the United States was developing nuclear-driven directed energy weapons. He was informed that the “design concept for directed nuclear weapons, work on which began in the U.S. in the 1970s…. weapons consist in transforming part of the energy from a nuclear explosion into powerful streams of directed x-rays or electromagnetic radiation or a stream of high energy particles. No less than three tests were conducted towards the creation of directed electromagnetic radiation weapons.”

He was also told, “Full scale development of these weapons is expected to occur in the second half of the 1990s.” Gorbachev was encouraged to negotiate a “ban on nuclear testing to prevent full scale development of directed energy weapons” and to prevent “military technical superiority of the U.S. in the development of munitions of the new generation for strategic weapons.” The Soviets’ concern about the possible development of nuclear directed energy weapons continued into the 1990s, as evidenced by the writings of the head of their nuclear weapon program, who in 1996 called for an end to the development of such weapons, which he called “an evil Jinn.”

In my novel, “The Dragon’s CLAW,” the fictional Los Alamos lab director successfully argued, “This is the ideal moment to admit that the competition over space weapons would ultimately be mutually destructive for both countries… We must draft an agreement to end all space weapons development and cooperate with energy research. The future of humanity depends on us.”

I was surprised when my fiction seemed to match some of the recent CNN revelations, but I admit that my creativity can hardly keep up with the thinking of energy weapon advocates who continue to be intrigued by new ideas related to powerful laser death rays and microwave weapons. Some claim that such energy beams are the cause of the Havana Syndrome that messes up the minds of “2000 U.S. officials in diplomatic posts worldwide,” and “a CIA panel of experts concluded some of these have as a plausible explanation a directed pulsed radio frequency energy.” A microwave expert, James Benford, stated the syndrome “certainly fits with a microwave beam as the attacking element,” but neither he nor the CIA suggested who the supposed foreign adversary might be.

I will leave it to the reader of my series of books (see projectzbooks.com) to find out what happens in the world of fictional electromagnetic brain weapons, but the technology development and its consequences continue in my next book, “The Dragon’s Brain,” to be published in September. My novel describes how electromagnetic energy weapons can attack not just satellites in space, but also the minds of diplomats and even the entire population of the world. Who knows where fiction ends and reality emerges, but the work of H.G. Wells told us to be prepared for inventions we can hardly imagine.