Time flies, but technology has its own schedule.

The Zenith Star mockup was marketed in 1987 to Ronald Reagan as a near-term deployable space-based laser.

The purpose of this blog post is to consider the evolution of military technology: to explore what has happened in the last 80 years, and what the future might hold. Before World War II, technology was rather primitive by today’s standards, but it was on the verge of dramatic changes driven by the necessities of the war. Nuclear weapons, long-distance rockets, and computers were about to appear, and global conflict was the catalyst for dramatic advances. But then the deployment of new weapons followed a slower schedule, with decreased military requirements and funding competition from non-military investments. Even though technology budgets were somewhat restrained, Eisenhower still warned against the establishment of a “military-industrial complex,” and he worried about the size of the defense industry that had grown in the 1950s.

Within a few decades, however, new technologies began to appear, driven by the threats emerging from the Cold War: new weapon concepts, new aircraft, advanced materials, directed energy weapons, and the ever-increasing speed and memory of computers. By the 1980s, there were many new weapons investments: intercontinental ballistic missiles carrying multiple warheads, a reusable space shuttle, directed nuclear weapons, particle beam weapons, high-power chemical lasers, and space-based sensors. Many enthusiastic aerospace engineers in the United States and the Soviet Union prowled the halls of their military organizations marketing new missiles as well as space and laser weapons, to no avail. In the United States, budgets were tight, and the national debt was feared more than the Soviet Union. But then Reagan’s 1983 announcement of his goal to eliminate the threat from nuclear weapons launched a new nationwide technology development program that began to receive slowly increasing funding in 1985.

In 1983, Ronald Reagan announced the plan to develop the Strategic Defense Initiative.

Technology advocates soon faced political and technical setbacks. Congress passed the 1985 Balanced Budget Act, and the Pentagon faced many competing investments. Even though the Pentagon had been investing in many new concepts, such as space-based lasers, Congress was not enthusiastic about giant programs. Even with the usual suspects, such as Edward Teller, selling nuclear-pumped x-ray lasers, the majority of the scientific community was skeptical. Then the Space Shuttle Challenger, which the SDI program planned to use for early deployment of space weapons, exploded shortly after liftoff. Nevertheless, many new programs were slowly getting started, and Reagan was reelected with support for a continuing emphasis on increased defense budgets. U.S. technology advances were just getting under way, with very slow progress in high-power lasers but real progress, primarily in the evolution of ground-launched defensive missile systems, when the Soviet Union began to fall apart.

In 1987, Reagan and Gorbachev signed an arms control agreement banning intermediate-range nuclear missiles, which represented a serious strategic stability problem because of their short flight times to and from Europe. The treaty remained in force for more than three decades until Russia violated it. Recently, the technology competition heated up again when Russia attacked Ukraine with such a missile carrying six non-nuclear warheads, each carrying six submunitions: a weapon fully capable of delivering nuclear weapons and attacking Europe in minutes. Now the need for missile defense has been made obvious to even the most dedicated advocates of arms control. Even more troublesome is the fact that the Russians have a well-developed capability to deploy countermeasures, called penetration aids, to defeat any ground-based missile defense.

In April 1986, a reactor at the Chernobyl nuclear power plant exploded, contaminating Europe with radioactive material. That event was bad enough, but for the Soviet Union, the bad news just got worse, and then worse still. The Soviets were already in a rather bad mood with their crumbling economy, and they had to deal with our active secret programs to blow up their pipelines and tamper with their computers and software. At the same time, Gorbachev feared that with the United States increasing funding for missile defense research, a space arms race would be the final blow to his attempts to save the Soviet economy. Then one of their ballistic-missile submarines sank, and a Russian cruise ship carrying dignitaries collided in clear weather with a freighter in the Black Sea.

Gorbachev’s military experts convinced him that the United States would deploy space weapons in only a few years. Meanwhile, his own military-industrial complex was preparing its own giant space laser for initial deployment the following year. As it turned out, there were many feverish aerospace engineers in both countries ready to be rewarded with unlimited funds. They were disappointed when not only was the technology not ready, but Reagan and Gorbachev decided to get along.

I believe that the strategic political environment and technology have evolved to the point that we need to reconsider one of our old ideas for defense against nuclear-tipped ballistic missiles: lasers in space. The most advanced laser weapon system at that time was called Zenith Star, and Reagan thought we were only a few years from a real defense. In reality, the technology was not even close. The laser was one problem, but our computers for battle management were inadequate, and the shuttle was much too expensive for a giant space deployment. Today times have changed, and so has technology.

Twenty years ago, practical high-power lasers were just concepts on viewgraphs, but DARPA formulated an ambitious program to combine multiple fiber lasers into systems of “tens of kilowatts” that could be scaled to hundreds of kilowatts. A 500-kilowatt laser will exist soon, and we are on the way to 1,000 kilowatts. Not only are electrically pumped fiber lasers real today, but they have moved from the research labs into manufacturing operations. They are also being deployed on ground and sea vehicles to intercept slow missiles and swarms of drones. Those are only some of the advances that indicate we need to take another look at missile defense using high-power lasers.
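The scaling idea behind combining fiber lasers can be sketched with simple arithmetic. The module power, module count, and combining efficiency below are illustrative assumptions, not figures from any specific program:

```python
# Back-of-the-envelope arithmetic for a combined fiber-laser array.
# All numbers here are illustrative assumptions for the sketch.

def combined_power_kw(modules: int, kw_per_module: float, efficiency: float) -> float:
    """Total output when many fiber-laser modules are combined into one beam."""
    return modules * kw_per_module * efficiency

# e.g., thirty hypothetical 20 kW modules combined at 85% efficiency
# lands in the 500 kW class mentioned above:
total = combined_power_kw(modules=30, kw_per_module=20.0, efficiency=0.85)
print(f"{total:.0f} kW")  # prints "510 kW"
```

The point of the sketch is that weapon-class power comes from adding modest, mass-produced modules rather than building one enormous laser, which is why the technology has scaled so quickly.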

Back in the 1980s, in spite of Reagan’s enthusiasm, the technology for large-scale deployment of weapons into space was not a realistic possibility. Even if we had not suffered the loss of the Shuttle, we had no way to afford deployment with the cost of lift close to tens of thousands of dollars per pound. But now SpaceX is realistically offering lift at thousands of dollars per pound, and predicting costs 100 times lower if everything but the rocket fuel is reusable. In addition, SpaceX has launched thousands of Starlink satellites and changed the way people work and play. Elon Musk has certainly revolutionized access to space, and the entire missile defense system concept needs to be rethought without the old restrictions imposed by deployment cost.
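The cost figures in the text translate into a striking comparison. The exact per-pound prices and the payload mass below are hypothetical round numbers chosen only to illustrate the orders of magnitude:

```python
# Rough launch-cost comparison using the orders of magnitude from the text.
# Prices and payload mass are illustrative assumptions, not quoted figures.
shuttle_cost_per_lb = 25_000            # "tens of thousands of dollars per pound" (1980s)
current_cost_per_lb = 2_500             # "thousands of dollars per pound" today
reusable_cost_per_lb = current_cost_per_lb / 100  # the predicted 100x reduction

payload_lb = 10_000  # a hypothetical 10,000 lb defensive satellite
for label, rate in [("1980s shuttle era", shuttle_cost_per_lb),
                    ("current pricing", current_cost_per_lb),
                    ("fully reusable (predicted)", reusable_cost_per_lb)]:
    print(f"{label}: ${rate * payload_lb:,.0f}")
```

Under these assumptions the same satellite drops from a quarter-billion dollars of lift cost to a quarter-million, which is why a constellation-scale defense that was unaffordable in 1987 is at least worth re-examining now.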

But what about the problem I thought would be the real showstopper: the trusted computer hardware and software needed to allow the decision maker to respond instantly to a warning? When I worked at Sandia National Labs, our fastest computer had a computational speed of one trillion operations per second. Today the fastest computers are one million times faster, and the practical applications of modern computers and decision-support software are so real that industry investment has driven the leading supplier to a market capitalization of more than three trillion dollars.
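The "one million times faster" claim is easy to check: one trillion operations per second is a teraflop-class machine, while today's fastest systems are exaflop-class. A two-line sanity check:

```python
# Sanity-checking the speedup: ~10^12 ops/s (teraflop era at Sandia)
# vs ~10^18 ops/s (exaflop class today). Round orders of magnitude only.
sandia_ops_per_sec = 1e12
today_ops_per_sec = 1e18
speedup = today_ops_per_sec / sandia_ops_per_sec
print(f"{speedup:.0e}x")  # prints "1e+06x", i.e. one million times faster
```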

So now what?  With many extremely impressive advances in technology, I believe now is the time to ask again if Ronald Reagan’s dream of March 23, 1983 can become a reality. And as usual, even if the technology is enormously successful, there will be unintended strategic consequences that should be carefully explored.

Will AI Make Us Safer?

Imagine a computer glitch that sends a signal from the space-based missile launch warning system that an ICBM attack has begun. An automated artificial intelligence (AI) data analysis and battle management decision center responds in seconds. The result is the beginning of an exchange of thousands of nuclear-tipped missiles that destroys life on this planet.

Do we want high-consequence decisions under time-urgent, confusing conditions to be made by AI without a human in the loop? Will we trust the judgment of the non-human decision maker if it is contrary to the subjective feelings of the humans in charge? In the sci-fi movie 2001: A Space Odyssey, HAL the computer responded to an instruction with the famous line, “I’m sorry, Dave. I’m afraid I can’t do that.” In the future, HAL may send the message, “I’m sorry, Dave, that I destroyed the world, but I had no choice.”
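What "a human in the loop" means can be made concrete with a small sketch. Everything here is hypothetical: the class, thresholds, and action names are design illustrations, not any real warning or battle-management system, but they show the key constraint that the software may recommend and gather data while irreversible action always routes to a person:

```python
# A minimal, hypothetical sketch of a human-in-the-loop gate for
# high-consequence automated recommendations. All names and thresholds
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Assessment:
    attack_probability: float  # AI's estimated probability the warning is real
    sensors_agree: bool        # do independent sensor systems corroborate it?

def recommend_action(a: Assessment) -> str:
    """The AI may recommend; it never initiates an irreversible action."""
    if a.attack_probability > 0.9 and a.sensors_agree:
        return "ESCALATE_TO_HUMAN"   # urgent briefing -- never an automatic launch
    if a.attack_probability > 0.5:
        return "REQUEST_MORE_DATA"   # task additional sensors and re-evaluate
    return "LOG_AND_MONITOR"         # treat as a probable false alarm

# A Petrov-style case: a strong signal from one system, no corroboration.
print(recommend_action(Assessment(attack_probability=0.95, sensors_agree=False)))
# prints "REQUEST_MORE_DATA"
```

The design choice worth noticing is that corroboration, not confidence alone, gates escalation; a single glitching sensor, however loud, can never reach the top branch.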

High-consequence decision making is a challenge for humans under the best of conditions, but the situation is often further constrained by sleep deprivation, anxiety, stress, miscommunication, misunderstandings, superstition, irrational fear, and other demands, meaning decisions might be better carried out by HAL. Maybe a decision devoid of feelings of regret would be superior to the judgments of irrational, overwrought human beings.

Throughout history, humans have made mistakes with vital decisions, with significant negative outcomes as the result. In this post I describe several instances where, in my opinion, the key decision makers would have been more successful with real-time, extensive data collection and AI analysis coupled with the decision system provided by one or more high-performance 20-watt computers, namely their brains. On the other hand, in some cases, humans may not be the best qualified to make the final decisions because of their deeply held beliefs and biases.

One highly recognized and important decision (one that some historians describe as the decision that “saved the world”) took place in September 1983. Stanislav Petrov, the officer in charge of monitoring data from the Soviets’ satellite early warning system, detected signals indicating an ICBM attack. He decided it was a false alarm and did not initiate the events that could have resulted in nuclear war.

Contrast this with the 1988 incident when the USS Vincennes mistakenly shot down an Iranian passenger jet. The commander faced a mix of accurate and misleading information, resulting in a tragic error. In such a scenario, AI could potentially offer real-time analysis and reduce human error—provided it’s designed to support rather than replace human judgment.

AI has the potential to enhance decision-making, especially when combined with human expertise. There are situations where human biases, deeply ingrained beliefs, and emotional responses may cloud judgment, making AI’s objectivity an asset.

The importance of human decision making played a major role during the 1980s, when I was involved in developing the battle management plans for Ronald Reagan’s proposed missile defense program, labeled in the press as Star Wars. When faced with the detection of a Soviet missile attack, Reagan decided it would be better to defend than to retaliate. Defense against a highly capable ICBM attack would require deciding within minutes to launch a defensive system. AI wasn’t available then, but had it been, would we have turned to it for data handling, analysis, and decision support to help determine whether an attack was real and to help plan our response? Perhaps such split-second decisions in the face of overwhelming complexity would have been better handled by a combination of AI and human judgment than by humans alone.

Another example from history illustrates how life-and-death decisions can be made incorrectly. The question is, would having AI assistance have made any difference in the outcome? In my role as the Strategic Defense Initiative chief scientist, I was assigned to negotiate deals to advance the SDI concept, so I met with Karl Beckurts, the research director of Siemens, one of the largest electronics companies in the world. My assignment was to persuade him to have his company join us. We had a rather fancy lunch, and our discussion was quite pleasant, but he made it clear that he, speaking for his corporate leadership, had decided that they had no interest in working with the United States on the SDI program. That lunch may have cost him his life.

On July 9, 1986, in Bonn, West Germany, Beckurts was killed. A remote-controlled bomb detonated as his limo passed by a tree on his established route to work. A seven-page letter from the Red Army Faction (RAF), undoubtedly under the direction of the KGB, was found near the location of the bombing. The letter said that he had been killed because he was preparing to support Ronald Reagan’s program to develop a space-based missile defense program called the Strategic Defense Initiative.

Car bombing

The Soviet decision to take Beckurts’ life is an example of flawed human decision making based on false perceptions and misunderstandings. But, even if AI support had been available to the Soviets during the 1980s, I doubt they would have turned to it. You see, they had already made up their minds about the threat from SDI.

Much like the Germans, both the French and the British decided not to work with the United States on developing missile defense. One nation made a different decision. This country was interested in engaging in a joint program to develop a long-range missile defense system, and they were willing to share the cost with us. Their goal was not to help us but to defend themselves. Their decision process seemed to me to be straightforward, not requiring any complex analysis: they were surrounded by enemies who posed an existential missile threat. The cooperative program was called Arrow, and we established a development program that has been continually advanced over the years. The most advanced version, Arrow 3, a Boeing/Israel Aerospace Industries program, is reported to have successfully intercepted a ballistic missile attack from Iran, and the defense system is now being upgraded. It appears that the Israeli decision paid off.

I want to close by emphasizing that I believe human decision-making processes are the best way to deal with the ambiguity of complex issues, but in some cases, people could also benefit from technical assistance provided by AI. Ten years ago, I patented a concept for human brain cognitive enhancement. I explore this idea in my latest science fiction novel, The Dragon’s Brain, which will be released in October 2024. I am working on the final book in this series, in which I consider whether our decision makers have the wisdom, perspective, and values to make the right decisions to preserve what seems to be an increasingly unstable world. Today Iran is threatening to launch a retaliatory attack on Israel, and President Biden is trying to persuade Netanyahu to agree to an end to the war in Gaza. By the time you read this, the decisions of people based on their own brains, without any help from computers, could impact world peace.

Fiction may be the answer

In 1985, the magazine “Science Digest” featured a debate between me and Hans Bethe, the 1967 Nobel laureate in physics and my former undergraduate quantum mechanics professor at Cornell University. The question was whether President Reagan’s Strategic Defense Initiative, SDI, could be effective against nuclear-tipped Soviet missiles. Bethe’s answer was a definite, “No.”

Bethe’s most persuasive argument was, “The entire system could never be tested under circumstances that were remotely realistic.” He did not wish to tackle the psychology of deterrence. He focused on the technical issues instead.

The United States was already living with the concept of mutually assured destruction, which I knew could not be tested either. I argued it was too soon to discuss the effectiveness of any hypothetical defense system. I believed a research program was justified and would be needed in order to influence the perception of a new and safer approach to strategic stability.

There was one area of technology development that concerned me––the requirement that the split-second events in a war would have to be managed by computer software. Back then I was basically Reagan’s Ray Gun Guy, and I did not know anything about testing software. Today, it looks like Bethe was right about the importance of testing. But there’s still something he missed.

Here’s where I think Bethe went astray: testing is all about technology, but deterrence is far more complicated. The vital issues in creating a credible deterrent are not just technology, but economics, social issues, political arrangements, and psychology. I learned over the years that such problems really have no final solution, and continuing to pursue the answer often leads to alternating periods of hopeful optimism and depressing pessimism… and sometimes, but not always, real progress. My published opinion was that the outcome of the SDI program would “depend not only on the technology itself, but also on the extent to which the Soviet Union either agrees to mutual defense agreements and offense limitations…no definitive predictions of the outcome can be made.”

My feelings were ambivalent. I struggled to communicate the complexity of the issue to my scientific and political colleagues. I found it even more difficult to explain the questions surrounding SDI to the news media. But one person got it. He was a cartoonist.

In the 1980s, Berkeley Breathed, the cartoonist behind the series Bloom County, created a cartoon about me, the Chief Scientist of Reagan’s SDI, aka Star Wars program. He depicted me as a chubby penguin named Opus, who claimed that enormous sums of money would be needed to develop a “space defense gizmo.” When Opus learned that the unlimited money was not forthcoming, he screamed, “Physicists need Porsches too,” and then mused that maybe “the days of wines and roses are over.” Breathed understood the reality of my job.

I had been challenged with helping to put together a $25 billion, five-year plan for a research program to accomplish Reagan’s goal of “rendering nuclear weapons obsolete.” After the plan was finished and delivered to the Secretary of Defense, I wrote that even if the research was wildly successful, any workable missile defense would have to go along with a comprehensive arms control treaty that greatly reduced our own offensive capabilities as well as the threat. In spite of my published doubts, the following year I was asked by the newly chosen program manager, General James Abrahamson, to be his deputy and chief scientist. We brought together a distinguished advisory group including Edward Teller, the “father of the H-bomb”; Bernard Schriever, retired four-star general and father of our nation’s first ballistic missiles, built in response to the Soviet threat posed by Sputnik in 1957; Simon Ramo, the father of the engineering behind that first ballistic missile technology; Fred Seitz, former head of the National Academy of Sciences; and me.

During my two years in the Pentagon, I was faced not only with many serious detractors, but also with many incidents that could have been the source of high anxiety. I realized the contradictions, irony and exaggeration in the program were inescapable. I managed to approach the many stressful moments with humor that I often expressed in satirical memos and comments that were not always appreciated by my boss. But when dealing with complicated issues, there are no simple solutions. The best you can do is hang on to your sense of humor and keep trying to help other people understand your point of view.

As a cartoonist, Breathed understands that. His fictionalized depiction of the Star Wars dilemma summed up the situation succinctly. Reflecting on his cartoons years later, I wondered if perhaps Breathed had the answer to explaining the ambivalence that I faced during my time in the SDI program. In fact, the contradictory issues related to nuclear deterrence are something all scientists working in national defense face.

So, taking my inspiration from Breathed’s penguin, I have decided to try my hand at writing fiction. This spring, I will launch the first in a series of novels about the complex interaction between science and politics. Stay tuned for more information in future posts.

Beware of the swarm

Three years ago, I speculated in my blog that fairly low-tech unmanned aerial vehicles (UAVs), or drones, could defeat very expensive missile systems, after a giant Saudi oil facility was attacked with high precision, causing enough damage to reduce the global oil supply. Even though a missile defense system was in place, the attack came from a swarm of small, low-flying drones and cruise missiles that defeated it.

I called for an increased emphasis on defense against this type of attack, and since then, many new programs worldwide have focused on developing both this kind of threat and new defense systems. The recent Russian attacks on Ukraine’s infrastructure and the Ukrainian attacks against Russian air bases appear to be a demonstration of what I expected: a fundamental change in offense and defense.

I pointed out in my post that using swarms of such weapons to surprise and exhaust even the most competent defenses could mark a radical change in warfighting. I wrote that “drones could target critical parts of the exposed grid, disperse biological agents, target crowds at sports events, or even parking lots of shopping centers.” Unfortunately, my worst fears have come to pass, with the Russians targeting the cities and critical infrastructure of Ukraine. Now Ukraine has struck back, and the nastiness is only going to get nastier with more attacks from both sides. The balloon has gone up. But wait, there’s more. The latest Ukrainian innovation is drone-killer boats backed up by flying drones to find and strike targets at sea. So the air, sea, and space application of killer drones is going to be the new way of war. But where there are new weapons, there are certain to be new counter-weapons.

With the development of fiber laser weapons at power levels of tens to hundreds of kilowatts, a realistic defense against drone swarms is possible: if the tracking, pointing, and fire control system works reliably, if the power supply lasts long enough, if enough such defense systems can be deployed affordably… and, of course, if the weather cooperates. Boeing has created “an anti-drone death ray truck” that may defeat the ifs, but there are a lot of ifs, and as usual, the offense is already a step ahead of the defense.
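The "power supply of ample duration" issue can be made concrete with a back-of-the-envelope engagement estimate. The kill energy, slew time, and stored-energy figures below are illustrative assumptions, not data on any fielded system:

```python
# Rough feasibility estimate for a laser engaging a drone swarm.
# All numbers are illustrative assumptions for the sketch.

def drones_defeated(laser_kw: float, engagement_seconds: float,
                    kill_kj: float, slew_seconds: float) -> int:
    """How many drones one laser can burn through in the available time."""
    dwell = kill_kj / laser_kw         # seconds on target per drone (kJ / kW = s)
    per_drone = dwell + slew_seconds   # add retargeting time between drones
    return int(engagement_seconds / per_drone)

# e.g., a hypothetical 100 kW laser with 120 s of stored power, assuming
# ~30 kJ to disable a small drone and 2 s to slew between targets:
print(drones_defeated(laser_kw=100, engagement_seconds=120,
                      kill_kj=30, slew_seconds=2))  # prints 52
```

Under these assumptions the beam itself is fast; the retargeting time dominates, which is why a large enough swarm can still exhaust a single defender.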

What about those new all-weather high-power microwave weapons, such as the Raytheon Phaser, that attack the controls and brains of the drones so that they become dumb rocks instead of brilliant pebbles? High-power microwave weapons are being developed by many countries, and they will be important. This will be a story of brains versus beams, and the details will be written as the old game of offense versus defense is repeated again and again. In any case, there is no question that the game has begun, and when new technology is created, people will find a way to apply it to warfighting.

An eventual development could be the proliferation of low-cost killer drones, which could become the weapon of choice for ground forces, law enforcement, and maybe terrorists, or even your neighborhood crazy guys, who are already using weapons developed for the military. Such killer drones will likely be under the control of an operator at first, but quite possibly, within a few years, they will use artificial intelligence to search out and strike predetermined targets recognized by their onboard smart sensors.

Survival of soldiers and military surface systems is possible if they can move, hide, defend, and shoot back, but there is not going to be a so-called “last move” in this contest of energy weapons versus drones. There may have to be an eventual change in the tactics of all surface warfare; it could simply be too dangerous for high-value targets to try to survive above ground. Maybe survival would be achieved by deploying in tunnels and caves. But what about drone swarms used by terrorists against civilian targets? A logical step would be to ban such weapons, but we have not done this with assault rifles. Instead, children are trained to respond to an active shooter in their schools. I wonder if children will have to return to “duck and cover” when sensors detect a killer drone swarm approaching their playground.