Gerry in garage

Again with the hereafter

As I mentioned in an earlier blog post, now that I’m in my 80s, I frequently find myself dwelling on the hereafter. In other words, I find myself walking into the garage in the middle of an errand, presumably fetching something for my wife, and I ask myself, “What am I here after?” I’ve even tried asking AI, since most people are outsourcing tasks to AI nowadays. Alas, AI (like me) had not listened to my wife.

This leads me to consider AI’s shortcomings (a much better choice than dwelling on my own shortcomings!). But does AI have shortcomings? As artificial intelligence continues to accelerate in speed and scale, many assume that the future of problem-solving will be machine-dominated. But what if neither AI nor unaided human reasoning is sufficient to address the wicked problems of our age? Wicked problems have no closed-form solutions. They defy simple answers and cut across traditional domains of expertise. They require not just faster computation but deeper wisdom—and new ways of thinking.

Before turning to AI for assistance, let’s start by considering what we can do to make people think better. Is there a way to boost human intelligence and cognitive abilities? If you have read my latest novel, The Dragon’s Brain, you are familiar with an invention called the Brainaid, a form of non-invasive brain stimulation and brain wave entrainment that improves focus, creativity, memory, and emotional regulation. If you haven’t read The Dragon’s Brain, what are you waiting for? Head to your favorite bookstore or website and get your copy now. (Here’s the Amazon link.) In any case, the fictional Brainaid is based on a similar brain stimulation device that I patented many years ago. While a fully functional product is yet to hit the market, I firmly believe that science can–and will–enhance human thought. AI is getting smarter, so why not enhance the human side of the equation too?

The current narrative often positions AI as a replacement for human intelligence. But the reality is more nuanced. Machines excel at speed, scale, and pattern recognition. Humans contribute meaning, ethics, empathy, and insight. Each is powerful on its own—but neither is complete.

AI alone can be dangerous. It lacks human intuition and empathy. Perhaps you’ve met a physicist, engineer, or computer scientist who also lacks human intuition and empathy. I earned my bachelor’s degree in engineering physics from Cornell University, where I was trained to think precisely, quantitatively, and rigorously. But two courses outside the engineering curriculum had an even greater long-term impact on how I think. The first was a course in the Philosophy of Science taught by Max Black, a sharp and elegant thinker who challenged us to ask not just how science works, but what we mean when we claim to know something. The second was a course in the Philosophy of Religion taught by Milton Konvitz, who opened my mind to the moral foundations of law, liberty, and human dignity—drawing from both secular and religious traditions. These classes taught me to ask hard questions, tackle wicked problems, and never separate the technically possible from the ethically responsible.

That’s why I propose a purposeful collaboration between humans and AI. We don’t need to hand our cognitive duties over to machines; rather, we need to enhance our own abilities and learn to use artificial intelligence to help us tackle the wicked problems we’ve been unable to solve.

Imagine a decision system composed of diverse humans enhanced for clarity, openness, and ethical discernment, using AI to optimize data processing, information retrieval, and scenario modeling. Imagine humans working in partnership with AI to develop a problem-solving approach that supports feedback, deliberation, and adaptation with accountability, transparency, and value alignment. This is not a hive mind. It is more like a meta-cortex—a layered neural system in which individual insight is preserved, but multiplied through structured collaboration and augmented intelligence.
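For readers who want to see the idea made a bit more concrete, here is a minimal sketch, entirely my own illustration of the concept rather than a description of any real system, of a single deliberation round in such a meta-cortex: a hypothetical AI component and individual human participants each score the candidate options, every judgment and its rationale is logged for accountability and transparency, and the aggregate points to a decision without erasing anyone’s individual voice.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Judgment:
    participant: str   # a human expert or a hypothetical AI component
    option: str        # the candidate course of action being deliberated
    score: float       # 0 (reject) to 1 (endorse)
    rationale: str     # logged for accountability and transparency

@dataclass
class DeliberationRound:
    judgments: list[Judgment] = field(default_factory=list)

    def submit(self, judgment: Judgment) -> None:
        self.judgments.append(judgment)

    def decide(self) -> tuple[str, float]:
        """Average the scores per option; every individual judgment stays on the record."""
        options = {j.option for j in self.judgments}
        ranked = {o: mean(j.score for j in self.judgments if j.option == o) for o in options}
        best = max(ranked, key=ranked.get)
        return best, ranked[best]

# Illustrative use: one AI scenario model and two human reviewers (all names invented).
round_one = DeliberationRound()
round_one.submit(Judgment("AI scenario model", "option A", 0.8, "best modeled outcome"))
round_one.submit(Judgment("ethicist", "option A", 0.4, "value-alignment concerns"))
round_one.submit(Judgment("engineer", "option B", 0.7, "simpler and easier to verify"))
print(round_one.decide())  # the full log of rationales remains available for review
```

The point of the sketch is not the arithmetic; it is that the structure preserves each individual insight while combining them, which is what distinguishes a meta-cortex from a hive mind.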

If we succeed, we may find ourselves entering a new age: not of superintelligence, but of super-collective wisdom. We may experience:
- A future where leadership is not about who dominates, but about who understands,
- A future where advancing age, like mine—95 in a decade—is not a barrier but an asset,
- A future where the tools of science and technology serve a higher purpose: to help us decide wisely, together.

So, I stand in the garage asking myself, “What am I here after?” Maybe just that.

Can Trump’s Golden Dome Make us Safe?

Last year Donald Trump announced that “we must be able to defend our homeland, our allies, and our military assets around the world from the threat of hypersonic missiles, no matter where they are launched from.” After his election, he called for a program labeled the Golden Dome, and he requested a plan with no limit on cost to achieve his goal. This brought back many memories from 40 years ago.

Although I had been involved and frustrated for many years with the rather slowly advancing R&D related to space-based missile defense, I became intrigued by new ideas after a lunch conversation with the brilliant and creative physicist Freeman Dyson. I had become convinced that the tactics and technology needed to counter a massive missile attack would always fail. I was sure that the offense would always have the advantage. Dyson introduced me to a more interesting way of looking at this complex issue. He told me about his concept of a quest that would “allow us to protect our national interests without committing us to threaten the wholesale massacre of innocent people.” He argued on moral grounds for “a defensive world as our long-range objective … and the objective will sooner or later be found, whether the means are treaties and doctrines or radars and lasers.”

This quest became my full-time occupation after the March 23, 1983 speech by President Reagan in which he called for a program “to make nuclear weapons impotent and obsolete.” As a result, I was asked by Harold Agnew, the former head of Los Alamos Lab, to help put together a plan to implement the President’s challenge. The plan that was delivered to the President in September 1983 consisted of a collection of poorly defined technologies and called for a five-year, $25 billion investment to answer the question of whether there could someday be a defense. Because I had helped to create the plan, I was asked in 1984 to become the chief scientist for Reagan’s Strategic Defense Initiative. I found that my assignment was primarily public relations, as the actual research work was dominated by the question “Will it make us safe?” and I spent many days and weeks trying to explain to the detractors what “it” was.

During this time, I often found myself in debates with notable opponents. I vividly remember my debate with Hans Bethe, Nobel Laureate in physics, who also happened to have been my quantum mechanics professor in 1961 at Cornell University. Our debate was published in Science Digest in an article entitled “Can Star Wars make us safe?” Bethe answered no, and he was joined in his opinion by practically all of the academic scientists at the time. They argued that we had no plausible way to accomplish what they thought was Reagan’s goal: to protect all of us from the threat of nuclear-tipped ballistic missiles. I argued that the answer was yes, but I changed the definition of the goal to be more in keeping with my understanding of what Reagan really wanted, and in keeping with the wisdom of Dyson. Today, the demands for protection against the threat are much more complex, including hypersonic missiles, cruise missiles, anti-satellite weapons, and cyberattacks. In fact, one of the scariest threats would be contagious bioweapons spread by swarms of crop sprayers launched from submarines near our coasts. But my answer to the question, “Will it make us safe?” is still in the affirmative.

As before, the arms control experts have spoken out to explain that “it” just cannot be done. They repeat the same old arguments that it won’t work, that it is too expensive, and that it will create entirely new strategic instabilities. The question I asked at the time was “What is it?” and I think that is the right question to be considered now.

So, what about now? Are we still arguing about “it” without understanding what it is? In my view, the goal is not to win the ultimate global war using space-based weapons; it is to prevent war. Maybe, with recent advances in technology, we can find new ways to accomplish that through a new approach to deterrence built on a shared, stable combination of defensive and offensive technology development. We will first need a breakthrough in vastly improved trusted communication and decision making in the face of confusion, chaos, threats, and fundamental disagreements. With the proliferation of advanced offensive weapon technology, we need to try to find a new, more hopeful path. Maybe there could be some stable system to prevent war through technology-enhanced information sharing, reduced offensive threats, and deterrence that prevents the initial steps toward war.

But I recall that Bethe’s final argument in our debate was that any defense could not be trusted, since it could not be tested under realistic conditions. I argued that we have already learned to live with deterrence that cannot be realistically tested, since deterrence is ultimately a question of psychology involving human decision making. It is conceivable that complex, reasoning-based information management and decision making could be assisted by AI that carries out simulated tests of a nearly unlimited number of combinations of events and human decisions.
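To make that notion a little more tangible, here is a toy Monte Carlo sketch, again purely my own illustration with invented placeholder probabilities, of the kind of simulated testing AI could run at enormous scale: generate many random combinations of warning signals, apply different decision rules, and count how often each rule would have acted on a false alarm.

```python
import random

def false_alarm_rates(n_trials: int = 100_000, seed: int = 1) -> dict[str, float]:
    """Toy Monte Carlo: how often would each decision rule act on a false alarm?
    Every probability below is an invented placeholder, not a real-world estimate."""
    rng = random.Random(seed)
    false_launches = {"act on a single warning": 0, "require two independent sensors": 0}
    for _ in range(n_trials):
        attack_real = rng.random() < 0.001              # real attacks are assumed to be rare
        sensor_a = attack_real or rng.random() < 0.01   # each sensor occasionally glitches
        sensor_b = attack_real or rng.random() < 0.01
        if not attack_real:
            if sensor_a:                                # rule 1: respond to any warning
                false_launches["act on a single warning"] += 1
            if sensor_a and sensor_b:                   # rule 2: demand confirmation
                false_launches["require two independent sensors"] += 1
    return {rule: count / n_trials for rule, count in false_launches.items()}

print(false_alarm_rates())
```

A real system would, of course, have to model far richer combinations of events and human responses; the toy version only shows why a rule that demands independent confirmation changes the odds of a catastrophic false alarm so dramatically.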

I remember when I was asked by Harold Agnew to lead the group to deliver a plan for the beam weapons component of the SDI. He said in a hushed tone that I had to take very seriously his warning that my job would be “very, very dangerous.” He said I could easily be trampled by the stampede of contractors going after funding. He was not encouraging, to say the least, and within a matter of weeks he walked away from any involvement. He never understood the Reagan goal of the program and was definitely opposed to any thought of nuclear weapons abolition. His concept of safety was the threat of destruction.

The “it” is still hard to define, and defining it has not become any easier, but President Trump says there should be a way to protect us, and that there should not be any limit to the amount of investment. Maybe the “it” is a safe future world, and then the question is: can the Golden Dome make us safe? Let’s see what “it” is in the plan soon to be delivered to the President.

Surprise

Never underestimate the importance of surprise

The importance of surprise has dominated the thinking of military strategists since the beginning of conflict. For instance, the famous Chinese strategist Sun Tzu wrote around 500 BC, “Those who are skilled in producing surprise will win. Such tacticians are as versatile as the changes in heaven and earth.” In 1520, Niccolo Machiavelli wrote, “Nothing makes a leader greater than the capacity to guess the designs of the enemy.” In 1832, Prussian general and military theorist Carl von Clausewitz wrote, “Surprise is therefore not only the means to the attainment of numerical superiority, but it is also to be regarded as a substantive principle in itself.”

Modern military strategists take the issue of surprise very seriously when managing investments in technology and preparing for possible future conflicts. Over the years I contributed to several Defense Science Board studies and often found myself a member of the red team. My job was to imagine what might counter any of our advances in future technology. It was never a good way to make friends, since my role as a red team member was to invent how and why the best ideas of the blue team would be thwarted. The goal of the science and technology red team was to imagine and analyze a possible future threat evolution based on analysis of the past and evaluation of present capabilities. Then the blue team was faced with planning to deal with this imagined future.

In 1988, the Pentagon published a document that described the status of Soviet technology and projected what might happen in the future. One future space weapon threat was the use of ground-based lasers and a distribution of space-based relay mirrors to deliver lethal blows to space assets and to missiles in their early boost phase.

The Pentagon analysts projected that the space weapon threat shown in the illustration below might be deployed “after the year 2000,” but, as I explained in my last post, at the time neither the Soviets nor the Americans had the needed high-power laser technology or the space launch capabilities for any realistic space-based weapons. Over the past four decades, however, that technology has advanced to the point where the red team needs to reimagine what the new space-based threats might be, as well as our new approaches to offense and defense. In today’s world of Russian attempts to return its empire to greatness, defense planners are actively attempting to figure out what surprises they may face.

Pentagon illustration of possible Soviet technology, 1988.

It really should not come as a surprise that Russia is now threatening its NATO neighbors with an intermediate-range missile called Oreshnik and has demonstrated its operation against a Ukrainian city. This missile is a version of its ICBM capability to deliver multiple hypersonic, independently targetable warheads, or MIRVs, to a target. The attacking MIRVs could be accompanied by multiple lightweight decoys, so that an effective interception in space would be extremely difficult. So, no surprise there. We have known about this sort of threat for a very long time. Our red teams should have been imagining the threat and persuading our blue teams to figure out how to defend against it.

The fact is that in 1983, when we put together a plan for the Strategic Defense Initiative (SDI), popularly known as Reagan’s “Star Wars” program, we thought about these threats and concluded that we needed a means of intercepting the threat during the early part of its launch, namely its boost phase, and this required that we develop and deploy interceptor platforms in space.

The red team was ready for this hypothetical defense and reported that the defense platforms would be “sitting ducks,” easily destroyed by ground-launched missiles. The blue team responded with hypothetical offensive and defensive moves. That meant the result would become a full-blown space war, which could lead to escalating exchanges ending in all-out nuclear conflict.

No matter how effective the defense, the realistic outcome would likely be nuclear exchanges and total destruction for both sides. Many experienced, long-time strategic thinkers explained that any notion of a victorious last move in an escalating exchange was a fallacy. They suggested that a more positive approach would be arms control agreements. Many people never really understood that both Reagan and Gorbachev were arms control advocates and were anxious to get rid of all nukes.

When I got the job as the chief scientist for the SDI program, I concluded that Reagan’s idea to share defenses and eventually eliminate all nuclear weapons was probably the only way out of this mess; it was unlikely to succeed, but it was worth a try. When Reagan and Gorbachev met at Reykjavik in 1986, they almost agreed to rid the world of nuclear weapons, but at the last minute, although Gorbachev had no problem with SDI laboratory research, he demanded that SDI stop all testing in space.

Reagan’s advisors were surprised at Gorbachev’s emphatic and often repeated insistence that “it is the laboratory or goodbye.” I have concluded that he was afraid that his own military-industrial complex would launch its giant Energia booster, which was ready and waiting to carry a space laser research platform into orbit. Such a deployment, even if of marginal utility, would be a “Sputnik event.” Cold War historians have missed the point that he was more worried about his own technical experts, who were prepared to initiate a space weapons race that would further contribute to the final implosion of the Soviet economy.

As it turned out, the Soviet Union collapsed because of its own weaknesses, without any real help from us, and the threat went away. But now the threat seems to be back, as demonstrated by Russia’s advanced intercontinental ballistic missiles, its nuclear-capable intermediate-range missiles, and advanced war-fighting methods such as information weapons. To add to this dilemma, China is not sitting idle but is rapidly advancing its strategic war-fighting capabilities. The old ideas of two-party agreements seem to be obsolete.

So, as the famous arms control expert Herb York explained, there really is no winning last move in an arms race. Instead, I believe that we need to consider a new approach—the road not taken. This imagined road will be an unfamiliar path filled with debris and potholes that may never lead anywhere, but maybe there is a small chance that it might lead to agreements to avoid a global nuclear war. The problem of deterrence in this increasingly complex world is so wicked that any success would be unexpected. In today’s world of technology advances coupled with confusion, chaos, and conflict, such an outcome could be the ultimate surprise.

Will AI Make us Safer?

Imagine a computer glitch that sends a signal from the space-based missile launch warning system that an ICBM attack has begun. An automated artificial intelligence (AI) battle management center analyzes the data and responds in seconds. The result is the beginning of an exchange of thousands of nuclear-tipped missiles that destroys life on this planet.

Do we want high-consequence decisions under time-urgent, confusing conditions to be made by AI without a human in the loop? Will we trust the judgment of a non-human decisionmaker if it is contrary to the subjective feelings of the humans in charge? In the sci-fi movie 2001, HAL the computer famously replied to an instruction, “I’m sorry, Dave. I’m afraid I can’t do that.” In the future, HAL may send the message, “I’m sorry, Dave, that I destroyed the world, but I had no choice.”

High-consequence decision making is a challenge for humans under the best of conditions, but often the situation is additionally constrained by sleep deprivation, anxiety, stress, miscommunication, misunderstandings, superstitions, irrational fear, and other demands. In such conditions, decisions might be better carried out by HAL. Maybe a decision devoid of feelings of regret would be superior to the judgments of irrational, overwrought human beings.

Throughout history, humans have made mistakes with vital decisions, and significant negative outcomes have been the result. In this post I describe several instances where, in my opinion, the key decision makers would have been more successful with real-time, extensive data collection and AI analysis coupled with the decision system provided by one or more high-performance 20-watt computers, namely their brains. On the other hand, in some cases humans may not be the best qualified to make the final decisions, due to their deeply held beliefs and biases.

One highly recognized and important decision (one that some historians describe as the decision that “saved the world”) took place in September 1983. Stanislav Petrov, the officer in charge of monitoring data from the Soviet satellite early warning system, detected signals indicating an ICBM attack. He decided it was a false alarm and did not initiate the events that could have resulted in nuclear war.

Contrast this with the 1988 incident when the USS Vincennes mistakenly shot down an Iranian passenger jet. The commander faced a mix of accurate and misleading information, resulting in a tragic error. In such a scenario, AI could potentially offer real-time analysis and reduce human error—provided it’s designed to support rather than replace human judgment.

AI has the potential to enhance decision-making, especially when combined with human expertise. There are situations where human biases, deeply ingrained beliefs, and emotional responses may cloud judgment, making AI’s objectivity an asset.

The importance of human decision making played a major role during the 1980s, when I was involved in developing the battle management plans for Ronald Reagan’s proposed missile defense program, labeled in the press as Star Wars. Reagan decided that, when faced with the detection of a Soviet missile attack, it would be better to defend rather than to retaliate. The defense against a highly capable ICBM attack would require deciding in minutes whether to launch a defensive system. AI wasn’t available then, but had it been, would we have turned to it for a data handling, analysis, and decision process to help determine if an attack was real and to help plan our response? Perhaps such split-second decisions in the face of overwhelming complexity would have been better handled by a combination of AI and human judgment than by humans alone.

Another example from history illustrates how life and death decisions can be made incorrectly. The question is, would having AI assistance have made any difference in the outcome? In my role as the Strategic Defense Initiative chief scientist, I was assigned to negotiate deals to advance the SDI concept, so I met with Karl Beckurts, the research director of Siemens, one of the largest electronics companies in the world. My assignment was to persuade him to have his company join us. We had a rather fancy lunch, and our discussion was quite pleasant, but he made it clear that he, speaking for his corporate leadership, had decided that they had no interest in working with the United States on the SDI program. That lunch may have cost him his life.

On July 9, 1986, near Munich, West Germany, Beckurts was killed. A remote-controlled bomb detonated as his limousine passed a tree on his established route to work. A seven-page letter from the Red Army Faction (RAF), undoubtedly acting under the direction of the KGB, was found near the site of the bombing. The letter said that he had been killed because he was preparing to support Ronald Reagan’s program to develop a space-based missile defense, the Strategic Defense Initiative.

Car accident

The Soviet decision to take Beckurts’ life is an example of flawed human decision making based on false perceptions and misunderstandings. But, even if AI support had been available to the Soviets during the 1980s, I doubt they would have turned to it. You see, they had already made up their minds about the threat from SDI.

Much like the Germans, both the French and the British decided not to work with the United States on developing missile defense. One nation made a different decision. This country was interested in engaging in a joint program to develop a long-range missile defense system, and they were willing to share the cost with us. Their goal was not to help us, but to defend themselves. Their decision seemed to me straightforward, requiring no complex analysis: they were surrounded by enemies who posed an existential missile threat. The cooperative program was called Arrow, and we established a development program that has been continually advanced over the years. The most advanced version, Arrow 3, a Boeing/Israel Aerospace Industries program, is reported to have successfully intercepted a ballistic missile attack from Iran, and the defense system is now being upgraded. It appears that the Israeli decision paid off.

I want to close by emphasizing that I believe human decision-making processes are the best way to deal with the ambiguity of complex issues, but in some cases people could also benefit from technical assistance provided by AI. Ten years ago, I patented a concept for human brain cognitive enhancement. I explore this idea in my latest science fiction novel, The Dragon’s Brain, which will be released in October 2024. I am working on the final book in this series, in which I consider whether our decision makers have the wisdom, perspective, and values to make the right decisions in what seems to be an increasingly unstable world. Today Iran is threatening to launch a retaliatory attack on Israel, and President Biden is trying to persuade Netanyahu to agree to an end to the war in Gaza. By the time you read this, the decisions of people based on their own brains—without any help from computers—could impact world peace.