Gerry in garage

Again with the hereafter

As I mentioned in an earlier blog post, now that I’m in my 80s, I frequently find myself dwelling on the hereafter. In other words, I find myself walking into the garage in the middle of an errand, presumably fetching something for my wife, and I ask myself, “What am I here after?” I’ve even tried asking AI, since most people are outsourcing tasks to AI nowadays. Alas, AI (like me) had not listened to my wife.

This leads me to consider AI’s shortcomings (a much better choice than dwelling on my own shortcomings!). But does AI have shortcomings? As artificial intelligence continues to accelerate in speed and scale, many assume that the future of problem-solving will be machine-dominated. But what if neither AI nor unaided human reasoning is sufficient to address the wicked problems of our age? Wicked problems have no closed-form solutions. They defy simple answers and cut across traditional domains of expertise. They require not just faster computation but deeper wisdom—and new ways of thinking.

Before turning to AI for assistance, let’s start by considering what we can do to make people think better. Is there a way to boost human intelligence and cognitive abilities? If you have read my latest novel, The Dragon’s Brain, you are familiar with an invention called the Brainaid, a form of non-invasive brain stimulation and brain wave entrainment that improves focus, creativity, memory, and emotional regulation. If you haven’t read The Dragon’s Brain, what are you waiting for? Head to your favorite bookstore or website and get your copy now. (Here’s the Amazon link.) In any case, the fictional Brainaid is based on a similar brain stimulation device that I patented many years ago. While a fully functional product has yet to hit the market, I firmly believe that science can, and will, enhance human thought. AI is getting smarter, so why not enhance the human side of the equation too?

The current narrative often positions AI as a replacement for human intelligence. But the reality is more nuanced. Machines excel at speed, scale, and pattern recognition. Humans contribute meaning, ethics, empathy, and insight. Each is powerful on its own—but neither is complete.

AI alone can be dangerous. It lacks human intuition and empathy. Perhaps you’ve met a physicist, engineer, or computer scientist who also lacks human intuition and empathy. I earned my bachelor’s degree in engineering physics from Cornell University, where I was trained to think precisely, quantitatively, and rigorously. But two courses outside the engineering curriculum had an even greater long-term impact on how I think. The first was a course in the Philosophy of Science taught by Max Black, a sharp and elegant thinker who challenged us to ask not just how science works, but what we mean when we claim to know something. The second was a course in the Philosophy of Religion taught by Milton Konvitz, who opened my mind to the moral foundations of law, liberty, and human dignity—drawing from both secular and religious traditions. These classes taught me to ask hard questions, tackle wicked problems, and never separate the technically possible from the ethically responsible.

That’s why I propose a purposeful collaboration between humans and AI. There is no need to hand our cognitive duties over to machines; rather, we need to enhance our own abilities and learn to use artificial intelligence to help us tackle the wicked problems we’ve been unable to solve.

Imagine a decision system composed of diverse humans enhanced for clarity, openness, and ethical discernment, using AI to optimize data processing, information retrieval, and scenario modeling. Imagine humans working in partnership with AI to develop a problem-solving approach that supports feedback, deliberation, and adaptation with accountability, transparency, and value alignment. This is not a hive mind. It is more like a meta-cortex—a layered neural system in which individual insight is preserved, but multiplied through structured collaboration and augmented intelligence.
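For the technically curious, here is one toy sketch of what such a “meta-cortex” layer might look like in code. Everything in it is hypothetical: the participant roles, the options, and the simple averaging rule are invented for illustration, not drawn from any real system. The point is only that a collective ranking can be computed while every individual judgment is preserved for review rather than absorbed into a hive mind.

```python
# Toy sketch of a "meta-cortex" decision layer (hypothetical illustration).
# Each human participant scores the options independently; an aggregation
# step combines the scores while keeping every individual judgment visible.

from statistics import mean

def aggregate(judgments):
    """Combine per-person scores into a collective ranking.

    judgments: dict mapping participant -> {option: score in [0, 1]}
    Returns (ranking, audit_trail) so individual insight is never discarded.
    """
    options = {opt for scores in judgments.values() for opt in scores}
    collective = {
        opt: mean(scores.get(opt, 0.0) for scores in judgments.values())
        for opt in options
    }
    ranking = sorted(collective, key=collective.get, reverse=True)
    # Audit trail: who scored each option how, for transparency and feedback.
    audit_trail = {
        opt: {person: scores.get(opt, 0.0) for person, scores in judgments.items()}
        for opt in options
    }
    return ranking, audit_trail

# Hypothetical deliberation over three policy options.
judgments = {
    "engineer":  {"A": 0.9, "B": 0.4, "C": 0.2},
    "ethicist":  {"A": 0.3, "B": 0.8, "C": 0.6},
    "economist": {"A": 0.5, "B": 0.7, "C": 0.3},
}
ranking, audit = aggregate(judgments)
print(ranking)     # collective order of the options
print(audit["B"])  # every individual view of option B is preserved
```

A real system would use far richer deliberation than averaging, but even this sketch shows the key design choice: aggregation and accountability are separate outputs, so the collective answer never erases the individual voices behind it.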

If we succeed, we may find ourselves entering a new age: not of superintelligence, but of super-collective wisdom. We may experience:
- A future where leadership is not about who dominates, but about who understands,
- A future where advancing age, like mine—95 in a decade—is not a barrier but an asset,
- A future where the tools of science and technology serve a higher purpose: to help us decide wisely, together.

So, as I stand in the garage asking myself, “What am I here after?” Maybe the answer is just that.

Can Trump’s Golden Dome Make Us Safe?

Last year Donald Trump announced that “we must be able to defend our homeland, our allies, and our military assets around the world from the threat of hypersonic missiles, no matter where they are launched from.” After his election, he called for a program labeled the Golden Dome, and he requested a plan with no limit on cost to achieve his goal. This brought back many memories from 40 years ago.

Although I had for many years been involved with, and frustrated by, the slowly advancing R&D related to space-based missile defense, I became intrigued by new ideas after a lunch conversation with the brilliant and creative physicist Freeman Dyson. I had become convinced that the tactics and technology needed to counter a massive missile attack would always fail; I was sure that the offense would always have the advantage. Dyson introduced me to a more interesting way of looking at this complex issue. He told me about his concept of a quest that would “allow us to protect our national interests without committing us to threaten the wholesale massacre of innocent people.” He argued on moral grounds for “a defensive world as our long-range objective … and the objective will sooner or later be found, whether the means are treaties and doctrines or radars and lasers.”

This quest became my full-time occupation after the March 23, 1983 speech by President Reagan in which he called for a program “to make nuclear weapons impotent and obsolete.” As a result, I was asked by Harold Agnew, the former head of Los Alamos Lab, to help put together a plan to implement the President’s challenge. The plan that was delivered to the President in September 1983 consisted of a collection of poorly defined technologies and called for a five-year, $25 billion investment to answer the question of whether there could someday be a defense. Because I had helped to create the plan, I was asked in 1984 to become the chief scientist for Reagan’s Strategic Defense Initiative. I found my assignment was primarily public relations, as the public debate was dominated by the question “will it make us safe,” and I spent many days and weeks trying to explain to the detractors what “it” was.

During this time, I often found myself in debates with notable opponents. I vividly remember my debate with Hans Bethe, Nobel Laureate in physics, who also happened to have been my quantum mechanics professor in 1961 at Cornell University. Our debate was published in Science Digest in an article entitled “Can Star Wars make us safe?” Bethe answered no, and he was joined in his opinion by practically all of the academic scientists at the time. They argued that we had no plausible way to accomplish what they thought was Reagan’s goal: to protect all of us from the threat of nuclear-tipped ballistic missiles. I argued that the answer was yes, but I changed the definition of the goal to be more in keeping with my understanding of what Reagan really wanted, and in keeping with the wisdom of Dyson. Today, the demands for protection against the threat are much more complex, including hypersonic missiles, cruise missiles, anti-satellite weapons, and cyber-attacks. In fact, one of the scariest threats would be contagious bioweapons spread by swarms of crop sprayers launched from submarines near our coasts. But my answer to the question, will “it” make us safe, is still in the affirmative.

As before, the arms control experts have spoken out to explain that “it” just cannot be done. They repeat the same old arguments: it won’t work, it is too expensive, and it will create entirely new strategic instabilities. The question I asked at the time was “what is it?”, and I think that is still the right question to consider now.

So, what about now? Are we still arguing about “it” without understanding what it is? In my view, the goal is not to win the ultimate global war using space-based weapons; it is to prevent war. Maybe, with recent advances in technology, we can find new ways to accomplish that through a new approach to deterrence, one that involves a shared approach to a stable combination of defense and offense technology development. We will first need a breakthrough in vastly improved trusted communication and decision making in the face of confusion, chaos, threats, and fundamental disagreements. With the proliferation of advanced offensive weapon technology, we need to find a new, more hopeful path. Maybe there could be a stable system to prevent war through technology-enhanced information sharing, reduced offensive threats, and deterrence that prevents the initial steps toward war.

But I recall that Bethe’s final argument in our debate was that any defense could not be trusted, since it could not be tested under realistic conditions. I argued that we have already learned to live with deterrence that cannot be realistically tested, since deterrence is ultimately a question of psychology involving human decision making. It is conceivable that complex, reasoning-based information management and decision making could be assisted by AI that carries out simulated tests of a semi-infinite number of combinations of events and human decisions.
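What might such simulated testing look like? Here is a deliberately minimal Monte Carlo sketch. Every probability and decision rule in it is invented for illustration; a real analysis would rest on vastly richer models of sensors, communication, and human judgment. The sketch only shows the shape of the idea: run enormous numbers of random combinations of events and decisions, and estimate how often the system remains stable.

```python
# Toy Monte Carlo sketch of "simulated testing" of a deterrence posture.
# All probabilities and decision rules below are invented for illustration.

import random

def run_scenario(rng):
    """One random combination of events and human decisions.

    Returns True if the scenario ends stably (no escalation to war).
    """
    warning = rng.random() < 0.05          # chance of an ambiguous warning event
    if not warning:
        return True                        # quiet scenario: stable by default
    sensor_confirms = rng.random() < 0.30  # chance sensors confirm a real threat
    leaders_confer = rng.random() < 0.90   # chance trusted communication holds up
    if sensor_confirms and not leaders_confer:
        return False                       # confirmed threat + broken comms: escalation
    return True

def estimate_stability(trials=100_000, seed=1):
    """Fraction of simulated scenarios that end without escalation."""
    rng = random.Random(seed)
    stable = sum(run_scenario(rng) for _ in range(trials))
    return stable / trials

print(f"estimated stability: {estimate_stability():.4f}")
```

Even this toy version makes Bethe’s objection concrete: the simulation cannot replace a realistic test, but it can explore how sensitive stability is to each assumption, which is exactly where AI-assisted analysis might earn its keep.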

I remember when I was asked by Harold Agnew to lead the group to deliver a plan for the beam weapons component of the SDI. He said in a hushed tone that I had to take very seriously his warning that my job would be “very, very dangerous.” He said I could easily be trampled by the stampede of contractors going after funding. He was not encouraging, to say the least, and within a matter of weeks he walked away from any involvement. He never understood the Reagan goal of the program and was definitely opposed to any thought of nuclear weapons abolition. His concept of safety was the threat of destruction.

The “it” is still hard to define and has not become easier, but President Trump says there should be a way to protect us, and there should not be any limit to the amount of investment.  Maybe the “it” is a safe future world, and then the question is… can the Golden Dome make us safe? Let’s see what “it” is in the plan soon to be delivered to the President.