Wouldn’t it be wonderful if your workout actually made you smarter? Imagine sitting on a sleek recumbent bike, pedaling steadily while wearing a “Brainaid” headband. Your body powers the machine; your brain powers your future.
On the screen in front of you, an Artificial Socratic Intelligence (ASI) doesn’t just count your calories — it questions your assumptions. You’re thinking harder, not just sweating harder. And in this new kind of fitness salon — equal parts gym, spa, and think tank — you’re training the two most important organs for survival: your body and your brain.
Here’s the wild part: your brain runs on about 10 watts of power — the same as a dim light bulb — yet it can do things no gigawatt data center can. So imagine a collective of 100 enhanced humans, each equipped with ASI guidance and Brainaid synchronization. Together, this “10-Watt Collective” would have the cognitive power of a supercomputer cluster while drawing about a kilowatt, roughly the power of a single microwave oven. It’s the opposite of the arms race for faster chips and bigger clouds. Instead of energy-hungry machines replacing us, humans become smarter, stronger, and more efficient — the power plants of intelligence itself.
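A quick back-of-envelope check shows the scale of the contrast. All the figures below are rough assumptions: brain power estimates run 10 to 20 watts (the low end is used here), and the 100-megawatt figure simply stands in for a large AI data center.

```python
# Back-of-envelope energy comparison. All figures are rough assumptions:
# per-brain draw is commonly estimated at 10-20 W (low end used here),
# and 100 MW stands in for a large AI data center.
BRAIN_WATTS = 10
COLLECTIVE_SIZE = 100
DATA_CENTER_WATTS = 100e6

collective_watts = BRAIN_WATTS * COLLECTIVE_SIZE  # 1,000 W, about one microwave oven
print(f"10-Watt Collective draw: {collective_watts:,.0f} W")
print(f"Assumed data center draw: {DATA_CENTER_WATTS:,.0f} W")
print(f"Ratio: {DATA_CENTER_WATTS / collective_watts:,.0f}x")
```

Under these assumptions the data center draws about 100,000 times more power than the whole collective, which is the point of the thought experiment.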
But there’s a catch. If such a collective were to emerge, it could also become dangerous. History is full of examples — from Hitler’s inner circle to modern autocratic regimes — of tight, obedient collectives gone terribly wrong. When shared intelligence turns into shared obedience, creativity dies. And when people stop questioning authority, evil slips in quietly.
The same thing could happen to an AI-assisted collective. If the Socratic questions stop, if the AI starts reinforcing rather than challenging, the group could turn into a perfect machine for self-delusion — efficient, confident, and utterly wrong.
We are in danger of letting our brains go soft. Not from lack of food or oxygen, but from lack of thinking. When people stop questioning, they start repeating. When they stop wondering, they start following. In the past, dictators used fear and lies to make people stop thinking. Today, it’s comfort, distraction, and endless scrolling. The machine tells us what to believe, what to like, even what to feel — and we let it. Maimonides said being perplexed — confused, uncertain — is a good thing. It means your brain is still working. But the lazy brain wants easy answers, and that’s where evil begins. If AI becomes just another way to make people stop thinking, then we are building the perfect trap — fast, smart, and completely unreflective. That’s how collectives, even well-meaning ones, turn dark. The only cure is curiosity.
We live in a world haunted by wicked problems—problems that mutate, multiply, and resist every attempt at resolution. Climate collapse. Authoritarian drift. AI accelerating beyond oversight. Debt spirals. War without end. Each of these crises feeds on the others, creating a Gordian knot that neither our leaders nor our institutions seem able to untangle.
What can be done?
One idea, admittedly audacious, is to take a lesson from history. In 1953, President Eisenhower launched Project Solarium, a bold experiment in structured strategic debate. Three teams of experts—each representing a different philosophy toward Soviet containment—were asked to develop, argue, and defend their strategies. Eisenhower didn’t pick favorites; he wanted the sharpest disagreements and clearest thinking to shape America’s Cold War policy. The outcome helped solidify the doctrine of containment, which guided U.S. policy for decades.
Could we do something similar today—only smarter, faster, and more inclusive?
That’s the premise of a Solarium 2.0. Not a political stunt or top-down policy declaration, but a structured process to think our way through global dilemmas. Teams of diverse thinkers—equipped with AI-enhanced reasoning tools (software that helps analyze data, challenge assumptions, and model outcomes in real-time)—would be guided by an experimental new concept: a values-based AI coach. This coach, rather than dictating answers, prompts participants to consider ethical frames, recognize cognitive bias, and evaluate decisions through shared human values like justice, sustainability, and long-term resilience. Why a coach? Because even smart, ethical people face cognitive overload, fall into groupthink, or overlook long-term consequences—especially under stress. A values-based AI coach doesn’t replace human judgment; it enhances it by prompting reflection, highlighting ethical dimensions, and encouraging diverse viewpoints in real time.
It’s part war game (testing how different strategies play out under pressure), part design studio (developing novel solutions), and part constitutional convention (re-examining foundational assumptions and frameworks for action). The goal isn’t to draft legal documents but to explore better ways to think, act, and collaborate in a world of accelerating complexity.
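To make the values-based coach concrete, here is a minimal sketch of the interaction pattern. The names, value list, and bias checks are hypothetical, chosen purely for illustration; the one design constraint worth noticing is that the coach returns only questions, never verdicts.

```python
from dataclasses import dataclass

# Hypothetical value frames and bias checks, chosen purely for illustration;
# a real deployment would need a vetted, culturally negotiated list.
VALUES = ["justice", "sustainability", "long-term resilience"]
BIAS_CHECKS = {
    "groupthink": "Who in the room disagrees, and have they spoken yet?",
    "anchoring": "Would you reach the same conclusion starting from different data?",
    "short-termism": "How does this decision look in 10, 50, and 100 years?",
}

@dataclass
class Proposal:
    author: str
    text: str

def coach_prompts(proposal: Proposal) -> list[str]:
    """Return Socratic questions only -- never verdicts or recommendations."""
    prompts = [f"Which stakeholders does '{proposal.text}' leave out?"]
    prompts += [f"How does this serve {v}, and at whose expense?" for v in VALUES]
    prompts += list(BIAS_CHECKS.values())
    return prompts

# Usage: every team proposal passes through the coach before any vote.
for q in coach_prompts(Proposal("Team B", "a carbon tariff with revenue sharing")):
    print("-", q)
```

In a live Solarium 2.0 session, something like this would run between proposal and vote, keeping the AI in the orchestrator role rather than the oracle role.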
But wait—who gets to play? Who chooses the questions? Why should anyone listen?
These are critical questions. And they lead to a bigger one: Should the new Solarium be national or global?
Solarium 2.0 is envisioned as an international initiative, drawing participants from multiple countries and cultural perspectives. Today’s decision-makers—men like Putin, Trump, Netanyahu—often play zero-sum games. But our survival may depend on win/win solutions. We may not have an Eisenhower today. But we can build the process he pioneered—updated for our era.
AI-assisted collective intelligence—humans working with AI systems to make better group decisions—isn’t science fiction. It’s a growing research field. But most systems today focus on narrow tasks or post-hoc analysis. What we need is a leap forward: real-time human-AI decision-making for wicked problems, with the AI acting not as oracle, but as orchestrator.
In a 2024 review of human-AI collaboration published in Intelligent-Based Systems, Hao Cui—a leading researcher in multi-agent systems—surveyed dozens of experiments. Most involved controlled environments and constrained tasks. But none attempted a live simulation of high-stakes, global-scale problem-solving under cognitive load. Why not? Possibly because of technical hurdles—or perhaps because we’ve lacked the will to convene the right people with the right tools.
That’s why I’m calling for a bold experiment: the Solarium 2.0 War Game.
Convene it under the auspices of a respected neutral body—such as the U.S. National Academies of Sciences, Engineering, and Medicine—and draw participants from national labs, forward-thinking companies, and leading universities. Equip them with emerging AI platforms designed not just for speed or scale, but for principled, values-aligned decision support. Use real-world scenarios, real time pressure, and real disagreement.
We’ve theorized enough. Now it’s time to find out:
Can we do better—together?
(Disclosure: This document represents a collaboration of human and artificial intelligence. The illustration was created by AI.)
As I mentioned in an earlier blog post, now that I’m in my 80s, I frequently find myself dwelling on the hereafter. In other words, I find myself walking into the garage in the middle of an errand, presumably fetching something for my wife, and I ask myself, “What am I here after?” I’ve even tried asking AI, since most people are outsourcing tasks to AI nowadays. Alas, AI (like me) had not listened to my wife.
This leads me to consider AI’s shortcomings (a much better choice than dwelling on my own shortcomings!). But does AI have shortcomings? As artificial intelligence continues to accelerate in speed and scale, many assume that the future of problem-solving will be machine-dominated. But what if neither AI nor unaided human reasoning is sufficient to address the wicked problems of our age? Wicked problems have no closed-form solutions. They defy simple answers and cut across traditional domains of expertise. They require not just faster computation but deeper wisdom—and new ways of thinking.
Before turning to AI for assistance, let’s start by considering what we can do to make people think better. Is there a way to boost human intelligence and cognitive abilities? If you have read my latest novel, The Dragon’s Brain, you are familiar with an invention called the Brainaid, a form of non-invasive brain stimulation and brain wave entrainment that improves focus, creativity, memory, and emotional regulation. If you haven’t read The Dragon’s Brain, what are you waiting for? Head to your favorite bookstore or website and get your copy now. (Here’s the Amazon link.) In any case, the fictional Brainaid is based on a similar brain stimulation device that I patented many years ago. While a fully functional product is yet to hit the market, I firmly believe that science can–and will–enhance human thought. AI is getting smarter, so why not enhance the human side of the equation too?
The current narrative often positions AI as a replacement for human intelligence. But the reality is more nuanced. Machines excel at speed, scale, and pattern recognition. Humans contribute meaning, ethics, empathy, and insight. Each is powerful on its own—but neither is complete.
AI alone can be dangerous. It lacks human intuition and empathy. Perhaps you’ve met a physicist, engineer, or computer scientist who also lacks human intuition and empathy. I earned my bachelor’s degree in engineering physics from Cornell University, where I was trained to think precisely, quantitatively, and rigorously. But two courses outside the engineering curriculum had an even greater long-term impact on how I think. The first was a course in the Philosophy of Science taught by Max Black, a sharp and elegant thinker who challenged us to ask not just how science works, but what we mean when we claim to know something. The second was a course in the Philosophy of Religion taught by Milton Konvitz, who opened my mind to the moral foundations of law, liberty, and human dignity—drawing from both secular and religious traditions. These classes taught me to ask hard questions, tackle wicked problems, and never separate the technically possible from the ethically responsible.
That’s why I propose a purposeful collaboration between humans and AI. There is no need to hand our cognitive duties over to machines; rather, we need to enhance our own abilities and learn to use artificial intelligence to help us tackle the wicked problems we’ve been unable to solve.
Imagine a decision system composed of diverse humans enhanced for clarity, openness, and ethical discernment, using AI to optimize data processing, information retrieval, and scenario modeling. Imagine humans working in partnership with AI to develop a problem-solving approach that supports feedback, deliberation, and adaptation with accountability, transparency, and value alignment. This is not a hive mind. It is more like a meta-cortex—a layered neural system in which individual insight is preserved, but multiplied through structured collaboration and augmented intelligence.
If we succeed, we may find ourselves entering a new age: not of superintelligence, but of super-collective wisdom. We may experience:
- A future where leadership is not about who dominates, but about who understands.
- A future where advancing age, like mine—95 in a decade—is not a barrier but an asset.
- A future where the tools of science and technology serve a higher purpose: to help us decide wisely, together.
So, as I stand in the garage asking myself, “What am I here after?” Maybe just that.
Imagine a computer glitch that causes the space-based missile launch warning system to signal that an ICBM attack has begun. An automated Artificial Intelligence (AI) battle management center for data analysis and decision-making responds in seconds. The result is the beginning of an exchange of thousands of nuclear-tipped missiles that destroys life on this planet.
“If only AI could tell us what to do…”
Do we want high-consequence decisions under time-urgent, confusing conditions to be made by AI without a human in the loop? Will we trust the judgment of a non-human decision-maker if it is contrary to the subjective feelings of the humans in charge? In the sci-fi movie 2001: A Space Odyssey, the computer HAL responded to an instruction with the famous line, “I’m sorry, Dave. I’m afraid I can’t do that.” In the future, HAL may send the message, “I’m sorry, Dave, that I destroyed the world, but I had no choice.”
High-consequence decision making is a challenge for humans under the best of conditions, but the situation is often further strained by sleep deprivation, anxiety, stress, miscommunication, misunderstanding, superstition, irrational fear, and other pressures. Under such conditions, perhaps decisions would be better carried out by HAL. Maybe a decision devoid of feelings of regret would be superior to the judgments of irrational, overwrought human beings.
Throughout history, humans have made mistakes in vital decisions, with significant negative outcomes as the result. In this post I describe several instances where, in my opinion, the key decision makers would have been more successful had real-time, extensive data collection and AI analysis been coupled to the decision system they already carried: one or more high-performance 20-watt computers, namely their brains. On the other hand, in some cases humans may not be the best qualified to make the final decisions, because of their deeply held beliefs and biases.
One widely recognized decision (one that some historians describe as the decision that “saved the world”) took place in September 1983. Stanislav Petrov, the Soviet officer in charge of monitoring data from the Soviet satellite early warning system, detected signals indicating an ICBM attack. He judged it a false alarm and did not initiate the chain of events that could have resulted in nuclear war.
Contrast this with the 1988 incident when the USS Vincennes mistakenly shot down an Iranian passenger jet. The commander faced a mix of accurate and misleading information, resulting in a tragic error. In such a scenario, AI could potentially offer real-time analysis and reduce human error—provided it’s designed to support rather than replace human judgment.
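What might “support rather than replace” look like in software? Here is a minimal sketch under assumed names, probabilities, and thresholds: the system fuses independent evidence, flags disagreement as a possible false alarm, and is structurally incapable of escalating on its own.

```python
# Sketch of "support, not replace": the software fuses independent evidence
# and reports confidence, but escalation always requires a human decision.
# Sensor names, probabilities, and thresholds are illustrative assumptions.

def fused_confidence(reports: dict[str, float]) -> float:
    """Naive fusion: average the independent per-source attack probabilities."""
    return sum(reports.values()) / len(reports)

def advise(reports: dict[str, float]) -> str:
    conf = fused_confidence(reports)
    spread = max(reports.values()) - min(reports.values())
    advice = f"Fused confidence of attack: {conf:.0%}."
    if spread > 0.5:  # sources disagree sharply
        advice += " WARNING: sources conflict; treat as a possible false alarm."
    return advice + " No action will be taken without human authorization."

# A Petrov-like situation: one satellite channel screams, radar sees nothing.
print(advise({"satellite_ir": 0.95, "ground_radar": 0.05, "human_watch": 0.10}))
```

In a Petrov-like situation, a tool along these lines might have surfaced the sensor disagreement he intuited, while leaving the final call where he insisted it belonged: with a human.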
AI has the potential to enhance decision-making, especially when combined with human expertise. There are situations where human biases, deeply ingrained beliefs, and emotional responses may cloud judgment, making AI’s objectivity an asset.
Human decision making played a major role in my own experience during the 1980s, when I was involved in developing the battle management plans for Ronald Reagan’s proposed missile defense program, labeled in the press as Star Wars. Reagan had decided that, when faced with the detection of a Soviet missile attack, it would be better to defend than to retaliate. Defending against a highly capable ICBM attack would require deciding within minutes to launch the defensive system. AI wasn’t available then, but had it been, would we have turned to it for data handling, analysis, and decision support to help determine whether an attack was real and to help plan our response? Perhaps such split-second decisions in the face of overwhelming complexity would have been better handled by a combination of AI and human judgment than by humans alone.
Another example from history illustrates how life-and-death decisions can be made incorrectly. The question is, would AI assistance have made any difference in the outcome? In my role as the Strategic Defense Initiative chief scientist, I was assigned to negotiate deals to advance the SDI concept, so I met with Karl Beckurts, the research director of Siemens, one of the largest electronics companies in the world. My assignment was to persuade him to have his company join us. We had a rather fancy lunch, and our discussion was quite pleasant, but he made it clear that he, speaking for his corporate leadership, had decided that they had no interest in working with the United States on the SDI program. That lunch may have cost him his life.
On July 9, 1986, in Bonn, West Germany, Beckurts was killed. A remote-controlled bomb detonated as his limo passed by a tree on his established route to work. A seven-page letter from the Red Army Faction (RAF), undoubtedly under the direction of the KGB, was found near the location of the bombing. The letter said that he had been killed because he was preparing to support Ronald Reagan’s program to develop a space-based missile defense program called the Strategic Defense Initiative.
The Soviet decision to take Beckurts’ life is an example of flawed human decision making based on false perceptions and misunderstandings. But, even if AI support had been available to the Soviets during the 1980s, I doubt they would have turned to it. You see, they had already made up their minds about the threat from SDI.
Much like the Germans, both the French and the British decided not to work with the United States on developing missile defense. One nation made a different decision. This country was interested in engaging in a joint program to develop a long-range missile defense system, and it was willing to share the cost with us. Its goal was not to help us but to defend itself. Its reasoning seemed to me to be straightforward, requiring no complex decision process: it was surrounded by enemies who posed an existential missile threat. The cooperative program was called Arrow, and we established a development program that has been continually advanced over the years. The most advanced version, Arrow 3, a Boeing/Israel Aerospace Industries program, is reported to have successfully intercepted ballistic missiles launched from Iran, and the defense system is now being upgraded. It appears that the Israeli decision paid off.
I want to close by emphasizing my belief that human decision-making processes are the best way to deal with the ambiguity of complex issues, but in some cases people could also benefit from technical assistance provided by AI. Ten years ago, I patented a concept for human brain cognitive enhancement. I explore this idea in my latest science fiction novel, The Dragon’s Brain, which will be released in October 2024. I am working on the final book in this series, in which I consider whether our decision makers have the wisdom, perspective, and values to make the right decisions to preserve what seems to be an increasingly unstable world. Today Iran is threatening to launch a retaliatory attack on Israel, and President Biden is trying to persuade Netanyahu to agree to an end to the war in Gaza. By the time you read this, the decisions of people based on their own brains, without any help from computers, could impact world peace.