
Will AI Make Us Safer?
Imagine a computer glitch that sends a signal from the space-based missile launch warning system that an ICBM attack has begun. An automated, Artificial Intelligence (AI)-driven battle management center analyzes the data and responds in seconds. The result is the beginning of an exchange of thousands of nuclear-tipped missiles that destroys life on this planet.

“If only AI
could tell us what to do…”
Do we want high-consequence decisions under time-urgent, confusing conditions to be made by AI without a human in the loop? Will we trust the judgment of a non-human decision maker if it is contrary to the subjective feelings of the humans in charge? In the sci-fi movie 2001: A Space Odyssey, the computer HAL responded to an instruction with the famous line, “I’m sorry, Dave. I’m afraid I can’t do that.” In the future, HAL may send the message, “I’m sorry, Dave, that I destroyed the world, but I had no choice.”
High-consequence decision making is a challenge for humans under the best of conditions, but the situation is often further constrained by sleep deprivation, anxiety, stress, miscommunication, misunderstanding, superstition, irrational fear, and other pressures. Under those conditions, decisions might be better carried out by HAL. Maybe a decision devoid of feelings of regret would be superior to the judgments of irrational, overwrought human beings.
Throughout history, humans have made mistakes with vital decisions, and significant negative outcomes have been the result. In this post I describe several instances where, in my opinion, the key decision makers would have been more successful if real-time, extensive data collection and AI analysis had been coupled with the decision system provided by one or more high-performance 20-watt computers, namely their brains. On the other hand, in some cases, humans may not be the best qualified to make the final decisions due to their deeply held beliefs and biases.
One widely recognized decision (that some historians describe as the decision that “saved the world”) took place in September 1983. Stanislav Petrov, the officer in charge of monitoring data from the Soviet satellite early-warning system, detected signals indicating an ICBM attack. He decided it was a false alarm and did not initiate the events that could have resulted in nuclear war.
Contrast this with the 1988 incident when the USS Vincennes mistakenly shot down an Iranian passenger jet. The commander faced a mix of accurate and misleading information, resulting in a tragic error. In such a scenario, AI could potentially offer real-time analysis and reduce human error—provided it’s designed to support rather than replace human judgment.
AI has the potential to enhance decision-making, especially when combined with human expertise. There are situations where human biases, deeply ingrained beliefs, and emotional responses may cloud judgment, making AI’s objectivity an asset.
The importance of human decision making played a major role during the 1980s when I was involved in developing the battle management plans for Ronald Reagan’s proposed missile defense program, labeled in the press as Star Wars. When faced with the detection of a Soviet missile attack, Reagan decided it would be better to defend rather than to retaliate. The defense against a highly capable ICBM attack would require deciding in minutes to initiate the launch of a defensive system. AI wasn’t available then, but had it been, would we have turned to it for a data handling, analysis, and decision process to help determine if an attack was real and help plan our response? Perhaps such split-second decisions in the face of overwhelming complexity would have been better handled by a combination of AI and human judgment than by humans alone.
Another example from history illustrates how life and death decisions can be made incorrectly. The question is, would having AI assistance have made any difference in the outcome? In my role as the Strategic Defense Initiative chief scientist, I was assigned to negotiate deals to advance the SDI concept, so I met with Karl Beckurts, the research director of Siemens, one of the largest electronics companies in the world. My assignment was to persuade him to have his company join us. We had a rather fancy lunch, and our discussion was quite pleasant, but he made it clear that he, speaking for his corporate leadership, had decided that they had no interest in working with the United States on the SDI program. That lunch may have cost him his life.
On July 9, 1986, in Bonn, West Germany, Beckurts was killed. A remote-controlled bomb detonated as his limo passed by a tree on his established route to work. A seven-page letter from the Red Army Faction (RAF), undoubtedly under the direction of the KGB, was found near the location of the bombing. The letter said that he had been killed because he was preparing to support Ronald Reagan’s program to develop a space-based missile defense program called the Strategic Defense Initiative.

The Soviet decision to take Beckurts’ life is an example of flawed human decision making based on false perceptions and misunderstandings. But, even if AI support had been available to the Soviets during the 1980s, I doubt they would have turned to it. You see, they had already made up their minds about the threat from SDI.
Much like the Germans, both the French and British decided not to work with the United States on developing missile defense. One nation made a different decision. This country was interested in engaging in a joint program to develop a long-range missile defense system, and they were willing to share the cost with us. Their goal was not to help us, but to defend themselves. Their decision process seemed to me to be straightforward, not requiring any complex analysis: they were surrounded by enemies who posed an existential missile threat. The cooperative program was called Arrow, and we established a development program that has been continually advanced over the years. The most advanced version, Arrow 3, a Boeing/Israel Aerospace Industries program, is reported to have successfully intercepted a ballistic missile attack from Iran, and the defense system is now being upgraded. It appears that the Israeli decision paid off.

I want to close by emphasizing that I believe that human decision-making processes are the best way to deal with the ambiguity of complex issues, but in some cases, people could also benefit from technical assistance provided by AI. Ten years ago, I patented a concept for human brain cognitive enhancement. I explore this idea in my latest science fiction novel, The Dragon’s Brain, which will be released in October 2024. I am working on the final book in this series, in which I consider whether our decision makers have the wisdom, perspective, and values to make the right decisions to preserve peace in what seems to be an increasingly unstable world. Today Iran is threatening to launch a retaliatory attack on Israel, and President Biden is trying to persuade Netanyahu to agree to an end to the war in Gaza. By the time you read this, the decisions of people based on their own brains, without any help from computers, could impact world peace.
The Here After
As I approach my 85th birthday, I notice that I am spending more time thinking about the hereafter.
This often happens when Jane asks me to go get something from the garage, and (although I don’t respond fast enough to suit her) eventually I wander into the garage and take a look around. Then I exclaim to myself: What am I here after?
I have concluded this is a symptom of mild (I hope) cognitive decline that I have been studying by reading the neuro literature related to brainwave entrainment. This has been a primary subject of my study for the last 15 years.
I became interested in the brain and the subject of decision making under stress through the various defense committees I have served on. Through this study, I became convinced that the advanced technology of warfighting, such as high-power lasers, was not as important as the technology of brain performance in warfighting. Ten years ago, I even convinced the Lockheed Martin Skunk Works to obtain a patent for my concept of cognitive enhancement using brainwave entrainment (Cognitive Enhancement Using Feedback, US 20150297108A1).
Since then, I have become convinced that cognitive enhancement is extremely important for us old folks. The literature on the aging brain is fairly extensive, documenting that the normal brain undergoes a decline in fast reasoning and memory retention with age. Some humans go downhill faster than others. The number of old folks who start with cognitive deficiency, then encounter early dementia, then Alzheimer’s, seems to me to be an increasingly serious societal, political and financial problem. Modern medicine has been able to keep us old folks physically healthy, but what about the brain?
Then, 10 years ago, while searching the vast neurotech literature, I happened upon the brilliant and groundbreaking work of Maiken Nedergaard. Her 2020 review paper persuaded me that she had figured out a critical problem and hinted at the solution.
I quote the key idea from her work: “Sleep quality decreases as we age, and disruption of the regular sleep architecture is a frequent antecedent to the onset of dementia in neurodegenerative diseases. The glymphatic system degrades with age, suggesting a causal relationship between sleep disturbance and symptomatic progression in the neurodegenerative dementias.” She goes on to say, “Glymphatic failure may constitute a therapeutically targetable final common pathway.”
Nedergaard has become widely recognized for her discovery of the connection between sleep and brain disease, and she recently received an award from the Human Frontier Science Program that states her research has “forever changed the way we understand sleep as an essential biological function that promotes brain health.” Now the National Institutes of Health (NIH) has awarded $15 million to a team of researchers led by Nedergaard to improve the complex processes of brain cleaning. This could provide a pathway for dealing with the increasingly widespread tragedy of Alzheimer’s disease.
Before I learned about the work of Nedergaard, I found literature from almost 15 years ago showing that strong slow-wave brain oscillation and enhanced glymphatic flow could be triggered with an oscillating electric field that entrained slow brain waves. I gave a presentation at a neurotech conference 10 years ago at the Potomac Institute for Policy Studies entitled (with some humor) “Zap Your Way to Enhancement.” I suggested that slow-wave sleep could be enhanced through the noninvasive application of alternating current. I envisioned a practical sleep enhancement application, which I called the ZZZapcap.
I am increasingly hopeful that a product such as the ZZZapcap could be made available to the public soon. I am suggesting that my patent, which provides a concept for using feedback to entrain one-hertz brain waves, could be used to enhance glymphatic flow, clean the brain of amyloid buildup, and thus prevent dementia and Alzheimer’s.
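To make the feedback idea concrete, here is a minimal, hypothetical sketch of closed-loop timing, my illustration only, not the patented design and not a medical device: it estimates the phase of a roughly one-hertz slow wave in a recorded signal and schedules a stimulus pulse at each estimated wave peak. The sampling rate, smoothing window, quarter-cycle delay, and synthetic data are all illustrative assumptions.

```python
import numpy as np

FS = 250          # sample rate in Hz (assumed)
TARGET_HZ = 1.0   # slow-wave frequency to entrain

def moving_average(x, win):
    """Crude low-pass filter: moving average over `win` samples."""
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="same")

def schedule_pulses(eeg, fs=FS):
    """Return estimated stimulus times (in seconds).

    Feedback sketch: smooth the signal to isolate the ~1 Hz
    component, find upward zero-crossings, and fire a stimulus
    a quarter-cycle later, i.e. near the estimated wave peak.
    """
    slow = moving_average(eeg, int(fs * 0.2))
    # indices where the smoothed signal crosses zero going up
    rising = np.where((slow[:-1] < 0) & (slow[1:] >= 0))[0]
    quarter_cycle = 1.0 / (4 * TARGET_HZ)
    return rising / fs + quarter_cycle

# Demo on synthetic data: a noisy 1 Hz "slow wave" (10 s).
t = np.arange(0, 10, 1 / FS)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * TARGET_HZ * t) + 0.3 * rng.standard_normal(t.size)

pulses = schedule_pulses(eeg)  # roughly one pulse per wave cycle
```

In a real closed-loop device the pulse times would drive a stimulator as the signal streams in; here they are simply computed offline from synthetic data to show the phase-tracking logic.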

Since I have no environment to try out this concept, I have resorted to writing fiction to explore my ideas. My soon-to-be published novel, The Dragon’s Brain, delves into the idea of brainwave entrainment. The heroine of the story recovers from a coma after using what I call the Brainaid. Look for The Dragon’s Brain in bookstores this October and get in touch if you know anyone with the connections and capacity to make this revolutionary product a reality.
Reference: Maiken Nedergaard, “Glymphatic failure as a final common pathway to dementia,” pubmed.ncbi.nlm.nih.gov/33004510/