Will AI Make Us Safer?

Imagine a computer glitch that sends a signal from the space-based missile launch warning system indicating that an ICBM attack has begun. An automated Artificial Intelligence (AI) data analysis and decision battle management center responds in seconds. The result is the beginning of an exchange of thousands of nuclear-tipped missiles that destroys life on this planet.

Do we want high-consequence decisions under time-urgent, confusing conditions to be made by AI without a human in the loop? Will we trust the judgment of the non-human decisionmaker if it is contrary to the subjective feelings of the humans in charge? In the sci-fi movie 2001: A Space Odyssey, the computer HAL responded to an instruction with the famous line, “I’m sorry, Dave. I’m afraid I can’t do that.” In the future, HAL may send the message, “I’m sorry, Dave, that I destroyed the world, but I had no choice.”

High-consequence decision making is a challenge for humans under the best of conditions, but the situation is often further constrained by sleep deprivation, anxiety, stress, miscommunication, misunderstandings, superstitions, irrational fear, and other demands. Under such conditions, decisions might be better carried out by HAL. Maybe a decision devoid of feelings of regret would be superior to the judgments of irrational, overwrought human beings.

Throughout history, humans have made mistakes with vital decisions, and significant negative outcomes have been the result. In this post I describe several instances where, in my opinion, the key decision makers would have been more successful if real-time, extensive data collection and AI analysis had been coupled with the decision system provided by one or more high-performance 20-watt computers, namely their brains. On the other hand, in some cases, humans may not be the best qualified to make the final decisions due to their deeply held beliefs and biases.

One highly recognized important decision (one that some historians describe as the decision that “saved the world”) took place in September 1983. Stanislav Petrov, the officer in charge of monitoring data from the Soviet satellite early warning system, detected signals indicating an ICBM attack. He decided it was a false alarm and did not initiate the events that could have resulted in nuclear war.

Contrast this with the 1988 incident when the USS Vincennes mistakenly shot down an Iranian passenger jet. The commander faced a mix of accurate and misleading information, resulting in a tragic error. In such a scenario, AI could potentially offer real-time analysis and reduce human error—provided it’s designed to support rather than replace human judgment.

AI has the potential to enhance decision-making, especially when combined with human expertise. There are situations where human biases, deeply ingrained beliefs, and emotional responses may cloud judgment, making AI’s objectivity an asset.

The importance of human decision making played a major role during the 1980s, when I was involved in developing the battle management plans for Ronald Reagan’s proposed missile defense program, labeled in the press as Star Wars. Reagan decided that, when faced with the detection of a Soviet missile attack, it would be better to defend than to retaliate. Defense against a highly capable ICBM attack would require deciding within minutes to launch a defensive system. AI wasn’t available then, but had it been, would we have turned to it for a data handling, analysis, and decision process to help determine whether an attack was real and to help plan our response? Perhaps such split-second decisions in the face of overwhelming complexity would have been better handled by a combination of AI and human judgment than by humans alone.

Another example from history illustrates how life-and-death decisions can be made incorrectly. The question is, would AI assistance have made any difference in the outcome? In my role as the Strategic Defense Initiative chief scientist, I was assigned to negotiate deals to advance the SDI concept, so I met with Karl Heinz Beckurts, the research director of Siemens, one of the largest electronics companies in the world. My assignment was to persuade him to have his company join us. We had a rather fancy lunch, and our discussion was quite pleasant, but he made it clear that he, speaking for his corporate leadership, had decided that they had no interest in working with the United States on the SDI program. That lunch may have cost him his life.

On July 9, 1986, in Bonn, West Germany, Beckurts was killed. A remote-controlled bomb detonated as his limo passed a tree on his established route to work. A seven-page letter from the Red Army Faction (RAF), undoubtedly acting under the direction of the KGB, was found near the location of the bombing. The letter said that he had been killed because he was preparing to support Ronald Reagan’s effort to develop a space-based missile defense program called the Strategic Defense Initiative.


The Soviet decision to take Beckurts’ life is an example of flawed human decision making based on false perceptions and misunderstandings. But, even if AI support had been available to the Soviets during the 1980s, I doubt they would have turned to it. You see, they had already made up their minds about the threat from SDI.

Much like the Germans, both the French and the British decided not to work with the United States on developing missile defense. One nation made a different decision. This country was interested in engaging in a joint program to develop a long-range missile defense system, and it was willing to share the cost with us. Its goal was not to help us, but to defend itself. Its decision process seemed to me to be straightforward, not requiring any complex analysis: it was surrounded by enemies who posed an existential missile threat. The cooperative program was called Arrow, and we established a development program that has been continually advanced over the years. The most advanced version, Arrow 3, a Boeing/Israel Aerospace Industries program, is reported to have successfully intercepted a ballistic missile attack from Iran, and the defense system is now being upgraded. It appears that the Israeli decision paid off.

I want to close by emphasizing my belief that human decision-making processes are the best way to deal with the ambiguity of complex issues, but in some cases, people could also benefit from technical assistance provided by AI. Ten years ago, I patented a concept for human brain cognitive enhancement. I explore this idea in my latest science fiction novel, The Dragon’s Brain, which will be released in October 2024. I am working on the final book in this series, in which I consider whether our decision makers have the wisdom, perspective, and values to make the right decisions to preserve what seems to be an increasingly unstable world. Today Iran is threatening to launch a retaliatory attack on Israel, and President Biden is trying to persuade Netanyahu to agree to an end to the war in Gaza. By the time you read this, the decisions of people based on their own brains, without any help from computers, could impact world peace.

The Russian Nuclear Space Weapon

After Mike Turner, the chair of the House Intelligence Committee, warned the public about what he called a “serious national security threat,” and the White House confirmed that the Russians are developing a “troubling anti-satellite weapon,” I was motivated to add some historical perspective based on my study of directed energy weapons.

Garin Death Ray

H.G. Wells was the first to invent the concept of directed energy weapons, also known as death rays, in his 1898 sci-fi novel “The War of the Worlds.” He was far ahead of his time. The Soviets were next to join the death ray fiction world. In the 1920s, Aleksey Tolstoy published a sci-fi novel, “Garin’s Death Ray,” that focused not just on the weapon but also on the psychology of the inventor. The novel attracted the attention of the Soviet military, and its non-technical themes were prophetic. The hero of the book, Garin, described his invention of an incredibly destructive beam weapon, but he also had many detractors who claimed, “This invention smells of higher politics.”

After the recent revelations, CNN published its “exclusive account” that “Russia is attempting to develop a nuclear space weapon to destroy satellites with a massive energy wave when detonated, potentially crippling a vast swath of the commercial and government satellites,” a move that would cross a dangerous Rubicon in the history of nuclear weapons and disrupt everyday life. The story was picked up in the European “Pravda,” which reported, “The weapon has the potential to destroy entire groups of small satellites such as Space X’s Starlink used by Ukraine in the war with Russia… and Moscow perceives the U.S. statements as an attempt to persuade Republicans in Congress to approve assistance for Ukraine.”

This revelation had similarities to my 2023 science fiction techno-thriller “The Dragon’s CLAW,” which describes a test of a secret low energy nuclear (cold fusion) weapon, initially discovered at Los Alamos, stolen by the Chinese, and tested for the first time on a remote island in the Pacific with surprising results. I wrote: “The results of the first trial of the Dragon’s CLAW had exceeded all of the researchers’ expectations, but not in a good way. The energy output was 10 times higher than anticipated, and it had created a giant electromagnetic pulse… revealed the existence of a new, tremendously energetic and very dangerous device… could destroy any nation’s electric grid infrastructure and all space-based communication, along with GPS.”

I wondered whether not just Tolstoy’s tale but also the CNN account and my own fictional invention were simply products of imagination. Then I remembered something about the Soviet secret program that I had learned from Sidney Drell, Stanford physics professor and arms control expert, when I was researching the impact of Reagan’s Star Wars program on the end of the Cold War. I documented this in my autobiographical SDI story, “Death Rays and Delusions.” From information published in 2007 by Drell and George Shultz, the former secretary of state who was directly involved in the 1986 Reykjavik summit between Reagan and Gorbachev, I learned that the Russians were very aware of and concerned about the development of nuclear driven electromagnetic weapons.

Gorbachev was told that the United States was developing nuclear driven directed energy weapons. He was informed that the “design concept for directed nuclear weapons, work on which began in the U.S. in the 1970s… weapons consist in transforming part of the energy from a nuclear explosion into powerful streams of directed x-rays or electromagnetic radiation or a stream of high energy particles. No less than three tests were conducted towards the creation of directed electromagnetic radiation weapons.”

He was also told, “Full scale development of these weapons is expected to occur in the second half of the 1990s.” Gorbachev was encouraged to negotiate a “ban on nuclear testing to prevent full scale development of directed energy weapons” and to prevent “military technical superiority of the U.S. in the development of munitions of the new generation for strategic weapons.” The Soviets’ concern about the possible development of nuclear directed energy weapons continued into the 1990s, as evidenced by the writings of the head of their nuclear weapon program, who in 1996 called for an end to the development of such weapons, which he called “an evil Jinn.”

In my novel, “The Dragon’s CLAW,” the fictional Los Alamos lab director successfully argued, “This is the ideal moment to admit that the competition over space weapons would ultimately be mutually destructive for both countries… We must draft an agreement to end all space weapons development and cooperate with energy research. The future of humanity depends on us.”

I was surprised when my fiction seemed to match some of the recent CNN revelations, but I admit that my creativity can hardly keep up with the thinking of energy weapon advocates who continue to be intrigued by new ideas related to powerful laser death rays and microwave weapons. Some claim that such energy beams are the cause of the Havana Syndrome that affected the minds of some 2,000 U.S. officials in diplomatic posts worldwide, and a CIA panel of experts concluded that some of these cases have as a plausible explanation “a directed pulsed radio frequency energy.” A microwave expert, James Benford, stated that the syndrome “certainly fits with a microwave beam as the attacking element,” but neither he nor the CIA suggested who the supposed foreign adversary might be.

I will leave it to the readers of my series of books (see projectzbooks.com) to find out what happens in the world of fictional electromagnetic brain weapons, but the technology development and its consequences continue in my next book, “The Dragon’s Brain,” to be published in September. That novel describes how electromagnetic energy weapons can attack not just space satellites but also the minds of diplomats and even the entire population of the world. Who knows where fiction ends and reality begins, but the work of H.G. Wells told us to be prepared for inventions we can hardly imagine.

Terrorism Is a Wicked Problem

Seventeen years ago, I was invited to give a presentation to a group of intelligence analysts at the Joint Military Intelligence College. At that time, I was managing the Advanced Concepts Group at Sandia National Labs, and my group was focusing much of our attention on emerging threats. A current issue was what was called “the global war on terrorism.” This war began in Afghanistan in 2001 after the Al-Qaeda attacks and continued for 20 years. During that period, it expanded to include Iraq in 2003, with one justification being the belief that Iraq was linked to Al-Qaeda.

The threat of terrorism was very much a major national security issue, and my presentation attempted to address it based on what I perceived as the needs of intelligence analysis. I had become increasingly interested in dealing with complex contemporary challenges, and I studied the systems engineering literature on approaches to solving such problems. What I learned was that most of my career as an engineer and physicist had been dominated by what were called tame problems, while the national security issues of the time were best described as wicked problems that would be long in duration.

I was convinced that the current military issues were best described by a timeline beginning with a long period of increasing threat, followed by a short period of conflict, and then a much longer post-conflict period of managing the threat. I thought the key to success would be in the hands of intelligence analysts who knew how to deal with wicked, not tame, problems.

Tame problems had been the focus of my training and career, and they are the typical challenge for analysts, engineers, and convergent thinkers. Such problems have a well-defined problem statement. For example, a tame problem is figuring out how to build a bridge. The corresponding wicked problem includes planning for the bridge, obtaining permission from the community and elected officials to build it, acquiring funding, scheduling, and working with the various individuals and agencies required to get it built. The bridge builders know what, where, and how to proceed toward a well-defined end point. They can learn from the records of similar bridges already built and can easily try out various paper designs and choose the most appropriate one. They have an orderly approach to analysis, design, and implementation, but they lack the divergent thinking needed to complete all of the other tasks that surround the bridge.

The typical tame approach can lead to disaster if the problem is really wicked: if the problem is defined incompletely or prematurely, or is influenced by desperation, ambition, fear, greed, hatred, or other emotions. Being driven to a hurried solution can lead to oversimplified options and an early, false belief that the problem is solved. The participants’ different perspectives, backgrounds, organizations, and prejudices can lead to escalating confusion, conflict, and paralysis.

A symptom of a wicked problem that is treated as tame is when the leader says, “Let’s get organized, put the right person in charge, get on with the solution, and get it done.”

The tame problem approach is a satisfying and coherent method of increasing knowledge. Wicked problem solving, on the other hand, can often be characterized by frustrating alternating periods of euphoria and utter depression. So, are wicked problems just another worthless activity that is, in the end, a hopeless mess?

Well, maybe, but if you know the problem is truly wicked, a wicked engineering analyst can make real progress by spending a great deal of time and effort comprehensively formulating the approach as a nonlinear spiral instead of a ladder of sequential steps. The key is to share the complexity with a group of creative thinkers and communicators who have a diversity of views. It’s important to share ideas frequently as the context of the system issues changes, and to avoid focusing on isolated pieces of the problem. Since premature belief in success will turn out to be the devil that prevents productive group cooperation, the participants need to trust each other as the game changes.

I concluded that without active counterterrorism intervention, the level of terrorist violence will remain low until a triggering threshold is passed, and, at some point, conflict will demand increased security emphasis. If successful, counterterrorism actions can then lead to a cessation of combat operations, followed by a long period of stabilization and reconstruction. During the active combat period, the adversary may apply such irregular methods as assassination of leaders; hostage taking; and cyber, chemical-biological, and infrastructure attacks. The adversary may also introduce social and psychological methods such as induced chaos, exponential migration, financial attack, and race wars. The symptom of terrorist success would be a disruption of societal stability and the stimulation of self-destructive behaviors.

A strength of the wicked engineering group could be the application of ubiquitous information technology, but the same technology, in the hands of terrorists, could also accelerate instabilities. It will therefore be necessary to take advantage of advances in complex computer modeling and simulation, as well as the application of neuroscience, to enhance cognition and group problem solving.

By gaining a neuro advantage over the adversary, methods of deterrence and dissuasion will become apparent. Advances across the neuroscience spectrum can enhance psychological armor and accelerate learning, cognition, and memory. Such methods, applied against the terrorists, can create confusion, fear, and a loss of understanding of the rapidly changing environment. Conversely, the adversary’s use of such psychological and information warfighting tools could lead to our early failure in dealing with the threat. Both the positive and negative implications must be understood.

At the time of my presentation, I believed that neuroscience advances in the hands of the adversary (who won’t have the same legal and moral constraints that we have) would have an important impact on the outcome. I also believed that the challenge of dealing with terrorism was open-ended and that there would likely never be a last move in this contest, so a happy ending to the story was not obvious.

My new novel, The Dragon’s C.L.A.W., also tackles a wicked problem. The protagonist realizes that his breakthrough invention, which has the ability to transform the world by providing clean, affordable, unlimited energy, can also be used to create a deadly weapon. I called upon my understanding of how to wrestle with wicked problems as I described how the character dealt with the conflict created by the dual nature of his work. Wondering how he resolves the problem? You’ll have to read the book!

Have you ever dealt with a wicked problem? How did you approach it? Did you resolve it? Comment below.

The Costanza Defense

Special Counsel Jack Smith recently presented an indictment of Donald Trump, accusing him of attempting to overturn the 2020 presidential election using techniques that Smith described as dishonesty, fraud, and deceit. Trump’s legal defense was immediate, namely that Trump was not lying and really believed what he was saying. This can be called “The Costanza Defense,” from the comedy Seinfeld: “if you believe it, it is not a lie.”

This defense is rather clever since it requires that the prosecution convince the jury that Trump was not sincere about his belief in the stories told to him by his legal team, individuals that the indictment labeled as co-conspirators. So now the arguments are all about the sincerity of the defendant. The opportunity for the defense attorneys is to claim that Trump was really sincere in his claim that he was not lying; he really believed what he was told by the very reputable attorneys he was paying big bucks. And I think his defense is going to win, or at least persuade one member of the jury, if the prosecutors do not understand the difference among truth tellers, liars, and bullshit artists. I learned about this subtle distinction from the 2005 book “On Bullshit” by the Princeton philosophy professor Harry G. Frankfurt.

Frankfurt explained that both truth tellers and liars know what is true and what is false, and both are sincere in their beliefs. BS artists, on the contrary, couldn’t care less about the truth; they only care about telling stories that win over their audiences. Frankfurt explains that a really accomplished bullshit artist is able to tell a false story often enough, consistently enough, and forcefully enough that the audience can be totally convinced, and no argument will convince them otherwise.

The loyal followers of Trump really believe that the BS artist is sincere in his story, and they will defend its false arguments even when they contradict simple logic. I am sure a really good BS artist can convince at least one member of the jury that Trump really believes he is innocent of any crime. Frankfurt explained that the accomplished BS artist “does not reject the authority of the truth… he pays no attention to it at all,” but if he convinces the audience, namely in this case the jury, or at least one member of it, that he is sincere in his BS, they won’t convict him. Frankfurt ends his book with the disturbing conclusion that “sincerity itself is bullshit.”

As they say in TV commercials, but wait, there’s more… There are lots of highly paid defense lawyers working on adding to the BS. The latest is that Trump’s claims were only aspirational and not really serious lies. One lawyer even said no reasonable person would take such claims seriously. Maybe Trump was just kidding? So, for completeness, let’s review the BS arguments that I am sure will be repeated often enough:

1. If you believe it, it is not a lie.
2. But what about that laptop?
3. My lawyers told me.
4. The deranged Special Counsel is really out to get you, not me, and I will protect you.
5. I was only kidding.

So what I am suggesting is that the prosecution will have a very tough time convincing the jury that one of the most accomplished BS artists of all time is not sincere when he claims, “But what about somebody else, and that laptop… I am just a gullible victim of despicable advisors, and I believed what my lying lawyers told me, so convict them, not me, and you can believe every perfect thing I tell you… believe me… and I am sincere… believe me… I am not a liar… trust me.”