Artificial Intelligence (AI): Is it the Nuclear Warhead of the Digital Age?

My Pop—a devoted Navy man—referred to the Cold War as the “Perfect War”. From an early age, he gave me military science lessons from the different wars he’d lived through—World War II, Korea, Vietnam—breaking down strategies, mistakes, and triumphs. The Cold War was his favorite, because “it was won without hurting anybody.”

The nuclear systems used to demonstrate power and deter attack from our enemies were the hallmark of a winning strategy. He walked me through our small rural town’s remaining fallout shelter, talked about how Americans prepared to survive a nuclear attack, and discussed the ways technological advancement shaped, and prevented, what he feared could have escalated into World War III. The methods of warfare and the technologies for fighting, and winning, battles had evolved immensely since his time sailing the Pacific.

Thanks to his early introduction to warfare, I became rather infatuated with the Cold War nuclear arms race. Intercontinental Ballistic Missiles (ICBMs) were my favorites: the Atlas, the Titan, the Minuteman, the Peacekeeper. As decommissioned missile launch sites and silos opened to the public, whether as National Historic Sites or through private ownership, I made pilgrimages to these relics of the “Perfect War”. To date, I’ve visited ICBM silos and launch sites in five different states, and I always walk away with a sense of awe at our nation’s ability to innovate in ways that win (and prevent) wars.

While visiting a decommissioned Minuteman Missile silo in South Dakota earlier this year, I was reminded of the hopes and aspirations our ICBM program carried for the future of warfare by this inscription at the entrance:

"Someday, an ultimate class of warriors will evolve, too strong to be contested. They will win their battles without having to fight, so that at last, the day may be won without shedding a single drop of blood." - Sun Tzu

Wow. Kind of like my hopes for cyberwarfare.

The Rise of AI: Echoes of the Nuclear Age

Over the past few years, the rapid emergence of Artificial Intelligence (AI) has drawn comparisons to nuclear weapons. Both are revolutionary technologies with the potential for immense benefit and catastrophic harm. Nuclear science brought humanity the possibility of clean energy through nuclear power, but it also gave rise to weapons capable of unparalleled devastation. Similarly, AI has opened doors to breakthroughs in medicine, education, and climate solutions while posing risks to privacy, security, and even democracy itself.

The analogy between AI and nuclear weapons is not without merit. Both represent pivotal moments in human history where technological advancements significantly outpaced society's ability to fully grasp or regulate their implications. However, there are critical differences between these two forces of innovation that highlight why AI may pose an even greater challenge in the Digital Age.

Here’s what I’m tracking in the great AI vs. nukes debate:

No. 1. The Accessibility of AI

One key difference lies in accessibility. Developing nuclear weapons requires specialized materials (e.g., enriched uranium or plutonium), expensive infrastructure, and significant scientific expertise. This exclusivity has kept nuclear weapons largely confined to nation-states, enabling global treaties like the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) to mitigate their spread. Most folks aren’t going to jerry-rig a nuclear warhead in their backyard garden shed. Nukes in the hands of terrorists or enemy states are a concern, but it’s difficult to fly under the radar, both figuratively and literally, with an ICBM.

AI, in contrast, is widely accessible. Its development requires computational power, data, and programming skills—resources far more attainable than those needed for nuclear weaponry. Open-source AI models, such as GPT or image generators, lower the barriers for individuals, small companies, and even malicious actors to harness AI’s power. This accessibility has fueled innovation but also amplified risk: bad actors can weaponize AI for cyberattacks, disinformation campaigns, and even autonomous weapons. Plenty of folks can access and leverage AI with malicious intent.

No. 2. AI’s Low Cost and Limited Regulation

AI is also far cheaper to develop and deploy than nuclear weapons. Training a model might cost millions, but once built, it can be copied and scaled at minimal marginal cost. This affordability accelerates adoption and proliferation, leaving regulatory bodies scrambling to keep up. It’s a balancing act: how do we leverage the positive outcomes of AI responsibly, in a way that limits its application for nefarious use? There’s no easy answer here.

In contrast, nuclear weapons are heavily monitored and governed by international agencies like the International Atomic Energy Agency (IAEA). AI development lacks similar oversight. While some organizations and governments are working on AI ethics guidelines, enforcement is fragmented and inadequate. The pace of AI evolution far outstrips regulatory efforts.

No. 3. AI Can’t Be Confined to a Silo

Nuclear weapons represent an immediate, physical threat with clear consequences—massive explosions, radiation, and widespread loss of life. AI’s dangers are more insidious and diffuse. Its misuse can erode trust in institutions through deepfake videos, destabilize economies by manipulating financial markets, or perpetuate bias in decision-making systems, with far-reaching effects on everything from national sentiment to the quality of healthcare. Unlike the Cold War’s ICBMs, which remained securely housed in missile silos until activated by trained military professionals, AI is dynamic, widely accessible, and capable of being deployed without comparable safeguards.

The rise of autonomous weaponry powered by AI compounds these challenges, further blurring the line between human oversight and machine autonomy. Unlike nuclear weapons, which require a deliberate, conscious decision to launch, autonomous systems can make decisions independently, raising profound ethical and legal questions about accountability. It’s no longer far-fetched to imagine the “rise of the robots” moving from science fiction into reality within our lifetimes.

Should We Fear AI?

Whenever it comes up in conversation that I teach cyberwarfare, the question “Should we fear AI?” tends to follow.

Am I scared of it? Hell yeah. Not necessarily of the technology itself, but of the implications of this technology being operated by folks who are malicious or ignorant.

The lethal potential of AI is undeniable. Just as we wouldn’t hand assault rifles (ARs) to toddlers, we shouldn’t leave AI in the hands of those without the necessary expertise. With AI, however, we’ve already opened that Pandora’s box, and we’re going to have to navigate this precarious, increasingly complex battlefield.

We lack enough people qualified to manage these behemoth technological advances, advances like AI that can be easily weaponized. Additionally, governments, including many in the Western world, are lagging behind on policies and regulations governing the application of this technology.

In 1960, we wouldn’t have let my backwoods cousins, whose specialties are bird hunting and hauling scrap metal, transport, install, and maintain a Minuteman missile. Yet our public sector leadership’s lack of awareness and initiative in technology is leaving society increasingly vulnerable to the negative applications of AI.

Can We Harness AI for Positive Outcomes?

As with nuclear science, the challenge lies in harnessing AI for its benefits while minimizing its risks. If that task were easy, it would already be done. It isn’t easy, but given the weaponization potential of AI, it cannot be an optional objective for leadership in national security.

AI offers immense potential for good—revolutionizing healthcare by diagnosing diseases earlier, optimizing renewable energy systems, and enhancing education through personalized learning tools. However, these advancements must be balanced against the need to mitigate misuse.

Key steps include developing global AI regulations similar to those governing nuclear technology, fostering international collaboration, and prioritizing transparency and ethics in AI design. Tech companies and governments must work together to ensure that AI tools are deployed responsibly and equitably, without exacerbating existing societal inequalities or vulnerabilities. 

But the future of cyberwar should not be restricted to the folks at the top, because the targets of cyberwar aren’t going to be restricted to those in high-level government positions. It’s going to be you, and me, and all our non-techy neighbors just trying to live our lives. We need to enhance our awareness of the realities of cyberwarfare, including how it pertains to the weaponization of AI, and do our part as members of a democracy to shape and influence necessary policy and accountability. 

Final Thoughts: The Weaponization Potential of AI

AI may not produce the same immediate devastation as nuclear weapons, but its accessibility, lack of regulation, and potential for misuse make it an unprecedented challenge. The analogy to nuclear weapons serves as a warning: we must approach AI with caution, forethought, and a commitment to ethical use.

As we navigate this Digital Age, let us remember the lessons of the Cold War. Technology can make peace and the prevention of war possible, or it can undermine both. Like those who went before us, we have a choice: how will our country use AI to defend our nation and her allies?
