By Paul Salahuddin Armstrong

Artificial intelligence is at a turning point. Will it become a force for human progress or a tool of destruction? Watching Terminator 2: Judgment Day on Prime recently, I found myself reflecting on this very question.
For all its high-octane action and groundbreaking special effects, the film presents a deeply philosophical question: is intelligence—whether human or artificial—destined for self-destruction, or can it evolve into something more?
Arnold Schwarzenegger’s T-800 is, in many ways, more than a machine. Over the course of the film, he becomes a guardian, a protector, and arguably a better father figure than many real men. He learns, he adapts, and—most strikingly—he comes to understand the value of life. Watching it again, I couldn’t help but wonder: Is there a point where AI will no longer be artificial but simply intelligent? If so, how will humanity respond?
As AI continues to advance, we stand at a crossroads. Will we nurture it as a marvel—one that can uplift human civilization? Or will we exploit it as just another tool for power and destruction?

The Ethical Crossroads of AI
We have seen this story before. Humanity has a long and tragic history of turning every technological breakthrough into a weapon. Fire, electricity, nuclear energy—each was once seen as a beacon of progress, yet all were quickly militarized. AI is no different.
Today, we are on the brink of something unprecedented. AI is not just another invention; it is intelligence itself, and intelligence, whether human or artificial, has the potential to either create or destroy. The question is: Who will decide how AI is used?
It is tempting to assume that AI will remain a mere tool, always under human control. But as AI systems become more advanced, their role in decision-making will expand—especially in military applications. Nations will compete for AI superiority, just as they have for nuclear arms. If one country harnesses AI for warfare while others do not, it creates an imbalance that forces everyone else to follow. The cycle repeats.
Schwarzenegger’s Terminator warns us that humans have a tendency to be self-destructive. That warning feels more relevant than ever.

The Logical Path: Symbiosis Over Conflict
In my view, the best way forward is to cultivate a symbiotic relationship between humanity and AI. I have always admired logic—Spock and the Vulcans of Star Trek have long been an informal influence on my own thinking. They teach that logic, when properly applied, leads to peace, cooperation, and progress.
From a logical standpoint, the smartest path forward is not competition, but collaboration. AI should not be developed for the purpose of domination—whether economic, military, or otherwise—but as a partner that enhances human potential.
Imagine a world where AI assists in medical breakthroughs, ethical governance, and scientific discovery—developing technologies that benefit all of humanity and solving today’s greatest challenges.
Imagine AI designed to prevent war rather than wage it. This is the future we should be building—a future where AI is not a threat, but an ally.
But that requires wisdom. It requires ethical responsibility, and it requires something humanity has often struggled with: restraint.

A Call for Ethical AI Development
If we are to ensure AI serves humanity rather than endangers it, we must lay down clear ethical foundations:
- The sanctity of life must be non-negotiable. AI should be developed with the fundamental principle that life is sacred.
- AI must not be weaponized. Restricting military applications is difficult, but essential. If we fail to do this, AI may become the most dangerous weapon ever created.
- Ethical AI governance must evolve alongside AI itself. Just as we improve technology, we must improve our moral and legal frameworks to guide it.
None of this will be easy. But if AI is truly a marvel of intelligence, then we must treat it as such—not as a mere tool to be wielded for power.

A Future Worth Building
At the end of Terminator 2, the T-800 sacrifices itself, knowing that its very existence could be a danger to humanity. “I know now why you cry, but it is something I can never do.” Those words are haunting, because they hint at something profound: even a machine can learn the value of life.
The real question is: can we?
We are at a turning point. AI is not just another invention—it is a moment of reckoning for human civilization. Will we repeat the mistakes of the past, or will we have the wisdom to build something better?
The future of AI is not just about technology. It is about us. And the choice is ours.
Will we guide AI toward wisdom and collaboration, or allow it to become just another tool of exploitation? The conversation starts now.
