A War Without Witness
In 2021, a United Nations report quietly described something extraordinary: in March 2020, an autonomous drone may have hunted and attacked human beings in Libya without any direct command from a human operator. No one gave the final order. No one watched the target. No one bore witness. This wasn’t science fiction; it was the silent arrival of a new kind of warfare—a war where machines don’t just follow instructions but make decisions. A war where accountability is blurred, and the act of killing becomes as silent as the algorithm that executes it. As nations race to adopt AI-powered military systems, we are entering a battlefield that no longer requires human presence to create human loss. The age of autonomous weapons has begun, and international law is nowhere close to catching up.
Weapons That Think, Wars That Don’t
Lethal Autonomous Weapons Systems, or LAWS, aren’t just extensions of a soldier’s hand. They are systems designed to identify, select, and engage targets without direct human oversight. What once required deliberation can now be done through sensors, data, and code.
In 2021, the United Nations Panel of Experts on Libya reported that a Turkish-made Kargu-2 drone may have autonomously attacked human targets during fighting between the UN-recognized Government of National Accord and forces loyal to General Khalifa Haftar. The drone operated in a “fire, forget and find” mode, using artificial intelligence to detect and engage targets without human direction. It wasn’t an experiment. It was a battlefield deployment.
Autonomous weapons are no longer theoretical. They already exist in the spaces where rules are vague and bodies fall quickly. Facial recognition, movement signatures, and heat maps replace human judgment. War becomes a pattern-recognition problem. And the more we let machines make decisions, the less those decisions resemble anything human. No deliberation. No empathy. Just calculations.
This isn’t just faster warfare. It’s warfare stripped of thought.
No Pilot, No Conscience
Throughout history, warfare has involved people fully aware of the weight of their choices—pilots, commanders, soldiers who could hesitate or refuse. But autonomous weapons remove that human burden altogether.
In the 2023 policy brief A New Agenda for Peace, issued under the U.N.’s Our Common Agenda, Secretary‑General António Guterres called machines that choose whom to kill “morally repugnant and politically unacceptable.” He urged member states to conclude, by 2026, a legally binding instrument prohibiting autonomous weapons systems that operate without human control and regulating all other systems to ensure compliance with international humanitarian law. This isn’t theoretical. It’s a red flag raised at the highest level of international governance.
When machines decide who lives and dies, there’s no remorse, no empathy, and no human to answer for that decision. A flawed human might hesitate, misinterpret, or show mercy; code does none of these. It acts. And when it fails—when it misidentifies a civilian or miscalculates proportionality—there is no conscience to carry the weight of its mistake. Without a pilot, there is no conscience. Only results. Only silence. And only consequences that the law can’t follow.
Justice Can’t Be Coded
War crimes demand accountability. International humanitarian law rests on a simple truth—someone must be held responsible when things go wrong. But what happens when the wrongdoer isn’t a person?
According to Human Rights Watch, fully autonomous weapons create what it calls an “accountability gap.” A system that makes its own decisions cannot be punished. It doesn’t feel guilt or understand consequences. Responsibility gets scattered among the programmer, the commander, the manufacturer, and the state—none of whom can fully answer for a machine’s actions.
The Martens Clause, a cornerstone of international humanitarian law, holds that when no specific rules apply, people remain protected by the principles of humanity and the dictates of public conscience. But machines have no conscience, and when they kill unjustly, the law struggles to respond.
In May 2025, the United Nations held its first General Assembly consultations dedicated to autonomous weapons. A General Assembly resolution urging urgent regulation had already drawn the support of 164 countries, yet major powers like the United States, China, and Russia continued to resist a binding treaty. The meeting ended without agreement—just another reminder that the law is falling behind the battlefield.
Justice can't be coded into a machine. And without accountability, the future of warfare risks becoming a place where no one is truly responsible.

Wars We Won’t Be Brave Enough to Fight
Autonomous weapons promise precision, efficiency, and fewer boots on the ground—but they also promise distance. Not just physical, but moral. If soldiers never have to look their enemy in the eye, what happens to the human cost of war? The psychological barrier that once made leaders hesitate—loss of life, trauma, public backlash—is being replaced by an illusion of clean warfare.
Drones operated from thousands of miles away have already made it easier to launch strikes without public scrutiny. Now, AI systems that require no pilot at all are pushing us one step further. As war becomes more automated, the threshold to enter conflict lowers.
The 2020 Nagorno-Karabakh war between Armenia and Azerbaijan is a chilling example. Azerbaijan’s use of Israeli-made loitering munitions (or “kamikaze drones”) helped turn the tide. These drones hovered autonomously, identified targets, and dove into them, often without real-time human control. Civilian deaths were reported, but accountability remained elusive. The war ended quickly, but not cleanly.
If killing becomes algorithmic, warfare risks becoming not a last resort, but a first response. Decisions once weighed with gravity may soon be made with code.
A Treaty Stuck in Draft
For over a decade, governments, scientists, and human rights advocates have been calling for international rules to regulate autonomous weapons. The result? Endless debates, position papers, and stalled resolutions.
Since 2014, the United Nations Convention on Certain Conventional Weapons (CCW) has hosted expert meetings on Lethal Autonomous Weapons Systems (LAWS). But despite repeated warnings from organizations like the International Committee of the Red Cross (ICRC) and Human Rights Watch, no binding treaty has emerged. Talks often end in vague statements about “ethical use” and “further discussions.” Meanwhile, technology keeps advancing.
In 2023, 70 countries publicly supported a legally binding instrument to ban or restrict autonomous weapons. But powerful nations, particularly the United States, Russia, Israel, and China, argue that a ban would hinder innovation or weaken military advantage. These states prefer voluntary guidelines, not enforceable rules.
The gap is growing. Developing nations, often the testing ground for such weapons, demand accountability. Tech developers ask for clear limits. Legal scholars warn that international humanitarian law is too slow to respond. Yet the draft treaty remains stuck in committee rooms, sidelined by geopolitics and strategic interests.
Regulation is not just a legal issue—it’s a moral one. But without political will, we’re not drafting laws. We’re drafting history’s next regret.
Who Writes the Rules When Machines Fight the War?
Autonomous weapons operate on lines of code—but whose code, and whose values, shape their actions? In conventional warfare, militaries are bound by international humanitarian law: the Geneva Conventions, the principles of distinction and proportionality, and the expectation of accountability. But autonomous systems operate under parameters defined not by international courts, but by defense contractors, engineers, and software developers. That creates a dangerous legal vacuum where ethical decisions are outsourced to algorithms.
A 2022 report by the European Parliament warned that “human control must remain central to all AI used in military contexts,” and cautioned against letting private companies set the de facto rules of combat. Still, leading weapons manufacturers in countries like the U.S., China, and Israel are developing increasingly autonomous systems—some with AI models so complex they can no longer be fully explained, let alone regulated.
Even worse, different countries have different thresholds for what counts as “acceptable autonomy.” Some require human confirmation before a kill. Others only demand that a human be “in the loop” somewhere. The rules are inconsistent, unclear, and easily bent. If machines fight the wars of tomorrow, the real question is this: will the rules be written by democratic consensus, or by whoever has the fastest processor?

Before It’s Too Late to Regret
Technologies often outpace the laws meant to regulate them. But with autonomous weapons, the cost of delay isn’t just confusion—it’s lives lost with no one to answer for them. History has already shown us what happens when new weapons emerge without guardrails. From poison gas in World War I to nuclear bombs in World War II, international treaties only came after devastation. We can’t afford to make that mistake again.
In 2021, over 100 countries at the UN called for stronger restrictions on autonomous weapons. Civil society groups like Stop Killer Robots have demanded an outright ban. Even many AI researchers, including those working within defense sectors, have signed open letters warning of the risks. The chorus is growing louder, but political momentum remains fragmented.
Meanwhile, systems like Russia’s “Marker” robot and the U.S. Navy’s “Sea Hunter” drone continue to evolve, inching closer to full autonomy. Each new test moves the line of what’s acceptable, normalizing a future where war is waged without empathy, context, or consequence.
The world doesn’t need more data. It needs decisions. Because by the time we all agree that autonomous killing was a mistake, the silence left in its wake won’t be one we can reverse. These aren’t just weapons of the future; they are tests of our present humanity. The question is no longer if machines can fight wars. It’s whether we, as a global society, have the courage to stop them before they do it without us.