International humanitarian law has governed the conduct of warfare since the first Geneva Convention in 1864. Its foundational principles of distinction between combatants and civilians, proportionality in the use of force, and precaution in attack have survived two world wars, the nuclear era, and the proliferation of asymmetric conflict. They were designed for a world in which human beings make the decisions that result in death. They were not designed for a world in which an algorithm identifies a target, prioritizes its destruction against competing demands, and generates a strike recommendation within seconds, and in which a human operator reviews that recommendation under time pressure measured not in hours but in fractions of minutes before authorizing an action that kills people.
Operation Epic Fury has not created this problem. It has demonstrated it at operational scale for the first time, in a conflict whose AI-enabled strike tempo of 5,500 targets across eleven days has made the abstractions of academic debate concrete. The principle of precaution under Additional Protocol I to the Geneva Conventions requires that those who plan or decide upon attacks take all feasible precautions to avoid, or in any event minimize, incidental civilian casualties.
“Feasible” has historically been interpreted against a baseline of human decision time: the time available to a commander to gather information, assess alternatives, and exercise judgment before striking. When the system generating the strike recommendation operates at machine speed and the human override window is measured in seconds, the interpretation of “feasible” precaution requires a revision that no existing legal framework has yet provided.
The Minab school strike is the empirical anchor for this argument. Bellingcat, the New York Times, and eight independent munitions experts attributed it to U.S. Tomahawk missiles. The strike killed over 170 people, the majority of them children. The U.S. military has not confirmed or denied the attribution. No investigation has been announced by any party.
No legal mechanism exists through which the victims’ families, international investigative bodies, or affected states can compel disclosure of the operational record that preceded the strike: the targeting inputs, the AI recommendations, the human review process, and the authorization chain. This accountability vacuum is not a consequence of political obstruction, though political obstruction is also present.
It is a structural consequence of deploying commercial AI targeting systems under classified government contracts with no transparency obligations, no independent audit requirements, and no post-strike accountability framework designed for AI-assisted targeting errors.
The legal gap is precise and documented. The CCW Group of Governmental Experts on lethal autonomous weapons systems (LAWS) reached provisional consensus in November 2024 on a definition characterizing autonomous weapons as systems that can identify, select, and engage targets without human intervention in execution.
What Epic Fury demonstrated is that the relevant question is not whether a human ultimately authorizes the strike (Admiral Cooper confirmed that humans make the final call), but whether the human authorization step constitutes meaningful control when the recommendation has been generated by an algorithm processing 150 intelligence sources in seconds, when the authorization window is constrained by operational tempo, and when the consequence of delay is the loss of a time-sensitive target.
The Brennan Center for Justice documented in March 2026 that the opacity of proprietary targeting algorithms makes it impossible to inspect them for hidden biases that lead to civilian misidentification. If the system’s recommendation cannot be inspected, the human who authorizes it cannot evaluate it. Authorization without evaluation is not meaningful control by any interpretation of that phrase.
The states whose consent is necessary for a binding legal instrument have consistent incentives to avoid one. The United States has argued that existing IHL is sufficient. Russia has argued no precedent exists for preemptive bans on weapons classes. China has proposed distinguishing “acceptable” from “unacceptable” autonomous weapons without agreeing on the distinction’s content.
These positions reflect genuine legal disagreements and genuine strategic interests. They also have a common consequence: twelve years of CCW negotiations have produced no binding instrument, while the technology they were negotiating about has been deployed at operational scale.
The November 2025 UN General Assembly First Committee resolution, supported by 156 states and calling for a binding instrument by the 2026 Review Conference, represents the clearest signal that the states most affected by autonomous weapons, and most underrepresented in the CCW process, have concluded that the consensus-based multilateral framework is incapable of producing the governance it was convened to create.
The Seventh CCW Review Conference in November 2026 is the last scheduled opportunity for the international community to address this gap before the operational experience of Epic Fury, with its AI targeting at machine speed, drone swarms with adaptive routing, and commercial platforms integrated into classified kill chains, becomes the baseline against which the next conflict begins.
The governance question is not whether autonomous weapons should be banned. Reasonable states disagree on that, and a ban without the participation of the states that deploy these systems produces no protection. The governance question is whether meaningful human control can be defined precisely enough to be operationally enforceable, whether accountability mechanisms can be designed for AI-assisted targeting errors, and whether the precautionary principle can be reinterpreted for a decision environment in which machine speed has replaced human deliberation as the operative tempo of warfare.
These are legal questions with answers available to competent drafters. What is lacking is not the legal capacity to answer them. It is the political will to do so before the next school is struck by a weapon whose recommendation was generated in seconds and whose accountability trail ends at a classified contract.
*The views presented in this article are the author’s own and do not necessarily reflect the views of The Diplomatic Insight.

Aleena Saif Ullah
Aleena Saif Ullah is an MPhil Scholar in International Relations at the University of Punjab, Lahore. She can be reached at aleenasaifullah68@gmail.com.