The destruction of a girls’ school in Minab, Iran, in February 2026, where more than a hundred schoolchildren were reportedly killed, may mark a troubling moment in the evolution of warfare. Beyond the immediate tragedy lies a deeper and largely unexplored question: could this be one of the first instances of large-scale civilian death shaped by artificial intelligence in modern war?
Modern warfare increasingly relies on algorithmic systems that process vast streams of intelligence—satellite imagery, drone surveillance, behavioural patterns, and movement data—to identify potential targets. Systems such as Project Maven illustrate how artificial intelligence has become embedded within military targeting infrastructures. These systems detect patterns—clusters of people, unusual movement, or activity around buildings—that might indicate military operations.
However, algorithms do not understand context; they recognise statistical patterns.
Initial explanations of the Minab strike echoed a familiar narrative in contemporary conflict reporting: that civilian infrastructure may conceal military activity. During the Israel–Hamas War (2023–present), hospitals, schools, and mosques were frequently described as potential militant bases. Over time, such narratives became normalised in military discourse.
The danger arises when these narratives shape the assumptions embedded in algorithmic systems.
If artificial intelligence is trained within a framework that treats civilian infrastructure as potential military sites, its interpretation of events may reproduce that assumption. In the chaos following an initial strike, children running for safety inside a school might appear to an algorithm as coordinated movement—precisely the type of activity often associated with militant mobilisation.
In such circumstances, panic may be misread as military behaviour.
This dynamic can be described as “narrative-trained targeting.” It refers to the way political narratives about enemy behaviour become embedded in the datasets and analytical models used by military AI systems. When those narratives guide algorithmic interpretation, civilian spaces risk being transformed into statistically suspicious environments.
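For readers who want to see the mechanism in miniature, the sketch below is a deliberately toy illustration, not a description of any fielded system: the features, the labelling rule, and the fleeing-crowd scenario are all invented for the example. Its only point is that when the training labels themselves encode a narrative, the resulting model reproduces that narrative faithfully.

# Hypothetical sketch of "narrative-trained targeting": the bias enters through
# the labelling rule, not the learning algorithm. All features, thresholds, and
# scenarios below are invented for illustration.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def synthetic_observation():
    # One simulated surveillance "event": cluster size, average speed, proximity to a building.
    return [random.randint(1, 60),     # number of people detected in the cluster
            random.uniform(0.0, 6.0),  # average movement speed (m/s)
            random.random()]           # proximity to a building (1.0 = adjacent)

def narrative_label(obs):
    # The labelling rule is where the narrative enters: a dense, fast-moving
    # cluster beside a building is marked as "militant mobilisation".
    group_size, speed, proximity = obs
    return int(group_size > 15 and speed > 2.0 and proximity > 0.7)

# Training data whose "ground truth" is the narrative itself.
X = [synthetic_observation() for _ in range(5000)]
y = [narrative_label(obs) for obs in X]

model = LogisticRegression(max_iter=1000).fit(X, y)

# A crowd of schoolchildren sprinting away from a building after an initial strike:
# large group, high speed, right beside civilian infrastructure.
fleeing_crowd = [[45, 5.0, 0.95]]
print(f"P('mobilisation') = {model.predict_proba(fleeing_crowd)[0][1]:.2f}")
# The model reproduces the assumption it was trained on: the fleeing crowd
# receives a high "mobilisation" score.

The design choice worth noticing is that nothing in the training step is malicious or faulty; the model simply learns the labels it is given, and the labels carry the narrative.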
The tragedy in Minab therefore raises a profound question about responsibility.
Under the principles of the Geneva Conventions, states and military commanders bear responsibility for protecting civilian infrastructure such as schools. Yet algorithmic warfare complicates this framework. Decisions are increasingly shaped by a network of actors: engineers who design algorithms, corporations that develop AI infrastructure, military institutions that deploy it, analysts who interpret its outputs, and commanders who authorise strikes.
Responsibility becomes distributed across a technological system.
Despite this, technology corporations remain largely outside the accountability structure of war. Governments and militaries face scrutiny, while the companies building the algorithms that structure battlefield intelligence rarely confront legal or moral responsibility.
If warfare is becoming algorithmically mediated, this gap can no longer be ignored.
The tragedy in Minab forces us to confront a new ethical frontier. The question is no longer only who launched the missile. It is also who designed the systems that interpreted the battlefield and identified the target.
As artificial intelligence becomes increasingly central to military decision-making, the future of international law may depend on recognising that responsibility for algorithmic warfare extends far beyond the battlefield itself.
Muzammil Ahad Dar, Kumaraguru College of Liberal Arts and Science



