AI War Is Here: How Smart Drones Like Harop and Heron Powered India's Operation Sindoor

Ukraine has held its own against far stronger conventional Russian forces through jerry-rigged autonomous and AI-guided drones; small, algorithm-guided first-person-view (FPV) attack drones now destroy more Russian armor than any other weapon category.
Meanwhile, in Gaza, Israel has used advanced algorithms, code-named Gospel and Lavender, to sift intelligence and generate targets in real time. And in 2020, a Turkish-made Kargu-2 attack drone may have autonomously hunted down retreating fighters in Libya without any human command, in what could be the first lethal strike by a truly autonomous weapon.
In our imagination, AI war conjures wartime parades of futuristic Terminator-style robots; in reality, the era of AI warfare has already begun. Like everything at the intersection of war and artificial intelligence, lethal autonomous weapons systems (LAWS) raise disturbing questions. LAWS are machines that can identify, select and kill targets without human intervention. Unlike nuclear weapons, these systems are relatively cheap, scalable and difficult to control once released.
The level of human control varies: from "human in the loop" systems that require authorization to engage, to "human on the loop" systems where a human can override autonomous actions, and finally "human out of the loop" systems with no human participation at all. The prospect of this new kind of war, with machines making life-and-death decisions, has prompted the United Nations to call for a ban on such weapons.
Ethicists and human rights institutions fear accidental escalation, loss of accountability, and a full-blown "drone arms race". States, clearly, are not on the same page: a race for AI-powered military supremacy continues among the major powers. AI warfare transcends tactical advantage to shape doctrine itself; Chinese military doctrine explicitly names "intelligentized warfare" as its future. Although the idea of LAWS and AI warfare is frightening, this article deliberately goes beyond the familiar "ban or regulate" discourse to explore some counterintuitive perspectives on how AI might make war more humane.
Can it save human lives?
One counterintuitive argument is that outsourcing war to machines can save human lives. If robots take on the most dangerous tasks, human soldiers stay out of harm's way. Why send a young soldier into the kill zone when an expendable machine can be sent to fight another machine?
Recent conflicts hint at this life-saving potential: Azerbaijan's victory over Armenia in the 2020 Nagorno-Karabakh war, for example, was achieved largely through advanced drones, greatly reducing its own casualties.
This could usher in an era of persistent, low-intensity conflicts fought primarily by AI systems, flying below the threshold that usually triggers major international intervention.
This sounds tempting, but there is a dark flip side: killer machines that make war seem "risk-free" may make leaders ever more willing to launch military adventures.
Can it make war more moral and precise?
The second counterintuitive idea is that by improving accuracy, artificial intelligence may make war more ethical. Most militaries try to minimize collateral damage, as India has strived to do. AI tools can make "surgical" strikes sharper still. Human soldiers, for all their heroism, are prone to error, fatigue and emotion.
In theory, AI systems can be trained to avoid civilian areas, assess threats more accurately, and abort operations when the rules of engagement would be violated.
Autonomous systems can be programmed never to fire on schools or hospitals, and they will follow that constraint dispassionately, every single time. Imagine an AI drone that aborts a strike mid-flight because an ambulance enters the frame, something a human pilot in the fog of war might miss. Even the Red Cross acknowledges that AI-enabled decision-support systems "may enable better decisions by humans… minimizing risks for civilians."
The notion of an AI-enabled, precise "clean war" can be a double-edged sword. The same Israeli AI systems that identified militants in Gaza also produced algorithmic kill lists with minimal human oversight. If flawed data or biased algorithms flag civilians as threats, AI could kill innocents with ruthless efficiency. AI can strengthen compliance with the laws of war, but it cannot replace human judgment.
Can it make war more transparent?
Operation Sindoor highlighted the danger of misinformation, with even mainstream media peddling deepfakes. AI can change this.
Autonomous systems record everything: telemetry, video footage, targeting decisions. That opens up the possibility of "algorithmic accountability", in which every strike can be reviewed and every action justified or condemned.
Can it be a new deterrent?
A recent paper by Eric Schmidt and others, "Superintelligence Strategy: Expert Version", expresses the newest counterintuitive view. Borrowing from the Cold War doctrine of MAD, or mutually assured destruction, the authors propose the notion of MAIM, or Mutual Assured AI Malfunction.
The idea is that as AI becomes the core of military systems, countries may hesitate to strike one another, because attacking an AI system can create unpredictable ripple effects on both sides.
The inherent vulnerability of complex AI systems to disruption – through cyberattacks, poisoning of training data, or even kinetic strikes on critical infrastructure such as data centers – creates a state of mutual vulnerability, and hence deterrence, among AI superpowers.
MAIM flips the dystopian script: mutual fear of AI failure, rather than fear of an AI-driven doomsday, could restrain the aggressive instincts of rivals. If something like this holds, then, however surreal it seems, AI could actually make war more humane rather than more terrifying than ever before.
These counterintuitive views challenge our instincts, and many people will recoil at the idea of mobile robots that kill. But with so much of this already a reality, we can no longer avoid these questions.
We can view all this with grim pessimism, or take a glass-half-full approach, guided by human values, that might make future wars less inhumane. They say all is fair in love and war; "all" may soon include artificial intelligence.