“This technology is our future threat,” says Serhiy Beskrestnov as he studies a newly intercepted Russian drone. It is no ordinary weapon, he explains. Powered by artificial intelligence, it can locate and attack its target entirely on its own.
Beskrestnov, a consultant to Ukraine’s defence forces, has analysed countless drones since the war began. But this model stands apart. It sends no signals and receives none—making it impossible to jam or trace.
Both Ukraine and Russia are now testing the limits of AI on the battlefield. They use it to find enemy positions, process intelligence and clear mines.
Artificial intelligence becomes a weapon of strategy
For Ukraine, AI is now indispensable. “Our military gets more than 50,000 video feeds from the front every month,” says Deputy Defence Minister Yuriy Myronenko. “Artificial intelligence analyses them, identifies threats and maps out targets.”
The technology gives commanders speed, precision and efficiency. It also helps save lives. But its true power emerges in unmanned systems. Ukrainian units already operate drones that lock onto targets and fly autonomously in the final seconds before impact.
These drones cannot be jammed and are too small to detect easily. Experts predict they will soon evolve into fully autonomous machines capable of attacking without any human oversight.
The new generation of self-guided drones
“All a soldier will need to do is press a button on a smartphone,” says Yaroslav Azhnyuk, CEO of the Ukrainian tech firm The Fourth Law. The drone will then find its target, drop explosives, assess the results and return to base. “It won’t even require piloting skills,” he adds.
Azhnyuk believes that automation could greatly strengthen Ukraine’s air defences against Russia’s long-range attack drones like the Shaheds. “A computer-guided system can outperform humans,” he says. “It reacts faster, sees better and makes fewer mistakes.”
Myronenko admits that fully autonomous systems are still in development but insists they are close to completion. “We have already implemented parts of the technology,” he confirms. Azhnyuk predicts thousands of these systems could be deployed by the end of 2026.
Innovation meets uncertainty on the battlefield
Despite the rapid progress, Ukrainian developers remain cautious. The fear is that AI might not distinguish friend from foe. “A Ukrainian and a Russian soldier may wear the same uniform,” warns Vadym, a developer who prefers to remain unnamed.
His company, DevDroid, builds remotely controlled machine guns that use AI to detect and track movement. Yet to avoid friendly fire, the guns do not shoot automatically. “We could activate that feature,” says Vadym, “but we need more experience from the field to ensure safety.”
Moral and legal questions grow louder as AI takes on more battlefield roles. How can machines follow the laws of war? Can they recognise civilians, or soldiers who surrender? Myronenko believes humans must make the final call, even if AI assists. Still, he admits there is no certainty that all nations or armed groups will respect those limits.
A race without rules
The rise of AI has triggered a new kind of arms race. Traditional defences—like jamming or missile interception—struggle against intelligent drone swarms.
Ukraine’s daring “Spider Web” operation last June, when 100 drones hit Russian air bases, likely used AI coordination. Many in Ukraine now fear Moscow could adopt similar tactics, not just at the front but deep inside Ukrainian territory.
President Volodymyr Zelensky recently told the United Nations that AI is fuelling “the most destructive arms race in human history.” He urged world leaders to create global rules for AI in warfare, calling the issue “as urgent as stopping the spread of nuclear weapons.”
