How AI-Powered Drones Are Transforming Military Operations: Cross-Border Security Challenges
This annual research report of the Konrad Adenauer Foundation, drawing on case studies of the conflicts in Ukraine and Gaza, systematically analyzes the profound impact of drone technology on NATO security, international humanitarian law, and global strategic stability.
Published: 22/12/2025
Key Chapter Titles
- Introduction
- Methodology
- Definitions and Historical Context
- Application of Artificial Intelligence in Drone Warfare
- Ethical and Legal Controversies
- Impact on NATO Security
- Case Studies: Ukraine and Gaza
- Future Trends
- Non-State Actors
- Conclusion
- About the Author
- References
Document Introduction
This report, jointly released by the Konrad Adenauer Foundation Canada Office and the Montreal Institute for Global Security Studies, examines how the integration of artificial intelligence (AI) and drone technology is reshaping modern military operations and posing unprecedented challenges to transnational security. The report opens by citing NATO Secretary General Mark Rutte, who notes that drone technology is changing the nature of warfare. It then situates the analysis within current geopolitical tensions, including Russia's invasion of Ukraine, great-power competition, and the uncertainty that U.S. policy introduces into the NATO alliance, before posing the core research question: how are unmanned and intelligent technologies altering the nature of conflict and affecting global security?
The report is rigorously structured and comprehensive. It first traces the evolving definitions and historical applications of unmanned aerial vehicles (UAVs) and autonomous weapon systems (AWS), from reconnaissance missions during the Cold War, through their extensive use in U.S. counter-terrorism campaigns, to recent systems that have demonstrated autonomous attack capabilities in conflicts such as Libya and Nagorno-Karabakh (e.g., the Kargu-2 built by the Turkish firm STM). It makes clear that, although no unified international definition exists, "meaningful human control" has become the central concept in the ethical and legal debate over military applications of AI. The report then analyzes in detail how AI enters the military decision-making cycle through decision support systems (DSS) and autonomous weapon systems (AWS), drawing a fundamental distinction between decision assistance and fully autonomous attack. Examples such as the U.S. "Replicator" initiative, China's large-scale investments, and battlefield applications in Ukraine and Gaza illustrate the current state of technological development and the dynamics of an arms race.
The ethical and legal section is a focal point of the report. It systematically surveys the international controversy over "killer robots," including the initiatives of the Campaign to Stop Killer Robots, the slow progress of the Group of Governmental Experts under the Convention on Certain Conventional Weapons (CCW), and the calls from the United Nations and the International Committee of the Red Cross (ICRC) for binding regulation. The report examines the difficulties AI systems face in distinguishing combatants from civilians and in assessing proportionality. Using the Israeli military's use of AI target-screening systems such as "Lavender" and "Gospel" in Gaza as an example, it shows how algorithmic bias, data deficiencies, and the "black box" effect can lead to catastrophic civilian casualties and risks of war crimes. The report notes that, despite regional initiatives such as the "Paris Call," establishing an effective global regulatory mechanism still faces major obstacles amid great-power competition and a pace of technological development that far outstrips legal frameworks.
From a NATO security perspective, the report assesses the strategic opportunities and risks of AI-enabled drones. On one hand, by formulating an AI strategy, establishing the Defence Innovation Accelerator for the North Atlantic (DIANA), and adopting six principles of responsible use, NATO strives to maintain a technological edge while ensuring that AI is used responsibly. On the other hand, NATO shows significant shortcomings in drone interoperability, unified operational doctrine, capability disparities among member states, and diverging regulatory approaches between the U.S. and Europe. The report pays particular attention to the U.S. plan to deploy "Hellscape" drone swarms in the Taiwan Strait to deter China, as well as China's rapid progress in military AI applications, arguing that these developments raise the risk of autonomous conflict escalation.
Through in-depth case studies of the conflicts in Ukraine and Gaza, the report shows how technology is rapidly iterated and applied in combat. Ukraine has demonstrated a remarkable capacity to militarize civilian technology, prototype rapidly, and train AI models on vast amounts of battlefield data, while Russia has used AI to upgrade Iranian-made drones and enhance their ability to penetrate defenses. The Gaza case highlights the severe humanitarian and legal crises triggered by over-reliance on, and insufficient verification of, AI-assisted target-recognition systems in high-pressure combat environments. The report concludes with a warning that, as drone technology proliferates to non-state actors including terrorist organizations, traditional military advantages and the concept of sovereignty are being eroded.
This report is based on a comprehensive review of government documents, military reports, policy briefs, academic literature, and investigative journalism, supplemented by exclusive interviews with policymakers, military analysts, legal scholars, and technical experts. It offers an authoritative analysis, with both breadth and depth, of the military transformation driven by AI and drone technology. Its conclusion stresses the urgent need for coordinated international policies, regulations, and enforcement to uphold international humanitarian law, protect civilians, and prevent the uncontrolled militarization of AI. Failure to act, it warns, will have profound and lasting consequences for global security, human rights, and the foundations of the international order.