Drones and the Future of Defense: Transnational Security Challenges
This report provides an in-depth analysis of the application of artificial intelligence in drone warfare, focusing on ethical and legal controversies, the security implications for NATO, and real-world case studies from Ukraine and Gaza. It offers a cross-domain security assessment based on publicly available information and expert interviews.
Published: 22/12/2025
Contents
- Glossary
- Message from the Director
- Preface
- Introduction
- Methodology
- Definitions and Historical Context
- The Introduction of Artificial Intelligence in Drone Warfare
- Ethical and Legal Debates
- Implications for NATO Security
- Case Studies: Ukraine and Gaza
- Future Trends
- Non-State Actors
- Conclusion
Overview
Against the backdrop of Russia's invasion of Ukraine, escalating geopolitical tensions, and the decline of multilateralism, emerging disruptive technologies are reshaping the nature of warfare and the security landscape at an unprecedented pace. This report, jointly released by the Konrad Adenauer Foundation Canada Office and the Montreal Institute for Global Security Studies, focuses on the intersection of artificial intelligence and drone technology. It systematically explores how autonomous weapon systems are fundamentally altering military operations, decision-making processes, and strategic deterrence, and why they have become an urgent transnational security challenge.
The report first addresses a core dilemma: the absence of a unified definition of autonomous weapon systems. It traces their evolution from Cold War-era reconnaissance drones to contemporary AI-enabled lethal autonomous systems. Through qualitative analysis of open-source literature, government reports, and expert interviews (including conversations with figures such as Alexander Kmentt, Austria's Chief Negotiator), the report finds that current military applications of AI are concentrated in decision support systems designed to assist, rather than replace, human decision-makers. However, with the advancement of the US "Replicator" initiative, China's massive investments, and technological progress in countries such as Russia and Turkey, fully autonomous "fire-and-forget" systems (such as Turkey's STM Kargu-2) have begun to appear in actual combat, raising profound concerns about whether "meaningful human control" can be maintained.
The core section of the report examines the resulting ethical, legal, and governance challenges. Autonomous weapon systems directly challenge the fundamental principles of International Humanitarian Law: distinction between civilians and combatants, proportionality, and precautions in attack. Although mechanisms such as the United Nations Group of Governmental Experts under the Convention on Certain Conventional Weapons have discussed these issues for years, progress toward binding international legal regulation has lagged badly, held back by great-power competition, rapid technological iteration, and disagreement over the definition of "autonomy." Using the conflicts in Ukraine and Gaza as key case studies, the report demonstrates the real-world impact of these technologies: in Ukraine, both sides have used AI for target identification, drone swarm coordination, and naval warfare, driving a rapid innovation cycle; in Gaza, the Israeli military has been accused of using AI systems such as "Lavender" for large-scale target screening and strikes, whose high error rates and automated decision-making have prompted serious allegations of war crimes and humanitarian concerns.
Furthermore, the report assesses the strategic opportunities and risks that AI-enabled drones pose for NATO security. On one hand, AI enhances NATO's advantages in intelligence, surveillance, reconnaissance, and decision-making. On the other, insufficient interoperability of AI systems among allies, the lack of a unified doctrine for drone use, and diverging regulatory paths between the US and Europe (the EU's AI Act versus the US's innovation-first framework) constitute internal challenges. At the same time, China's rapid progress in military AI applications and Iran's proliferation of "Shahed" drones to Russia continue to erode NATO's technological edge and press against its security perimeter. The increasingly widespread acquisition and use of commercial drones by non-state actors further blurs the threshold of war and adds to the complexity of the security environment.
The report ultimately emphasizes that the convergence of artificial intelligence and drone technology is lowering the political threshold for the use of force, accelerating conflict escalation, and potentially fueling a new arms race. To prevent instability and indiscriminate killing caused by uncontrolled technology, the international community urgently needs to reach a consensus on a legally binding regulatory framework before the critical 2026 window. This framework must ensure that the development and deployment of any weapon system remains under "meaningful human control" and complies with International Humanitarian Law and ethical standards. NATO and its member states must coordinate policies, strengthen regulation, and promote responsible innovation to uphold global strategic stability and a rules-based international order.