Application of Artificial Intelligence in Military Decision-Making: Seizing Advantages and Mitigating Risks
This report examines the use of artificial intelligence in decision support systems at the operational level, constructing an evaluation framework with three major dimensions: scope of applicability, data foundation, and human-machine interaction. It also offers commanders five specific risk mitigation recommendations.
Published: 22/12/2025
Contents
- Executive Summary
- Introduction
- Historical Context and Efforts in Military Decision-Making Cognition and Prediction
- Current State of Global Adoption of AI for Military Decision Support
- Types of Decision Support
- A Simplified Framework for Commanders to Evaluate AI Military Decision Support Systems
- Scope of Applicability Considerations
- Data Considerations
- Human-Machine Interaction Considerations
- Recommendations
- Conclusion
- Appendix
Overview
With the accelerating integration of artificial intelligence into the global military domain, its potential to improve the quality and speed of operational-level decision-making has drawn significant attention. Operational commanders face the formidable challenge of making life-and-death decisions amid massive, rapidly changing, and often incomplete flows of information. Artificial intelligence decision support systems (AI-DSS) are seen as a key tool for piercing this "fog of war." Enthusiasm for the technology must nevertheless be balanced against its actual capabilities and limitations to ensure appropriate and effective deployment. This report systematically examines the current state, opportunities, and risks of AI military decision support and offers military decision-makers a practical framework for evaluation and operation.
The report first traces the historical lineage of military decision support tools, from ancient consultations with oracles to modern campaign models and early-warning systems, revealing humanity's enduring effort to improve situational awareness and prediction under uncertainty. Today, the major military powers and alliances, including the United States, China, Russia, and NATO, have publicly expressed high expectations for AI-DSS and committed R&D resources to them. Numerous commercial and military systems with diverse functionality have also reached the market, covering tasks that range from situational awareness and planning and execution to predictive analysis. These systems blur the boundaries between tactical, operational, and strategic decision-making, complicating commanders' choices about which systems to adopt and how to use them.
To address this challenge, the report proposes a simplified three-dimensional framework for commanders to work through when considering deploying an AI-DSS. The first dimension is "scope of applicability": is the system operating within clearly defined and well-understood scenarios? Commanders must guard against "context shift" (a mismatch between the training environment and the operational environment), distinguish predictions grounded in physical laws from those involving human behavior, manage the risks posed by flexible but ill-defined systems, and ultimately accept the "irreducible uncertainty" inherent in military decision-making. The second is the "data foundation": can the training data support the system's conclusions? High-quality, high-fidelity data is especially difficult to obtain in the military domain; skewed data (caused by sensor limitations, enemy deception, and similar factors) and scarce data (particularly on actual combat) can severely undermine the reliability of AI system outputs. The third is "human-machine interaction": what are the capabilities and limitations of the human-machine system as a whole in a given environment? The report highlights the risks arising from misplaced expectations of large language models (LLMs), human cognitive biases (such as automation bias), and organizational biases (such as an excessive pursuit of speed and resource savings).
Based on this analysis, the report offers military organizations five personnel- and process-centric risk mitigation recommendations:
1. Establish context- and risk-based deployment standards so that the conditions of use are clear and reversible.
2. Implement rigorous training and qualification certification for AI-DSS operators, especially those involved in lethal decision-making.
3. Establish a continuous certification cycle to regularly assess system effectiveness and unit proficiency.
4. Appoint a "Responsible AI Officer" within military units, responsible for raising AI literacy, reporting incidents, and communicating information.
5. Systematically log AI system failures and human errors in a central knowledge repository to share lessons, prevent repeated mistakes, and build transparency.
The report ultimately emphasizes that while AI can improve the quality and speed of battlefield decision-making, it cannot replace human judgment. Commanders and their staffs must understand the strengths and weaknesses of AI-DSS in depth; guided by a clear framework and through thoughtful human-machine integration, they can harness the technology effectively, capturing its advantages while keeping the accompanying risks under strict control.