
Framework to Advance AI Governance and Risk Management in National Security

An in-depth analysis of the White House policy directive establishing the governance framework, risk management mechanisms, compliance requirements, and accountability system for the use of artificial intelligence in national security systems


Published: 22/12/2025

Contents

  1. Overview and Scope
  2. Pillar One: AI Use Restrictions
  3. Prohibited AI Use Cases
  4. High-Risk AI Use Cases
  5. AI Use Cases Impacting Federal Personnel
  6. Pillar Two: Minimum Risk Management Practices for High-Risk and Federal Personnel-Impacting AI Use
  7. Risk and Impact Assessment and Ensuring Effective Human Oversight
  8. Additional Procedural Safeguards for AI Impacting Federal Personnel
  9. Exemption Mechanism
  10. Chief AI Officer Responsibilities
  11. Pillar Three: Cataloging and Monitoring of AI Use
  12. Oversight and Transparency
  13. Pillar Four: Training and Accountability

Introduction

This report provides a detailed analysis of the "Framework to Advance AI Governance and Risk Management in National Security" released by the White House. The framework responds to the requirements of Section 4.2 of the National Security Memorandum on "Advancing the United States' Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence," and establishes a comprehensive governance and risk management regime for AI used in national security systems. The framework emphasizes that while AI presents significant opportunities for innovation in national security, its use must be responsible, lawful, and aligned with democratic values, including human rights, civil rights, civil liberties, privacy, and security. Its core objective is to ensure that U.S. government personnel can use AI systems with trust and confidence while maintaining public confidence in national security institutions.

The framework is built on four pillars. Pillar One sets out restrictions on AI use, defining prohibited use cases, high-risk use cases, and use cases that significantly affect federal personnel. Prohibited use cases include, among others: profiling individuals based on activities protected by constitutional rights; unlawfully suppressing freedom of speech; unlawful discrimination based on protected characteristics; inferring a person's emotional state without lawful justification; inferring identity attributes solely from biometric data; producing collateral damage assessments without rigorous testing and human oversight; making final determinations of immigration status; and producing and disseminating intelligence reports based solely on AI output. High-risk use cases cover military, intelligence, and defense-related activities that may pose significant risks to national interests and values. Use cases affecting federal personnel chiefly involve consequential personnel decisions such as hiring, promotion, and performance evaluation.

Pillar Two stipulates a set of minimum risk management practices for AI uses identified as high-risk or as affecting federal personnel. Agencies must complete comprehensive risk and impact assessments before deployment, including documenting the intended purpose, assessing data quality, testing system performance, identifying and mitigating risks of discriminatory bias, ensuring effective human oversight and accountability, training operators, and establishing continuous monitoring and periodic review. For AI uses affecting federal personnel, the framework adds procedural safeguards such as consulting employees, providing notice and obtaining consent, and offering avenues for appeal. It also establishes a waiver mechanism overseen by the Chief AI Officer, allowing temporary exemption from certain minimum practices based on a specific risk assessment, subject to strict documentation and reporting requirements.
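To make the pre-deployment gate concrete, the following Python sketch models the minimum practices as a checklist that must be fully satisfied, or explicitly waived by the Chief AI Officer, before a high-risk use case may be deployed. All class, field, and function names here are hypothetical illustrations of this analysis, not terms defined by the framework.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MinimumPracticesChecklist:
    # Each flag corresponds to one of the framework's minimum practices.
    intended_purpose_documented: bool = False
    data_quality_assessed: bool = False
    performance_tested: bool = False
    bias_risks_mitigated: bool = False
    human_oversight_established: bool = False
    operators_trained: bool = False
    continuous_monitoring_in_place: bool = False

@dataclass
class HighRiskUseCase:
    name: str
    checklist: MinimumPracticesChecklist = field(default_factory=MinimumPracticesChecklist)
    caio_waiver_id: Optional[str] = None  # documented Chief AI Officer waiver, if any

def may_deploy(use_case: HighRiskUseCase) -> bool:
    """Gate deployment: every minimum practice must be satisfied unless a
    documented CAIO waiver is on file (which must itself be reported)."""
    if use_case.caio_waiver_id is not None:
        return True
    return all(vars(use_case.checklist).values())
```

For instance, `may_deploy(HighRiskUseCase(name="targeting-support"))` returns False until every practice flag is set or a waiver identifier is recorded, mirroring the framework's requirement that waivers be documented rather than implicit.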

Pillar Three focuses on systematically cataloging and monitoring AI use. Covered agencies must inventory and report high-risk AI use cases annually and update their data management policies to support responsible AI use. On oversight and transparency, the framework requires agencies to designate a Chief AI Officer and establish an AI Governance Board within specified timeframes, with responsibility for overseeing compliance, managing risk, and building capability. Oversight officials, such as privacy and civil liberties officers, are assigned regular reporting duties to strengthen transparency and sustain public trust.
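As a purely illustrative sketch of what such an annual inventory might look like in practice, the record type and filter below are assumptions of this analysis; the framework itself does not prescribe a data schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseCaseRecord:
    # Hypothetical inventory fields for a cataloged AI use case.
    use_case: str
    risk_category: str        # e.g. "high-risk" or "federal-personnel-impacting"
    owning_component: str
    last_reviewed: date
    waiver_active: bool = False

def due_for_annual_report(records: list[AIUseCaseRecord], year: int) -> list[AIUseCaseRecord]:
    """Return high-risk records not yet reviewed in the current reporting
    year, i.e. the entries an agency still owes in its annual inventory."""
    return [r for r in records
            if r.risk_category == "high-risk" and r.last_reviewed.year < year]
```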

Pillar Four establishes training and accountability systems. Agencies must develop standardized training programs covering all personnel involved in the AI lifecycle, from developers to end users. They must also update policies and procedures to designate a risk owner for each stage of the AI lifecycle, establish appropriate accountability mechanisms and incident reporting and investigation processes, and strengthen whistleblower protections related to AI systems. Through this structured, multi-layered governance design, the framework aims to ensure that AI applications in the national security domain remain innovative while also being reliable, compliant, and ethical.