U.S. Department of Homeland Security: Report on Mitigating Cross-Cutting Risks at the Intersection of Artificial Intelligence and CBRN Threats
In response to the requirements of Executive Order 14110 of 2023, this report examines both the misuse risks and the defensive applications of artificial intelligence in the field of chemical and biological threats, and proposes a cross-departmental collaborative governance framework with accompanying policy recommendations.
Published: 23/12/2025
Key Chapters
- Executive Summary
- Introduction
- Background: Trends in Artificial Intelligence Development
- Misuse of Artificial Intelligence in the Research, Development, and Production of Chemical, Biological, Radiological, and Nuclear Threats
- Benefits and Applications of Artificial Intelligence in Addressing Chemical, Biological, Radiological, and Nuclear Threats
- Applications of Artificial Intelligence in Physical and Life Sciences
- Trends in Artificial Intelligence Governance and Oversight
- Key Findings and Corresponding Policy Recommendations
- List of Acronyms
Document Introduction
On October 30, 2023, the Biden Administration signed Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," which identified the intersecting risks of artificial intelligence and Chemical, Biological, Radiological, and Nuclear (CBRN) threats as a national security priority, tasking the Department of Homeland Security with leading the assessment of related risks and proposing governance solutions. This report, led and prepared by the Department of Homeland Security's Countering Weapons of Mass Destruction Office (CWMD), is a direct response to the requirements of Section 4.4 of that Executive Order, with a core focus on the dual impact of artificial intelligence in the fields of chemical and biological threats.
The report systematically reviews the current state of artificial intelligence development, including application trends in generative AI, foundation models, and biological design tools (BDTs), and analyzes both the innovative value and the potential risks of these technologies in physical and life sciences research. It notes that while AI can accelerate progress in beneficial fields such as drug development and precision agriculture, it can also lower technical barriers and democratize access to dangerous knowledge, enabling misuse by non-state actors (e.g., extremists) and state actors in the research, development, and production of CBRN weapons.
To ensure the comprehensiveness and authority of the analysis, the report was prepared by drawing on resources from across the U.S. government and by extensively soliciting expert input from the Department of Energy, private AI laboratories, academia, think tanks, and third-party model evaluation organizations, yielding a risk assessment and solution set grounded in multiple perspectives. The report focuses on the threat pathways of AI misuse, identifying risk points across the full chain from conceptualization and material acquisition to weaponization and attack execution, and proposes a "pathway disruption" approach to risk mitigation.
The report presents nine key findings, covering issues such as the lack of cross-agency consensus on risks, the security challenges posed by the proliferation of open-source models, the limitations of existing regulatory systems, and the need for international collaboration. Based on these findings, it proposes a set of targeted policy recommendations, including strengthening cross-departmental intelligence sharing, establishing safety guidelines for AI model releases, creating vulnerability reporting mechanisms, improving laboratory safety oversight, and advancing international governance coordination, charting a concrete path toward an adaptive, iterative AI safety governance framework.
This report is not only a key policy document for the U.S. government in addressing the intersecting risks of AI and national security; it also offers a reference blueprint for balancing AI innovation with safety governance globally, providing significant decision-support and reference value for defense researchers, policymakers, intelligence practitioners, and scholars in related fields.