The Evolution of Artificial Intelligence and Biosecurity Risks: Capabilities, Thresholds, and Interventions
Focusing on catastrophic risks at the intersection of AI and biotechnology, this analysis examines how technological advances are reshaping biosecurity and provides policymakers with actionable frameworks for risk prevention and control.
Published: 23/12/2025
Chapter Titles
- Executive Summary
- Introduction
- The Current State of Catastrophic Biological Risks
- AI Safety and Biosecurity
- Core Capabilities Requiring Monitoring
- Policy Recommendations
- Conclusion
- AI-Enabled Biological Design Tools
- Foundation Models and Risk Proliferation
- Technical Challenges in AI Safety
- Integration of AI Tools and Complex Systems
- The AI Development Environment in the Biotechnology Field
Overview
The COVID-19 pandemic exposed the fragility of the global biosecurity system, while the rapid development of artificial intelligence is fundamentally reshaping the landscape of national biosecurity risks. From warnings by the CEOs of leading AI labs about the potential for biological catastrophe to U.S. Vice President Harris's concerns that AI-enabled biological weapons could threaten human survival, these issues have reached the highest levels of government and industry decision-making. This report systematically assesses the actual impact of artificial intelligence on catastrophic biological risks and provides policymakers with a comprehensive analytical framework and actionable recommendations.
The report first outlines the history of U.S. biosecurity and the existing risk landscape, identifying the multiple sources of catastrophic biological risk: natural emergence, accidents in legitimate scientific research, and deliberate creation by actors ranging from states to terrorist organizations to lone individuals. It then analyzes the trajectory of biotechnology itself: breakthroughs in gene editing, gene sequencing, and DNA synthesis, together with the rise of cloud labs, are lowering the technical barriers to developing biological weapons, while existing international norms and safeguard mechanisms leave significant gaps.
Drawing on four core dimensions of AI safety (new capabilities, technical challenges, complex system integration, and the development environment), the report examines the specific pathways through which AI influences biological risk. AI-enabled biological design tools could optimize the lethality, transmissibility, and targeting of biological weapons, while foundation models may lower the technical barriers for non-state actors seeking to acquire them. Technical flaws, system integration risks, and R&D competition pressures further compound these threats. Notably, China's development environment for AI and biotechnology, given its distinctive policy direction, risk-control model, and crisis-response record, has become a significant variable in global biosecurity risk.
The report identifies four core capabilities requiring focused monitoring: the ability of foundation models to guide biological experiments, the degree to which cloud labs reduce the need for specialized expertise, dual-use progress in research on host genetic susceptibility, and breakthroughs in the precise engineering of viral pathogens. Building on this, it offers five key policy recommendations: strengthening screening mechanisms for cloud labs and gene synthesis providers, regularly assessing how far foundation models can assist across the full biological weapons development lifecycle, investing in AI safety mechanisms, improving the flexibility and adaptability of biodefense systems, and exploring licensing regimes for high-risk biological design tools.
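To make the first recommendation concrete: at its simplest, screening a gene synthesis order means checking the requested sequence against a curated list of sequences of concern before the order is fulfilled. The sketch below is purely illustrative and assumes a hypothetical in-memory list with naive exact k-mer matching; production screening programs rely on curated databases, homology search, and expert review rather than substring comparison.

```python
# Purely illustrative sketch of order screening against a hypothetical list of
# sequences of concern. Names and sequences below are made up for illustration;
# real screening uses curated databases and homology search, not exact k-mers.

KMER_SIZE = 20  # assumed match window length, not a regulatory standard

# Hypothetical entries; real lists are maintained by providers and consortia.
SEQUENCES_OF_CONCERN = {
    "example_fragment_A": "ATGGCTAAGGCTTTACCGGATAC",
}


def kmers(sequence: str, k: int = KMER_SIZE) -> set:
    """Return every length-k window of an uppercased DNA sequence."""
    seq = sequence.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


def screen_order(order_sequence: str) -> list:
    """Return the names of concern entries that share any k-mer with the order."""
    order_windows = kmers(order_sequence)
    return [
        name
        for name, concern_seq in SEQUENCES_OF_CONCERN.items()
        if order_windows & kmers(concern_seq)
    ]


if __name__ == "__main__":
    hits = screen_order("ttcATGGCTAAGGCTTTACCGGATACgga")
    # Any hit would be escalated to human review rather than auto-rejected.
    print("Flagged entries:", hits)
```

In practice, matching must tolerate synonymous mutations and fragmented orders, which is why real screening pipelines align orders against protein-level references and route every hit to a human reviewer.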
Although the actual impact of artificial intelligence on biosecurity has so far been relatively limited, the uncertainty of technological evolution argues for putting preventive and control measures in place early. Through rigorous factual analysis and logical reasoning, this report balances the dual imperatives of innovation and risk prevention, offering a scientifically grounded and actionable policy blueprint for the United States to strengthen its biosecurity defenses in the AI era.