U.S. Department of Homeland Security Public Sector Generative AI Deployment Guidelines
Based on the experience of three major pilot projects, build a responsible and trustworthy application framework to empower the national security mission and public sector innovation.
Published: December 23, 2025
Key Chapters
- Message from the Secretary and Chief AI Officer
- Introduction: The DHS Generative AI Pilot Projects
- Pilot 1: Enhancing Search and Summarization with Large Language Models to Strengthen Investigative Leads
- Pilot 2: Assisting Local Governments in Developing Hazard Mitigation Plans
- Pilot 3: Creating New Training Opportunities for Immigration Officers
- Generative AI Use Cases for Enhancing Mission Effectiveness
- Coalition Building and Effective Governance
- Tools and Infrastructure
- Responsible Use and Trustworthiness Considerations
- Measurement and Monitoring
- Training and Talent Acquisition
- Usability Testing and Other Feedback Mechanisms
Overview
The rapid development and commercialization of Generative Artificial Intelligence (GenAI) present significant opportunities and challenges for the U.S. Department of Homeland Security (DHS) and public sector entities at all levels. In response to the requirements for responsible AI deployment set out in Executive Order 14110, DHS launched a series of Generative AI pilot projects to explore the technology's potential to enhance mission capabilities while upholding core principles of privacy, civil rights, and civil liberties.
This guide synthesizes the practical experience of DHS's three major Generative AI pilot projects into a structured framework for responsibly adopting the technology in the public sector. The three pilots focus, respectively, on using large language models to make search and summarization of investigative leads more efficient, assisting local governments in developing hazard mitigation plans, and providing personalized training for immigration officers. All three are guided by the principle of augmenting, rather than replacing, employees' work, and offer lessons that can scale across DHS components and the wider federal government.
The guide organizes Generative AI deployment around seven core dimensions: identifying mission-aligned use cases, building cross-organizational coalitions and effective governance mechanisms, evaluating suitable tools and infrastructure, establishing principles for responsible and trustworthy use, designing rigorous measurement and monitoring systems, strengthening workforce training and talent pipelines, and conducting continuous usability testing and feedback-driven iteration. Each dimension comes with specific, actionable implementation steps that address needs at the technical, policy, and management levels.
On risk management, the guide emphasizes proactively identifying the potential risks Generative AI poses to operations, security, data, privacy, and fairness, and embedding mitigation measures through mechanisms such as cross-functional governance, continuous monitoring, and testing. DHS also prohibits relying on Generative AI outputs as the sole basis for critical decisions: all outputs must undergo human review to ensure the technology is applied safely and reliably.
The guide not only distills lessons from DHS's own Generative AI deployments but also offers a general framework that other public sector organizations can apply. Wherever an organization stands in its AI adoption, it can use this guide to assess its resources, build internal consensus, and lay the groundwork for effective deployment, ultimately strengthening public services through responsible technological innovation in service of the American people.