
U.S. Department of Homeland Security Public Sector Generative AI Deployment Guide

A framework for the responsible and trustworthy application of Generative AI, built on the experience of three major pilot projects (Year-Month Edition).


Published: 23/12/2025

Key Chapters

  1. Message from the Secretary and Chief AI Officer
  2. Introduction: The DHS Generative AI Pilot Projects
  3. Pilot 1: Enhancing Search and Summarization Capabilities with Large Language Models to Strengthen Investigative Leads
  4. Pilot 2: Assisting Local Governments in Developing Hazard Mitigation Plans
  5. Pilot 3: Creating New Training Opportunities for Immigration Officers
  6. Generative AI Use Cases for Enhancing Mission Effectiveness
  7. Coalition Building and Effective Governance
  8. Tools and Infrastructure
  9. Responsible Use and Trustworthiness Considerations
  10. Measurement and Monitoring
  11. Training and Talent Recruitment
  12. Usability Testing and Other Feedback Mechanisms

Overview

The rapid development and commercialization of Generative Artificial Intelligence (GenAI) present significant opportunities and challenges for the U.S. Department of Homeland Security (DHS) and public sector entities at all levels. In response to the requirements for responsible AI deployment outlined in Executive Order 14110, DHS has taken the lead in launching Generative AI pilot projects to explore the technology's potential for enhancing mission capabilities.

This guide draws on the experience of three core pilot projects completed by DHS in 2024, covering three key scenarios: strengthening investigative leads, supporting the development of hazard mitigation plans, and creating new training opportunities for immigration officers. Throughout, the pilots prioritized augmenting employee effectiveness rather than replacing human judgment, while ensuring that the outcomes remain broadly applicable across DHS components and the wider federal government, offering practical examples for the public sector.

The guide establishes a structured framework for the responsible adoption of Generative AI, covering seven core action dimensions: mission-oriented use case design, cross-organizational coalition building and governance, tool and infrastructure adaptation, responsible use and risk management, performance measurement and monitoring, talent training and recruitment, and user feedback mechanism development. Each dimension provides concrete, actionable implementation steps, balancing technical, policy, and administrative considerations.

The document emphasizes that Generative AI deployment must be supported by a solid technical foundation, data management capabilities, and human-centered design, while also prioritizing privacy protection and the safeguarding of civil rights and liberties. Through mechanisms such as cross-functional governance processes, continuous monitoring, and testing, DHS proactively mitigates potential risks associated with Generative AI in areas like operations, security, data, and bias, setting a benchmark for responsible AI application in the public sector.

This guide aims to provide a flexible and adaptable action framework for public sector entities at all levels. Regardless of an organization's current stage in AI application, effective deployment of Generative AI can be initiated through steps such as resource assessment, building internal consensus, and laying the groundwork. Through collaboration among government, industry, academia, and civil society, the transformative potential of Generative AI can be fully realized to strengthen public service capabilities and better serve the American people.