Key Takeaways from the U.S. Department of Justice Report on Artificial Intelligence in Criminal Justice

In-depth Analysis of the December 2024 Report: Applications, Challenges, and Governance Frameworks of Artificial Intelligence in Identity Recognition, Predictive Policing, Risk Assessment, and Forensic Analysis, Charting a Balanced Path Between Technology Deployment and Rights Protection.

Published

22/12/2025

Chapter Overview

  1. Summary
  2. Key Focus Areas
  3. Identification and Surveillance
  4. Forensic Analysis
  5. Predictive Policing
  6. Risk Assessment
  7. Core Recommendations
  8. Conclusion

Document Introduction

This summary is based on the U.S. Department of Justice (DOJ) report "Artificial Intelligence in Criminal Justice," released in December 2024, and distills its core findings and recommendations. The report arrived as U.S. AI governance policy was undergoing a significant shift. In January 2025, Executive Order 14148 revoked the previous Order 14110, and the subsequent Order 14179 set a new policy focus on "Removing Barriers to U.S. AI Leadership," emphasizing market-driven approaches to strengthening the U.S. strategic position in AI. This shift aligns with the $500 billion private-sector "Stargate" project led by SoftBank and OpenAI, underscoring the convergence of large-scale AI infrastructure investment with national strategic intent. Against this dynamic policy backdrop, the DOJ's 77-page report, with its analysis of the inherent risks and opportunities of AI technology, retains relevance beyond any single regulatory framework and offers enduring value for understanding the fundamental challenges AI poses to the criminal justice system.

The report systematically examines four key application scenarios of AI in the criminal justice field: identification and surveillance, forensic analysis, predictive policing, and risk assessment. Regarding identification and surveillance, the report analyzes the efficiency and accuracy gains of technologies such as facial recognition and automatic license plate recognition, while also highlighting performance disparities based on factors like race, gender, and age, and the resulting privacy and civil rights protection challenges. The forensic analysis section explores how AI may drive a potential shift from subjective analysis to more objective, repeatable methods, covering various applications from DNA probabilistic genotyping to AI-generated content detection in digital forensics, and emphasizes ongoing challenges in data quality, validation requirements, interpretability, and human oversight.

Predictive policing and risk assessment are the other two core areas the report analyzes. The report notes that while predictive policing tools can optimize resource allocation, the historical crime data they rely on may entrench existing biases, disproportionately impact vulnerable communities, and erode public trust. Risk assessment tools are widely used in pretrial release, sentencing, and prison classification, where they can improve decision-making accuracy and transparency. However, their predictions always carry uncertainty, and performance disparities across demographic groups may exacerbate systemic injustice. The report emphasizes that balancing accuracy, fairness, and transparency is among the greatest challenges in using these tools.

To ensure the responsible deployment of AI, the report proposes a comprehensive governance framework and core recommendations. This includes conducting cost-benefit assessments before deployment, clarifying organizational responsibilities, developing detailed usage policies, ensuring data integrity, and performing rigorous testing. Post-deployment, continuous monitoring, regular audits, evaluation of new uses, and maintaining community engagement are necessary. The report particularly emphasizes the central role of human oversight, stating that AI should assist rather than replace human decision-making, especially in high-stakes cases. Ultimately, the report concludes that by establishing robust organizational structures, ensuring public oversight and transparency, and cultivating a workforce with the necessary skills, criminal justice agencies can strive to ensure that AI deployment enhances the system's fairness, effectiveness, and constitutional integrity.
