California Frontier Artificial Intelligence Policy Report
An interdisciplinary policy framework for governing generative AI and foundation models, grounded in historical case comparisons and multi-source evidence analysis, offering California a forward-looking path toward balancing innovation incentives with risk prevention and control.
Published: 22/12/2025
Key Chapters
- Introduction
- Building AI Policy on Evidence and Experience: Understanding the Broader Context
- Transparency
- Adverse Event Reporting
- Scoping
- Summary of Feedback and Revisions
Overview
In September 2024, California Governor Gavin Newsom commissioned Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered AI; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley, to co-lead the Joint California Policy Working Group on AI Frontier Models. Its charge was to develop effective approaches for California to support the deployment, use, and governance of generative AI, and to establish appropriate guardrails against substantive risks. This report, released on June 17, 2025, is the working group's final product. It represents the independent academic work of the co-leads and participating scholars, not the official positions of their affiliated institutions.
The report focuses on the emerging technological paradigm of frontier AI driven by foundation models and aims to give California policymakers an evidence-based framework for decision-making. It does not advocate for or against any specific legislative or regulatory proposal. Instead, it surveys the best available research on foundation models and distills a set of policy principles, anchored in an ethos of "trust but verify," to guide how California approaches, evaluates, and governs frontier AI. The report recognizes that, as a global hub for AI innovation, California has a unique opportunity and responsibility to continue supporting frontier AI development while addressing significant risks that could have profound impacts on the state and the world.
The report employs a multidisciplinary, integrative approach, drawing extensively on diverse evidence from empirical research, historical analysis, and modeling and simulation. Its content centers on four governance issues:
- Transparency. The report identifies systemic opacity in key areas of the current AI industry and proposes concrete pathways to improve it: disclosure requirements, third-party assessments, and whistleblower protections.
- Adverse event reporting. The report advocates a government-led system that collects reports of harms occurring after deployment, enabling continuous monitoring of AI's real-world impact, identification of unforeseen risks, and an evidentiary basis for regulatory decisions.
- Scoping. The report examines how to set thresholds that reasonably determine which entities a policy covers, weighing the trade-offs of thresholds based on developer attributes, cost, model performance, and downstream impact.
- Feedback and revisions. The report closes with a summary of the public feedback received since the draft's release in March 2025 and the revisions made in response.
To solidify the foundation of its policy recommendations, Chapter 2 of the report conducts a comparative analysis of historical case studies, drawing lessons from the early design and governance of the internet, the regulation of consumer products (such as tobacco), and policy responses in the energy sector (addressing climate change). These cases reveal the path-dependent effects of early design choices, the crucial role of transparency in generating comprehensive evidence, and the importance of trusting expertise while verifying through third-party assessments. Furthermore, the report briefly cites successful cases in areas like pesticide regulation, building codes, and automobile seat belts, demonstrating that well-designed governance frameworks can indeed strike a balance between fostering innovation and ensuring public safety.
Overall, this report provides a roadmap for California's policy innovation in the rapidly evolving and uncertain field of frontier AI. This roadmap is based on extensive evidence, considers multiple stakeholder interests, and emphasizes dynamic adaptability. The principles and mechanisms it proposes not only aim to address currently known risks but are also designed to build a governance ecosystem capable of continuously generating evidence and iteratively learning.