"Draft Report of the California AI Frontier Joint Policy Working Group"

An evidence-based, interdisciplinary policy framework for the governance of frontier artificial intelligence models, focused on transparency, adverse event reporting, and the definition of regulatory scope, and aimed at balancing innovation incentives with risk management.

Published: 22/12/2025

Key Chapter Titles

  1. Introduction
  2. Encouraging Innovation and Implementing Safety Guardrails
  3. The Broader Policy Landscape and California's Potential Impact
  4. Grounding AI Policy in Evidence and Experience: Understanding the Broader Context
  5. Transparency
  6. Adverse Event Reporting
  7. Defining Scope
  8. Next Steps

Document Introduction

This report was commissioned by California Governor Gavin Newsom in September 2024 and prepared by a working group co-led by Dr. Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered AI; Dr. Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Dr. Jennifer Tour Chayes, Dean of the College of Computing, Data Science, and Society at UC Berkeley. Issued as a draft for public comment, the report aims to develop an effective approach for California to support the deployment, use, and governance of generative AI, including appropriate safeguards that minimize significant risks. The report stresses that its content reflects the independent academic work of the three co-leads and does not represent the positions of their affiliated institutions.

The report's core objective is to provide an evidence-based policymaking framework to guide California's governance of frontier AI. It systematically reviews research from disciplines including computer science, economics, engineering, informatics, law, and public policy, distilling eight key policy principles. These principles hold that targeted interventions should balance technological benefits against significant risks; that policymaking should rest on empirical research and reliable analytical techniques; and that early technology design and governance choices are path-dependent and therefore crucial. The report further advocates addressing the systemic opacity of information in critical areas through measures such as greater transparency, stronger third-party risk assessment, whistleblower protections, and adverse event reporting systems, thereby improving accountability, competitiveness, and public trust.

To construct this framework, the report conducts an in-depth contextual analysis. It draws on three historical case studies, namely internet development and governance, consumer product regulation, and energy policy, to illustrate the importance of early policy windows, the value of public transparency, and the need to rely on industry expertise while validating it through independent assessment. The report states plainly that significant evidence gaps remain regarding the capabilities and risks of frontier AI models, particularly around malicious use, system failures, and systemic risks, areas where expert opinion diverges. Creating mechanisms that proactively and frequently generate evidence is therefore crucial for effective governance.

Building on this analysis, the report examines three core governance tools in depth.

On transparency, it documents a widespread lack of disclosure by foundation model developers in key areas such as training data, safety practices, and downstream impacts, and offers concrete recommendations for improving information disclosure, strengthening third-party risk assessment, and extending legal protections to whistleblowers.

On adverse event reporting, it weighs the benefits and challenges of a government-managed reporting system that would collect information on post-deployment incidents in order to identify risks, facilitate coordination, and prevent costly accidents.

On defining regulatory scope, it analyzes the difficulty of using thresholds to determine which entities should be subject to policy constraints, evaluates the trade-offs of threshold designs based on developer attributes, cost, model performance, or downstream impact, and concludes that training compute is currently the most attractive cost-based threshold, though it is best used in combination with other metrics; the sketch below illustrates how such a compute threshold operates in practice.
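
To make the compute-threshold idea concrete, here is a minimal Python sketch that estimates a training run's total compute using the standard C ≈ 6·N·D approximation (N parameters, D training tokens) and checks it against a regulatory cutoff. The 1e26 FLOP threshold and the example model figures are illustrative assumptions, not values taken from the report.

    # Illustrative sketch of a compute-based regulatory threshold.
    # The 6*N*D estimate of dense-transformer training FLOPs is standard;
    # the cutoff below is an assumed value for illustration only.

    THRESHOLD_FLOP = 1e26  # hypothetical regulatory cutoff, not from the report


    def training_flop(parameters: float, tokens: float) -> float:
        """Approximate total training compute: C ~= 6 * N * D."""
        return 6.0 * parameters * tokens


    def exceeds_threshold(parameters: float, tokens: float) -> bool:
        """Would this (hypothetical) training run fall within scope?"""
        return training_flop(parameters, tokens) >= THRESHOLD_FLOP


    if __name__ == "__main__":
        # Hypothetical model: 70B parameters trained on 15T tokens.
        n, d = 70e9, 15e12
        print(f"Estimated training compute: {training_flop(n, d):.2e} FLOP")
        print(f"In scope under {THRESHOLD_FLOP:.0e} FLOP threshold: "
              f"{exceeds_threshold(n, d)}")

A compute metric like this is attractive because it can be estimated before deployment, but it is only a proxy for downstream risk, which is one reason the report favors combining it with other metrics.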

Overall, the report provides California legislators and regulators with a rigorous, multidisciplinary academic foundation for future laws and regulations governing frontier AI. It advocates a "trust but verify" governance philosophy that balances continued innovation with public safety, and it aims to position California as a leader in global AI governance.