Department of Homeland Security releases AI framework for critical infrastructure security
The U.S. Department of Homeland Security (DHS) has released recommendations outlining how to safely develop and deploy artificial intelligence (AI) in critical infrastructure. https://www.dhs.gov/news/2024/11/14/groundbreaking-framework-safe-and-secure-deployment-ai-critical-infrastructure
Framework for AI roles and responsibilities in critical infrastructure: https://www.dhs.gov/sites/default/files/2024-11/24_1114_dhs_ai-roles-and-responsibilities-framework-508.pdf
These recommendations apply to all participants in the supply chain, from cloud and computing infrastructure providers to AI developers to critical infrastructure owners and operators. Suggestions are also provided for civil society and public sector organizations.
The voluntary recommendations in the "Framework for AI Roles and Responsibilities in Critical Infrastructure" cover various roles across five key areas: securing environments, promoting responsible model and system design, implementing data governance, ensuring safe and secure deployment, and monitoring performance and impact. Additionally, technical and process recommendations are proposed to enhance the safety, security, and trustworthiness of AI systems.
The Department of Homeland Security stated in a press release that artificial intelligence has been employed in various fields to enhance resilience and mitigate risks, such as AI applications used for earthquake detection, stabilizing power grids, and mail sorting.
The framework sets out responsibilities for each role. Cloud and computing infrastructure providers should vet their hardware and software supply chains, implement robust access management, and protect the physical security of the data centers that support AI systems. The framework also recommends supporting downstream customers and processes, for example by monitoring for anomalous activity and establishing clear procedures for reporting suspicious and harmful activity.
AI developers should adopt a secure design approach, assess the dangerous capabilities of AI models, and "ensure that models align with human-centric values." The framework further encourages AI developers to implement robust privacy protection measures; conduct assessments to test for potential biases, failure modes, and vulnerabilities; and support independent evaluations of models that pose greater risks to critical infrastructure systems and their consumers.
Critical infrastructure owners and operators should deploy AI systems securely, including maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data when fine-tuning AI products, and providing meaningful transparency about their use of AI in delivering goods, services, or benefits to the public.
Civil society, including universities, research institutions, and consumer rights advocates, should continue to work with governments and industry to develop standards and conduct AI evaluation research that considers use cases for critical infrastructure.
Public sector entities, including federal, state, local, tribal, and territorial governments, should enhance the standards of AI safety practices through statutory and regulatory actions.
Homeland Security Secretary Alejandro Mayorkas said in a statement that the framework, if widely adopted, would go a long way toward better securing the delivery of critical services such as clean water, stable electricity, and internet access.
The DHS framework proposes a model of shared and separate responsibilities to ensure the secure use of artificial intelligence in critical infrastructure. It also builds on existing risk frameworks, enabling entities to assess whether using AI in a given system or application could pose significant risks of harm. DHS said it hopes the framework will be a living document that evolves alongside developments in the industry.