The U.S. has joined the EU in stepping into the artificial intelligence regulatory fray. Last week, President Biden issued a wide-ranging, 100+ page Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“the Order”), which covers everything from powerful models and the compute they are trained on to labeling AI-generated content, copyright, consumer protection, privacy, immigration, antitrust, civil rights, national security, and innovation.
Model Developers Must Provide Reports to the Government. One of the blockbusters in the Order is its invocation of the Defense Production Act to require developers of large-scale models that could potentially pose a threat to national security, economic security, or health and safety to report on their training activities and the results of safety testing. The Order sets a threshold for which models must be reported based on the amount of computing power required to train the model. Current foundation models don’t meet that threshold, but the next generation of models likely will, meaning the government will have far more insight into AI models than it does today.
Providers of large-scale data centers and Infrastructure as a Service (IaaS), as well as their foreign resellers, will also have reporting requirements. IaaS providers will additionally need to implement know-your-customer requirements.
Because this part of the Order was issued pursuant to the Defense Production Act, it has the force of law.
Standards for Safe, Secure, & Trustworthy AI Systems. The Order directs NIST to develop guidelines and best practices for secure development, evaluation and auditing, red teaming, and ensuring that AI models are safe, secure, and trustworthy. NIST published a FAQ on the Executive Order, which can be found here.
Content Authentication. The Commerce Department is tasked with developing guidance for detecting and watermarking AI-generated content.
Copyright. The Order directs the Copyright Office to publish a report and issue recommendations to the President on potential executive actions relating to copyright and AI. It also directs the development of a program to mitigate AI-related intellectual property risks by identifying AI-related intellectual property theft and assisting the private sector in mitigating such violations.
Privacy. The Order directs the creation of a Research Coordination Network (RCN) dedicated to advancing privacy research and, in particular, the development, deployment, and scaling of privacy-enhancing technologies. Notably absent is anything about the use of personal information in training data sets.
Regulation. The Order calls on regulators to issue AI regulations in various areas, including consumer protection, antitrust, civil rights, education, financial opportunities, transportation, and healthcare. Such regulations have the potential to sweep in a wide variety of businesses and industries.
Protecting Workers. The Order calls for the development of principles to protect workers from harm posed by AI and for federal support and training to those whose jobs are displaced by AI.
Immigration & Developing AI Talent. The Order directs the Department of Homeland Security to streamline visa and immigration processes for AI experts, calls for the creation of AI Research Institutes with the goal of training 500 new AI researchers by 2025, and establishes a task force to identify and develop AI talent throughout the federal government.
Cybersecurity. The Order directs the development of standards governing the software-development lifecycle to minimize AI-enhanced cybersecurity threats. The Department of Homeland Security will evaluate how deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber-attacks. The Secretary of the Treasury will issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks. The Secretary of Defense and the Secretary of Homeland Security will report on vulnerabilities that were found and fixed using AI.
National Security. The Order also calls for evaluation of the various national security risks posed by AI, including how AI may generate nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards, and guardrails to reduce such risks. To fight the risk of AI-generated biological weapons, the Order directs the creation of a national security report with recommendations on how to strengthen nucleic acid synthesis procurement screening, including customer screening systems.
Deploy AI Throughout the Economy. Somewhat surprisingly, the Order encourages all areas of the economy to incorporate AI into their work, including deploying AI in law enforcement, education, transportation, and healthcare. Additionally, agencies are discouraged from imposing broad general bans on agency use of generative AI. Instead, the Order encourages agencies to train their employees on the proper use of AI, limit AI only on the basis of specific risk assessments, and establish guidelines on the appropriate use of AI in their field. In particular, agencies should use AI at least for experimentation and for routine tasks that carry a low risk of impacting Americans’ rights.
International AI Frameworks. The Order states that the U.S. will continue to work with other nations to support safe, secure, and trustworthy deployment and use of AI worldwide, including the development of frameworks to harness AI’s benefits and mitigate its risks.
The Order, like the AI Bill of Rights and the NIST AI Risk Management Framework before it, demonstrates that the U.S. is committed to addressing both the benefits and the risks of AI. Aside from the model reporting requirements issued pursuant to the Defense Production Act, the Order is largely guidance; even so, it will likely set off a wide array of regulatory activity over the next year.
To paraphrase Neo from The Matrix, we don’t know the future of AI regulation, or how it’s going to end. But the Order certainly tells us how it’s going to begin.