Biden Administration Releases Artificial Intelligence Executive Order
November 2, 2023, Covington Alert
Earlier this week, the White House issued an expansive Executive Order (Order) outlining a comprehensive strategy to support the development and deployment of safe and secure artificial intelligence (AI) technologies. The Order follows a number of actions by the Biden Administration on AI, including its Blueprint for an AI Bill of Rights and efforts to secure voluntary commitments from certain developers of AI systems.
The Order’s policy statement recognizes that AI “holds extraordinary potential” and can make the world “more prosperous, productive, innovative, and secure,” but it warns that if AI is not developed and used responsibly, it could “exacerbate societal harms.” To address these concerns, the Order sets out requirements to promote safety and security; innovation and competition; protections for workers, consumers, patients, students, and passengers; equity, civil rights, and privacy; federal use of AI; and American leadership abroad. The Order broadly defines AI by reference to the National AI Initiative Act of 2020 as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.
The Order instructs government agencies, and most notably the Department of Commerce, to promulgate rules or otherwise act to require disclosures in some circumstances from companies that develop or provide infrastructure for AI models. The Order states that some of those regulations should be proposed within the next three months. In the few days since the Order was signed, the Office of Management and Budget (OMB) has already released for comment a draft policy on federal agency use of AI, which includes recommendations for “responsible federal procurement of AI” from contractors that could include “tailored risk management requirements in [federal] contracts for generative AI.” Depending on the specifics of the rules involved, the Department of Commerce and other agencies may engage in private-sector consultations or notice-and-comment processes prior to releasing final regulations.
Notably, in issuing the testing and reporting requirements of the Order’s Section 4.2, the Order relies principally on authority under the Defense Production Act (DPA). The DPA is a Korean War-era law that provides the President with authority to expedite the delivery of and expand the supply of critical materials and services necessary for the national defense. While experienced government contractors may be familiar with its priority-rated orders requirements in federal contracts, the DPA includes lesser-known requirements, such as those relating to reporting of information on the defense industrial base. While the Order does not identify any specific authority under the DPA, the most relevant provision appears to be the investigation, records, and reports section, 50 U.S.C. § 4555, which, among other things, grants the President “authority to obtain information in order to perform industry studies assessing the capabilities of the United States industrial base to support the national defense.”
This summary highlights the key components and requirements of the Order. Specifically, it addresses the Order’s requirements related to: (I) ensuring safety and security; (II) protecting patients, workers, consumers, passengers, and students; (III) promoting highly skilled workers and federal governance; and (IV) advancing international leadership. Covington expects to continue to monitor the development and implementation of the Order and may supplement this summary in the future, including through posts on our various blogs.
I. Ensuring the Safety & Security of AI Technology
The Executive Order sets out several safety and security requirements for developers of AI models, including the creation of new standards, guidelines, and best practices, requirements for red-teaming and reporting, synthetic content labeling, and other obligations regarding federal data and model weights.
A. Standards, Guidelines, and Best Practices for AI Models
The Order requires the National Institute for Standards and Technology (NIST) and the Secretary of Commerce to develop best practices and guidance that promote safe, secure, and trustworthy AI.
- NIST Development of Best Practices and Guidance for Safe, Secure, and Trustworthy AI. The Order directs NIST to issue the following within 270 days:
- Guidelines and best practices with the aim of promoting consensus with industry standards for developing and deploying safe AI, including the creation of companion resources for the AI Risk Management Framework and the Secure Software Development Framework and the launch of an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities; and
- Guidelines for conducting red-teaming tests (except where AI is used as a component of a national security system) to assess the safety, security, and trustworthiness of AI systems. The Order defines red-teaming as “a structured testing effort to find flaws and vulnerabilities in an AI system.”
- Secretary of Energy Development of AI Model Evaluation Tools and Test-Beds. The Order requires the Secretary of Energy, in coordination with certain other agencies, to develop and implement a plan for the development of AI model evaluation tools and AI testbeds within 270 days. A test bed is defined as a facility or mechanism equipped for conducting rigorous, transparent, and replicable testing of tools and technologies, including AI and privacy enhancing technologies, to help evaluate the functionality, usability, and performance of those tools or technologies.
B. Reporting on Dual-Use Foundation Models and Red Teaming
The Order requires red-teaming and the reporting of information to the federal government for certain dual-use foundation models; it also imposes requirements on large-scale computing clusters and certain categories of Infrastructure as a Service (IaaS) products made available to foreign individuals. The Order relies on the President’s authority under the DPA for these requirements. In practice, the red-teaming and reporting requirements may not impact the majority of AI systems, as the requirements are scoped to models that present a “serious risk” to security and related topics and that also meet certain technical requirements outlined in Section 4.2(b)(i) of the Order. However, because those technical conditions are subject to agency discretion, it is possible the scope of the models subject to red-teaming and reporting requirements could be expanded.
- Reporting on Development of Dual-Use Models and Red-Teaming. Within 90 days of the Order, the Secretary of Commerce must require companies “developing or demonstrating an intent to develop potential dual-use foundation models” to furnish certain information to the government on an ongoing basis. A dual-use foundation model is defined as an “AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.” Although the definition is clearly aimed at capturing a particular set of models, there remains some ambiguity as to precisely how the federal government will determine what qualifies as a dual-use foundation model. Consequently, it likely will be important for companies to secure guidance from the government on how the definition will be applied given the accompanying reporting requirement.
The Order defines AI red-teaming as a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers. In particular, companies subject to the reporting requirement must provide the “Federal government” (without specification as to which agency or office) with information related to: (1) any ongoing or planned activities related to training, developing, or producing dual-use foundation models; (2) ownership and possession of model weights (“numerical parameter[s] within an AI model that help[] determine the model’s outputs in response to inputs”) of dual-use foundation models; and (3) the results of any developed dual-use foundation models performance in red-teaming exercises that follow NIST guidance and a description of any associated measures the company has taken to meet safety objectives.
- Standards for Large-scale Computing Clusters. The Order requires the Secretary of Commerce to invoke the DPA to require companies, individuals, or other organizations that “acquire, develop, or possess a potential large-scale computing cluster” to report the existence and location of these clusters (defined as a group of computers that function as a system to complete intensive computational tasks like training models) and the total amount of computing power in each cluster within 90 days. The Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and the Director of National Intelligence, is empowered to define the scope of large-scale computing clusters subject to the reporting.
- Regulations on Resellers of Infrastructure as a Service. The Order directs the Secretary of Commerce to take two actions with respect to IaaS providers (i.e., cloud services providers):
1. Reporting: Pursuant to regulations that the Secretary of Commerce is directed to propose within 90 days of the Order, U.S. IaaS providers must submit a report to the Secretary of Commerce when a foreign person transacts with the IaaS provider to train a large AI model “with potential capabilities that could be used in malicious cyber-enabled activity.” The U.S. IaaS provider must also prohibit foreign resellers from providing U.S. IaaS products unless the foreign reseller submits to the reporting requirement. The regulations may set the conditions for what constitutes an IaaS product with potential capabilities that could be used in malicious cyber-enabled activity for purposes of informing when a reporting requirement would be triggered.
2. Verification: The Secretary of Commerce must propose regulations requiring that U.S. IaaS providers ensure that foreign resellers of U.S. IaaS Products verify the identity of any foreign person that obtains an IaaS account. The regulations will include minimum standards for what the U.S. IaaS provider must require of the foreign reseller.
These requirements stem from concerns that U.S. IaaS providers may be exploited by foreign malicious cyber actors. Indeed, in a previous 2021 Executive Order, the Department of Commerce was tasked with promulgating substantively similar regulations, with the key distinction being the specific focus on AI training in this Executive Order. An Advanced Notice of Proposed Rulemaking was issued following the 2021 Executive Order, but no subsequent rulemaking steps have since followed. Ultimately, the practical implications of these requirements are potentially very broad, as the language suggests that U.S. IaaS providers may have to adopt what effectively amounts to a know-your-customer program for cloud services.
C. Guarding Against AI-Specific Cybersecurity Risks
Consistent with the Order’s overarching purpose of “addressing AI systems’ most pressing security risks,” the Order makes various references to cybersecurity risks and cyber risk management. For example, the Order directs initiatives to seek industry consensus for auditing AI for potential cyber harms and a description of cybersecurity measures taken to protect information about weights in particular models. It further identifies specific risk mitigation with respect to critical infrastructure, as well as the financial services and defense sectors. These requirements include federal agency assessments of potential AI-related cybersecurity vulnerabilities and dissemination of best practices for guarding against cybersecurity risks. In particular:
- Critical Infrastructure. The Order requires each federal agency to evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure within 90 days. Critical infrastructure is defined by reference to the Patriot Act, which outlines the following sectors: agriculture, food, water, public health, emergency services, government, defense industrial base, information and telecommunications, energy, transportation, banking and finance, chemical industry, and postal and shipping. Additionally, the Secretary of Homeland Security, in coordination with the Secretary of Commerce and Sector Risk Management Agencies, must incorporate the NIST AI Risk Management Framework and other appropriate security guidance within 180 days for critical infrastructure. The Secretary of Homeland Security also must establish an AI Safety and Security Board comprising experts from the private sector, academia, and the government, as appropriate, to provide advice, information, and recommendations for improving security, resilience, and incident response for AI used in critical infrastructure.
- Financial Services. The Order requires the Secretary of Treasury to issue within 150 days a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
- Defense. For all national security systems, the Order requires the Secretary of Defense and the Secretary of Homeland Security to develop plans for and conduct an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities to aid in the discovery of vulnerabilities in U.S. systems, software, and networks. They must also issue a report on the plans and operational projects required by the Order. The Secretary of Defense is tasked with carrying out these actions for national security systems, and the Secretary of Homeland Security is tasked with non-national security systems.
D. Synthetic Content & Labeling
The Secretary of Commerce, together with other relevant agencies, must within 240 days submit a report identifying standards, tools, methods, and practices for (i) authenticating content and tracking its provenance, (ii) labeling synthetic content, such as by watermarking, (iii) detecting synthetic content, (iv) preventing generative AI from producing child sexual abuse material (CSAM) or non-consensual intimate imagery of real individuals, (iv) testing software for such purposes, and (vi) auditing and maintaining synthetic content. Then, within 180 days, the Secretary of Commerce must issue guidance for federal agencies based on this report that the Federal Acquisition Regulatory Council must consider in amending the Federal Acquisition Regulation.
E. AI in Certain Contexts, Model Weights, and Federal Data for Training
The Order also directs federal agencies to issue reports and solicit input on other topics related to the development and use of AI, including with respect to the misuse of AI related to chemical, biological, radiological, or nuclear (CBRN) threats, the public availability of model weights for dual-use foundation models, and the use of federal data for training.
- Reducing Risks at the Intersection of AI and CBRN Threats. The Order outlines several requirements to minimize the risk that AI is misused to assist in the development or use of CBRN threats through reports and studies. For example, all agencies that fund life-sciences research must establish that as a requirement of funding, synthetic nucleic acid procurement is conducted through providers or manufacturers that adhere to a risk-reduction framework developed by the Director of OSTP in consultation with government agencies. The Order requires that the Director of OSTP establish the framework within 180 days of the Order, and agencies must implement the funding requirement within 180 days of the framework’s establishment.
- Soliciting Inputs on Dual-Use Foundation Models with Widely Available Model Weights. After acknowledging that making model weights for dual-use foundation models publicly available can result both in societal benefits and security risks, the Order directs the Secretary of Commerce, in consultation with the Secretary of State, to solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on the appropriate policy and regulatory approaches for dual-use foundation models, including risks associated with public model weights, benefits to innovation and research, and potential voluntary, regulatory, and international mechanisms to minimize apparent risks.
- Promoting Safe Release and Preventing the Malicious Use of Federal Data for AI Training. The Order outlines steps to expand the availability of data in machine-readable formats for model training. To further this objective, the Chief Data Officer Council must develop guidelines for security reviews of federal data and conduct a security review of all data assets within 270 days.
II. Protecting Patients, Workers, Consumers, Passengers, & Students
The Order addresses the impact of AI on patients, workers, consumers, passengers, and students, and outlines requirements for federal agencies intended to minimize potential risks to these populations. It also includes broader provisions related to the civil rights impacts of AI.
A. AI in the Healthcare and Life Sciences Sectors
The Executive Order contains a number of provisions aimed at advancing the responsible and safe deployment and use of AI in the healthcare and life sciences sectors, including:
- HHS AI Task Force. To help advance the responsible use of AI in the healthcare and life sciences sector, the Department of Health and Human Services (HHS) must establish an HHS AI Task Force to develop policies and frameworks for AI-enabled health care technologies. The Task Force must be established within 90 days and, within a year of its creation, identify appropriate guidance and resources in the following areas:
a) development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing, taking into account appropriate human oversight of the application of AI-generated output;
b) long-term safety and real-world performance monitoring of AI-enabled technologies in the health and human services sector;
c) incorporation of equity principles in AI-enabled health and human services technologies and monitoring of algorithmic performance against discrimination and bias in existing models;
d) incorporation of safety, privacy, and security standards into the software development lifecycle;
e) development and maintenance of documentation to help users determine if AI is appropriate and safe for use in local settings;
f) work to be done with state, local, Tribal, and territorial health agencies to advance positive use cases; and
g) identify AI uses to promote workplace efficiency and satisfaction in the health and human services sector, including to reduce administrative burdens.
- Regulation of the Use of AI in Drug Development Processes. In addition to the above topics, within 365 days, HHS must develop a strategy for regulating the use of AI in drug development processes that addresses issues such as principles for appropriate regulation throughout each phase of drug development, areas where future rulemaking or guidance may be necessary, and budget necessary for a regulatory system.
- Evaluation of AI Quality. Within 180 days, the Secretary of HHS must develop a strategy to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality. Additionally, within 365 days, HHS must establish an AI safety program that, in part, establishes a framework for identifying and capturing clinical errors resulting from AI and disseminates recommendations based on captured data.
- Bias and Discrimination. Within 180 days, HHS must consider actions appropriate to advance the prompt understanding of, and compliance with, federal nondiscrimination laws related to the use of AI.
- Grantmaking. The Order directs HHS to identify and prioritize grantmaking to support responsible AI development and use to help advance responsible AI innovation by healthcare technology developers that benefits patients and healthcare professionals.
Our Covington Digital Health Team plans to issue further analysis of the Executive Order’s potential impacts on the use of AI in the healthcare and life sciences sector, including vis-à-vis health-related regulations, policies and initiatives already in place.
B. Supporting Workers, Consumers, Passengers, and Students
The Order sets forth a number of requirements for federal agencies related to the impact of AI on workers, consumers, passengers, and students across a number of sectors.
- Workers. The Order requires several studies and reports on workforce disruption caused by AI, including principles and best practices developed by the Secretary of Labor (in consultation with outside entities, including labor unions and workers) for employers that could be used to mitigate AI’s potential harms to employee well-being and maximize potential benefits (e.g., around job displacement and career opportunities, job quality, implications of AI-related collection of information). Additionally, the Secretary of Labor is empowered to publish guidance for federal contractors “regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.”
- Education. The Order directs the Secretary of Education to develop resources, policies, and guidance regarding the safe, responsible, and nondiscriminatory uses of AI in education, including the impact of AI on vulnerable and underserved communities.
- Communications. The Federal Communications Commission is encouraged in the Order to consider actions addressing AI’s effects on communications networks and consumers.
C. Promoting Privacy, Equity, and Civil Rights
The Order further encourages federal agencies to take steps to promote privacy, equity, and civil rights in the use of AI.
- Criminal Justice. The Order directs the Attorney General to lead the development of a report addressing AI in the criminal justice system with an emphasis on addressing unlawful discrimination and other harms that may be exacerbated by AI, including by providing guidance, technical assistance, and training to investigators and prosecutors on best practices for investigating and prosecuting civil rights violation and discrimination related to automated systems, including AI.
- Civil Rights in the Broader Economy & Administration of Benefits. The Order directs housing and credit agencies to address unlawful discrimination, including by issuing additional guidance to address the use of tenant screening systems, algorithms to facilitate advertising delivery, and underwriting and credit models. Moreover, the Order directs federal agencies to use their respective civil rights and civil liberties authorities to address unlawful discrimination and other harms resulting from uses of AI in federal benefits programs. The Order additionally directs the Attorney General to convene the heads of federal civil rights offices to discuss comprehensive use of their respective authorities and offices to address discrimination and bias attributable to AI systems. Coordinated enforcement actions may help develop common regulatory standards.
- Competition. The Order directs agencies to promote competition in AI by “addressing risks arising from concentrated control of key inputs, taking steps to stop unlawful collusion and prevent dominant firms from disadvantaging competitors, and working to provide new opportunities for small businesses and entrepreneurs.” The Order further encourages the Federal Trade Commission (FTC) to consider using its rulemaking authority to ensure fair completion in the AI marketplace, which follows the FTC’s guidance addressing potential competition concerns related to generative AI and likely references the FTC’s ability to promulgate rules to address “unfair methods of competition” under Sections 5 and 6(g) of the FTC Act.
- Privacy. The Order outlines several actions to mitigate privacy risks “potentially exacerbated by AI.” For example, the Director of the OMB must evaluate and take steps to identify commercially available information procured by agencies (particularly if it includes personally identifiable information) and issue a request for information to inform potential revisions to guidance implementing privacy provisions of the E-Government Act of 2002. Additionally, the Order directs agencies to advance research, development, and implementation related to privacy enhancing technologies.
III. Promoting Highly Skilled Workers & Federal Governance
The Order also outlines a number of other requirements related to attracting and retaining highly skilled AI workers and coordinating federal AI policy.
- High-Skilled Workers. The Order describes measures intended to attract and retain international talent in AI. Among other steps, the Order requires the Secretaries of State and Homeland Security to streamline visa processing and consider initiating a rulemaking to establish new criteria for certain visas. The Order also seeks to accelerate hiring of federal workers that are highly skilled in AI.
- Federal Governance. The Order requires every federal agency to designate a Chief AI Officer to coordinate that agency’s AI policies. Some agencies will also be required to create internal Artificial Intelligence Governance Boards to manage AI-related issues through senior leaders across the agency.
IV. International Leadership
The Order directs the Secretary of State to lead development of an international framework to manage AI’s risks and benefits and establish a plan for global engagement on prompting and developing AI standards. As part of this effort, agencies including the State Department, United States Agency for International Development, and Department of Homeland Security will be responsible for developing resources and research agendas for the development and use of AI beyond the United States.
* * *
The Order establishes the White House Artificial Intelligence Council to coordinate activities across the federal government. The Deputy Chief of Staff for Policy, Bruce Reed, will lead the White House Artificial Intelligence Council’s efforts to implement the Order across the federal government. The table below summarizes the lead agencies and timeframes for the key actions from the Executive Order described above. It does not include information for actions that are not included in the above summary.
Lead Agency
|
Action
|
Timeframe
|
Section
|
Department of Commerce
|
Require that companies developing certain dual-use foundation models furnish specified information to the government.
|
Within 90 days of the Order
|
4.2
|
Department of Commerce
|
Invoke the Defense Production Act to require entities that acquire, develop, or possess potential large-scale computing clusters to report the existence, location, and computing capacity of those clusters.
|
Within 90 days of the Order
|
4.2
|
Department of Commerce
|
Propose regulations requiring that U.S. IaaS providers submit a report to Commerce when a customer transacts with the provider to train an AI model that could be used in malicious cyber-enabled activities.
|
Within 90 days of the Order
|
4.2
|
Department of Commerce
|
Propose regulations for U.S. IaaS providers to ensure that foreign resellers of IaaS products verify the identities of foreign account holders.
|
Within 180 days of the Order
|
4.2
|
Department of Commerce
|
Submit a report identifying tools, methods, and practices related to managing risks from synthetic content.
|
Within 240 days of the Order
|
4.5
|
Department of Commerce
|
Issue guidance for agencies that the Federal Acquisition Regulatory Council must consider in amending the Federal Acquisition Regulation.
|
Within 180 days of the publication of the report on synthetic content
|
4.5
|
Department of Commerce
|
Solicit public input on regulatory approaches to dual-use foundation models for which model weights are available, and develop a report based on that input.
|
Within 270 days of the Order
|
4.6
|
National Institute for Standards and Technology
|
Publish guidelines and best practices with the aim of promoting consensus industry standards for developing and deploying safe AI.
|
Within 270 days of the Order
|
4.1
|
National Institute for Standards and Technology
|
Publish guidelines for red-teaming tests to assess and manage safety, security, and trustworthiness.
|
Within 270 days of the Order
|
4.1
|
Department of Defense
|
Plan and conduct a pilot project to identify and deploy AI capabilities to aid in discovery of vulnerabilities in U.S. systems.
|
Pilot required within 180 days of the Order, report on pilot required within 270 days of the Order
|
4.3
|
Department of Energy
|
Develop and implement a plan for the development of AI model evaluation tools and AI testbeds.
|
Within 270 days of the Order
|
4.1
|
Department of Health and Human Services
|
Establish an HHS AI Task Force to develop policies and frameworks for AI in healthcare.
|
Task Force must be established within 90 days of the Order, must develop guidelines within 365 days of its creation
|
8
|
Department of Health and Human Services
|
Develop a strategy to determine the quality of AI-enabled technologies in the health and human services sector, and to assess their compliance with federal discrimination law.
|
Within 180 days of the Order
|
8
|
Department of Health and Human Services
|
Develop a strategy for regulating the use of AI in drug development processes.
|
Within 365 days of the Order
|
8
|
Department of Homeland Security
|
Incorporate the NIST AI Risk Management Framework and other appropriate security guidance into guidelines for critical infrastructure owners and operators.
|
Within 180 days of the Order
|
4.3
|
Department of Housing and Urban Development
|
Issue guidance related to unlawful discrimination related to the use of AI.
|
Within 180 days of the Order
|
7.3
|
Department of Justice
|
Submit to the President a report that addresses the use of AI in the criminal justice system.
|
Within 365 days of the Order
|
7.1
|
Department of Labor
|
Develop best practices and several studies on AI-related workforce disruption.
|
Within 180 days of the Order
|
6
|
Office of Management and Budget
|
Issue a request for information to inform potential revisions to guidance implementing the E-Government Act of 2002.
|
Within 180 days of the Order
|
9
|
Office of Management and Budget
|
Issue guidance specifying that agencies must designate a Chief AI Officer.
|
Guidance must be issued within 180 days of the Order; agencies must designate Chief AI Officers within 60 days of the issuance of the guidance
|
10.1
|
Chief Data Officer Council
|
Develop initial guidelines for security reviews of Federal data.
|
Within 270 days of the Order
|
4.7
|
State Department
|
Take steps to streamline visa processing for noncitizens with critical technical expertise.
|
Within 90 days of the Order
|
5.1
|
State Department
|
Consider initiating a rulemaking to establish new criteria for certain visas.
|
Within 180 days of the Order
|
5.1
|
Department of the Treasury
|
Issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
|
Within 150 days of the Order
|
4.3
|
All agencies that fund life-sciences research
|
Establish that as a requirement of funding, synthetic nucleic acid procurement is conducted through providers that adhere to a risk-reduction framework.
|
Within 180 days of the establishment of a risk-reduction framework to be developed by the Director of OSTP within 180 days of the Order
|
4.4
|
All Federal agencies with authority over critical infrastructure
|
Evaluate and provide to DHS an assessment of AI-related risks to critical infrastructure.
|
Within 90 days of the Order and at least annually thereafter
|
4.3
|