OMB Releases Requirements for Responsible AI Procurement by Federal Agencies
October 24, 2024, Covington Alert
On October 3, 2024, the White House Office of Management and Budget (“OMB”) released Memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government (“October 2024 OMB Memo” or “Memo”), providing detailed new guidance and requirements for federal agency procurement of Artificial Intelligence (“AI”). The Memo follows dozens of federal agency actions this year to implement the White House’s October 2023 Executive Order 14110 (“AI Executive Order”), including the OMB’s March 2024 Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (“March 2024 OMB Memo”). Specifically, the October 2024 OMB Memo implements § 10.1(d)(ii) of the AI Executive Order and the Advancing American AI Act, 40 USC 11301 note, which require OMB to develop an “initial means” to ensure that agency contracts for the acquisition of AI systems and services align with the March 2024 OMB Memo and the Act. Section 5(d) of the March 2024 OMB Memo, in turn, outlined priorities for managing risks in federal AI procurement, and the October 2024 OMB Memo is the “initial means” for contractually implementing those priorities, including the responsible procurement of generative AI and AI-based biometric systems and promoting competition in AI procurement.
The requirements in the October 2024 OMB Memo apply to every agency that seeks to procure Artificial Intelligence systems or services (“AI system or service”). The term AI system, as defined by the Memo, aligns with the definition used in the Advancing American AI Act, as set forth in the Fiscal Year 2023 National Defense Authorization Act. Specifically, the term includes any data system, software, application, tool, or utility that operates in whole or in part using dynamic or static machine learning algorithms or other forms of AI, whether for researching, developing, or implementing AI technology or for integration into another system or agency business process or operational activity. The Memo does not govern AI acquired for use as a component of a National Security System or any AI incidentally used by a contractor during contract performance. (The definitions of these terms, as well as other AI-related terms, are discussed in greater detail in a Covington white paper available here.)
The Memo carves out an exception from the definition of AI systems for “common commercial product[s] within which artificial intelligence is embedded.” Unlike the Federal Acquisition Regulation’s (“FAR”) “commercial product” definition, however, the Memo’s common commercial product exception does not extend to “minor modifications” or “modifications of a type customary in the commercial marketplace.” Instead, to determine whether the exception applies, agencies must assess (1) whether the product is widely and publicly available for commercial use and (2) whether the product has “substantial non-AI purposes or functionalities.”
The Memo establishes a series of short-term deadlines for agencies to implement its requirements:
- By November 1, 2024, agencies must identify any contracts associated with the use of “rights- or safety-impacting AI” (terms which are discussed at length below).
- By December 1, 2024, agencies must modify new and existing contracts to implement the Memo’s required “acquisition practices” for rights- and safety-impacting AI, along with the March 2024 OMB Memo’s minimum risk management practices. In solicitations for AI, agencies must disclose, “where practicable,” whether the agency’s planned use of AI is rights- or safety-impacting. Agencies must also consider including contractual terms that ensure vendors provide appropriate protections, where practicable, against a general use enterprise-wide generative AI system or service being used in ways contrary to law and policy.
- By March 23, 2025, agencies must use certain acquisition practices and include certain contract terms when awarding contracts for AI systems and services pursuant to solicitations commenced after that date, and must submit written notification to OMB identifying progress made toward implementing cross-functional collaboration and any challenges or best practices. They must also submit a plan for ensuring that the Chief Artificial Intelligence Officer (“CAIO”) of each agency coordinates on AI acquisition with relevant officials, such as the Chief Information Officer, Chief Information Security Officer, Chief Financial Officer, and the Senior Official for Privacy.
The October 2024 OMB Memo requires agencies to implement procurement practices and contractual terms when procuring (1) AI systems and services in general, (2) generative AI systems and services and AI-enabled biometric systems, and (3) rights-impacting and safety-impacting AI systems and services—each of which has implications for government contractors who are currently engaged in, or are considering, providing AI to government customers.
I. General Requirements for Procuring AI Systems and Services
Sections 4(b) and 4(c) of the October 2024 OMB Memo outline the general requirements for procuring AI systems and services. Agencies are required to ensure that privacy protections are implemented throughout the procurement process to protect users’ civil rights and civil liberties. The Memo also encourages agencies to consider, in bidder evaluations, how vendors will protect personally identifiable information.
Section 4(c) of the October 2024 OMB Memo focuses on agency practices for managing performance and risks of acquired AI, including risks related to intellectual property (“IP”) rights, data management, and the appropriate protection of agency data. The Memo requires agencies to “develop an approach to IP that considers what rights and deliverables are necessary for the agency to successfully accomplish its mission . . . [and] avoids vendor lock-in.” Additionally, agencies are required to engage in “[c]areful consideration” of IP licensing rights, considering the agency’s “mission, long-term needs, and larger enterprise architecture while avoiding vendor lock-in and maximizing competition.” The Memo notes that an agency may determine that “it needs unlimited rights to certain contractor deliverables based on a long-term approach to IP.” Strategies for avoiding vendor lock-in include ensuring that contracts have language that allows agencies access to “components and their foundational code . . . for use as long as it may be necessary.”
OMB previously voiced similar concerns about vendor lock-in in the March 2024 OMB Memo, and as early as the draft policy it released for comment almost a year ago, which set out its initial plans for implementing the AI Executive Order. Government contractors should anticipate that negotiations over IP and data rights may be more contested as a result of the Memo, as agencies seek to retain expansive government data rights and “select[] the appropriate FAR or agency supplemental clauses” to do so.
The Memo also requires agencies to conduct “due diligence” of the “supply chain of vendor’s data.” This provision may widen the scope of deliverables requested of government contractors. Under the Memo’s section on “Approvals for Cybersecurity and Appropriate Protections of Agency Data,” agencies must implement contractual requirements that “facilitate the ability to obtain any documentation and access necessary to understand how a model was trained,” including by requesting “training logs from contractors” and “detailed documentation of the training procedures used for the model to demonstrate the model’s authenticity, provenance, and security.” Contractors should prepare to mark these more robust deliverables appropriately and correctly, especially given the volume of what may be asked of them, as well as the Memo’s goal of ensuring expansive government data rights in any IP-related deliverables.
In the same vein, agencies are expected to establish baseline requirements to ensure that they understand how an AI model uses agency data and are encouraged to “invest in infrastructure to evaluate software controls throughout training processes and data sources for projects.” Agencies may use and adopt NIST Special Publication 800-218, Secure Software Development Framework, in complying with these requirements. It is possible that in so doing, agencies will look to the recently released guides on software procurement, including the Cybersecurity and Infrastructure Security Agency’s “Software Acquisition Guide for Government Enterprise Consumers: Software Assurance in the Cyber-Supply Chain Risk Management (C-SCRM) Lifecycle” and the related “Secure By Demand Guide” given the detailed frameworks that those guides establish.
II. Procuring Generative AI and AI-Enabled Biometric Systems and Services
A. Generative AI
Section 4(f) of the October 2024 OMB Memo contains specific requirements for procuring general use enterprise-wide generative AI, though the Memo highlights that agencies are “strongly encouraged” to also include these requirements in all contracts for generative AI systems “where appropriate” and “to the greatest extent practicable.” “General use enterprise-wide generative AI” is defined as “generative AI, in the form of a foundation model or other widely applicable generative AI system, that is acquired for general purposes for which the details are infeasible to define prior to procurement . . . and is acquired for use by end users in more than one agency component . . . or through a contract vehicle that accommodates the requirements of more than one organizational component.” The Memo’s general use enterprise-wide generative AI requirements build on the documentation requirements for AI systems outlined above. Vendors must adequately identify, through watermarks, metadata, or otherwise, any generative AI outputs that are “not readily distinguishable from reality.” Vendors will also be required to document how the general use enterprise-wide generative AI was or will be trained and evaluated.
Additionally, vendors of general use enterprise-wide generative AI may now be required to provide documentation related to pre-deployment testing and evaluations, red-teaming results, and any steps taken to mitigate issues discovered in the course of evaluation of such generative AI. The documentation must be sufficiently detailed so that agencies can understand the “underlying technical and analytical basis for the conclusions of the evaluations, testing, or red-teaming, and reproduce the results where appropriate.” As noted above, government contractors should establish robust marking procedures to ensure that documentation provided to the government—including proprietary information—is adequately protected.
Finally, vendors of general use enterprise-wide generative AI may be required to provide “appropriate protections, where practicable, against the AI systems or services being used in ways that are contrary to law and policy.” Required protections may include methods for monitoring the general use enterprise-wide generative AI and technical safeguards that prevent uses in prohibited or sensitive contexts.
B. AI-Enabled Biometric Systems
Section 4(b)(ii) of the Memo addresses agency procurement of “AI-based biometrics,” i.e., “AI systems that identify individuals using biometric identifiers (e.g., faces, irises, fingerprints, or gait).” To address risks related to the use of AI-based biometrics, agencies must avoid using biometric systems that “rely on unreliable or unlawfully collected information.” In turn, vendors must verify that AI-based biometric systems are “sufficiently accurate to support reliable biometric identification and verification across different groups based on the results of testing and evaluation in operational contexts,” submit AI systems to NIST’s Face Recognition Technology and Facial Analytics Technical Evaluations, and provide documentation or testing results to validate the AI’s “ability to match identities” and the appropriateness of the training data.
The Memo also requires agency AI-based biometric systems to have certain properties and functionalities. Agencies must contractually require biometric systems to (a) use a “configurable minimum similarity threshold for candidate results,” (b) apply “minimum quality criteria” for input biometric data/samples, (c) return candidate matches “above the minimum similarity threshold alongside similarity scores” in one-to-many searches, and (d) maintain logs of use for auditing and compliance, “including capturing input and output data in ways that incorporate appropriate protections for PII and other data throughout the information life cycle, and limiting and restricting reuse of PII for other purposes[.]” The last criterion suggests potentially increased auditing risks for industry players who contract with government agencies for AI and AI-enabled biometric systems.
III. Procuring Rights-Impacting and Safety-Impacting AI Systems and Services
In Sections 4(d)-(e), the October 2024 OMB Memo builds upon Section 5 of the March 2024 OMB Memo, which requires agencies deploying “rights-impacting” or “safety-impacting AI” to implement a set of “minimum risk management practices,” including impact assessments and testing, ongoing monitoring and evaluation, and notice and opt-out mechanisms, by December 1, 2024. These requirements are subject to agency extensions and waivers under Sections 5(c)(ii)-(iii) of the March 2024 OMB Memo, which permit agencies to (1) request a one-year extension for implementing minimum risk management practices for specific AI systems and (2) waive one or more requirements for specific AI applications or components upon a written determination that fulfilling the requirement would increase risks to safety or rights or create an unacceptable impediment to critical agency operations.
A. Definitions of Rights-Impacting and Safety-Impacting AI
The October 2024 OMB Memo directly incorporates the March 2024 OMB Memo’s definitions of “rights-impacting AI” and “safety-impacting AI.”
“Rights-Impacting AI” refers to AI whose outputs serve as a “principal basis” for a decision or action concerning a specific individual or entity with a “legal, material, binding, or similarly significant effect” on their civil rights, civil liberties, or privacy; equal opportunities; or access to, or ability to apply for, critical government resources or services. The March 2024 OMB Memo’s Appendix I provides that agency AI is “presumed to be rights-impacting” if used to control or significantly influence the outcomes of 14 agency activities or decisions, including restricting protected speech, making predictions or risk assessments in law enforcement or immigration contexts, replicating a person’s likeness or voice without consent, or determining terms or conditions of employment.
“Safety-Impacting AI” refers to AI whose outputs produce actions or serve as a principal basis for decisions with the potential to significantly impact the safety of (1) human life or well-being, (2) the climate or environment, (3) critical infrastructure, or (4) strategic assets or resources. The March 2024 OMB Memo’s Appendix I similarly provides 14 agency activities or decisions for which the use of AI to control or significantly influence the outcome would be “presumed to be safety-impacting,” including controlling safety-critical functions in critical infrastructure, maintaining the integrity of elections and voting infrastructure, controlling hazardous chemicals or biological agents, and controlling industrial emissions and environmental impacts.
B. Transparency, Testing, Monitoring, and Performance Evaluations
Section 4(d) of the October 2024 OMB Memo requires agencies to ensure that vendors who provide rights-impacting or safety-impacting AI deliver “information and documentation necessary to monitor” the AI system’s performance and otherwise allow the agency to implement the March 2024 OMB Memo’s risk management practices. The “level of transparency” required from vendors is left to the agency’s discretion, but should be “commensurate with the risk and impact of the use case,” considering the range of potential agency use cases and whether the vendor is a developer or deployer. The Memo also provides agencies with specific categories of information to consider requiring from vendors, if necessary to assure the agency’s compliance with risk management practices. These include: (1) performance and data protection metrics, (2) the intended purpose of the AI system or service, and (3) information about training data, programmatic evaluations, and testing and input data. To the extent these categories of information are needed to implement minimum risk management practices, agencies must require the submission of these categories of information in agency solicitations or contract documents.
This same section also outlines the testing and monitoring requirements. Expanding on the March 2024 OMB Memo’s requirement that agencies institute procedures to monitor the degradation of AI functionality and changes to impacts on rights and safety, the October 2024 OMB Memo notes that “there are instances when a vendor is best equipped to carry out those activities on the agency’s behalf.” In such situations, an agency must (1) contractually require vendors to closely monitor and evaluate AI performance and risks and (2) require vendors to allow the agency to regularly do so throughout the duration of the contract. Even where vendor monitoring and evaluations are appropriate, agencies must still provide oversight and require information sufficient to comply with the agency’s risk management practices.
To facilitate these contractual oversight requirements, the Memo requires agencies to do the following:
- Use agency-defined datasets for conducting independent evaluations to determine that the AI system is fit for purpose. These datasets should not be accessible to the vendor and should be as similar as possible to real-world data.
- Contractually require vendors to provide sufficient access and time for the agency to conduct real-world testing, or require vendors to conduct and disclose such tests themselves.
- Ensure that contracts allow the agency to disclose the testing methods or results.
- Describe, in contracts, the vendor’s required testing procedures and the frequency of testing.
- If appropriate to the use case, contractually require vendors to provide the results of performance testing for algorithmic discrimination.
C. Contractual Terms for Risk Mitigation and Performance Improvements
The October 2024 OMB Memo further requires agencies to maintain the ability to update “risk mitigation options” and “prioritize performance improvement” for procured AI systems or services. To comply with this requirement, the Memo recommends that agencies consider contractual terms requiring vendors to (1) regularly monitor AI performance and rectify “any unwanted system behavior,” including model retraining or “additional mitigations” triggered by performance or event-based thresholds; (2) meet performance standards prior to deploying a new version of the AI system or service, or “roll-back” new versions that fail to meet performance standards; (3) participate in agency-sponsored program evaluations to assess implementation and effectiveness; and (4) document tools, techniques, coding methods, and testing results to promote interoperability and mitigate vendor lock-in. Citing FAR Subpart 16.4, the Memo also recommends that agencies incentivize “improved model performance through performance-based contracting” and “incentive contracts,” rather than more traditional government contracting models.
D. AI Incident Reporting
While noting that agencies have existing reporting requirements related to cybersecurity and other security incidents, the October 2024 OMB Memo also requires agencies to contractually require vendors to identify and disclose “serious AI incidents and malfunctions of the acquired AI system or service within 72 hours,” or in a “timely manner based on the severity of the incident,” after the vendor believes the incident occurred.
Notably, the Memo authorizes individual agencies to determine the criteria for “what constitutes a serious AI incident or malfunction,” which may include unexpected malfunctions, unintended outcomes harming rights or safety, serious disruptions to critical infrastructure, material damage to property, loss of life or mission-critical systems, or failure of the agency mission. Although the Memo notes that the “interagency collaboration” it requires “can support harmonized implementation” of the AI incident reporting requirement, it does not mandate such harmonization. Indeed, harmonization has not been achieved even for cybersecurity incident reporting requirements that have long been in place. Government contractors should be on the lookout for agency-specific requirements related to AI incident reporting and the cybersecurity protections imposed for the safeguarding of AI systems.
E. Notice & Appeal Rights
Finally, as part of the March 2024 OMB Memo’s minimum risk management requirements, agencies must notify individuals affected by an adverse decision resulting from the agency’s use of rights-impacting AI. Where practicable, agencies that deploy rights-impacting AI must provide timely human consideration and, if appropriate, a fallback and escalation process for individuals who appeal or contest the AI’s negative impacts on them. The October 2024 OMB Memo builds on these individual rights-based practices by requiring agencies to (1) identify, in requirements documents for rights-impacting AI, whether the use of the AI will involve notifying individuals affected by AI-enabled decisions and affording human consideration and remedy, and (2) contractually require from the vendor any additional access, information, or documentation necessary to carry out the agency’s notice and appeal procedures. Agencies are also encouraged to otherwise contractually require vendors to support the agency’s notice and appeal plans “to the greatest extent practicable.”
If you have any questions concerning the material discussed in this client alert, please contact the members of our Government Contracts practice.