On October 24, 2024, the White House issued a National Security Memorandum on the use of AI models and AI-enabled technologies in national security systems and for military or intelligence purposes (“AI NSM”). The AI NSM fulfills § 4.8 of the White House’s October 2023 Executive Order 14110 (“AI Executive Order”), which requires White House national security officials to develop and submit an AI NSM to guide adoption of AI capabilities in support of U.S. national security and address potential uses of AI by adversaries and other foreign actors. The AI NSM is the latest in a series of recent executive branch actions to implement the AI Executive Order, including the Office of Management and Budget’s (“OMB’s”) March 2024 Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (“March 2024 OMB Memo”), which we have previously covered here, and October 2024 Memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government (“October 2024 OMB Memo”), summarized in our recent client alert here.
Acknowledging that the current “paradigm shift within the AI field . . . has occurred mostly outside of Government,” the AI NSM directs the U.S. Government to “act with responsible speed and in partnership with industry, civil society, and academia to make use of AI capabilities in service of the national security mission,” while ensuring “the safety, security, and trustworthiness of American AI innovation writ large.” Failure to act, the AI NSM warns, “risks losing ground to strategic competitors,” undermining U.S. foreign policy objectives, and “erod[ing] safety, human rights, and democratic norms worldwide.” To that end, the AI NSM outlines the goals of: (1) directing actions to strengthen and protect the U.S. AI ecosystem; (2) improving the safety, security, and trustworthiness of AI systems developed and used in the U.S.; (3) enhancing the U.S. Government’s effective adoption of AI in service of the national security mission; and (4) minimizing the misuse of AI worldwide.
The AI NSM’s requirements apply to elements of the Intelligence Community, and to any agency (other than the Executive Office of the President, the Government Accountability Office, or the Federal Election Commission) that uses AI as a component of a “national security system.” A national security system is generally defined by the AI NSM to mean an information system that involves intelligence activities, cryptologic activities related to national security, command and control of military forces, equipment integral to weapons or weapons systems, or the fulfillment of military or intelligence missions, or an information system that is classified in the interest of national defense or foreign policy by Executive Order or an Act of Congress. The AI NSM will therefore directly impact companies that support these systems, as it outlines a number of requirements applicable to them. The AI NSM will also likely have broader impacts outside of the government acquisition context, including with regard to the development and testing of AI models and the Government’s investments in and support of emerging AI technologies.
Following the 2024 U.S. elections, the incoming Trump Administration, Republican-controlled Senate, and likely Republican-controlled House are certain to impact the implementation of the AI NSM and the Biden Administration’s other AI initiatives. While President-elect Trump has stated that he will rescind the 2023 AI Executive Order, it remains to be seen whether the AI NSM will be rescinded, replaced, or maintained. Notably, in 2019, the first Trump Administration issued Executive Order 13859, summarized in our blog post here, which directed the OMB to establish guidance on federal agency use of AI and called for an “action plan” for protecting U.S. AI technologies critical to U.S. national security interests from foreign competitors and adversaries. These and other AI policy developments during the first Trump Administration, including the National AI Initiative Act, the AI in Government Act, and a 2020 Executive Order that set out principles for agency uses of AI, suggest that there may be some continuities between the two administrations’ approaches to AI policy.
I. U.S. AISI Frontier AI Safety Testing and Proactive AI Testing Infrastructure
Section 3.3 of the AI NSM outlines proactive safety “testing infrastructure” and standards to assess AI risks while “preserving the United States AI leadership.” This section directs the Department of Commerce (“Commerce”), acting through NIST’s U.S. AI Safety Institute (“AISI”) as the primary U.S. point of contact with private sector AI developers, to establish voluntary, unclassified, pre-deployment safety testing of frontier AI models. This safety testing must assess risks related to cybersecurity, biosecurity, chemical weapons, system autonomy, human rights, civil rights, and civil liberties. However, according to § 3.3(c), this capability does not extend to assessments of nuclear risks, which are delegated to the Department of Energy (“DOE”).
AISI’s frontier AI safety testing infrastructure does not preclude agencies from performing their own evaluations of AI systems, including tests performed before systems are released to the public and for the purposes of evaluating suitability for procurement. Notably, AISI’s safety testing responsibilities do not apply to AI systems used for national security purposes. As discussed below, testing and evaluation of AI systems in national security contexts are governed by the AI Framework established in § 4.2(e).
a. Preliminary Testing of Frontier AI Models
Subject to private sector cooperation, AISI must pursue voluntary preliminary testing of at least two frontier AI models prior to and following their public deployment or release, in order to evaluate national security threats. The testing must assess model capabilities to “aid offensive cyber operations, accelerate development of biological and/or chemical weapons, autonomously carry out malicious behavior, automate development and deployment of other models with such capabilities, and give rise to other risks identified by AISI.” AISI must share feedback on risks and mitigations with the Assistant to the President for National Security Affairs (“APNSA”), interagency counterparts, and the model developers prior to deployment.
Relatedly, in August 2024, AISI announced “first-of-their-kind” Memoranda of Understanding with two U.S. AI companies to collaborate on AI safety research, testing, and evaluation. The agreements allow AISI to “receive access to major new models from each company prior to and following their public release,” with the goal of enabling “collaborative research on how to evaluate capabilities and safety risk,” including risk mitigation methods. AISI intends to collaborate with the UK AI Safety Institute to provide feedback on model safety improvements.
b. General Guidance on Testing and Risk Management
AISI must issue guidance for AI developers on testing, evaluating, and managing risks of dual-use foundation models, “building on guidelines issued pursuant to subsection 4.1(a) of Executive Order 14110.” In July 2024, NIST took initial steps to fulfill § 4.1(a) with the release of the initial public draft guidelines on “Managing Misuse Risks for Dual-Use Foundation Models.” The AISI testing guidance required by the AI NSM must build on this guidance by addressing: (1) how to measure capabilities relevant to biological and chemical weapons or automated offensive cyber operations, (2) how to address societal risks like misuse to harass or impersonate others, (3) how to develop mitigation measures to prevent malicious or improper use, (4) how to test the efficacy of safety and security mitigations, and (5) how to apply risk management practices throughout the development and deployment lifecycle. The National Security Agency (“NSA”), DOE, and the Department of Homeland Security (“DHS”) are instructed to perform “complementary voluntary classified testing in appropriate areas of expertise,” as discussed in Part II below.
AISI is also required to recommend benchmarks or other methods for assessing AI capabilities and limitations in science, math, code generation, general reasoning, and other categories of activity AISI deems relevant to “assessing general-purpose capabilities that may affect national security and public safety.” Additionally, the AI NSM directs AISI to serve as the primary point of contact for communications with developers, including communicating determinations that an AI developer’s model has capabilities that could harm public safety “significantly,” as well as any recommendations for risk mitigation.
II. Agency Sector-Specific AI Testing for Cyber, Nuclear, and Radiological Risks
a. Sector-Specific AI Evaluations for Cyber, Nuclear, and Radiological Risks
Section 3.3(f) of the AI NSM requires agencies to collaborate with Commerce, acting through AISI, to implement evaluations of AI systems specific to cyber, nuclear, and radiological risks. All agencies that “conduct or fund safety testing and evaluation of AI systems” are required to share the results with AISI within 30 days of completion, consistent with protections for classified and controlled information.
Additionally, the AI NSM directs the NSA, through its AI Security Center (“AISC”) and in coordination with AISI, to develop the capability to perform “rapid, systematic, classified testing” of AI models’ potential to “detect, generate, and/or exacerbate offensive cyber threats.” These evaluations must be designed to assess the degree to which AI systems, if misused, could “accelerate offensive cyber operations.”
DOE is similarly directed to develop, in coordination with AISI and NSA, capabilities for rapid, systematic testing of AI models’ potential to generate or exacerbate nuclear and radiological risks. This initiative must involve creating and maintaining infrastructure for both classified and unclassified testing, including the use of “restricted data and relevant classified threat information,” automated evaluation processes, an interface for human-led red-teaming, and secure mechanisms for transferring government and proprietary models. As part of this initiative, DOE is required to complete initial evaluations of an AI model’s nuclear and radiological risks within 30 days of the model’s availability and to submit, at least annually, a report to the President through APNSA that includes evaluation findings, recommendations for corrective actions, and an assessment of the adequacy of the tools and methods used to inform evaluations.
b. Classified Evaluations to Reduce Chemical and Biological AI Risks
Section 3.3(g) of the AI NSM directs the U.S. Government to reduce chemical and biological risks that could emerge from AI through “classified evaluations of advanced AI models’ capacity to generate or exacerbate deliberate chemical and biological threats.” As part of this initiative, DOE, DHS, and AISI, in consultation with the Department of Defense (“DOD”) and other relevant agencies, are required to develop a roadmap for future classified evaluations of advanced AI models, shared with APNSA. This roadmap must outline the “scope, scale, and priority of classified evaluations,” ensure proper safeguards and the secure handling of sensitive and/or classified information during testing, and establish sustainable methods for implementing evaluation methodologies.
Furthermore, DOE is required to “establish a pilot project to provide expertise, infrastructure, and facilities capable of conducting classified tests” for chemical and biological AI risks.
Upon publication of AISI’s biological and chemical safety guidance, all agencies developing relevant dual-use foundation AI models that are (1) made available to the public and (2) significantly trained on biological or chemical data must incorporate this guidance into their practices.
In addition, DOD, the Department of Health and Human Services (“HHS”), DOE, DHS, the National Science Foundation (“NSF”), and other relevant agencies involved in the development of AI systems substantially trained on biological and chemical data are instructed to prioritize biosafety and biosecurity by:
- Developing tools to evaluate virtual chemical/biological research and technologies;
- Creating algorithms to monitor and screen synthesized nucleic acids;
- Building secure and reliable software frameworks to support new biotechnologies;
- Screening full data streams or orders from cloud-based labs and bio-manufacturing facilities; and
- Developing strategies to mitigate risks, including the creation of medical countermeasures.
Finally, NSF, in coordination with DOD, AISI, HHS, DOE, the Office of Science and Technology Policy (“OSTP”), and other relevant agencies, must convene academic research institutions and scientific publishers to develop “voluntary best practices and standards for publishing computational biological and chemical models, data sets, and approaches.” This effort aims to address AI applications “that could contribute to the production of knowledge, information, technologies, and products that could be misused to cause harm,” in line with activities outlined in the 2023 AI Executive Order.
III. National Security AI Risk Management Framework
To provide appropriate safeguards, accountability, and control in the use of AI for national security, § 4.2 of the AI NSM establishes a set of AI governance and risk management practices for national security uses. These practices, outlined in the AI NSM’s companion “Framework to Advance AI Governance and Risk Management in National Security” (“AI Framework”), are intended to “serve as a national security-focused counterpart” to the March 2024 OMB Memo and its minimum risk management practices for rights-impacting and safety-impacting AI outside the national security context. Accordingly, these practices apply to agencies that use AI as part of a national security system. Although there are similarities between this framework and the principles in the March 2024 OMB Memo, the AI Framework does not apply to agency acquisitions or uses of AI for non-national security systems.
The AI Framework establishes a broad set of governance practices and safeguards, organized into four “pillars”: (1) AI use restrictions, (2) minimum risk management practices for “high-impact” and “federal personnel-impacting” AI, (3) cataloguing and monitoring the use of AI, and (4) agency workforce training and accountability in the development and use of AI. Although these pillars share similarities with the requirements of the March and October 2024 OMB Memos and the OMB’s August 2024 Agency AI Reporting Guidance, the Framework also contemplates a number of novel requirements and restrictions for AI used in national security systems. These Framework pillars collectively satisfy the AI governance and risk management requirements of § 4.2 of the AI NSM.
a. Prohibited, High-Impact, and Federal Personnel-Impacting AI Uses
Prohibited AI Use Cases. Pillar I of the AI Framework sets out a list of prohibited AI use cases that pose “unacceptable levels of risk” or that could violate “domestic or international law obligations.” Specifically, agencies may not use any AI system “with the intent or purpose” to:
- Profile, target, or track individuals’ exercise of legal and constitutional rights
- Unlawfully suppress or burden free speech rights or the right to an attorney
- Unlawfully disadvantage individuals based on protected categories
- Detect, measure, or infer emotional states using personal data
- Infer or determine individuals’ personal characteristics based solely on biometric data
- Determine collateral damage and casualty estimations “prior to kinetic action” without rigorous testing and assurance and oversight by trained personnel
- Adjudicate or render final determinations of immigration classification, entry, or admission into the United States
- Produce and share intelligence based solely on AI outputs without notice to readers
- Remove human-in-the-loop oversight for actions “critical to informing and executing decisions by the President to initiate or terminate nuclear weapons employment”
Some of these prohibited use cases parallel those in pending legislation such as the PREPARED for AI Act, which would prohibit agencies from developing or procuring AI for emotion recognition, social scoring, inference of personal characteristics, or other uses deemed by agencies to pose unacceptable risks. By contrast, the March and October 2024 OMB Memos do not contain categorical prohibitions related to non-national security uses of AI.
High-Impact AI Use Cases. In addition to outlining prohibited AI uses, Pillar I also defines certain categories of AI that may only be deployed by agencies with specific safeguards and limitations. “High-impact” AI use cases are defined by the AI Framework to “include AI whose output serves as a principal basis for a decision or action that could exacerbate or create significant risks to national security, international norms, democratic values, human rights, civil rights, civil liberties, privacy, or safety.”
While agencies must evaluate each use of AI to determine whether it meets this definition, the AI Framework provides a “non-exhaustive list” of high-impact activities that, if AI is used to control or significantly influence their outcome, are “presumed to be high impact.” These include the list of AI uses that are “presumed to be safety-impacting” under Appendix I of the March 2024 OMB Memo if they occur in the United States, impact U.S. persons, or affect U.S. immigration processes, entry, or admission. Other presumed high-impact AI use cases include:
- Real-time tracking or identifying individuals using biometrics for military or law enforcement action
- Classifying individuals as known or suspected terrorists, insider threats, or other national security threats to inform decisions affecting certain rights and opportunities
- Determining immigration classification or entry or admission into the United States
- Developing, testing, managing, or decommissioning sensitive chemical, biological, radiological, or nuclear materials, devices, or systems with the “risk of being unintentionally weaponizable”
- Deploying malicious software that allows AI to write code without human oversight in ways that risk “unintended performance or operation, spread autonomously, or cause physical damage to or disruption of critical infrastructure”
- Using AI as a “sole means” of producing and sharing “finished intelligence analysis”
Federal Personnel-Impacting AI Use Cases. Finally, Pillar I of the AI Framework establishes a third category of AI use cases—“federal personnel-impacting” AI—that, like high-impact AI, requires agencies to implement certain safeguards prior to deployment. Federal personnel-impacting AI is defined to include “AI whose output serves as a significant basis for a decision or action resulting in a legal, material, binding, or similarly significant effect” on military service members, federal government workers, or individuals offered employment by a federal agency.
Agencies must also review each AI use case to determine if it qualifies as federal personnel-impacting. The AI Framework also lists AI uses that are “automatically presumed to impact Federal personnel” if the AI is used to control or significantly influence the outcomes of:
- Hiring decisions, including determining pay or benefits
- Decisions to promote, demote, or terminate employees
- Decisions determining job performance, physical health, or mental health diagnoses or outcomes for U.S. government personnel
The AI Framework requires Department Heads to add new AI use cases to these lists of prohibited, “presumed” high-impact, or federal personnel-impacting AI uses, as needed, and to maintain unclassified public lists of AI uses deemed prohibited or high-impact.
b. Minimum Risk Management Practices and Safeguards for High-Impact and Federal Personnel-Impacting AI
Just as the March 2024 OMB Memo requires agencies to implement minimum risk management practices for safety- and rights-impacting AI in non-national security contexts, Pillar II of the AI NSM’s AI Framework establishes a “minimum baseline” of safeguards for managing risks arising from national security uses of AI that are deemed high-impact or federal personnel-impacting.
Minimum Risk Management Practices for High-Impact AI. Agencies must implement a large set of testing, documentation, oversight, and other safeguards prior to deploying high-impact AI. Similar to the March 2024 OMB Memo, the AI Framework requires agencies to (1) conduct “AI risk and impact assessments,” addressing the intended purpose and expected benefit, potential risks and mitigations, and the quality and appropriateness of relevant data; (2) perform testing in “realistic” contexts; and (3) obtain independent evaluations of the intended purpose and deployment.
Additionally, agencies must identify and mitigate unlawful discrimination, harmful bias, overreliance on AI, and other emerging risks; provide appropriate training and assessments for operators; ensure human oversight of AI decisions and actions; and conduct regular monitoring, testing, and human reviews. Finally, agencies that deploy high-impact AI must maintain appropriate internal channels for reporting improper AI uses and obtaining senior-leadership approval for AI that could pose “significant degrees of risk,” harm the reputation or foreign policy interests of the United States, or significantly affect “international norms of behavior.”
Although these minimum risk management practices are required only for high-impact AI uses, the AI Framework encourages agencies to apply these practices to all AI use cases “to the extent practicable and appropriate.”
Procedural Safeguards for Federal Personnel-Impacting AI. Pillar II also requires agencies that deploy federal personnel-impacting AI to implement certain safeguards. Specifically, such agencies must (1) consult and incorporate feedback from the workforce when developing and deploying federal personnel-impacting AI; (2) notify and obtain consent from affected individuals; (3) notify individuals when AI is used to inform an adverse employment-related decision or action that concerns them; and (4) provide timely human consideration and potential remedy when individuals appeal or dispute AI decisions.
c. AI Inventories, Data Management, and Oversight
The AI NSM’s AI Framework establishes agency inventory and documentation requirements similar to the OMB’s Agency AI Reporting Guidance. Specifically, Pillar III of the AI Framework requires agencies to conduct annual inventories of all high-impact AI use cases, which must be reported to APNSA and must include descriptions of the AI’s purpose, benefits, and risks, and the agency’s mitigations.
Pillar III also requires agencies to establish or update data management policies and procedures to “prioritiz[e] enterprise applications and account[] for the unique attributes of AI systems,” with “special consideration” for high-impact AI. Updated data management policies and procedures must address evaluations of AI training data and related risks, best practices and standards for training data and prompts, and the handling of AI models with multiple uses or trained on sensitive, inaccurate, or ill-gotten information. These data management policies must also include guidelines for using AI to make automated, mission-critical determinations and in ways that protect civil liberties, privacy, and human rights, and standards for evaluating and auditing AI.
Finally, Pillar III of the AI Framework implements certain internal agency oversight and transparency requirements previously established by the March 2024 OMB Memo. As outlined in the March 2024 OMB Memo, agencies must appoint Chief AI Officers (“CAIOs”) with the necessary skills and expertise to provide advice, institute governance and oversight, and manage a host of other responsibilities related to agencies’ uses of AI and compliance with the AI NSM. Agencies must also establish AI Governance Boards for reviewing and mitigating barriers to AI development and use, and must designate officials to provide oversight of agency AI activities, such as reviewing, reporting, and documenting incidents of misuse. On at least an annual basis, agencies’ privacy and civil liberties officers or other oversight officials must submit reports on AI oversight activities to the heads of their respective agencies, in an unclassified form “to the greatest extent practicable.”
d. Agency Workforce Training and Accountability for AI
To ensure that agencies have sufficient training and expertise to carry out the functions above, agencies must establish workforce training requirements and guidelines for the responsible use and development of AI, including AI risk management training and AI training for privacy and civil liberties officers.
Such policies and procedures must be updated as needed to ensure adequate accountability. Agencies may not deploy AI systems without updated accountability policies and procedures, which must identify personnel responsible for assessing risks across the AI lifecycle, establish mechanisms for holding personnel accountable for contributions to and uses of AI decisions, require documentation and reporting, and provide channels for reporting AI misuse, investigations, and corrective actions.
IV. Acquisition and Procurement of AI for National Security Purposes
a. Enabling Effective and Responsible Use of AI
To “accelerate the use of AI in service of its national security mission,” § 4.1(d) of the AI NSM directs the U.S. Government to implement “coordinated and effective acquisition and procurement systems” for AI. This includes an increased capacity to “assess, define, and articulate AI-related requirements for national security purposes” and enhanced accessibility for AI companies that “lack significant prior experience working with the United States Government.”
Furthermore, § 4.1(e) outlines specific actions to support these goals. DOD and the Office of the Director of National Intelligence (“ODNI”), in coordination with OMB and other relevant agencies, must establish a working group focused on procurement issues for DOD, the Intelligence Community, and national security systems. This working group may consult with the NSA Director in forming “recommendations for acquiring and procuring AI” for national security systems.
The AI NSM requires this working group to submit written recommendations to the Federal Acquisition Regulatory Council (“FARC”) regarding regulatory changes related to DOD and Intelligence Community AI acquisitions. These recommendations should promote the objectives of:
- Establishing clear standards to assess and encourage the safety, security, and reliability of AI systems;
- Streamlining the process for acquiring AI while upholding necessary safety measures;
- Simplifying contracting procedures to make it easier for companies with limited government experience to participate while simultaneously supporting a competitive AI industry;
- Designing procurement competitions that encourage broad participation and focus on technical quality to ensure the government receives optimal value;
- Enabling agencies to share AI resources where appropriate to maximize utility across government; and
- Allowing agencies with unique mandates to adopt additional policies as needed to fulfill their specific missions.
The FARC must then consider amendments to the Federal Acquisition Regulation to codify recommendations from the working group.
Additionally, DOD and ODNI are tasked with engaging, “on an ongoing basis with diverse United States private sector stakeholders,” including AI technology and defense companies and the U.S. investor community, in order to understand emerging capabilities that could support or impact the national security mission.
b. Sharing and Interoperability of AI Functions on National Security Systems Across Agencies
Section 4.1(j) emphasizes the need for better internal coordination across the U.S. Government regarding AI use in national security systems to facilitate interoperability, resource sharing, and economies of scale offered by advanced AI models.
In turn, § 4.1(k) outlines actions to achieve these goals. DOD and ODNI must regularly issue or update guidance to improve the consolidation and interoperability of AI-related functions across national security systems in order to ensure effective coordination and resource sharing where permitted by law. This guidance must focus on:
- Recommending organizational practices that enhance AI research and deployment across multiple national security entities to create consistency in these practices wherever possible;
- Facilitating centralized efforts in research, development, and procurement of general-purpose AI tools and infrastructure to enable shared access among agencies, while safeguarding sensitive information as needed;
- Standardizing AI-related national security policies across agencies where appropriate and legally permissible; and
- Establishing protocols for sharing information between DOD and the Intelligence Community when contractor-developed AI systems present risks to safety, security, trustworthiness, or raise concerns about human rights, civil rights, civil liberties, or privacy.
c. Agency Guidance on AI Governance and Risk Management for National Security Systems
Section 4.2(g) addresses agency guidance on AI governance and risk management for national security systems. The heads of the Department of State, the Department of the Treasury, Commerce, DOD, the Department of Justice (“DOJ”), DOE, DHS, ODNI, and other agencies using AI in national security systems must issue or update guidance for AI governance and risk management aligned with the policies in the AI NSM, the AI Framework discussed above, and other applicable policies. Agencies must review and revise this guidance annually; the guidance should remain unclassified and available to the public, as appropriate, with an option for a classified annex if needed. APNSA must, in turn, organize an annual interagency meeting to promote consistency in AI governance and risk management across agencies, while respecting each agency’s unique roles and responsibilities.
Areas that APNSA must target for alignment include:
- Risk management practices for high-impact AI;
- Standards and activities for AI and AI systems, including training, testing, accreditation, security, and cybersecurity; and
- Additional matters impacting interoperability of AI and AI systems across agencies.
V. Additional Provisions
The AI NSM addresses a wide range of issues and priorities relevant to the use of AI to advance U.S. national security. In addition to the assessment, risk management, and procurement frameworks discussed above, the AI NSM directs agencies to make progress on a number of government priorities for AI, including attracting and retaining AI talent, promoting and protecting assets critical for AI infrastructure, and collaborating with U.S. allies on AI, while protecting U.S. AI-related assets from foreign adversaries.
- Attracting and Retaining AI Talent in Government. Section 3.1(c) of the AI NSM requires APNSA to convene relevant agencies to “explore actions for prioritizing and streamlining administrative processing operations for all visa applicants working with sensitive technologies.” Relatedly, § 4.1(c) directs the Intelligence Community elements and the Departments of State, Defense, Energy, Justice, and Homeland Security to review their hiring and retention policies and strategies to accelerate AI adoption, education, and training.
- Promoting AI Semiconductors and Computational Infrastructure. Recognizing that the “current paradigm of AI development depends heavily on computation resources” like AI semiconductors and AI-dedicated computational infrastructure, § 3.1(e) instructs NSF to use the National AI Research Resource (NAIRR) pilot, established pursuant to the 2023 AI Executive Order, to “distribute . . . critical assets for AI development to a diverse array of actors that otherwise would lack access to such capabilities.” The White House Chief of Staff must also coordinate efforts to “streamline permitting, approvals, and incentives for the construction of AI-enabling infrastructure, as well as surrounding assets supporting the resilient operation of this infrastructure.”
- Protecting U.S. AI from Foreign Intelligence Threats. In response to foreign state efforts to “obtain and repurpose the fruits of AI innovation in the United States to serve their national security goals,” including through the use of “gray-zone methods” to obtain U.S. AI-related intellectual property (referred to as “critical technical artifacts”), § 3.2 of the AI NSM directs ODNI to identify critical nodes and plausible risks of disruption or compromise in the AI supply chain. This section also requires the Committee on Foreign Investment in the United States (“CFIUS”) to consider whether covered transactions involve foreign access to proprietary information on AI training techniques and other proprietary insights on the creation and use of powerful AI systems.
- Rapid Development and National Security Use of AI. In addition to mitigating risks, the AI NSM aims to accelerate effective national security uses of AI. Section 4.1(g) directs DOD and the Intelligence Community to review and revise policies and procedures to enable the effective use of AI, accounting for the use of personal information or intellectual property in datasets, risks of algorithmic bias or other AI failure modes, and other issues. These agencies must also consider future “guidance that shall be developed by DOJ, in consultation with DOD and ODNI, regarding constitutional considerations raised by the IC’s acquisition and use of AI.” These changes must be consistent with national security system policies and OMB guidance governing AI security on non-national security systems.
- Co-Development and Co-Deployment of AI with Allies and Partners. To invest in and enable “co-development and co-deployment of AI capabilities with select allies and partners,” § 4.1(i) directs DOD to evaluate the feasibility of advancing the co-development and shared use of AI and AI-enabled assets with select allies and partners, including a list of foreign states for potential co-development or co-deployment and a list of bilateral and multilateral fora for outreach.
If you have any questions concerning the material discussed in this client alert, please contact the members of our Government Contracts practice.