The Trump administration revealed its long-awaited AI Action Plan (the plan) today, ordering the federal government to accelerate the development of artificial intelligence (AI) in the United States and “remove red tape and onerous regulation” while ensuring that AI is free of “ideological bias.” The plan’s epigraph, signed by President Trump, states that “it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance.” Doubling down on the national security, economic, and trade competition framing, the introduction of the plan, which is titled “Winning the Race,” states that “the United States is in a race to achieve global dominance in AI” and calls for the US to win this race “just like we won the space race.”
The plan presents AI as a technological breakthrough that will lead to “[a]n industrial revolution, an information revolution, and a renaissance—all at once.” It does not linger on documented AI risks, such as trust and safety, accuracy, intellectual property, privacy, cybersecurity, or bias and discrimination. Indeed, the word “safety” appears in the document just once. The plan is signed by Michael Kratsios, assistant to the president for science and technology; David Sacks, special advisor for AI and crypto; and Marco Rubio, in his capacity as assistant to the president for national security affairs.
The plan comprises three pillars: (i) Accelerate AI Innovation; (ii) Build American AI Infrastructure; and (iii) Lead in International AI Diplomacy and Security.
Pillar I: Accelerate AI Innovation
The first pillar, Accelerate AI Innovation, calls for sweeping deregulatory measures, several of them unprecedented. It requires the “Federal government to create the conditions where private-sector-led innovation can flourish.” Under the subheading “Remove Red Tape and Onerous Regulation,” the plan orders the Federal Trade Commission (FTC), an independent administrative agency created by Congress in 1914 under the FTC Act, to review “investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation.” In a marked departure from traditional practice, it also calls for the FTC to reopen, modify, or set aside existing orders, consent decrees, and injunctions “that unduly burden AI innovation.”
Echoing the recently defeated proposal for a federal moratorium on state AI regulation, the plan states, “The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds,” though it adds that “[it] should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” It requires the Office of Management and Budget to work with federal agencies that have AI-related discretionary funding programs “to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” While this language leaves much room for interpretation, it portends a policy of steering federal funding away from states whose AI policies are at odds with the Trump administration’s. It will be interesting to see how the government assesses the recent Texas AI law under this standard.
Under the subheading “Ensure that Frontier AI Protects Free Speech and American Values,” the plan orders the Department of Commerce, through the National Institute of Standards and Technology (NIST), to revise the NIST AI Risk Management Framework, a foundational policy and governance document in the AI space, to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” It further requires an update to federal procurement guidelines to “ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.”
The plan encourages the development of open-source and open-weight AI models, weighing in on a fiercely contested industry debate in which leaders such as OpenAI and Meta have advocated opposing views. It pushes for more rapid adoption of AI in sectors ranging from healthcare to energy and agriculture, advocating for what it calls “a dynamic, ‘try-first’ culture for AI across American industry” and lamenting the current “distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards.”
Recognizing the profound, and potentially ominous, implications of AI for the labor force, the plan aims to “empower American workers in the age of AI.” It requires federal agencies to enhance AI literacy, skill development, and training; conduct research to assess AI’s impact on the labor market; and leverage available discretionary funding to support rapid retraining for individuals impacted by AI-related job displacement.
Other initiatives under the first pillar include prioritizing investment in a wide range of new products powered by AI, including “autonomous drones, self-driving cars, robotics, and other inventions for which terminology does not yet exist”; increasing investment in AI research; building “world-class scientific datasets” (noting that “other countries, including our adversaries, have raced ahead of us in amassing vast troves of scientific data”); and combating deepfakes, particularly insofar as they can be used as evidence in legal proceedings.