On Monday, President Biden will issue an executive order outlining the federal government’s first regulations on artificial intelligence systems. They include requirements that the most advanced artificial intelligence products be tested to ensure they cannot be used to produce biological or nuclear weapons, and that the findings of those tests be reported to the federal government.
The testing requirements are a small but central part of what Mr. Biden, in a speech scheduled for Monday afternoon, will describe as the most sweeping government action to protect Americans from the potential risks brought by huge leaps in AI over the past several years.
The regulations will include recommendations, but not requirements, that photos, video and audio produced by such systems be watermarked to make it clear they were created by AI. The move reflects growing fears that AI will make it far easier to create “deep fakes” and convincing disinformation, especially as the 2024 presidential campaign gathers pace.
The United States recently restricted exports of high-performance chips to China to slow its ability to produce so-called large language models, the data-hungry systems that have made programs like ChatGPT so effective at answering questions and speeding up tasks. Similarly, the new regulations will require companies that manage cloud services to tell the government about their foreign clients.
Mr. Biden’s order will be issued days before a gathering of world leaders on artificial intelligence security hosted by British Prime Minister Rishi Sunak. On artificial intelligence regulation, the United States has lagged behind the European Union, which has been preparing new laws, and other nations, such as China and Israel, which have issued proposed regulations. Since ChatGPT, an AI-powered chatbot, exploded in popularity last year, lawmakers and global regulators have grappled with how AI could change businesses, spread misinformation and potentially develop its own kind of intelligence.
“President Biden is implementing the strongest set of actions that any government in the world has ever taken on AI safety, security and trust,” said Bruce Reed, White House deputy chief of staff. “It’s the next step in an aggressive strategy to do everything on all fronts to take advantage of AI and mitigate risks.”
The new US rules, some of which take effect within the next 90 days, are likely to face many challenges, some legal and some political. But the order is aimed at the most advanced future systems and largely does not address the immediate threats posed by existing chatbots that could be used to spread disinformation about Ukraine, Gaza or the presidential campaign.
The administration did not release the language of the executive order on Sunday, but officials said some of the steps in the order would require approval from independent agencies, such as the Federal Trade Commission.
The order applies only to US companies, but because software development takes place worldwide, the United States will face diplomatic challenges in enforcing the regulations. That is why the administration is trying to encourage allies and adversaries alike to develop similar rules. Vice President Kamala Harris is representing the United States at a conference in London this week on the topic.
The regulations also aim to influence the technology sector by setting standards for safety, security and consumer protection. By wielding the power of the federal wallet, the White House’s directives to federal agencies aim to push companies to adhere to standards set by their government clients.
“This is an important first step and, more importantly, the executive orders set the norm,” said Lauren Kahn, senior research analyst at Georgetown University’s Center for Security and Emerging Technology.
The order directs the Department of Health and Human Services and other agencies to create clear safety standards for the use of AI and to streamline systems to facilitate the purchase of AI tools. It directs the Department of Labor and the National Economic Council to study the impact of artificial intelligence on the labor market and to develop potential regulations. And it calls on agencies to provide clear guidance to employers, government contractors and federal benefits programs to prevent discrimination from algorithms used in AI tools.
But the White House is limited in its powers, and some of the directives are not enforceable. For example, the order calls on agencies to strengthen internal guidelines for protecting consumers’ personal information, but the White House also acknowledged the need for privacy laws to fully ensure data protection.
To spur innovation and bolster competition, the White House will ask the FTC to step up its role as a watchdog for consumer protection and antitrust violations. But the White House has no authority to direct the FTC, an independent agency, to create regulations.
Lina Khan, the chair of the Federal Trade Commission, has already signaled her intention to act more aggressively as an AI watchdog. In July, the commission opened an investigation into OpenAI, the maker of ChatGPT, over possible consumer privacy violations and allegations that it spread false information about individuals.
“Although these tools are new, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with enforcing, even in this new market,” Ms. Khan wrote in a guest essay in The New York Times in May.
The technology industry has said it supports regulation, although companies disagree over the appropriate level of government oversight. Microsoft, OpenAI, Google and Meta are among 15 companies that have agreed to voluntary safety and security commitments, including having third parties stress-test their systems for vulnerabilities.
Mr. Biden has called for regulations that support AI’s potential to aid medical and climate research, while also creating guardrails against abuse. He has emphasized the need to balance regulation with support for American companies in the global race for AI leadership. To that end, the order directs agencies to streamline the visa process for highly skilled immigrants and nonimmigrants with AI expertise to study and work in the United States.
The central regulations for protecting national security will be set out in a separate document, called the National Security Memorandum, which will be drafted by next summer. Some of those regulations will be public, but many are expected to remain confidential — particularly those regarding steps to prevent foreign nations or non-state actors from exploiting AI systems.
A senior Energy Department official said last week that the National Nuclear Security Administration has already begun examining how these systems could accelerate nuclear proliferation by helping solve complex problems in building nuclear weapons. Many officials have focused on how these systems could allow a terrorist group to assemble what is needed to produce biological weapons.
Still, lawmakers and White House officials caution against moving too quickly to write legislation for rapidly changing AI technologies. The European Union’s first legislative drafts, for example, did not account for large language models.
“If you move too fast on this, you can screw it up,” Sen. Chuck Schumer, Democrat of New York and the majority leader, said last week.