The AI Age Enters the Age of AI Regulation
Yesterday afternoon, the Biden Administration debuted a landmark executive order on artificial intelligence. At 111 pages, the order is exhaustive. Most if not all agencies in the US federal government will either be asked to write new AI safety rules or to begin actively considering how to integrate AI into their systems to (hopefully) improve how the government does business. While it’s too early to tell how this order will be implemented, and there hasn’t been much time to digest the expansive new regulation, there are a few initial takeaways.
Biden is serious
Past Biden administration executive actions have often been “messaging orders,” designed to plant a flag in the ground in terms of beliefs and principles but lacking substance beneath the veneer. This order is the opposite. Many of the asks are specific, impactful, and wide reaching—notable are new AI reporting requirements, “know your customer” rules for foreign actors operating on US servers, requested rules to crack down on algorithmic bias in healthcare, housing, and other areas, biosecurity regulation, immigration reforms, and possible financial stability regulation. Other provisions suggest that this is not only an attempt to “go big” on AI, but that AI policy is truly being centered as a core administration priority.
In the order, earlier commitments are frequently redirected toward AI. Previously, the administration had dedicated the Technology Modernization Fund (TMF), a government IT slush fund of sorts, to funding government IT needs and specifically the implementation of the 21st Century IDEA Act (a bill, not yet fully implemented, that requires the full digitization of government forms). Now, the 21st Century IDEA Act is clearly taking a back seat, as the fund will prioritize AI “and particularly generative AI.”
Government digitization is out, generative AI is in.
The TMF redirection is only one example. Many other pre-existing priorities are being directly trumped by this new push into AI, and it’s clear the administration is trying to make AI policy one of its legacy issues. No doubt we will see more commitments redirected toward this effort as implementation begins and details emerge.
National Security Takes Center Stage
Perhaps the centerpiece of the order is the activation of the Defense Production Act (DPA), a law intended to provide “national mobilization capacity to bring the industrial might of the U.S. to bear on broader national security challenges.” In the past 24 hours, many have exaggerated the “threat” this invocation poses, often citing the DPA’s Korean War origins and its “commandeering” authority (powers that require companies to accept government manufacturing contracts in times of crisis; these powers were not invoked in yesterday’s order). Many of these takes are either deceptive or needlessly provocative. First, the current DPA is not a Korean War era law: it was reauthorized in amended form in 2019. Second, the DPA is realistically a multi-purpose, commonly used statute with a range of powers beyond the commandeering authority. For instance, DPA Title III is the source of the government’s power to provide direct loans for defense projects, make purchase commitments, and provide loan guarantees. Per the Department of Defense, there are currently 19 DPA Title III projects in progress, with authorizations from both recent presidents. Invoking the DPA is a regular practice, and it is currently being used to develop a range of technologies, from hypersonic missiles to large-capacity batteries and next-generation UAVs.
In the AI executive order, the Biden Administration has specifically invoked the DPA’s Title VII industry assessment powers to require regular reporting from AI companies on a range of information, including what AI technologies they own and the results of safety assessments. Again, such action is not beyond the pale. In 2012, the Obama administration used this authority in a similar manner, requiring telecommunications companies to report their use of foreign hardware in an effort to crack down on cyber-espionage. Since then, Congress has twice reauthorized this law and these provisions, affirming congressional intent for this type of application.
The president is clearly backed by precedent and the legislature. Still, that’s not to say the use of this power is without issues. While I personally believe a degree of industry assessment is indeed needed for AI (though I imagine the DPA invocation would be more tailored, focused less on AI R&D and more on AI diffusion), I have reservations about the significant privacy risks this power would create if overapplied, and about the potential for chilling effects on industry. In the executive order’s current form, we don’t have a full picture of the scope of application, and to assess these risks properly, we will have to wait until implementation details are known.
A New Defense Industrial Resource
The key takeaway here isn’t that this is an abuse of power, unprecedented, or absurd, but rather that the administration is conceiving of AI technology as a new defense industrial resource to be both protected and cultivated. To the Biden administration, AI is not just an industry; it is an asset. By invoking the DPA, it is assessing the size, shape, and scope of that asset. How good is our AI ecosystem? How does it compare to China’s? Are we competitive?
Under the Department of Commerce, this new program will be enforced by the Bureau of Industry and Security (BIS). As the name suggests, this is not a consumer watchdog; it is a national security agency. More specifically, one half of the BIS’s dual mandate is “promoting continued U.S. strategic technology leadership.” Given this mission, the data’s use will no doubt be tilted toward the interests, knowledge base, and priorities of the BIS: informing national security, assessing technological competitiveness, and cultivating this asset for defense purposes.
A possible regulatory future
Make no mistake, this reporting could inform regulation, but a very specific type of regulation with very specific goals. According to Undersecretary for Industry and Security Alan Estevez, who will be leading the effort: “The Bureau of Industry and Security stands ready to develop the regulations and procedures mandated by today’s executive order that will enhance safety and protect our national security and foreign policy interests (emphasis added).” The further emphasis on chemical, biological, radiological, and nuclear threats in the executive order confirms that the concern of this reporting is not run-of-the-mill AI bias or consumer welfare, but WMDs, cybersecurity, and other major threats to the national defense.
While it is indeed possible this information may find additional uses in investigation and consumer-facing regulation, data tends to have a hard time escaping the national security sector. Once collected, it will be classified. Even if declassified, information sharing agreements will have to be negotiated between agencies before data can move. All of this is certainly possible, but diffusing information like this tends to be difficult, and as a result, consumer welfare regulations based on this reporting are unlikely in the near future.
Diffusion on the Mind
I’ll conclude with what has me most excited about this order: its clear emphasis on federal government AI diffusion. Originally, I had expected attention to focus on taming this technology and putting the brakes on its overapplication. While those elements certainly exist, the administration clearly sees AI as lightning it wants to capture in a bottle. Throughout the order, we find countless provisions devoted to “AI capacity”; the administration is embarking on a whole-government push to educate, bring in talent, and assess IT, with the aim of creating a modern, AI-driven government that (hopefully) is more efficient, approachable, and fair. On AI.gov we already see the tip of this iceberg: the administration has begun asking citizens to join what it calls “the National AI Talent Surge,” an effort backed by new postings, eased clearance requirements for both citizens and (uniquely) noncitizens, and prioritized hiring. All these efforts are encouraging, and exactly what is needed. The best way to create AI abundance and compete with China is to go big on AI diffusion and reap AI’s potential. This order is a great first step.
Note that both implementation design and competing priorities may hinder this big AI capacity push. While countless provisions are devoted to capacity building, countless more are devoted to standards, regulations, and ethical limits. Limits and standards are certainly needed, but so too is regulatory caution. If procurement and use of AI are overregulated, the red tape could slow and impede this rollout, just as it did the federal government’s digitization efforts. If we want AI capacity, developers and staff will need the freedom of action to explore, innovate, create, and implement. Thankfully, some early signs suggest an understanding of this balance. The administration explicitly states that “agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI” and encourages them to provide employees generative AI services (with limits, of course) for “purposes of experimentation.” There is little Luddism to be found here. While it has reasonable reservations, the administration clearly wants to make use of this tech.
More to Say
I’ll stop here, but note that there is much more to say. Countless provisions require dedicated thought and analysis beyond what is possible within just 24 hours. In the coming months, expect much more on this executive order as details emerge, implementation begins, and policies are written.