The OMB Regulating Public Sector AI From Above.
At the end of March, the Office of Management and Budget (OMB) released its long-promised AI executive order implementation policy, aimed at the “governance, innovation and risk management” of AI in federal agencies. For those following AI policy from a distance, this document represents follow-through on October’s landmark AI Executive Order, the Biden administration’s commitment to setting the rules of the road for how and when the federal government uses AI. There is a lot in this document, including clarified authorities for a newly created corps of agency Chief AI Officers, promotion of interagency AI collaboration, and, importantly, an emphasis on AI talent development. The centerpiece, however, is a raft of new AI safety provisions, restrictions, and requirements to govern government AI.
At a high level, these new restrictions represent a potential catastrophe for public sector AI diffusion. By going all in on AI safety rules, the administration is failing to consider budgetary and administrative realities and, as a result, risks slamming the brakes on federal AI altogether.
Process Risks
In brief, the new policy sets out a range of processes and limits for two new categories of AI systems:
1. “Rights impacting” AI: any system that could affect access to government services, including healthcare, finance, housing, social services, transport, and others; any AI that affects civil rights, human autonomy, or discrimination protections, or that enables excessive surveillance; and any AI that affects equitable access to opportunity.
2. “Safety impacting” AI: systems that could affect human safety or well-being through chemical/biological risks, harassment or abuse, mental health risks, or occupational hazards, among others; systems that could harm the climate or environment; and systems that threaten critical infrastructure or critical government assets.
To approve and use AI systems that fall into these categories, agencies must now complete numerous “minimum practices” – requirements including independent evaluation, impact assessments, consultation with affected communities and the public, transparency measures, human review and opt-out processes, real-world scenario testing, and ongoing monitoring, among many others.
Each requirement is a rabbit hole. Hiding within each is extensive time, money, paperwork, coding, and bureaucratic manpower. The exact impact of each rule cannot be calculated at this moment; the history of government IT regulation, however, is unfortunately a history of rules causing delays, fiscal waste, and harmful failure. Because these new AI processes simply layer rules on top of the pre-existing rules already causing government failure, it is hard to imagine them doing anything but slowing or completely halting the use of covered systems.
Unfortunately, the newly covered systems include most AI of substance. Systems that provide access to government services, for instance, represent a startlingly broad category. Anything used to categorize, rank, or sort applications, forms, and information could easily fall into this bucket. As National AI Advisory Committee member Daniel Ho recently noted, this breadth risks interrupting many critical basic services. Postal delivery, he notes, relies “on a crude form of AI to read ZIP codes to route letters and verify postage.”
While I’d like to believe the government wouldn’t lean toward overinclusion (it’s usually best to avoid maximalist readings of policies) and bind systems like the postal service, recent government IT history tells us that overinclusion under IT regulations is the rule, not the exception. For instance, officials have over-interpreted the Paperwork Reduction Act (PRA), which mandates a lengthy approval process for any public-facing government forms, to mean that IT user experience (UX) testing requires White House approval. The PRA does not call out user experience as regulated; however, because it could be interpreted as covering UX testing, overly cautious officials decided it was better to be safe than sorry. The result has been years of massive delays and poorly tested systems. Given the administration’s clear emphasis on AI regulation, it’s not unlikely that lower-level officials will continue this pattern under these rules to avoid stepping on toes. Administrative reality means that countless systems will hit process burdens, restricting the government’s ability to effectively wield AI.
Process Meets Budget
These new rules may go beyond slowing AI rollouts; they may hinder them entirely once the cost of these processes meets budgetary reality. Just as these rules come into force, IT budgets are being heavily squeezed. In the FY2025 budget request, for instance, proposed funding for the Technology Modernization Fund (TMF) falls from $200 million to $75 million, a 62.5% reduction. These cuts are particularly notable because the TMF is explicitly deputized in the AI executive order as the fiscal grease for rapid federal AI diffusion and capacity building. Just as the OMB is asking more of IT directors in AI implementation, it is restricting the very funds it has prioritized to aid those implementations.
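For readers who want to check the math, here is a minimal sketch of that calculation, assuming the $200 million and $75 million figures above are the relevant before-and-after request totals:

```python
# A minimal sketch of the arithmetic behind the 62.5% figure cited above.
# The dollar amounts are the budget-request totals referenced in the text
# (assumed here to be expressed in millions of dollars).
prior_tmf_request = 200    # prior TMF funding level, $M
proposed_tmf_request = 75  # proposed TMF funding level, $M

reduction = (prior_tmf_request - proposed_tmf_request) / prior_tmf_request
print(f"TMF reduction: {reduction:.1%}")  # TMF reduction: 62.5%
```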
Perhaps more impactful than the TMF reductions, however, will be the across-the-board cuts at most other agencies. According to the Federal News Network, most civilian agencies are seeing either IT funding decreases or budgetary freezes.
Source: Jason Miller, "For 2025 budget request, federal IT prioritizing AI, CX," Federal News Network, March 12, 2024, https://federalnewsnetwork.com/budget/2024/03/for-2025-budget-request-federal-it-prioritizing-ai-cx/.
Zeroing in on one of these, the Department of Veterans Affairs (VA), we can start to see the challenge this creates for beneficial AI diffusion. The VA is particularly interesting because its existing and planned AI inventory is both extensive – including 130 AI applications – and (likely) deeply impactful. Publicly documented VA AI applications include polyp detection algorithms to improve endoscopy success, patient vital sign monitoring and notification tools, suicidal ideation monitoring tools, and a range of AI tools embedded in standard equipment such as MRI machines.
The VA’s extensive AI use is no surprise, as healthcare is an area where long-standing AI hype appears to be transforming into real, lifesaving promise. A recent report in The Economist, for instance, disclosed as-yet-unpublished results from a British government study showing that e-Stroke, an AI analytics system, “has reduced the time between hospital admission and treatment for stroke patients by more than an hour,” speed which “has tripled the number of people achieving functional independence after a stroke from 16% to 48%.” This is truly the stuff of medical miracles, and it’s no wonder the VA is trying to keep pace with AI.
In the face of new rules and cuts, however, this promise and interest may not matter. In FY2025, the VA is slated to receive 20% less IT funding – a massive reduction. Faced with the one-two punch of new process constraints and deep budget cuts, VA officials will very quickly come to question the ROI of big AI implementation projects. These new administrative burdens will make it hard for AI to be the cheaper option, heavily incentivizing cost-cutting officials to default to legacy systems. Given the pace of change in AI healthcare, these burdens, if they persist, may cause an AI-constrained VA to fall behind very quickly. If potentially transformational systems like e-Stroke become standard, a VA unable to quickly adopt the latest AI health tech risks making itself a medical pariah – a health system to be avoided if you want modern medicine.
The State Capacity Balancing Act
The VA is just one example, but across the roll call of other agencies we can likely find similar stories. The administration has stated again and again its goal of enabling rapid AI diffusion – what it calls “AI Capacity” – and of wielding this tech to serve the public effectively. By pushing such extensive rules amid cuts, however, the OMB is failing to recognize that the state capacity needed to achieve effective, good government is a balancing act between regulation and funding. Success demands that when funds are low, regulations must loosen.
I am quite concerned that the OMB has not considered balance, trade-offs, or opportunity costs in these rules. In a follow-up, I plan to dig further into the questionable assumptions behind this new policy and specifically the trade-offs we incur through such restrictions.