The Pitfalls of AI Governance by Purchasing Power
Using government acquisitions to shape AI markets is misguided, will inhibit federal AI diffusion, and is bound for failure
Administrators considering how dollars can regulate AI.
In the coming weeks, or even days, the Biden administration is set to release an executive order on artificial intelligence. For AI regulation, this could be a first-of-its-kind omnibus effort to set the trajectory of AI development and governance.
Unfortunately, as Politico’s Mark Scott recently bemoaned, details are “maddeningly difficult to find.” To truly understand what is coming, we’ll have to wait and see exactly what is in the package and, critically, how those rules are implemented. That said, we do know two high-level details, and both are worthy of discussion.
First, the core goal of the order is to set guidelines on the government’s application and use of AI. Second, according to Politico and other sources, the strategic goal of the order is to use the government’s purchasing power “to shape American standards for a technology that has galloped ahead of regulators.” Commenting on the October 18th edition of Politico Tech, Suresh Venkatasubramanian, the administration official who authored the AI Bill of Rights, expanded on this, saying that government purchasing power can act as part of a “multi-pronged approach to bringing [AI] governance in” and expressing a belief that purchasing power should be used to “change the market structure and change incentives.”
The first goal, setting standards on the government’s use, development, and purchase of AI tools, is certainly appropriate. The second, using these rules to simultaneously try to shape society, is risky. As of today, it is unclear whether the federal government has the market power to succeed, and unclear what ‘market shape’ officials are imagining. Crucially, it is also likely that this secondary goal will water down the benefits we could otherwise reap from simply focusing on responsibly diffusing AI in the federal government and improving how the government does business.
The Size and Splintering Problems
Perhaps the biggest challenge facing this strategy is the current limit of federal artificial intelligence spending: the size problem. In a recent interview, White House National AI Advisory Committee member Daniel Ho noted that such a strategy hinges on the idea that “The government is going to be one of the largest purchasers of AI, so the standard that it sets will have a pronounced impact on responsible AI innovation.” “Going to be” seems to be the key phrase. Today, government spending on AI represents a relatively meager slice of the overall AI pie. According to Stanford’s Human-Centered AI Institute, government AI spending in 2022 totaled roughly $3.3 billion. A large figure, to be sure, but big enough to steer the market? Perhaps not. On its own, OpenAI, the creator of ChatGPT, is currently raking in an estimated $1.3 billion annually. Zooming out, Bloomberg estimates that the generative AI market alone is roughly $40 billion. Even if the U.S. government were to pour all of its current AI spending into just this slice of the overall AI market, that would only add up to about 8% of all generative AI spending.
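For the back-of-the-envelope arithmetic behind that 8% figure (a rough sketch that takes the estimates above at face value and assumes every federal AI dollar flowed into generative AI):

$$
\frac{\$3.3 \text{ billion (federal AI spending)}}{\$40 \text{ billion (generative AI market)}} \approx 8.25\%
$$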
This points to a related challenge: the splintering problem. Federal AI spending is simply not concentrated in any single area, limiting its ability to meaningfully shape “the AI market.”
In the figure above, drawn from Stanford HAI’s 2023 AI Index Report, we find that federal AI spending is already fragmented across multiple product categories. Within each category there is even more fragmentation, given the countless AI companies, system types, and use cases. AI is not a monolith; in reality there is not one AI market but many distinct markets that officials would need to shape for success. Given this splintering, any attempt to set broad standards through purchasing power will be uneven, watered down, or ineffectual. It is indeed possible that the government has enough market power to shape certain niches, but it is just as likely that it will have little to no impact on other subcategories. In sum: if government spending is fragmented, so too is government market-shaping power.
In this splintered array of purchase categories there is one notable absence: generative AI. Certainly, generative AI could be rolled into one of these existing buckets in some form or another, but spending on this type of AI application is likely small given the lack of a dedicated category and the fact that AI procurement decisions often face years-long delays. Because previous White House AI regulatory actions have placed special emphasis on generative AI, this is likely the category regulators most want to steer. Ironically, generative AI is also the category in which they will find their leverage most limited.
Today, appropriations are unlikely to be big enough, or concentrated enough across categories, to shape a market of this size and diversity. Meanwhile, any hope for near-term funding increases that would match government market power to AI market size, and keep up with AI market growth, seems remote in a Congress too divided to elect a Speaker of the House.
The Prediction and Goal Problems
Even if the federal government can successfully use procurement to shape the market, it still runs into the challenge of picking a battle: the goal problem. At this point, it is unclear what, specifically, the administration is trying to do. In a sector as diverse as AI, officials have already cited many potential governance targets, and beyond these we can imagine many other worthy challenges. Is the goal to set privacy and safety standards, as the U.S. Chief Information Security Officer recently suggested? Or perhaps, as he also suggested, to set the standard for agile AI assessment? Perhaps the priority is to develop and deploy AI in select high-impact areas, such as healthcare, as another recent White House release emphasized. Perhaps it is creating an information-sharing standard, developing watermarking technology, demarcating unacceptable applications, or developing a market or system for third-party risk and vulnerability identification, all of which have also been identified as administration priorities. Under each of these there are further directions and subproblems that could be tackled. At this moment, priorities are both numerous and poorly defined. Proceeding on this multitude of fronts, especially if these goals remain ill-defined, will create a “too many cooks in the kitchen” problem. A lack of focus will water down potential impact.
Compare this wide-ranging list of AI governance priorities to past government spending programs that successfully incentivized solutions to engineering challenges. Operation Warp Speed, which helped birth the COVID vaccine, and the DARPA Grand Challenge, which helped catalyze the driverless car industry, are two recent successes. In both cases, the core variable behind success, as Institute for Progress scholar Arielle D'Souza argues, was setting a “clear, measurable product goal.” Operation Warp Speed wanted a COVID vaccine that was better than nothing, while the DARPA Grand Challenge wanted driverless cars to meet a specified performance benchmark. These are clear, simple, concise goals that could fit on a postcard.
Given the unwieldy diversity of AI, and the diversity of the administration’s apparent regulatory goals, such specificity of purpose is unlikely. More likely is the rollout of competing, inconsistent, and sometimes contradictory goals that will water down the overall impact while saddling procurements with competing incentives. If the administration opts instead to tackle just one piece of this puzzle, the DARPA Grand Challenge’s driverless car success illustrates that prize challenges can indeed work for AI. Unfortunately, probably due to its inherent incrementalism, this proven model does not seem to be the chosen path.
The final challenge is that not only is the goal unclear, but so is AI’s current direction: the prediction problem. AI is moving at a breakneck pace, and it is unclear how it will change, which companies will be involved, and how it will be used in five years’ time. Trying to shape and govern a market in a constant state of transformation is to fight a losing battle. As ChinaTalk’s Jordan Schneider and I recently argued (you can find a breakdown of the essay here), AI policy is driving in the dark, and strategic choices must embrace, not resist, this fact. This plan, however, would do the opposite. Making decisions that successfully shape the state-of-the-art market is hard enough; the task likely becomes impossible when acquisition timelines cause IT purchases and implementation to lag the market by three years. Under such conditions, it is truly difficult to imagine success.
Unwanted Costs
Unfortunately, this losing battle is not a costless exercise. As it stands, applying, using, and diffusing AI in the federal government already faces an uphill battle given long-standing federal IT malaise. Under current conditions, federal AI diffusion is a big enough challenge on its own and should be the core focus of any such executive order.
Making market shaping the goal of government purchase decisions, however, incentivizes undesirable competing outcomes. Under this strategy, purchase decisions may be made to maximize a purchase’s impact on the market rather than its value for government operations. A potential result: wasteful spending on subpar technology, a bias toward large “market shaping” tech titans, and inefficiencies that encourage a poorly functioning, costly government. What’s more, a focus on market shaping drags attention from pressing problems over which officials have more direct control, such as ensuring purchases can be properly deployed and promoting AI diffusion.
Given the many challenges the government faces today, and the tremendous uphill battle it already fights in IT, the goal of purchases cannot be anything other than improving the government’s capacity to achieve its ends.
Modest Steps Can Still Matter
To reiterate: the core purpose of this order, to set standards on AI purchases and establish ethical expectations, is reasonable. What is a mistake, however, is redirecting the strategic point of these rules away from this core premise. The government should focus on improving AI diffusion, making operations more efficient and user-friendly, preserving rights through ethical limits on agencies, and ensuring AI has an overall net benefit. Hoping that purchases can be used to shape markets distracts from these important, and more realistic, goals.
While ensuring the government’s “AI house” is in order might not have the grand regulatory appeal some are hoping for, these modest goals matter and should still be priorities officials care about. Government IT is itself a major challenge with deep impacts on the nation, and unlike most IT decisions, high-quality government IT can literally change lives. The administration is encouraged to embrace the worth and potential impact of this more modest, yet important, priority.
To sweeten this argument, I’ll end by noting that, if executed well, an executive order that focuses on and succeeds at creating a government that adopts and diffuses AI quickly, and in the ‘right’ way, can still shape markets. Given its size and scope, the government has extensive capacity to play with this technology, iterate, experiment, and discover what rules, designs, and norms work best. If the resulting AI truly is a joy to use, and its governance standards work, people will take notice, expectations will be set, and perhaps this performance standard will be adopted widely.