Congress boots up AltmanGPT to answer their burning questions. Image Source: https://www.cnn.com/2023/05/17/tech/sam-altman-congress/index.html
In May, the Senate lit up the AI policy conversation by holding two dueling artificial intelligence hearings: one on the use of AI in government and the other on AI oversight. Comparing the two, most observers note their contrasts; the Sam Altman-led oversight hearing featured high drama (for the Senate, that is), while the relatively staid AI in Government hearing featured substantive policy proposals.
Taking a step back, however, these contrasts in substance disguise a deeper thematic lockstep. During the oversight hearing, uncertain senators looked to a trio of computer scientists, hoping their technical understanding might yield effective solutions. Meanwhile, AI in Government witnesses proposed a formalization of this exact approach. Most proposals focused on the government’s pressing need for technologists and on how to coax computer scientists into the bureaucracy to manage the vexing challenges of the AI age.
The thematic TLDR: AI is complex, and for solutions, we must turn to the experts.
Enter: Silicon Valley
None of this is surprising. Again and again, officials and analysts have defaulted to the position that AI policy should be led by technologists. In early May, President Biden and Vice President Harris kicked off a renewed AI policy push with a high-profile meeting with Silicon Valley’s top AI leaders. The effect: highlighting and centering the importance and perspectives of these experts. In the United Kingdom, Prime Minister Rishi Sunak recently followed suit. In this case, the effect seemed to go deeper. While his government had previously advocated a relatively light-touch, optimistic approach, Sunak paired the meeting with a messaging shift to match the growing technologist-led consensus on AI risk. Days later, his government adopted one of the very policy prescriptions Sam Altman and Gary Marcus pushed during the Senate hearings.
Even when these AI experts do not have direct contact with decision makers, their technically informed words hold incredible weight. Since Sam Altman debuted his marquee proposal for an AI licensure regime to regulate powerful AI models, decision makers have taken his advice to heart. On May 25th, representatives in the Pennsylvania Legislature introduced legislation to study, and perhaps pioneer, Altman’s proposal. While hardly wonks, these technologists are unquestionably taking a leading policy role, and their ideas are actively pushed to the front of the policy agenda.
The unique influence of these celebrity AI technologists, however, is only a manifestation of a deeper policy iceberg.
Today there is a rarely recognized yet growing problem of excessive deference to technologists baked into formal AI policy and governance structures. Up, down, and across government, officials have staffed, centered, and prioritized the leadership of well-credentialed technologists, computer scientists, and Silicon Valley natives to address AI challenges. Almost altogether missing are other types of relevant and necessary non-technical expertise. Despite widespread predictions that AI might transform most if not all facets of society, the ship of state is being steered into the AI age by a brittle technologist monoculture.
Not without Reason
Before examining this governance reality, it’s important to note that the emphasis on technologist leadership is not without reason. According to a recent MeriTalk federal government survey, 87% of government leaders believe their agencies have significant AI resource gaps. Further, half of agencies report that previous attempts to implement AI programs have failed due to a lack of necessary expertise. AI experts are clearly needed.
We find deeper roots of this emphasis in the long-festering malaise of government technology capacity shortfalls. On multiple occasions over the past decade, government capacities have failed to meet digital-age demands. In some instances, these failures are simple knowledge gaps, such as the late Senator Hatch’s famous confusion over the basics of social media. In others, such as the disastrous 2013 rollout of Healthcare.gov, technical gaps mean a failure to deliver promised government services. In its recurring report on “high-risk” challenges facing the federal government, the Government Accountability Office states that despite modest improvements in government tech capacity, most of its 34 “risks” are directly rooted in IT malaise.
Government tech and AI capacity shortfalls are very real and absolutely must be addressed. If the government does not understand AI, its policy will inevitably fail to match the complexity and diversity of this technology. Without capacity-building action, we invite waste and another Healthcare.gov-esque debacle. To get ahead of the AI game, integrating technologists into government must be part of any AI strategy. My personal prescription: just as every agency is expected to staff at least one economist, each should also staff at least one AI expert.
Yes, we need experts. Yes, leaning on them is a very natural response to AI complexity and tech capacity shortfalls. And yes, technically informed advice certainly should hold weight.
That said, this emphasis becomes a problem when listening to and staffing technologists is the only solution.
Not Within Reason
Examining existing AI policy leadership, this certainly seems to be the case. While many claim federal AI policy doesn’t exist, in recent years (primarily through the National Artificial Intelligence Initiative Act of 2020) Congress has stood up a variety of AI bodies, offices, and commissions to help build the bedrock of federal AI policy. In almost every case, we find an expansive slate of AI technologists taking the lead:
On the National Security Commission on Artificial Intelligence (NSCAI), all commissioners were computer scientists, Silicon Valley insiders, or technical agency bureaucrats.
In the leadership of the National Artificial Intelligence Initiative Office (NAIIO), the chief organizing structure for federal AI policy, this pattern continues. Until recently, the now-vacant directorship was filled by Lynne Parker, a distinguished expert in robotics. The office’s remaining leadership is likewise dominated by computer scientists.
On the National AI Advisory Committee (NAIAC), tasked with advising the president, we see some recognition of the need for non-technical engagement. On its webpage, the committee prides itself on being “interdisciplinary.” In reality, this means the inclusion of four non-technical advisors: a lawyer, a social scientist, a national security expert, and a union rep. Considering the other 22 members are either technologists, academic computer scientists, or Silicon Valley executives, this token representation is a drop in the bucket.
While these current federal structures are exceedingly influential, they admittedly hold little direct authority. On the horizon, however, we see a constellation of efforts to back technologist leadership with real teeth. In Congress, several floating proposals would fill regulatory gaps, and take preemptive AI action, by establishing a new AI regulatory agency. While such an agency needn’t reiterate this pattern of technologist-led governance, in most proposed cases that is exactly the model advocates seek. Commenting on his own draft legislation, Congressman Ted Lieu claims “legislators lack the necessary knowledge to set laws and guidelines” and believes we should instead look to the expert guidance of an agency armed with “tech-savvy personnel.”
Similar moves are taking shape in the states. A recent bill introduced in the New Jersey State Legislature would create a catch-all “artificial intelligence officer.” Again, expert deference is explicitly the point. In the words of its sponsor, it’s not in “[New Jersey’s] best interest for … a state legislator to try to overprescribe what that public policy [around artificial intelligence] looks like.” Instead, New Jersey should “set up a mechanism to allow individuals with deep experience in this area to utilize that experience to frame out what that public policy should look like.”
While there are certainly exceptions to this rule, in the majority of cases, AI leadership in government has been handed to technologists.
Blind Spots
Why do I label this a crisis? The word is certainly a bit melodramatic, but I use it because today’s AI leadership holds unique long-run importance. In the next few years, choices will be made and laws written that form the bedrock rules and guardrails directing AI’s future. If only one type of expert is writing those rules, blind spots and misfires are guaranteed.
The fact is, technologists simply don’t know everything. In a recent Bloomberg column, fellow Mercatus Center economist Tyler Cowen rightly noted that “true expertise on the broader implications of AI does not lie with the AI experts themselves.” While technologists can speak to the nuances of AI architecture, the electronics of GPUs, machine learning methodology, and AI capability, they cannot speak with authority on all potential AI use cases, challenges, and impacts. Understanding and muddling through AI’s economic influence, legal effects, copyright implications, education uses, labor force impacts, and many, many other questions demands non-technical expertise. By engaging this full variety of non-technical experts, choices will be better informed, grounded in the complexity of real-world application, and more likely to be well targeted.
…and Brittle Realities
The information asymmetries inherent in our current technologist-led approach are not just theoretical. Earlier this year we saw a glimpse of how lopsided expertise can distort policy conclusions.
In 2020, the Trump administration and Congress, as part of the National AI Initiative Act, commissioned the National Artificial Intelligence Research Resource Task Force (NAIRRTF), a body charged with studying the creation of a “national artificial intelligence research resource.” In the words of the Task Force’s final report, the envisioned Resource would be “a shared research infrastructure that would provide AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support.” To study this vision, Congress specifically mandated that the task force’s twelve members be “technical experts in artificial intelligence or related fields.” While the Biden administration wisely devoted additional resources to staffing an auxiliary economist and a non-technical policy expert, Congress’s hard-coded intent was clear: give the technologists the reins.
In this specific case, the explicit centering of technical leadership proved an odd fit. The AI Research Resource is a policy prescription aimed at widespread concerns that scarce resources, market design, barriers to entry, and the high cost of compute may prevent those outside of big tech from innovating in AI. While technical knowledge is absolutely required for such a study, the field that best matches the core of these problems is not computer science but economics.
By tasking computer scientists to solve an economic problem, Congress yielded a report lopsided towards what engineers do best: product design. Paging through the report, we find detailed explanations of how a resource might be administered, implementation plans for its services, and a range of further “product” details. Meanwhile, only two paragraphs in the 104-page document are devoted to establishing the shape of the underlying problem. Without a grounding in economics, the task force was unequipped to research and scrutinize the very barriers to entry it was trying to solve. The resulting blind spot: market research tells us compute costs are, in most cases, not actually a substantial barrier to entry in AI. Naively assuming the opposite, the report proceeds to recommend solutions to a problem that doesn’t seem to exist.
Correcting Course
None of this lengthy discussion means that the Resource is a bad idea, that the report shouldn’t be implemented, or that the task force failed. The point is that technologists are only human, only know so much, and, like everyone else, have their limits. Expecting AI technologists to understand and solve every facet of every AI problem is simply asking too much of one group. Meeting AI’s unwieldy complexity requires pluralism; only with diversity in expert leadership can we understand the diversity of AI use, impact, and design.
Again, technologists are needed, and in most cases governance structures do not have enough technical capacity. Still, placing the full load of AI challenges onto the “AI experts” isn’t going to work.* So, how do we proceed?
Unfortunately, our instinctual deference to AI technologists doesn’t have a simple fix. Both the public and decision makers are going to continue to take seriously the ideas and prescriptions of tech experts like Sam Altman. As they should. Improvement, however, means recognizing and socializing the limits of those views, taking them with a healthy grain of salt, and supplementing them with alternative, non-technical ideas.
When it comes to government AI leadership, direct steps towards reducing technical deference are tractable. In legislation, Congress should not bind administrative hands by mandating technical leadership, as it did with the research resource task force. In administration, decision makers should take care to consider what non-technical knowledge might inform AI challenges, and staff commissions and bodies with a critical mass of those experts.
These recommendations, while actionable, are admittedly basic. Where they fall short: the bigger problem of engaging, educating, and activating the policymakers and bureaucrats equipped to contribute these diverse perspectives.** Much, much more is needed on the part of government to address this issue. In coming posts, I hope to dive deeper into this problem, and its related questions, while presenting ideas to guide governance towards a more diverse, robust policy reality.
*As an AI expert and computer scientist, I can say firsthand that we don’t have all the answers.
**My AI Policy Guide, a plain English intro to AI policy for non-experts, was produced with this explicit challenge in mind.