Image Source: https://www.nytimes.com/2023/06/21/us/ai-regulation-schumer-congress.html
Today (Tuesday, July 11th) Senator Chuck Schumer is convening all 100 senators for a first-of-its-kind classified briefing on AI threats and national security. In his words, the goal is to share “how [the United States] is using and investing in AI to protect [U.S.] national security and to learn what our adversaries are doing in AI.” For the AI policy niche, this is something of a novelty. While there have been several AI hearings in recent months, never before has the body so unanimously signaled both the importance of this technology and the need for legislators to pay attention.
For Majority Leader Chuck Schumer’s AI priorities, this briefing is highly tactical. In case you missed it, on June 21st Senator Schumer debuted SAFE Innovation, a framework for potential AI legislation. Unlike the diverse scattershot of AI bills floating through the halls of Congress, this effort deserves unique attention: it is both bipartisan (supported by a “Group of Four” including Democratic Senators Schumer and Heinrich and Republican Senators Rounds and Young) and backed by the powerful mandate of Senate leadership. Put in this context, today’s briefing can be seen as a “kickoff event” of sorts.
Why start this legislative push with national security? The reasons are several-fold. First, AI policy has long found a cozy home in the natsec nest. National security imperatives have not only fueled government-backed AI advances (the DARPA Grand Challenge famously catalyzed autonomous vehicle innovation), but national security is also what pushed Congress in 2019 to create the first formal AI policy structure, the National Security Commission on Artificial Intelligence.
More to the point, national security is one of the few remaining areas where Congress regularly finds bipartisan agreement. The National Defense Authorization Act (NDAA) routinely passes, in some cases with veto-proof majorities, while cybersecurity regulatory bills, also under the tech policy umbrella, enjoy similar bipartisan support. If the senators want the body to agree that something must be done, security is clearly the breeziest path and these issues the lowest common denominator.
This meeting is only the beginning of a congressional push to regulate artificial intelligence. Given the potential importance of this effort, and the significance of this introductory event, it’s worth taking a step back to analyze where the Group of Four may be leading potential legislation. What are they targeting? What principles are guiding the way? And how might these early efforts color this groundbreaking technology’s political future?
Innovation
For legislative specifics, the best source we have is Senator Schumer’s recent address at CSIS, where he formally launched the group’s SAFE Innovation effort. Notable in this speech was the repeated emphasis that “Innovation must be our North Star.” Schumer rightly noted the incredible promise AI innovation may bring. In his words, AI could “shape how we fight disease, how we tackle hunger, manage our lives, enrich our minds, and ensure peace.” This innovation-centric tone is certainly a measured breath of fresh air. Since the release of ChatGPT, the public’s initial sense of promise and awe has largely collapsed into a broad sense of uncertainty and fear (see May’s Senate hearing on AI oversight for a taste of this skeptical attitude toward the technology). The result is an emphasis on AI risk at the expense of AI benefits.
While I’m encouraged that Senator Schumer recognizes this promise, choosing to start this push with a classified national security AI briefing may thematically derail this north star. I’ve long contended that we cannot put our heads in the sand regarding AI national security threats, especially the risk of ceding leadership to China and AI’s growing role in cybersecurity. That said, the national security policy lens is inherently biased towards framing technologies as either ‘weapons’ or ‘threats.’ There is little in between. If the Group of Four starts by teaching senators (especially the many senators no doubt out of the AI loop) that AI is first and foremost a threat, that is the frame that will guide their votes, amendments, and support moving forward. Today’s briefing risks diverting legislative eyes from the innovation north star. As a corrective, it will be important for the Group of Four to emphasize a different risk: what we stand to lose by deemphasizing AI’s upsides.
Silicon Valley certainly has an overactive imagination, yet AI’s potential does not mirror over-hyped trends like the metaverse or crypto. Already, the current generation of AI is transforming imagination into truly impactful reality. Just days ago the first AI-generated and AI-targeted drug entered FDA Phase II human trials. The drug is a novel treatment for idiopathic pulmonary fibrosis (IPF), a disease that causes painful scarring in the lungs and carries the grim prognosis of death within 3-4 years of detection. If approved, this drug will represent a milestone in AI history. Not only did this AI-driven process condense the drug development timeline from an average of 6-10 years to just 3, but approval would realize perhaps the first truly tangible, life-altering benefits of AI technology. Roughly 5 million people have IPF, meaning 5 million lives could be saved by this single AI output. What’s more, rare diseases like IPF tend to go underfunded and lack the raw patient volume needed to support the trial-and-error of traditional drug research and testing. AI processes can potentially mitigate these hurdles, making development cheaper and shorter while minimizing trial-and-error through AI-enabled targeting. The effect is to derisk corporate investment, better incentivizing companies to seek treatments for even the rarest conditions.
This IPF drug is certainly a proof of concept, but by no means an anomaly. Insilico Medicine, the company behind the drug, already has several more AI-produced treatments on the way, while other companies have found similar success, such as the recent AI-aided discovery of a novel antibiotic effective against previously untreatable drug-resistant bacteria.
The point of this brief drug discovery rabbit hole is to show that AI legislation should not be taken lightly. Transformative AI applications are already in use, lives are literally on the line, and legislators should be careful that any requirements do not poison the incredible well of AI innovation. AI security threats absolutely deserve attention, and there is more than likely action that can and should be taken to tame concerns such as AI cyber threats. But that shouldn’t be the only focus.
In the coming weeks, I recommend the Group of Four make a point of pairing today’s briefing with a similar briefing on AI’s potential to highlight its very real upsides. Education is required if lawmakers are truly going to embrace this north star and put in the critical work needed to maximize an abundant future.
Legislative Priorities
Whether the Senate will effectively achieve this stated goal is clearly a function of legislative specifics. Today, we unfortunately have very few details about what SAFE Innovation will mean in practice. What we do have, however, is a list of priorities. In his introductory speech Senator Schumer highlighted four key pillars: Security (national security, election security, and job security); Accountability (for AI’s use and impact on youth, improper use of racially biased AI in hiring, intellectual property rules for AI-generated content, and algorithmic auditing requirements and best practices); America’s Foundations (legislative defense of America’s founding principles, with policy targets including norms for proper use and countering China’s AI competitive posture); and Explainability (requirements for systems to explain their decisions).
This is a long and varied laundry list of target issues, making it hard to predict where the focus might land and which issues, if any, we might see tackled in final legislation. Today’s national security briefing certainly suggests the security piece of the agenda will be a highlight. Likewise, in his speech Schumer placed unique emphasis on restricting the use of AI-generated images in political ads, an issue that has already caught legislative interest (see the REAL Political Ads Act) and has recently been gaining momentum. This too could gain support (it already has mine). Beyond these, however, what stands out most is what isn’t in his proposal: transparency.
In April, Axios sources reported early details on the provisions we might see in a final bill. Notably, most of these proposals targeted transparency (in addition to a requirement on AI explainability), including data source disclosure requirements, intended audience statements, and transparency of ethical principles. None of these appeared in Schumer’s debut address, and it’s unclear why. They could be under wraps to discourage nitpicking, Axios’ reporting could have been off-base, or these ideas could be off the table.
Regardless, I would urge the senators to refocus their potentially unwieldy list of intended policy targets back towards transparency. Assuming a “do something” approach, transparency is a unique area where Congress can do real good while avoiding onerous industry requirements. A rule of thumb when considering whether regulation is necessary is to look at products through the eyes of consumers: do they have enough information to understand what they are buying, and will the market process work? Given the complexity of AI, that is often difficult. Algorithmic transparency, if implemented conservatively, could provide useful information for consumers looking for the best product. Requirements needn’t be onerous. Statements on which data sets and data sources were used could help consumers and consumer watchdogs understand what has shaped these models and the biases they may carry; data decisions would be made with care if companies knew they were under a microscope. Ethics statements, while unenforceable by government, could help consumers understand what they are buying and hold companies to account if model behavior mismatches stated principles. Finally, intended-audience statements could help consumers avoid inappropriate-deployment bias: improperly using a model in a context it wasn’t designed for.
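To make concrete how lightweight such disclosures could be, here is a minimal sketch of what a machine-readable transparency statement might look like. The structure and field names are entirely my own invention for illustration; nothing like this format has been proposed by the Group of Four or anyone else.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class TransparencyStatement:
    """Hypothetical disclosure record a developer might publish with a model.

    All fields are illustrative assumptions, not a proposed standard.
    """
    model_name: str
    data_sources: list        # datasets and sources used in training
    intended_audience: str    # contexts the model was designed for
    ethics_statement: str     # principles the developer commits to

    def to_json(self) -> str:
        # Serialize the record so watchdogs and consumers can read it programmatically.
        return json.dumps(asdict(self), indent=2)


# Example: a small firm publishing a statement alongside a model release.
statement = TransparencyStatement(
    model_name="example-model-v1",
    data_sources=["public web crawl subset", "licensed news archive"],
    intended_audience="general-purpose text assistance; not designed for hiring decisions",
    ethics_statement="No training on data knowingly collected from minors.",
)
print(statement.to_json())
```

Even this toy version captures the three ideas above (data sourcing, intended audience, ethics commitments) in a form a small firm could produce in an afternoon, which is the sense in which such requirements needn’t be onerous.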
Naturally, there are many more transparency regulations one could imagine, there are certainly harmful versions of all these potential requirements, and I would also encourage legislators to shy away from catch-all “AI legislation” in favor of a use-case-specific approach. That said, a second useful rule of thumb when considering regulation is to imagine how heavy the weight of compliance would be on the backs of the regulated. While I’ve not run an AI company, I have coded machine learning algorithms. In my (admittedly one-sided) view, these simple transparency requirements should be easy to comply with for even small, under-resourced firms.
Looking forward
No matter what is included, this potential legislation will face an uphill battle in a divided Congress. Still, since the pandemic began, partisan divides haven’t inhibited significant legislative successes. In his address, Senator Schumer was right to note that this bill could build on the momentum created by the bipartisan passage of the CHIPS and Science Act and the Infrastructure Investment and Jobs Act. Bipartisanship is not dead, and action may indeed be possible.
While I have misgivings about starting this push with a focus on national security, today’s meeting also suggests the Group of Four is serious about legislating. Lowest-common-denominator topics such as cybersecurity, military investments, and moves to counter China could form the baseline of something passable, and may build a coalition strong enough to support provisions that go beyond security. This strategic approach indicates the effort is truly about legislating, not messaging.
In any case, this process is only just beginning. As legislators attend today’s briefing and others, they will explore this technology and form opinions. The results will come into focus in the coming months, hopefully paired with a crystallized respect for the complexity and potentially game-changing importance of this innovation.