Congress's AI Bills: Analyzing AI Safety Institute Legislation
What does the U.S. stand to gain from this bipartisan legislative proposal?
Image Source: https://www.nist.gov/image/dsc6258fulljpg
As promised, more legislation! The next bills on the agenda are the Future of AI Innovation Act of 2024 (the Senate bill), paralleled by the AI Advancement and Reliability Act of 2024 (its rough equivalent in the House), both of which have passed out of their respective committees. While these bills vary, they share a substantive core: formally establishing the Artificial Intelligence Safety Institute (AISI) as a new division under the National Institute of Standards and Technology (NIST).[1]
If the Senate and House both tentatively agree that this new federal body should be created, what would it be and what might it offer?
What Is the AISI?
Under the legislation, the AISI would have three core missions:
1. Conducting research, evaluation, testing, and supporting voluntary standards development;
2. Developing voluntary guidance and best practices for the use and development of AI;
3. Engaging with the private sector, international standards organizations, and multilateral organizations to promote AI innovation and competitiveness.
In short, the AISI would be a technical research lab, an AI standards-setting body, and, perhaps most critically, a tool of AI diplomacy. Already, a temporary AISI pilot is up and running. This ad hoc entity, however, lacks authority, cannot last, and owes its continued existence to political whim. Effectiveness requires formality. This legislation etches the AISI and these missions into law.
Overall, the AISI is a good concept. As I’ve previously discussed, standards setting is a clear U.S. national interest, and an activist policy is needed to ensure standards favor the liberal world’s principles and technology stack. The benefits of the AISI go further, however, offering compelling fiscal and diplomatic dividends Congress would be loath to ignore.
Fiscal Benefits
Under both bills, Congress is set to invest $10 million in the AISI, prompting the question: can we expect a return on this investment? A string of historical cost–benefit analyses of comparable NIST divisions suggests a consistent yes.
[Table: Benefit-to-cost ratios of NIST technical standards and safety divisions. SRR = social rate of return; BCR = benefit-to-cost ratio.]
According to a 2023 meta-analysis of NIST economic impact studies, the median benefit-to-cost ratio (BCR) of technical standards and safety divisions was 9 (see table). In plain English: for every dollar spent, NIST’s investments yielded nine dollars in economic benefit. Meanwhile, the low-water mark of return is $4, and the high-water mark $249, per dollar spent. Across the board, these are eye-popping numbers. While the AISI isn’t a guaranteed success, NIST clearly knows what it’s doing and, unlike many agencies, has proven itself through impact.
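As a rough illustration (not a forecast), the table’s BCR range can be translated into the implied benefit of the proposed $10 million appropriation:

```python
# Illustrative only: implied economic benefit of a $10M AISI appropriation
# under the low, median, and high BCRs reported in the meta-analysis.
investment = 10_000_000  # proposed AISI funding (USD)
bcrs = {"low": 4, "median": 9, "high": 249}

for label, bcr in bcrs.items():
    benefit = investment * bcr
    print(f"{label}: ${benefit:,} in implied benefit")
```

At the median BCR of 9, the $10 million appropriation would imply roughly $90 million in economic benefit.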
The returns, however, don’t stop there.
Diplomatic Benefits
Perhaps the greatest benefit of establishing the AISI is that it positions the U.S. to steer the global AI conversation.
If approved, the U.S. AISI would not be alone. Since the UK established its own version of the AISI at the landmark AI Safety Summit in 2023, a cascade of other nations have followed suit, establishing institutes of their own to boost their respective voices in the global AI conversation. At May’s AI Seoul Summit, these institutes formally agreed to join efforts, setting up a collaborative consortium to share data, threat intelligence, and resources while coordinating work and investments where possible. In this context, a U.S. AISI represents an important seat at the diplomatic table and some degree of influence over this potentially potent global effort.
This matters in three ways.
1. Agenda setting. International collaborations often carry powerful weight in shaping our understanding of, and approach to, global risks. The Intergovernmental Panel on Climate Change’s unique success is illustrative: the organization’s climate assessments are treated as definitive and form the benchmark for climate policy action. This budding collaboration of AI agencies has already published threat reports that could come to set a similar standard in AI safety. If the collaboration continues and grows in influence, it is squarely in the U.S. interest to be involved and to set the tone and content of the conversation.
2. Standards. Consistent international standards are a major goal of this collaboration, and by participating in discussions, the U.S. would gain the greatest voice. NIST has a proven track record in standards diplomacy and has frequently positioned American standards as the global default. In the recent, related example of cybersecurity, NIST has been in the global driver’s seat, setting encryption standards and formalizing the baseline international approach to cyber risk management. In any AI policy negotiation, NIST will command both the diplomatic expertise earned through its cybersecurity efforts and the authority of U.S. economic leadership. This is a powerful recipe for success.
3. Budget magnification. While the United States cannot dictate the budgets of other nations, the U.S.’s commanding industrial and diplomatic position could provide strong influence over how other participants spend their dollars. Currently, the UK has pledged $133 million, Canada $37 million, and Singapore $38 million (all USD). The result: by investing $10 million in the AISI, Congress effectively buys leverage over another $208 million, a figure likely to grow as further nations pledge. This potential adds further heft to the already strong fiscal case for the legislation.
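A quick back-of-envelope check of the pledge figures cited above:

```python
# Pledges announced so far, in millions of USD (figures from the text above).
pledges = {"UK": 133, "Canada": 37, "Singapore": 38}
total = sum(pledges.values())
print(f"Total pledged: ${total}M")  # $208M

# Leverage ratio relative to the proposed $10M U.S. investment.
print(f"Leverage: {total / 10:.1f}x")  # 20.8x
```

In other words, every U.S. dollar invested would sit alongside roughly twenty dollars of allied spending.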
Futures
While Congress still must negotiate the differences between the dueling bills, establishing the AISI is likely a worthwhile endeavor. As with any government program, there is the possibility of waste and distraction; impact ultimately will depend on how the executive branch organizes the body and invests the money. In this case, however, there are also reasonable concerns that interest-group influence could unduly steer the results and direction NIST takes. In the last year, Washington has been flooded by deep-pocketed AI safety activists, Silicon Valley titans, and many other factions trying to twist national policy to favor their interests. To maximize the AISI’s potential, and minimize outside influence, Congress should heed these concerns and consider measures to preserve NIST’s independence; for example, by removing provisions in the legislation that would enable the AISI to self-fund through private sector gifts.[2]
With such small tweaks, the AISI can be fashioned into a strong institution that not only generates high economic return but provides the U.S. with a strong diplomatic hand to steer the global AI safety efforts that will continue, with or without American involvement.
[1] Note that in the House version, this body is bafflingly referred to by an alternate name, the “Center for AI Advancement and Reliability.”
[2] A necessary first step is to remove from the legislation the AISI’s authority to accept private gifts. Agencies funded by gifts will be inherently biased toward gift givers and, in a space awash with lobbying funds, more likely to be captured or influenced by special interests. The AISI cannot both be an effective, unbiased standards setter and accept private gifts. These are mutually exclusive.