A Critical Look at Critical Infrastructure
Digging into the true meaning of this popular AI policy theme
As policymakers put pen to paper to “do something” on AI, an increasingly common theme is “critical infrastructure” (CI). 180 days out from the AI executive order, this emphasis is on full display. Just days ago, the Department of Homeland Security debuted a new AI Safety and Security Board to set a course toward the “safe and secure development and deployment of AI technology in our nation’s critical infrastructure.” While boards and committees are common, this effort is no joke. Here we see the full force of the administration’s rolodex. Membership includes OpenAI’s Sam Altman, Nvidia’s Jensen Huang, Anthropic’s Dario Amodei, and Microsoft’s Satya Nadella: an AI industry who’s who.
The administration isn’t playing around.
This is just the tip of the CI iceberg. This week, CISA followed up, issuing its official safety guidelines for critical infrastructure operators. Legislation is also in progress. In November, a bipartisan group of six senators released a bill to regulate certain uses of AI in critical infrastructure. Meanwhile, in California, SB 1047 would mandate that AI developers ensure certain AI systems cannot enable infrastructure attacks. Picking up on the energy, OpenAI has even begun messaging AI as the “next critical infrastructure.” This narrative has force, and industry is leaning in.
This flurry of action is no surprise. Our most critical systems, such as the grid, water, and pipelines, often have little room for error, and when disaster strikes, the impacts can be profound. Certain systems simply have to work. Perhaps CI represents a well-targeted, lowest common regulatory denominator.
At least until you look under the hood.
Recently, I published a policy brief exploring the often-unrecognized reality of critical infrastructure. This legal category is deeply misunderstood, and its true scope goes largely unexamined. While our most critical systems may be a logical regulatory starting point, what the category actually includes defies practicality.
Critical infrastructure doesn’t mean what most people think it means. Before regulating AI’s use in, or impact on, CI, we should ask: what exactly are we regulating?
A Critical Look at Critical Infrastructure
To dig into the CI reality, I’ve pulled a few excerpts from my recent report.
First, what does “critical infrastructure” even mean? Under the Patriot Act, CI is defined as:
“systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.”
Note the unboundedness. In the law, there are no examples, formulas, or criteria to limit what might count. Complete discretion is in the hands of the executive. The result? Policy bloat. Today, CI is subdivided into 16 industrial sectors “ranging from the clearly essential, including energy and water, to the decidedly optional, such as commercial facilities.”
The spread of these sectors is unwieldy:
“In its profile of the chemicals sector…[CISA includes] cosmetics, perfumes, bookbindings, and vehicle paint.”
The risk of a scratched car, it seems, is a matter of national security.
“Under transportation, there are critical systems such as our trains, but also vanpool and rideshare services.”
Uber and Lyft are likewise critical.
“Under commercial facilities, we find perhaps the biggest sprawl of ‘optionals,’ including the nation’s 2.1 million office buildings and retail shopping centers as well as the entire hotel, film, broadcast, and casino industries.”
Apparently, we can’t risk a day without gambling.
Beyond these examples, there is so, so much more. To distill the overall, sprawling picture:
“[T]he combined GDP of just 3 of these 16 sectors—the chemicals sector (25 percent), commercial facilities sector (20 percent), and healthcare sector (17.4 percent)—represent over 50 percent of the total US economy.”
To emphasize: this is just three sectors, together accounting for 62.4 percent of GDP. While the government doesn’t record GDP figures for every sector, the total economic spread is likely far higher. The best answer to “What is critical infrastructure?” is, it seems, another question: “What isn’t critical infrastructure?”
Why Care?
Unfortunately, this unwieldy scope jeopardizes policy success. Let’s look at two examples:
1. CISA’s Safety and Security Guidelines
Sprawl stresses prioritization.
Government resources will always be limited, and when scoping is this broad, results are bound to be watered down. CISA’s new critical infrastructure safety and security guidelines illustrate the challenge. In the AI executive order, CISA was tasked with translating the National Institute of Standards and Technology’s AI Risk Management Framework into “relevant safety and security guidelines for use by critical infrastructure owners and operators.” A worthy task, yet given CI’s scope, an impossible mandate.
The resulting document is a casualty of breadth. Attempting to appease disparate CI parties, the report reads at the highest of high levels, offering neither novelty nor context specificity. The document is generic to a fault.
In CISA’s own words, “the guidelines…are written broadly so they are applicable across critical infrastructure sectors, DHS encourages owners and operators of critical infrastructure to consider sector-specific and context specific AI risks and mitigations.” Their central guideline: figure it out yourself.
The fault here isn’t CISA’s. The organization does essential work, is grossly underfunded, and the mandate’s 180-day time frame made targeted risk management and analysis across these sectors impossible. If CI were better prioritized, perhaps the agency could have marshaled itself to produce lean, mean, and effective guidelines.
2. California’s AI Regulation
Another potential challenge is overregulation. When a bill directly regulates CI, or even just implicates it, the category’s sprawl makes overreach the likely result.
Right now, California is considering SB 1047,[1] an expansive bill to regulate AI risk and a perfect illustration of CI’s policy strain. Under the bill, AI developers would be required to make a “positive safety determination” that an AI system doesn’t have a “hazardous capability.” This is defined in part as the potential to yield “[a]t least five hundred million dollars ($500,000,000) of damage through cyberattacks on critical infrastructure via a single incident or multiple related incidents”[2] if used “in a way that would be significantly more difficult to cause without access to a covered model.”
Given CI’s scope, this bill risks an unwieldy, unworkable information burden for AI developers. The litany of potential targets under the CI umbrella is so vast and so varied that no developer can reasonably be expected to imagine every possible risk. What’s more, the probability of a qualifying incident is not low. In 2017, the hack of Equifax, part of our financial services critical infrastructure, yielded $1.4 billion in damages, nearly triple the bill’s threshold. As for AI’s cyber utility, this tech is quickly augmenting the cyberattack chain. Already, OpenAI and Microsoft report that threat actors are using AI to generate spear-phishing emails and assist with translation. On the cutting edge, voice cloning is enabling multimillion-dollar thefts. Soon, we may even see AI generate malware outright as code generation tools are turned toward malicious software.
The one-two-three punch of these varied AI uses, the likelihood of a qualifying incident, and the unwieldy array of critical infrastructure creates a very real risk that developers will be found at fault. The possible result: a chilling effect that binds legally cautious developers.
The wide net of critical infrastructure puts AI innovation at risk.
Doing better
These examples should demonstrate that CI represents a poorly tuned foundation for policy and regulation. For those seeking targeted AI safety, it undermines state capacity. For those seeking innovation, it risks overregulation. No matter the perspective, the current state doesn’t work.
Thankfully, we can do better. Here are two potential legislative fixes:
1. Get specific: If targeted policy is the goal, directly list those targets. If you want grid regulation, say so. If you want safe pipelines, spell it out. The benefit here is clarity and simplicity. The challenge, however, is future-proofing and flexibility.
2. Build criteria: Another option is to set out a list of criteria to flexibly determine what counts. Unlike hard-coded targets, criteria can be technology agnostic and evolutionary. As new infrastructure arises and old infrastructure fades in importance, the list of critical systems can expand and contract in tandem.
Both options are an improvement, and (theoretically) neither should be difficult to legislate. Critical infrastructure isn’t controversial, and related bills regularly receive bipartisan support. Fixing this problem could be easy, and it would set us up for success. At the very least, policymakers should be clear-eyed. This is a big package of industries, and that scope should be respected. It carries real risk, and that risk must be accounted for in policy.

Prioritizing everything means prioritizing nothing.
For more on this issue, please read my recent policy brief: Critical Risks: Rethinking Critical Infrastructure Policy for Targeted AI Regulation
[1] For more, see my colleague’s incisive analysis of this bill.
[2] Note that while the California bill uses an ever-so-slightly adjusted definition, differing largely in word order, the California government has explicitly deferred to the federal classification scheme to give substance to the law. Among the states, inheritance of the federal approach is common, if not universal.