

The Emerging Global Divide Over AI Regulation
The EU, China, and US are taking diverging approaches to AI tech oversight, and creating strange bedfellows.

Lawmakers around the world are scrambling to learn about and regulate AI. As Blair Levin and Larry Downes point out in the Harvard Business Review, and as Adam Thierer notes, governments at every level, from US states to federal agencies to global bodies, are considering and enacting AI regulations, and it's unclear who has the authority to regulate. Many AI safety researchers and policymakers, and even some in industry, have urged Congress to create new AI laws and an AI regulator. That would be a mistake. As I noted in an interview with Jim Pethokoukis' Faster, Please! substack:
Researchers, technologists, and policymakers make a big mistake in viewing AI as a new category of study, industrial policy, and law. AI should be "unbundled": analyzed by its specific applications, with existing laws applied. This is good practice for any general-purpose technology.
Take motors. Motors are used in warplanes and in electric toothbrushes. Policymakers apply established laws of war (with modifications for the technology) to warplanes and established products liability law (with modifications for the technology) to electric toothbrushes. They do not apply "motor law," "motor audits," or "motor ethics." Motor technology is "unbundled," not "bundled" under a single framework for disparate applications.
Note that the "unbundled" and "bundled" approaches do not map onto "deregulatory" and "regulatory" approaches. They do not cut right-left, Republican-Democrat, or big company-startup. In fact, the two approaches create strange bedfellows. There are members of both parties interested in a new agency to regulate AI. And you'll find me and the limited-government policy experts at TechFreedom in broad agreement with progressives Tim Wu and FTC Chair Lina Khan about how AI oversight should proceed. As the Washington Post summarized some of former Biden White House advisor Tim Wu's views:
Don't: Create an AI-focused federal agency
Do: Enforce the laws on the books
Wu notes that a new agency "would create heavy compliance costs for market entry," among other problems. I would add that broad new laws directed at AI will create public policy blind spots: AI doomers, for instance, fixate on non-existent or unlikely harms while actual and imminent AI harms go unremedied.
So I was actually encouraged by the recent joint statement by the FTC, DOJ, EEOC, and CFPB. The statement represents the "unbundled" approach, which should tend to encourage AI development and innovation. These agencies say they will apply their existing laws to AI technologies, pledging "to vigorously use our collective authorities to protect individuals' rights regardless of whether legal violations occur through traditional means or advanced technologies."
And I nodded in agreement as FTC Chairwoman Khan elaborated in the New York Times:
Although these [AI] tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market. . . . Existing laws prohibiting discrimination will apply, as will existing authorities proscribing exploitative collection or use of personal data.
Certainly, while Wu, Khan, and I agree on a general approach, there would be disagreements about the specifics of oversight and enforcement. The underlying substance of existing laws, and how they're enforced, matters. For instance, though we might agree that, say, product liability rules and FTC rules should apply to AI services (not new "AI laws" from an AI regulatory body), product liability rules and FTC enforcement can, in the wrong hands, destroy companies and industries. Likewise, a vague, broad "bundled" approach, which EU regulators are close to codifying, could go essentially unenforced, letting AI companies and services grow relatively unimpeded, for better or for worse. But the latter approach creates many more opportunities for regulator and interest-group mischief and misdirected priorities.
China v. Europe
The choice of approach matters, and the two approaches are distinct enough that researchers have noticed differences among nations. Researchers at the Carnegie Endowment describe the same split in AI regulation (they say "horizontal" v. "vertical" rather than "bundled" v. "unbundled"), with the EU and China as the two competing models:
Neither the EU nor China is taking a purely horizontal or vertical approach to governing AI. But the EU’s AI Act leans horizontal and China’s algorithm regulations incline vertically.
Now, where is AI technology gaining investment and adoption? China, which is taking the unbundled approach. (Japan, likewise, has so far avoided the horizontal, EU-style approach.) This is not to endorse all AI uses in China, merely to note that an unbundled approach drives investment. Europe, in contrast, is looking more like the place where AI technology goes to die. The EU's AI Act, at last count, had 89 preamble sections and 85 articles. The annex to a single article, on designating a highly-regulated "high-risk" AI system, requires regulators to weigh 15 factors and subfactors.
Perhaps unsurprisingly, tech companies are sometimes avoiding Europe, since local laws could expose them to punitive, unpredictable liability. As the Washington Post reports:
Google planned to launch its chatbot Bard in the E.U. this week but had to postpone that move on receiving requests for privacy assessments from the Irish Data Protection Commission, which enforces Europe’s General Data Protection Regulation. Italy temporarily banned ChatGPT amid concerns it violated Europe’s data privacy rules.
OpenAI CEO Sam Altman recently suggested the company might avoid European markets, depending on how the EU enforces its vague rules.
Which Approach in the US?
As the Center for Growth and Opportunity and other free-market research centers point out, markets themselves are the public's first-line defense against "unaligned AI" services. However, as with any general-purpose technology, whether engines, electricity, or the Internet, people will misuse AI and put it to criminal use. Will the US take the unbundled, sectoral approach or the horizontal (EU) approach? The Carnegie researchers place the US somewhere in the middle, and it's not presently clear which approach will prevail. The Washington Post reports:
Google is urging the federal government to divvy up oversight of artificial intelligence tools across agencies rather than setting up a single regulator dedicated to the issue, striking a contrast with rivals like Microsoft and OpenAI in a new filing to the Biden administration.
I don't think Microsoft and OpenAI want a single regulator for all AI. As noted above, OpenAI's Sam Altman appears reticent about the EU's "horizontal" approach to regulation. However, Microsoft and OpenAI seem open to a dedicated regulator and licensing regime for "highly capable AI foundation models." Altman even seemed to welcome the idea of "government auditors sitting in our buildings."
Nevertheless, US lawmakers and regulators have a choice: create new, vague "AI laws," or apply existing laws (antitrust, copyright, Sec. 230, product liability, constitutional law) to new technologies. US policymakers and US companies seem to be leaning towards the latter approach, which is a good sign for AI investment and legal clarity. But there is a growing chorus for the horizontal, EU-style approach.