Introducing AI Progress
Too many lawmakers and companies are approaching artificial intelligence in the wrong way
One of the core goals of this newsletter is to develop a classically liberal model for AI policy. In this essay, Matthew Mittelsteadt joins Brent Skorup, a Senior Fellow at the Mercatus Center at George Mason University, to take first steps toward that goal. Here we introduce AI Progress, a new framework to guide AI policy.
A version of this essay was originally published in Discourse magazine.
Thanks for reading Digital Spirits! Subscribe for free to receive new posts and support my work.
Millions of people around the world have been struck by the remarkable abilities of ChatGPT, Dall-E and other recent artificial intelligence (AI) services. AI technologies, including machine learning and computer vision, have been around for decades, yet recent rapid developments have led to what looks like the beginnings of widespread commercial and consumer adoption.
With AI in the zeitgeist and our governing institutions taking notice, policy may soon follow. Any institutions and public policies put in place today could have far-reaching ramifications, shaping the trajectory of AI technology and commerce. We fear that popular approaches to AI governance may lead lawmakers—and the industry—into costly dead ends and distorted priorities. To prevent that, we propose AI Progress, a framework to guide AI development and AI policy decisions. Fundamentally, our framework is based on an “unbundled” and “applied” view of AI technologies and policy: AI is not one technology but many, and policies must distinguish between different uses of AI. Further, policy should be guided by the goal of enhancing economic growth, social pluralism and individual liberty. Investors, researchers and institutions, we believe, should prioritize these goals.
Unbundle AI Technologies
First, we approach—and will urge researchers, policymakers and technologists to treat—AI as several general-purpose technologies. AI must be analyzed by its specific applications in many industries, not as a new category of study, industrial policy and law. This effort is necessary, unfortunately, because lawmakers have tended to lump technologies together and view AI as a distinct industrial sector. This conflation problem is signaled by, for instance, the 2020 National AI Initiative Act, the White House Framework for an AI Bill of Rights and NIST’s AI Risk Management Framework. Unfortunately, this trend has gone global; see, for instance, the Council of Europe’s drafting of a “convention on artificial intelligence, human rights, democracy and the rule of law.”
To illustrate our use-based, “unbundling” approach, consider motors, the high technology of the early 20th century. Motors and engines, like AI, are powerful and labor-saving general-purpose technologies. The current policymaking approach to AI would be akin to President William Howard Taft announcing a 1910 “Motor Bill of Rights,” joined by a National Motor Act of 1910 and a “National Motor Strategy.” These public policies are fictional and, to our eyes, ridiculous. Early motors, as we know today, would eventually specialize and differentiate into uses ranging from personal vehicles, commercial trucking and rail to warplanes, remote-controlled cars, toothbrushes and washing machines (Benedict Evans made a similar point, reacting to the “Internet of Things” discussions, a decade ago).
Motor “uses” and effects could not be anticipated and accurately prioritized in 1910. Certainly, people had an inkling motor innovations would be useful in, say, warfare, industry and agriculture. However, huge social and economic changes effected by motor innovations—like suburbanization, air travel and housework reductions—were not obvious in the early Motor Era. A (hypothetical) National Motor Strategy executed by agencies would have diverted governmental and commercial funding and attention to politically salient motor industries, starving investment in and, perhaps, attention to other areas. A Motor Bill of Rights and “international convention on motors and rule of law” would have created future “motor problems” while ignoring the actual commercial potential of motors and the social problems that might arise.
Similarly, a coherent national AI strategy would need to anticipate the risks and commercial potential for, say, how an online clothing retailer will use AI, how drone delivery companies will use AI and how biomedicine companies will use AI. It’s fantastical to contemplate. The legal, ethical and commercial issues raised by AI depend on the use—not the technology. AI-assisted “sentiment analysis” could be used by a customer service bot at a plumbing services call center or by law enforcement and intelligence agencies analyzing social media posts and private emails. Continuing to conflate AI technologies will result in public policy blind spots.
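The point that the policy questions follow the use, not the technology, can be made concrete. Below is a deliberately crude, hypothetical sentiment scorer (the word lists and function are our own invention, nothing like a production system): the identical code serves both a plumbing call center triaging complaints and a surveillance program scanning social posts, and only the second use raises civil-liberties questions.

```python
# Toy lexicon-based sentiment scorer -- purely illustrative. Real sentiment
# analysis uses learned models, but the policy point is the same: one
# technique, many uses, and the use is what matters for law and regulation.

POSITIVE = {"great", "helpful", "fast", "resolved", "thanks"}
NEGATIVE = {"broken", "angry", "slow", "protest", "leak"}

def sentiment_score(text: str) -> float:
    """Return a crude score in [-1, 1]: net positive-minus-negative word ratio."""
    words = text.lower().split()
    hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

# Identical code, two very different uses:
call_center_note = "thanks the plumber was fast and the leak resolved"
surveillance_feed = "angry crowd at the protest slow police response"

print(sentiment_score(call_center_note))   # positive on net
print(sentiment_score(surveillance_feed))  # negative on net
```

Nothing in the function distinguishes the two inputs; any sensible rule about the second use would have to be written as a rule about surveillance, not about "sentiment analysis" as a technology.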
Unbundling general-purpose technologies—whether AI, the internet or motors—means public policy must focus on how law and regulation will apply to new technologies. Products liability law, insurance regulation, constitutional law, international law and common law—not a newly created system of AI law—must evolve and apply to a diversity of motor uses, internet uses or AI uses.
While AI may indeed transform society, it is not self-evident that AI technologies will improve liberal conceptions of progress—namely, economic growth, pluralism and individual liberty. Therefore, at this early stage of AI development, it’s important for classical liberals to persuade technologists, AI companies and policymakers that AI is compatible with and can even expand these areas of progress.
Why Emphasize Economic Growth?
Economic growth creates new services, jobs and wealth, thereby diminishing zero-sum fights over resources. Economic growth also means people can live healthier, longer lives than earlier generations and have more time for leisure, entertainment and family. AI technologies have powerful labor-saving functions, but it is unclear at present whether they will, on net, generate more jobs, services and economic outputs. Eli Dourado puts the issue succinctly: “What if AI ends up like the Internet—transformative to our daily lives while somehow not actually delivering major productivity gains?”
AI could be a key tool to mitigate some of the most wicked policy challenges that could threaten our future growth. At this early stage of AI development, we urge policymakers and AI companies to prioritize areas where AI is likely to generate economic growth. The biggest expenses for American households—and U.S. industrial sectors—include housing, energy, transportation, healthcare and child care. We aren’t aware of AI technologies that could plausibly make large improvements in housing, energy and child care production in the near term, though we’d love to be proven wrong. There are, however, AI technologies that could transform transportation and healthcare, two major sectors we think are most promising for AI and economic growth.
In transportation, AI technologies—namely, computer vision—are being tested and used for managing car traffic and identifying dangerous driving. Computer vision and autonomous systems could revolutionize personal vehicle use and make driving safer and cheaper. In aviation, federal regulators are planning a system of air corridors for a new era of aviation, including an influx of small aircraft and drones. AI technologies can be used to dramatically improve efficiency and safety and reduce the labor costs of creating and maintaining these corridors.
In healthcare, large language models are improving medical research knowledge by making it easier and quicker to scan vast quantities of biomedical literature to answer medical questions. AI tools have shown promise in accurately detecting diseases like cancer and Alzheimer’s in patient scans. Perhaps an even more impactful use of AI is in the invention of new drugs and treatments. Using AI tools such as AlphaFold, researchers have demonstrated the ability to discover drug target molecules in days rather than years. Hopefully, these and related AI technologies will eventually improve other areas including diagnoses, treatments and health outcomes.
Many people are concerned about the effect of AI systems on employment. New technologies have mixed, but generally positive, effects on jobs and growth. In the past century, high-quality, low-cost motors, for instance, eliminated the need for millions of farm laborers around the world. But commercial demand for motors created millions of manufacturing and technician jobs, not to mention economic surplus allowing millions of people to upskill and gain education for white-collar and service jobs. A pro-growth AI policy demands recognition of potential downsides and problems, but also a vision for ways to ensure AI-generated economic benefits are broadly shared.
Why Emphasize Social Pluralism?
Social pluralism is a fundamental prerequisite for prosperous liberal democratic societies. Democracy functions through competitive ideation and through working within shared institutions despite difference. While related to individual liberty, and arguably a catalyst of economic growth, improving social pluralism should be considered a goal in its own right.
What does it mean for AI to improve social pluralism? First, it involves improvements in social coexistence. AI systems will not be deployed in a generic society, but one of varied languages, cultures, identities, technologies, beliefs, religions and needs. Public policy should enable AI to improve social understanding, knowledge and nuance through applications that break down barriers and allow learning and connection.
Second, it involves improvements in social accommodation. AI systems are flexible and have the potential to match technical design with the nuanced, bespoke and spontaneously ordered needs of a pluralistic society. Public policy should recognize this flexibility and seek to break institutional, technical and structural barriers that may limit AI’s ability to accommodate social needs. Already, AI has shown potential in both of these aspects of pluralistic progress.
Perhaps the biggest improvement can be found in communications. Linguistic barriers are undoubtedly a hurdle for cross-cultural interaction. Today, 60% of internet communication is in the English language; for non-English speakers, the vast majority of this shared information space is inaccessible. Machine translation can mediate access. Telegram, a popular messaging application, recently introduced automatic chat translation, testing AI’s ability to break down this barrier. With such tools, users may no longer need to default to a shared language: all can communicate in their own tongues. Such applications have the potential to ease cross-cultural interoperability and understanding.
Improvements in communications could also go beyond translation. Automatic text simplification technology, which seeks to render complex text in a more accessible and easier-to-understand format, has shown initial progress in improving written language comprehension. This could help increase access to government communications that in many cases, even in critical public health situations, aren’t written for clarity or for a broad range of reading or education levels.
Innovations such as auto-captioning and driverless cars could improve social accommodation. Already, applications such as TikTok and Netflix have implemented auto-captioning AI on all videos, expanding content access to deaf viewers. Here, AI has broadened these formerly one-size-fits-all services to accommodate the bespoke needs of the physically impaired. Driverless cars, meanwhile, may offer expanded mobility, opening transportation to elderly or disabled individuals who may be unable to drive due to physical limitations. It remains to be seen whether the impact of such technologies will be transformative, modest or negligible. The benefit, however, is expanded social participation and the ability for a greater number of people to engage with society and add to our shared culture.
While AI systems indeed show great pluralistic potential, progress is not guaranteed. Realizing the potential of these innovations will require the time and space needed to tinker with and test system designs. To that end, policymakers should focus on ensuring that regulation improves access to these emerging technologies, unburdens innovation and allocates research funds to pro-pluralistic applications. Blunt, one-size-fits-all policies should be avoided. Inflexible policies will put AI systems into unnatural boxes and restrict their abilities to incorporate the nuanced and application-specific requirements needed to reflect and serve a pluralistic society.
AI systems also have the potential to directly forestall progress. Certain forms of AI can spread misinformation and illiberal ideas, AI-powered content moderation can suppress worthy ideas and AI chatbots could be tools of cultural hegemony and soft power. One of the greatest challenges will be “factional AI systems”—information services that target and serve specific nations, communities, beliefs or ideologies. Such systems could accelerate what social media has done—fragmenting people into self-selected communities. The CEO of Gab, an alt-right social media platform, has referred to AI as “the new information arms race” and called for the creation of ChatGPT-like systems coded to generate anti-liberal information and “biblical truths.”
While it’s questionable whether the market will support such parochial systems, government actors are not immune from the temptation to use AI for their own ends. Generative AI could be a powerful propaganda and soft-power tool used to project state-sanctioned thought and cultural attitudes. Such attempts to limit or steer society’s natural diversity can only undermine pluralism. Factional AI systems can enhance diversity of thought but can also exacerbate social and political frictions. Researchers, companies and civil organizations must anticipate this and look for ways that institutions and factional AI technologies can improve social harmony.
Why Emphasize Individual Liberty?
Technologies throughout history—paper, the printing press, gunpowder, cameras, the internet—have had mixed effects on individual liberty. Most nations, therefore, have constitutional laws and norms that constrain government officials’ control of social and political life and surveillance of private individuals. Absent these constraining laws and norms, tiny numbers of people in government would be able to infringe on the individual liberties of millions.
AI technologies, likewise, have the potential to act as double-edged swords. Cameras in public areas are already pervasive—there are tens of thousands in DC-area public schools alone—and cameras will be inputs for AI-assisted government surveillance. A 2021 Government Accountability Office study found that, of 24 federal departments and independent agencies analyzed, 18 were actively using facial recognition. However, facial recognition has privacy and safety benefits. It can also help secure homes, businesses and private property against intruders and criminals. Further, it can improve cybersecurity and secure access to private records and to personal devices like iPhones.
AI technologies are great—and getting better—at analyzing massive amounts of text and images and producing novel, useful outputs. U.S. government agencies and large commercial companies have collected huge data sets in the form of social media sentiment, credit reports, criminal records and location data, just to name a few. As the government of China has shown, AI technologies in the hands of law enforcement can be effective at pacifying a nation and, sometimes, punishing political dissidents. Americans have a long history of limited government and constitutional protections for personal privacy. However, norms can erode, and persistent government officials will use temporary emergencies and pretexts to evade or ignore laws.
As a general rule, AI progress means anticipating increasing state control and harmful state surveillance. One part of the public policy equation will be encouraging liberty-preserving innovation. Homomorphic encryption, for instance, may allow an individual’s data to remain private while still being usable in AI systems. Public policy should remove any barriers to the development and deployment of privacy-preserving technologies.
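The homomorphic-encryption idea mentioned above can be illustrated with a toy. The sketch below is a miniature Paillier-style additively homomorphic cipher; the tiny primes and all parameter choices are ours, for demonstration only, and nothing here is production cryptography (real deployments use vetted libraries and keys thousands of bits long). The point it demonstrates: a server can add two encrypted values without ever seeing either one.

```python
# Toy Paillier-style additively homomorphic encryption -- illustrative only.
# Insecure by design: demo-sized primes, no padding, no side-channel hygiene.
import math
import random

# Key generation with small demo primes.
p, q = 1117, 1103
n = p * q
n_sq = n * n
g = n + 1                       # standard simplification for the generator
lam = math.lcm(p - 1, q - 1)    # Carmichael function of n
mu = pow(lam, -1, n)            # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    """Encrypt m (0 <= m < n) with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    x = pow(c, lam, n_sq)
    return ((x - 1) // n) * mu % n

# The key property: multiplying ciphertexts adds the hidden plaintexts,
# so a third party could sum users' private values without seeing them.
a, b = encrypt(42), encrypt(58)
print(decrypt((a * b) % n_sq))  # 100
```

Schemes in this family are one reason "privacy-preserving AI" is more than a slogan: aggregate statistics or model updates can, in principle, be computed over data that stays encrypted end to end.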
More crucial, however, will be liberty-serving norms, institutions and laws. Again, a “use-based” approach to AI makes the issues clearer: How does the Fourth Amendment apply to law enforcement drone surveillance? What types of consumer data should federal agencies be prevented from purchasing, combining and analyzing? What behavioral and criminal histories justify placement on a no-fly list? AI is an incredible labor-saving technology, and government agencies, like their commercial counterparts, will increasingly rely on machine learning and computer vision technologies. Congress, researchers and industry leaders must be in conversation to help decide what technologies and practices are out of bounds.
Demand Empiricism and Time
“Progress studies,” a term coined by Tyler Cowen and Patrick Collison and defined in an essay in The Atlantic, is the “study of the successful people, organizations, institutions, policies, and cultures that have arisen to date.” Creating public policies that tie AI technologies to liberal progress requires rigorous analysis and a knowledge of history. In too many fields of study, however, even high-quality analysis offers nothing actionable to policymakers or industry. Progress studies as a field, Cowen and Collison say, must be “closer to medicine than biology: The goal is to treat, not merely to understand.”
AI public policy should therefore be shaped by two essential ingredients: empiricism and time. Empiricism—decision-making based on factual, observable data—may sound basic, yet it is lacking in existing AI discourse and policy research. Over the past decade, and especially since the rise of generative AI, people have tended to sort themselves into narrative camps. Some are hyper-optimistic about the assumed transformations AI will bring, while others hyperfocus on imagined social upheaval, job losses and malicious sentience. Both camps make the mistake of forecasting an unknowable AI future and treating hype-tinged narratives as fact. As Rohit Krishnan rightly points out, “Beyond the immediate future everything we can think of is fantasy.” AI policy research too often becomes AI discourse, losing fact to narrative.
While both optimism and concern are natural—and indeed required to ensure continued progress toward effective, safe and useful AI systems—these impulses should be rooted in the here and now. Decisions should be based on what can be measured and observed: What does the technology look like today? How is it actually being used? What paths forward look promising? What problems can we empirically identify today? Perfect information may be impossible, yet the best choices will always be based in fact. To ensure AI advances growth, pluralism and liberty, good policy will steer clear of narratives and focus on what can be proven.
Time is the second essential policy ingredient. As AI technology evolves, we may see splashier innovation and the emergent challenges that will naturally follow. To meet these challenges, policymakers must have patience and first give those challenges time to play out. In many cases, solutions can be found if engineers are given the time to imagine and implement technical fixes. In others, users will need to adapt. People will need time to understand the limitations of this technology, when not to use it and when to trust it (or not). These norms cannot be developed without giving people the leeway needed to learn and apply these innovations.
Finally, some problems cannot be solved in a decentralized manner and do require government regulation. Even here, however, time is essential. Properly shaping regulations requires a firm understanding of the problem they are intended to solve. Only through time can a problem be measured, defined and understood. Acting too quickly will result in regulations that misfire, failing to mitigate the challenge while causing unintended harm.
We’re eager to watch the development and mass adoption of AI technologies. However, we view with some trepidation the commercial hype on the one hand and the early attempts at regulating AI on the other. The popular view of AI as an industrial category, not a general-purpose technology, is misguided and harmful. AI companies and government officials should “unbundle” AI technologies rather than develop grandiose—but essentially useless—national strategies, business-development approaches and regulatory frameworks. It then becomes easier for both lawmakers and companies to prioritize AI advancement in areas likely to generate economic growth, social harmony and individual liberty.
I really like the analogy to motor-related technology. Just as regulations for motors in warplanes look completely different from regulations for motors in hoverboards, regulations for AI in consumer facial recognition software and regulations for private-sector AI used to develop drugs should look completely different. Even if the AI concept of using an algorithm as a means to achieve an end is the same, the externalities created by the process of reaching these vastly different ends are completely different. We need government to address these externalities at the industry/economic-sector level, not at the AI-concept level.