A dictionary as clear as the FTC’s AI definition
In late February, the Federal Trade Commission (FTC) issued a stark message to industry: “Keep your AI claims in check.” AI hype is at a high, and so too are worries that AI tech might tempt Silicon Valley’s many hucksters. As a result, regulators are eager to snuff out the spark of potential fraud. The FTC specifically warns of what it calls the “fake AI problem”: a slew of products falsely claiming the flashy gloss of artificial intelligence simply because it’s the next big thing.
Amid the hype, truth in advertising is certainly at risk. There are indeed many factually inaccurate claims flying around, and few individuals have a concrete sense of AI reality. The challenge with the FTC’s “guidance,” however, is that it offers so little of it. The FTC wants to crack down on AI claims, yet all its recent messaging fails to clarify what exactly AI even is.
But to enforce, you first need to define. In its guidance the FTC asserts that its technologists can “look under [a system’s] hood” to verify whether a product is “AI-enabled,” a claim that presupposes some kind of clear-cut technical definition of AI.
The problem is that no such definition exists.
Whether the FTC knows it or not, by making artificial intelligence a regulatory target it is wading into a festering can of definitional worms. Artificial intelligence is a term that has eluded consensus definition among researchers for 70 years. There is a reason for this. While some might think the concept is self-evident and the enabling technology relatively consistent, that is hardly the case. AI technology is constantly changing, and the term can mean many things to many people.
While the FTC has teed up this definitional question, the challenge is broader. Any agency interested in AI regulation will require a working definition of AI. Getting this right is essential. So what does AI even mean, and why is it so hard to pin down? What challenges do agencies like the FTC potentially face? And more broadly, can we establish a constructive and clear definitional path toward effective “AI policy”?
This post will be in two parts. Today, I will investigate the hazy complexities of the term. It’s not as straightforward as the public, the FTC, and many other agencies might think. In Part 2, I’ll explore how AI’s definitional uncertainty may undermine regulatory efforts and suggest alternative regulatory paths agencies can pursue.
The Technical Mess
So why is AI so hard to define? The reason, in part, lies in the technology’s diversity. Today most people wrongly assume AI systems match a relatively consistent form factor: they run on machine learning algorithms, learn from big data, use a relatively constrained set of hardware, and their final product tends to be a useful, yet still probabilistic, black box. While such designs are certainly common, AI can take many forms beyond this simple model.
The range of AI tech is truly unwieldy. To grasp this diversity, let’s examine each element of the AI Triad: algorithms, compute (microchips), and data. Each shows just how varied this tech can be and just how often AI breaks the rules of this confining standard model.
1. Algorithms
The (recent) classic example of an AI algorithm is a system like GPT-4. This “large language model” does what its name suggests: it is designed to analyze and generate language, and more specifically text. It’s a chatbot. The algorithmic secret sauce behind GPT is a technique called machine learning (ML). As its name implies, this algorithmic class is inspired by human intelligence, and more specifically by the data-driven learning process humans use to understand the world.
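To make the data-driven idea concrete, here is a minimal sketch of the machine learning loop using scikit-learn and a small toy dataset. It is purely illustrative; it shows the pattern of learning from examples, not how a system like GPT-4 is actually built.

```python
# A minimal machine learning example: fit a simple model to labeled examples,
# then check how well it generalizes to data it has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # small handwritten-digit dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000)    # a classic, simple learner
model.fit(X_train, y_train)                  # "learning" = adjusting parameters to fit the data

print("accuracy on unseen digits:", model.score(X_test, y_test))
```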
This technique’s unique effectiveness has bred wide popularity. Today machine learning can be found in GPT-4, DALL-E, driverless cars, recommendation engines, and many other systems commonly considered AI. In fact, in recent years this technique has grown so popular that it’s often considered synonymous with artificial intelligence.
AI and ML are not synonymous, however. It may come as a surprise that just a decade ago, in 2012, machine learning wasn’t the rule, but very much the exception. Beyond the ML systems common today is a world of AI techniques perhaps unfamiliar to most.
A recent AI success that doesn’t rely on machine learning is Google’s AutoML-Zero. This system is designed to autonomously code and discover new algorithms, more specifically machine learning algorithms. An “evolutionary algorithm,” this AI variant is inspired not by the human learning process but by the dynamic trial and error of evolution. Given just a library of tools, evolutionary algorithms can evolve new methods to fit a prescribed goal, just as animals evolve new traits to fit a niche.
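To make “evolve new methods” concrete, here is a toy evolutionary loop. It is a bare-bones sketch of the general idea rather than anything resembling AutoML-Zero itself (which evolves entire programs): random mutation plus survival of the fittest gradually adapts candidates to a goal.

```python
import random

# Toy evolutionary algorithm: evolve a vector of numbers toward a fitness goal.
TARGET = [3.0, -1.0, 4.0, 1.5]                      # the "niche" we want candidates to fit

def fitness(candidate):
    # Higher is better: negative squared distance from the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Random trial and error: nudge one element of the candidate.
    child = list(candidate)
    i = random.randrange(len(child))
    child[i] += random.gauss(0, 0.5)
    return child

population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)       # selection: best candidates first
    survivors = population[:5]                       # keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("best candidate:", [round(x, 2) for x in max(population, key=fitness)])
```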
Often unfairly characterized as historical curiosities, non-machine-learning systems are more than just vestiges of AI’s technical progress. In February, Google used an improved version of AutoML-Zero to develop Lion, an optimization algorithm that achieved state-of-the-art performance on image classification. Machine learning may be common, but it isn’t a universal prerequisite for industry-leading performance.
____________________
Not only is modern AI more diverse than machine learning, but outside this box the technical borders between AI and traditional software grow hazy and blurred.
Consider Cyc, an ambitious enterprise AI system used by the Department of Defense, Cleveland Clinic, and others. Cyc is a self-proclaimed “common-sense engine” that seeks to “digitally codify all the basic concepts humans take for granted but machines have never really grasped.” In more technical terms, Cyc is a form of symbolic AI (another type of non-ML AI), developed not through learning but through a 32-year-long process of intentionally hand-coding its extensive knowledge base (in Cyc’s case, a library of over 25 million rules). To translate this knowledge into action, Cyc uses an “inference engine” and logical deduction to form its intelligent predictions.
What makes systems like Cyc unique is their boringness. Coded with intention, symbolic systems abandon black box design and probabilistic decisions in favor of explainability and logic-driven determinism - qualities regulators often demand. To capture these benefits, Cyc’s underlying technology blurs technical lines. While outputs may be considered intelligent, Cyc’s code isn’t a far cry from the decision trees and graphs of traditional software. In this case, it isn’t the code that clearly distinguishes Cyc as AI, it is the quality of its results.
To users, it just feels like AI should.
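For contrast, here is a minimal sketch of the symbolic approach: a hand-written rule base plus a simple forward-chaining inference step. The rules and format are invented for illustration and bear no resemblance to Cyc’s actual codebase; the point is that every conclusion traces back to an explicit, human-authored rule.

```python
# Minimal symbolic "inference engine": hand-written rules plus forward chaining.
# Purely illustrative; Cyc's real knowledge base holds millions of such assertions.
facts = {("Socrates", "is_a", "human")}

rules = [
    # If X is a human, then X is mortal.
    (("?x", "is_a", "human"), ("?x", "is_a", "mortal")),
    # If X is mortal, then X needs rest.
    (("?x", "is_a", "mortal"), ("?x", "needs", "rest")),
]

def forward_chain(facts, rules):
    # Repeatedly apply rules until no new facts can be deduced.
    changed = True
    while changed:
        changed = False
        for (s, p, o), (cs, cp, co) in rules:
            for (fs, fp, fo) in list(facts):
                if fp == p and fo == o:                      # the fact matches the rule's condition
                    new_fact = (fs if cs == "?x" else cs, cp, co)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

print(forward_chain(facts, rules))
# Deduces ("Socrates", "is_a", "mortal") and ("Socrates", "needs", "rest"):
# deterministic, explainable, and no training data involved.
```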
____________________
While hardly representative of the full AI range, these illustrations should demonstrate that AI algorithms are far more than just machine learning. These algorithms are diverse, ever-changing, and may yet be disrupted by unknown future inventions and game-changing techniques.
For regulators, this creates a challenge. Any functional definition they adopt must both account for future change and cover the full, unwieldy reality of today. Already, it should be clear why the field has long resisted a consensus definition.
2. Data
Many fail to realize that AI is just as much a product of its data design as of its algorithms. In the data bucket of the AI Triad, diversity is again the key word. While prototypical machine-learning-based systems like GPT-4 traditionally require extensive data, big data is not a panacea.
In recent years, so-called small data techniques such as transfer learning and Bayesian methods have challenged the mainstream. Unlike traditional big data AI, these techniques are optimized to operate in data-sparse environments. The reality is that big data simply isn’t always possible, and these techniques help AI find success where traditional systems might fail.
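As a hedged illustration, here is roughly what transfer learning looks like in PyTorch: start from a model pretrained on a large dataset, freeze its learned features, and train only a small new head on the limited data you actually have. The model and layer names are standard torchvision choices used for the sketch, not anything specific to the systems discussed here.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning sketch: reuse a model pretrained on ImageNet, freeze its
# learned features, and fit only a small new classifier head on scarce data.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                        # keep the big-data knowledge fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 5)    # new head for a 5-class, small-data task

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for a genuinely small dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))

optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()                                       # only the tiny new head is updated
```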
In the case of DeepMind’s chess-, shogi-, and Go-playing system AlphaZero, small data can even push the state of the art. As the “zero” in its name suggests, to create this system its engineers took small data to the extreme: it was literally built without training data. Armed only with the rules of each game, the system achieved mastery through pure trial and error. By playing games against itself, AlphaZero iteratively trialed novel techniques and battle-tested what would prove to be winning strategies.
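To show what “armed only with game rules” can look like, here is a heavily simplified self-play sketch: a tabular learner that masters the toy game of Nim purely by playing against itself. It is illustrative only and orders of magnitude simpler than AlphaZero, which pairs deep neural networks with Monte Carlo tree search, but the spirit is the same: no human game data, just rules and trial and error.

```python
import random

# Toy self-play learner: no human game records, only the rules of Nim plus
# trial and error against itself.
# Nim: players alternately take 1-3 stones; whoever takes the last stone wins.
STONES, MOVES, EPISODES = 10, (1, 2, 3), 20000
ALPHA, EPSILON = 0.2, 0.2
Q = {}  # Q[(stones_left, move)] = estimated value for the player about to move

def legal(stones):
    return [m for m in MOVES if m <= stones]

def best_value(stones):
    return max(Q.get((stones, m), 0.0) for m in legal(stones))

for _ in range(EPISODES):
    stones = STONES
    while stones > 0:
        moves = legal(stones)
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        if random.random() < EPSILON:
            move = random.choice(moves)
        else:
            move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
        # Negamax-style target: taking the last stone is worth +1; otherwise the
        # value is the negation of the opponent's best reply from the next state.
        target = 1.0 if move == stones else -best_value(stones - move)
        old = Q.get((stones, move), 0.0)
        Q[(stones, move)] = old + ALPHA * (target - old)
        stones -= move  # the very same policy now plays the other side

# With 10 stones the learned policy should take 2, leaving the opponent a multiple of 4.
print("best opening move:", max(legal(STONES), key=lambda m: Q.get((STONES, m), 0.0)))
```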
In the world of game-playing AI, this self-play approach was revolutionary. Previous systems had relied heavily on extensive training data gathered from human play and the well-established wisdom of human-discovered strategy. Upon its debut, AlphaZero exposed the limits of this big data approach. It turns out that systems trained on human data only repeat the strategic blind spots of human technique.
Meanwhile, AlphaZero reinvented strategy from the ground up. Its thinking was fresh, its techniques unexpected, and its strategy cunning. Not only did the system best the big data state of the art, but gaming experts noted its “new and expansive set of exciting and novel ideas that augment centuries of thinking about chess strategy.”
Today, AlphaZero isn’t an anomaly. Small data techniques continue to make great strides and to help reinvent the state of the art. While many assume big data is the default, in reality data demands can range from the unwieldy to the nonexistent. Yet again, the many qualities that define AI resist clear-cut boxes.
3. Compute
AI discourse tends to minimize, or even ignore, compute - the third crucial element of the AI Triad. Even when considered, this hardware is usually treated as a one-size-fits-all monolith. At the risk of sounding like a broken record: hardware reality is far more diverse than most assume. Not all chips are equal.
Today, the hardware people most commonly associate with AI is the Graphics Processing Unit (GPU) - the chip class that makes large systems like GPT possible. But as Georgetown’s Center for Security and Emerging Technology notes, AI hardware cannot be easily fit into the GPU box. Not only are different chips used for training and inference (the term used to describe AI systems in operation), but within each half of this equation we see a considerable range of designs. Some systems rely on massively parallel GPUs, others on wickedly fast Application-Specific Integrated Circuits (ASICs) and Field-Programmable Gate Arrays (FPGAs), while some require only general-purpose Central Processing Units (CPUs). United only by their use of silicon, these chips aren’t interchangeable. Each enables unique AI systems that would perhaps be impossible without the bespoke qualities that specific chip provides.
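Even at the level of everyday framework code, this split is visible. In PyTorch, for instance, the same model definition is dispatched to whichever chip is available, yet what is actually feasible (model size, batch size, speed, energy use) depends heavily on the underlying hardware. The snippet below is a generic sketch, not tied to any particular system mentioned here.

```python
import torch
import torch.nn as nn

# The same tiny network can be placed on very different hardware with one line of code,
# but training speed, energy use, and feasible model scale vary enormously by chip.
device = "cuda" if torch.cuda.is_available() else "cpu"   # use a GPU if present, else fall back to CPU
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)

x = torch.randn(32, 128, device=device)   # a dummy batch of inputs
logits = model(x)                          # one inference step on whichever chip we got
print("ran a forward pass on:", device)
```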
Stepping outside these digital chips, you can glimpse an even weirder world of AI compute. Today, some researchers are pursuing so-called “biological AI,” eschewing silicon in favor of programmed artificial cells and neurons. In 2022, Australian researchers demonstrated this technique’s promise by growing a clump of neurons and programming them to play Pong.
A perhaps more grounded alternative to digital chips is the optical neural network. These chips use photons (particles of light) rather than electricity to represent and compute data. While novel in AI, this concept isn’t off-the-wall: by using photons, optical networks borrow the now time-tested, wickedly efficient techniques that underpin the internet’s fiber-optic cables. Light is faster than electricity and may someday power efficient neural network designs.
While it’s uncertain whether either of these beyond-mainstream techniques will ever gain practical adoption, the point is that AI hardware is hardly a monolith. Looking to the future, it cannot be assumed that the hardware of today will match the form factor of tomorrow.
The AI of the Beholder
These examples show the dizzying diversity of AI technology. Under each pillar we see incredible innovative dynamism and technical inconsistency. The various technologies that make AI aren’t just different, they are wildly different.
If AI is so diverse and ever-changing, how can we possibly define it? To date, there have been many attempts. A recently compiled European Union report identified 45 potential definitions. While a useful list, even this is only a small slice of the overall total. In the AI research community, definitional inconsistency is the only consistency. In 2017, one expert survey polled engineers on a slimmed-down list of 17 possible definitions. Even with the options so limited, responses were starkly polarized.
In government, attempts at definition have likewise met resistance. In 2019, diplomats to the Organisation for Economic Co-operation and Development (OECD) signed an agreement codifying a shared AI definition that would, in theory, inform their respective domestic laws. When it came time for legislative approval, however, the parties balked, descending into domestic definitional dithering. As a result, both US law and the EU’s proposed AI Act diverge from and fail to match the agreed OECD standard.
A close look at both shows that all this diplomacy yielded little of practical value. Both definitions are clearly creatures of compromise. Characterized by generic, catch-all wording, they are commendably open-minded and inclusive, but at the complete expense of the precision required for the regulatory purposes of agencies like the FTC. Yes, these definitions exist, but they do not solve the AI definitional challenge.
Why is it that, even when a concerted diplomatic effort is made to build definitional consensus, we inevitably fail?
The reason is that AI isn’t one technology, or even a clear-cut set of technologies – it’s a goal or perhaps even just a notion. Recall again the haze that exists between this technology and standard software. Cyc's code is quite traditional, yet when in use, its results have a certain indescribable ‘AI-ness’ about them. It is the notion of a system’s intelligence, not code, that defines something as AI.
This AI notion is not only hard to pin down and different from person to person, but it also changes over time. The so-called AI effect describes how “once the technology is in use, nobody thinks of it as AI anymore.” In 1948, Time magazine christened the newly invented transistor the “little brain cell,” clearly impressed by its potential cognitive abilities. In the eyes of these 1940s writers the transistor seemed to hold a wondrous sense of ‘AI-ness.’ It was intelligence in silicon form. Today’s readers, meanwhile, might only scoff at this quaint notion. Transistors are now the most frequently manufactured device on earth, and the limits of their ‘intelligence’ are widely understood. The AI Overton window has shifted, and these “little brain cells” are no longer the stuff of science fiction.
Society’s ‘sense’ of AI is unmoored and time-shifting. What might be considered fantastic and intelligent today might seem quaint and unintelligent tomorrow. Amid such uncertainty, definition-hungry regulators should pause. While it might sound absurd, we can’t say with certainty whether the public will even consider GPT artificial intelligence in 20 years’ time.
The Regulator’s Challenge
This brings us back to the FTC. Again, to regulate you first need to define. Yet as governments, society, and scientists repeatedly learn, defining AI is not only difficult but fraught. AI is a technically diverse, temporally shifting, notional concept.
So how do we proceed? In Part 2, I’ll dive into the FTC’s efforts and explore options to better solve this regulatory challenge. While no solution will ever be perfect, I believe workable “AI policy” isn’t impossible. Yes, it will be somewhat rocky, but there still exists a path forward for the FTC and others trying to enter the AI regulatory fray. That said, the current tack likely won’t work and will inevitably drift into stormy waters.