It’s hardly novel or difficult at this point to make the case that vital American institutions have declined.
Emblems of elite dysfunction abound, from the implosion of ideologically blinkered Ivy League presidents to the dogmatic suppression of entirely plausible scientific theories. And the data on plummeting public confidence in institutions have long told a clear story.
We may disagree about whether “Brokenism” or “late Soviet America” is the right frame, but few dispute that each captures some disillusioning truths. While minds greater than mine debate the extent of the ruin in the nation’s institutions, I’ll focus on what can be done, looking to experiments with bottom-up financial tools following the global financial crisis as a case study.
Emerging financial technologies—from cryptocurrency to online research forums to artificial intelligence—decentralize core financial processes and expertise. This decentralization challenges existing governance methods and exemplifies the conflict between traditional authority and the disruptive information technologies defying its grasp.
Overcoming this challenge in a productive way will require accepting that decentralized financial expertise is here to stay. Instead of continuing the doomed quest to put that genie back in the bottle, policymakers should incentivize higher quality from decentralized expertise. If successful, this shift could provide an example of a healthy way out of our trust and “trustworthiness” crisis more broadly.
Why the Loss of Trust
One of the best explanations for the loss of trust in institutions is Martin Gurri’s 2014 book The Revolt of the Public. In Gurri’s thesis, when institutions at the commanding heights of society lost their “monopoly on information,” public trust went with it. The one-to-many broadcasting technologies of the 20th Century naturally favored centralized control and official narratives. The arrival of many-to-many media over the Internet, however, loosed torrents of a great “perturbing agent between authority and the public”: information.
The proliferation of alternative information sources not only challenged specific claims (had monetary policy really been perfected? could scientists predict every natural disaster? and could intelligence agencies predict every manmade one?), but also the very idea of authoritative expertise. New information that deviated even the slightest bit from the elite’s self-justifying story sowed doubt in the minds of the public. The presence of doubt was anathema to the foundational myth of the hyper-competent expert. The fall was all the more foredoomed because that myth had been so grand. Notably, Gurri spreads the blame for credulous mythmaking around, lamenting the “exaggerated expectations by the public, abetted by exaggerated claims of competence by authority.”
There’s a special place for finance in Gurri’s history of institutional crisis. Pointedly, he argues the era of distrust was born on September 15, 2008, the day Lehman Brothers filed for bankruptcy. For Gurri, the global financial crisis and disappointing recovery were a double whammy of disillusionment, indicting both the technocrats on whose watch the crisis occurred and their replacements, whose remediation efforts underperformed promises. By 2014, Gurri writes, “trust in economic experts had vanished, probably forever.”
DIY-Fi: The Rise of Bottom-Up Financial Alternatives
Disillusionment with that status quo typifies two subsequent financial phenomena that have decentralization—of both governance and knowledge—at their core: cryptocurrency and retail investor empowerment.
The 2008 financial crisis looms large in crypto lore. The Bitcoin White Paper was released the month after Lehman’s bankruptcy, and the first block of transactions recorded on the Bitcoin ledger encoded the headline: “Chancellor on brink of second bailout for banks.”
While the causal link between the financial crisis and Bitcoin’s origin is often overstated (peer-to-peer digital currencies had been sought for decades, and Satoshi Nakamoto had been working on the problem since before 2008), a general distrust of centralized institutions animates the project. Bitcoin’s core innovation was eliminating the need for “a trusted central authority” to solve the double-spend problem in payments.
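To make the double-spend problem concrete, here is a minimal, purely illustrative Python sketch (the names and data structures are hypothetical, not Bitcoin’s actual code): a payment ledger must refuse to let the same coin be spent twice, and Bitcoin’s breakthrough was getting mutually distrustful peers to agree on such a ledger without a central operator.

```python
# Toy sketch only -- not Bitcoin's consensus code. Illustrates the basic
# bookkeeping rule a payment ledger must enforce: no coin is spent twice.
class ToyLedger:
    def __init__(self, initial_coins):
        self.owners = dict(initial_coins)  # coin_id -> current owner
        self.spent = set()                 # coin_ids that have been used up

    def transfer(self, coin_id, sender, recipient):
        if coin_id in self.spent:
            raise ValueError(f"double spend rejected: {coin_id} was already spent")
        if self.owners.get(coin_id) != sender:
            raise ValueError(f"{sender} does not own {coin_id}")
        self.spent.add(coin_id)
        new_coin_id = f"{coin_id}:{recipient}"  # the transfer creates a new coin
        self.owners[new_coin_id] = recipient
        return new_coin_id


ledger = ToyLedger({"coin-1": "alice"})
ledger.transfer("coin-1", "alice", "bob")        # succeeds
try:
    ledger.transfer("coin-1", "alice", "carol")  # attempt to spend the same coin again
except ValueError as err:
    print(err)  # -> double spend rejected: coin-1 was already spent
```

In a centralized system, a bank keeps this ledger; Bitcoin’s proof-of-work consensus lets a network of untrusted peers keep it collectively.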
Cryptocurrency has since given rise to a broader ecosystem of decentralized finance (DeFi) tools, including self-executing protocols for things like disintermediated escrow, trading, and lending. In addition, the underlying technology has raised the possibility of a new network architecture (sometimes called Web3) to challenge the walled-garden model of major tech platforms.
The rise of networked retail investors has decentralized certain financial research and expertise.
The ascent of the retail investor gained national attention in January 2021 with an historic short squeeze on GameStop shares. Participating retail investors leveraged trading apps and social media, organizing on forums like Reddit’s WallStreetBets.
The “meme stock” label applied to this phenomenon distills the conflict between centralized and decentralized expertise. While some legacy sources use it as a term of derision connoting ignorant behavior, the plain meaning of “meme stock” is simply something like “viral information stock.” The dispersion of information can be and, as we’ll see, often is the very opposite of perpetuating ignorance. Traditional authority, however, is threatened by viral information because of its source, not its quality.
Countering the Counterrevolution
New information sources challenging traditional authorities provoke, as Gurri explains, “counter-revolution by the established order.” One such counter strategy is to accuse the public of the very epistemic closure plaguing the experts. Hence elite denunciations of online echo chambers, the public’s supposed failures to “trust the science,” and misinformation. More authoritarian tactics involve surveilling and blocking online activity, as well as harassing and taking punitive legal action against perceived threats.
Financial regulators’ responses to cryptocurrency and meme stocks fit this counter-revolutionary pattern. Crypto has faced an administrative crackdown that’s far more prohibitionist than regulatory. The meme stock phenomenon was met with a variety of paternalistic proposals, including those targeting the intuitiveness of trading app interfaces and seeking to restrict social media. Ultimately, the GameStop saga led to a highly flawed regulatory response that threatened not only retail trading but also AI innovation. In the words of Penn Law professor Jill E. Fisch, the reaction to retail traders reflected “a curious sense of elitism.”
In general, such elitism should not be mistaken for wisdom. The U.S. crypto crackdown is counterproductive to U.S. leadership. As I’ve argued, overt hostility towards the Americans developing and governing a new generation of global financial rails should worry anyone who wishes to see U.S. interests and values represented in those systems’ design. Similarly, undermining investment in applied cryptography talent is hardly a winning human capital strategy for cybersecurity.
In addition, the idea that only credentialed experts can produce worthwhile insights is contradicted by the reality of how certain retail traders do their own research. As my colleague Jennifer Schulp explains, investigation into the quality of WallStreetBets in particular “rejected the conventional view that the forum attracts only uninformed investors and leads to less informative retail trading.” There were skilled due diligence contributions on the forum, participants were able to discern quality, and retail investors were found “likely to benefit from recommendations.” Fisch, for her part, argues that stock-issuing corporations themselves may even benefit from the feedback of “the ‘collective wisdom of the Reddit chatroom.’”
While one shouldn’t over-index on the wisdom of the crowd, it’s the height of hubris to dismiss it incuriously. Generative AI is poised to accelerate the potential of distributed financial intelligence.
From the Wisdom of the Crowd to the Wisdom of the Computer
At first glance, the literature on generative AI’s financial expertise appears decidedly mixed. Headline findings that “LLMs can’t be trusted for financial advice” and that ChatGPT is a “Wall Street neophyte” clash with others indicating that “GPT has become financially literate” and can make valuable stock market predictions.
Undoubtedly, mileage may vary. But a deeper look at how and why different studies arrived at different results reveals the key capabilities and potential of generative AI as a financial expert.
Notably, some of the more negative studies focused on large language models’ (LLMs) quantitative skills, which, while improving, are not their strongest suit. By contrast, studies highlighting LLMs’ abilities to make stock picks with impressive Sharpe ratios and positive returns have capitalized on LLMs’ exceptional ability to process natural language and parse texts like earnings announcements and financial news.
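For readers unfamiliar with the metric, the Sharpe ratio is a standard gauge of risk-adjusted performance: a strategy’s average excess return over the risk-free rate per unit of return volatility. As a quick reference (this is the standard textbook definition, not a formula drawn from the studies cited above):

```latex
% Sharpe ratio: average excess return per unit of volatility
S = \frac{\mathbb{E}[R_p - R_f]}{\sigma_p}
% R_p: portfolio return, R_f: risk-free rate,
% \sigma_p: standard deviation of the portfolio's excess return
```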
Moreover, while past performance does not guarantee future results in finance or AI development, LLMs’ financial expertise has been improving. For example, LLMs are increasingly able to interpret complex and nuanced monetary policy statements. Researchers at the Federal Reserve Bank of Richmond found that GPT-4 outperformed earlier models, as well as traditional methods, in classifying FOMC announcements, providing “justifications akin to human rationale.”
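As a rough illustration of what such a classification task involves in practice, the sketch below uses the OpenAI Python client to label a policy statement as hawkish, dovish, or neutral. The prompt, model choice, and excerpt are my own assumptions for demonstration purposes, not the Richmond Fed researchers’ actual methodology.

```python
# Illustrative sketch only -- not the Richmond Fed study's methodology.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def classify_fomc_statement(statement: str) -> str:
    """Ask an LLM to label a monetary policy statement as hawkish, dovish, or neutral."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an analyst of U.S. monetary policy. Classify the "
                    "following FOMC statement excerpt as HAWKISH, DOVISH, or NEUTRAL, "
                    "then give a one-sentence justification."
                ),
            },
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content


# Paraphrased example excerpt, for illustration only.
excerpt = (
    "The Committee judges that risks to achieving its employment and "
    "inflation goals are moving into better balance."
)
print(classify_fomc_statement(excerpt))
```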
Similarly, whereas GPT-3.5 achieved a ~65% score on financial literacy tests, GPT-4 achieved “a near-perfect 99% score,” suggesting financial literacy is “becoming an emergent ability of state-of-the-art models.” Interestingly, the same study found the more advanced model to be more measured in tone, addressing the recurring issue of LLM overconfidence. Given these findings, the investigators concluded it was “reasonable to posit that some of the most advanced LLMs now possess the capability to serve as robo-advisors for the masses.”
Individual Advice and Collective Intelligence
Robo-advisors, of course, already exist. But traditional robo-advisors tend to perform tasks that are simultaneously narrower and deeper than what advanced LLMs can do at present. Typical robo-advisors can tailor a portfolio to individual needs according to preset investing theses but don’t generate new strategies or respond to open-ended questions. LLMs, by contrast, can be more improvisationally creative, but their ability to execute long-term tasks is a work in progress. (The integration of traditional robo-advisors with LLMs is a space to watch.)
Importantly, robo-advisors typically involve a centralized provider that has registered as an investment adviser with a centralized regulator, the Securities and Exchange Commission (SEC).
LLMs, however, open the possibility of decentralized provision of investment advice, presenting a novel regulatory challenge. Crucially, whether investment information is classified as regulated advice hinges on whether it’s personalized, i.e., “attuned to any specific portfolio or to any client’s particular needs.”
Critically, LLMs can provide personalized financial advice. One study found GPT-4 capable of recommending portfolios reflecting an individual’s particular situation and risk preference. (The advice was found to be “on par” with that provided by a traditional robo-advisor.)
This is significant for a few reasons. One, commercially available LLMs arguably already have the capacity to issue the type of financial advice requiring SEC registration, raising the question of whether the SEC can or should be in the business of licensing, writ large, generic LLMs with that ability. And if AI providers restricted this capability in their models for fear of triggering an SEC registration requirement, that chilling effect would itself be a form of implicit regulation.
Two, the availability of increasingly capable open-source financial LLMs challenges a centralized registration regime. The proliferation of widely dispersed models without the discrete regulatory touchpoint of a closed-source provider will likely overwhelm the capacity of a centralized regulator, absent an intolerably draconian surveillance and enforcement regime.
And three, LLMs can help users transform impersonal financial information—such as a generic newsletter that is not itself tailored advice—into covered, personalized advice. It’s not hard to imagine the SEC taking a great deal of interest in a technology that effectively puts general financial information within a screw’s turn of becoming regulated financial advice.
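As a purely illustrative sketch (the newsletter text, portfolio, prompt, and model choice are all hypothetical), consider how little it takes to ask a general-purpose LLM to convert generic market commentary into guidance keyed to a specific portfolio:

```python
# Illustrative sketch only: turning generic commentary into portfolio-specific
# guidance. Newsletter text, holdings, and model name are assumptions.
from openai import OpenAI

client = OpenAI()

generic_newsletter = (
    "Analysts expect continued volatility in long-duration bonds as the "
    "central bank signals that rates will stay higher for longer."
)
my_portfolio = {"long_term_treasury_fund": 0.40, "sp500_index_fund": 0.45, "cash": 0.15}

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a generic market newsletter:\n"
                f"{generic_newsletter}\n\n"
                f"Here is my portfolio (weights): {my_portfolio}\n\n"
                "How does this commentary apply to my specific holdings, and "
                "what adjustments, if any, should I consider?"
            ),
        }
    ],
)
print(response.choices[0].message.content)
```

The output of a prompt like this is, by construction, attuned to a particular portfolio, which is precisely the feature that distinguishes regulated advice from general information.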
To be clear, whether a closed- or open-source LLM provider, developer, or model itself would be found to satisfy the rest of the investment adviser criteria—e.g., receiving compensation and being “engaged in the business of” giving investment advice—would involve a separate, fact-specific inquiry. Nonetheless, that these criteria have been construed broadly and can involve multi-factor evaluation means the legal risk is non-trivial. To date, the SEC’s approach to emerging financial technologies has not been what one would call restrained.
A Better Way
Applying the investment adviser regime to LLMs restricts public access to affordable financial advice by imposing centralized credentialing on decentralized expertise. Instead of futilely battling to preserve a monopoly on expert gatekeeping, public policy should incentivize quality from decentralized sources. This would help to navigate between the twin dangers of “bureaucratic inertia” and “digital nihilism” that Gurri identifies.
The shift requires embracing what Gurri calls the “old-school virtues”: honesty and humility. In Gurri’s words, these traits were “long accepted to be the living spirit behind the machinery of the democratic republic” but have since been all but abandoned by the expert class. Reformers, he argues, must acknowledge that public policy is an uncertain process of trial and error.
The common law is a longstanding set of legal doctrines and processes that use iterative learning to incentivize honesty and competence.
In the context of AI financial agents, I previously argued that “the ancient but flexible common law may be the best framework for handling rapidly progressing autonomous AI” given that common law is “an evolutionary body of law that iteratively adapts historic principles to novel circumstances.” I explained that “centuries before specialized securities statutes imposed fiduciary duties on investment advisers, the common law of agency identified when autonomous agents owe others fiduciary duties” and, therefore, “as software itself gains autonomy, agency law can provide a legal framework for AI financial advisers that is suitably adaptable.” Applying that doctrine to AI financial agents, in my view, would provide standards rigorous enough “to mitigate risks but not so onerous that they become obstacles” to innovation. A thoughtful argument has since been advanced for applying a similar framework to sufficiently advanced generic AI agents.
Even short of autonomous AI financial experts, the common law can help to promote honesty and humility in the policy response to decentralized financial expertise.
Old-School Learning
Instead of prior restraint, the common law provides legal recourse that incentivizes honesty and competence.
Fraud actions can target intentional or reckless false representations on which others reasonably rely to their detriment. Instead of requiring the providers of AI financial experts to seek permission before entering the market, for instance, the prospect of common law actions can incentivize providers to be honest and accurate when describing their products.
In addition, tort actions can target breaches of relevant standards of care. For example, instead of facing prescriptive licensing requirements, the provider of false guidance who failed to use reasonable care may face an action for damages where the recipient justifiably relied on that information. Notably, that possibility should not be mistaken for flawed proposals that would deem AI providers uniformly liable, with minor exceptions, for any securities violations stemming from use of their models. That’s because the common law standard first asks key questions. For example, was the user’s reliance justified? In addition, was the standard of care observed? That standard of care can develop over time and consider, among other factors, privately evolved custom and industry best practice.
To be sure, every system has tradeoffs, and the common law courts do not always get it right. Moreover, the prospect of burdensome litigation can itself become an undue constraint. Nonetheless, the common law has self-corrective tools. Appeals can be one ex post cure. But prevention in the form of bargained-for contracts that document and clarify the parties’ expectations and potential liability upfront is often preferable. As legal scholar Richard Epstein argues, strong respect for contract rights would allow “the prospect of voluntary private correction of a wide range of uneducated judicial guesses.” In addition, where contracts provide for arbitration of certain disputes, “a strong sense of industry practice can grow up to handle recurrent cases.”
Enabling creative contractual and arbitral solutions should be an AI policy priority. For example, special-purpose alternative dispute resolution forums, which parties can agree to be bound by, could allow the development of a private common law tailored to certain AI systems and use cases, perhaps leveraging AI itself (as an interpreter of texts and simulator of synthetic fact patterns) to that end. This would help with the “adaptation and gap filling” that I’ve argued applying common law doctrines to AI will require.
Because legislative frameworks and the path-dependency of certain common law doctrines themselves (such as medical malpractice) can undermine the strength of private contracts, enabling these creative solutions will require active reform.
Conclusion
At this moment in history, time is not on the side of centralized experts maintaining monopolies. The cracks will continue to show and the doubt will continue to build. Re-centralizing solutions that fight the tide by suppressing information and impugning the obvious competencies of distributed expertise will only further undermine public trust. We need a fundamental shift in how authority understands its role. The first step is admitting and accepting the new reality. The second is to optimize within the constraints of that reality by deploying decentralized responses—such as the common law and private bargaining—to the rise of decentralized expertise. To do otherwise would be to fundamentally misconstrue the meaning and responsibility of authority in a democratic republic. Standing athwart public enlightenment is no answer. Denying the public the tools thereof is no virtue.
Jack Solowey is a Policy Analyst at the Cato Institute’s Center for Monetary and Financial Alternatives, where he focuses on financial technology, including crypto, DeFi, and AI. You can follow him on X @JackSolowey and on Substack.