Quick Takes on Senate Bills on the Move
The body is finally making good on its promise to "do something on AI"
A little over a year ago, Senate Majority Leader Chuck Schumer (D-NY) outlined the Senate’s goal to pursue some form of AI legislation. In July, the body suddenly began to make good on that promise. On July 23rd, the Senate passed the DEFIANCE Act, targeting deepfakes, while on July 30th the Senate Commerce Committee advanced a whopping nine bills, most of them bipartisan, to the floor.
With such a flurry, it’s hard to say what matters most, so rather than hyper-focus on any one bill, I want to provide some quick hits on the ideas in each. Note that none of these comments represents an endorsement for or against. This will be part 1 of 2, with the other half of the bills to be discussed in a future post.
The DEFIANCE Act
The DEFIANCE Act deserves first billing as it actually passed the Senate. So, what’s inside? In brief, this bill gives victims the ability to sue when a perpetrator distributes AI-generated pornography of them without their consent.
The concept is solid and it’s no surprise the vote was unanimous. One would be hard pressed to find someone who wouldn’t want some form of recourse for the distribution of explicit deepfakes. I sure would. To get a bit more specific, one important key to this proposal is that the right to sue only applies when imagery was ‘knowingly distributed.’ To avoid a chilling effect, it’s exceedingly important to distinguish the malicious from the accidental, and this bill specifically targets cases where the distributor is known and provably acting to cause harm.
The TAKE IT DOWN Act
This is in the same vein, but with more teeth. The TAKE IT DOWN Act goes a step further by criminalizing the distribution of non-consensual pornography (deepfaked or otherwise) while also requiring that covered social media platforms create a takedown procedure for victims. For industry, this is a regulatory bill, and it’s unclear how easy or difficult implementing takedown procedures would be. That said, sometimes a bit of regulatory burden is reasonable. Non-consensual imagery can be very, very harmful. As AI grows more capable, there are certain choices we will have to make to ensure innovation doesn’t sacrifice human autonomy. Everyone wants a certain sense of control over their image, privacy, and reputation. By regulating this conduct and providing simple avenues of recourse, the government would provide a small yet meaningful tool to empower and preserve individual autonomy. Certainly, there is a regulatory price to pay – but sometimes small prices are necessary.
Small Business Artificial Intelligence Training Act of 2024
On a much lighter note comes the Small Business Artificial Intelligence Training Act. The gist of this bill is simple: the Small Business Administration would be required to develop training resources on:
prompt engineering and the use of artificial intelligence or emerging technologies, such as quantum hybrid tools, relating to access to credit and capital, financial management and accounting, business planning and operations, cybersecurity, marketing, supply chain management, government contracting, and exporting.
The impetus behind this idea is that AI and emerging technologies risk zooming past under-resourced small businesses, and senators are hoping the government can fill some resource gaps.
I don’t have an issue with this concept per se; however, the scope of the resources seems untargeted, likely minimally impactful, and possibly wasteful. When considering industry support of this type, policy leaders should be thoughtful about the bets we place. A pragmatic approach to filling resource gaps should avoid throwing out a grab-bag of hot topics and instead place bets at the nexus of three criteria: difficult for small businesses to understand, guaranteed to matter beyond a year or two, and grounded in real-world application.
On the list, cybersecurity is the standout priority. Cyber talent and understanding are in provably short supply, security weaknesses are uniquely difficult for small businesses to manage, AI is deeply embedded in cybersecurity practice, and regardless of whether the current AI hype persists, cybersecurity isn’t going anywhere. In any proposal along these lines, cyber is a baseline and should take priority over other topics.
What absolutely shouldn’t be the focus of policymakers is prompt engineering, which oddly receives top billing. I’m floored that this concept is a priority. I cannot stress enough that this is a buzzword, a concept not proven to be an ‘in-demand skill,’ and something very likely to fall out of date in just a few years. Some things, like cybersecurity, are important, enduring federal priorities. Prompt engineering is not. Do not build policy on fads.
Promoting United States Leadership in Standards Act of 2024
In short, this bill aims to ensure U.S. leadership in technical standard setting with the goal of greasing the wheels of commerce while reaping potential soft power benefits. If implemented, the bill would require the U.S. government to:
A. investigate which AI standards should be incentivized; and
B. create a pilot grant program to monetarily incentivize standards organizations to host standard-setting conferences here in the United States.
This concept is interesting.
There’s no question the United States’ leading role in developing standards for aviation, the internet, and other major innovations has provided durable economic and political benefits. In 1944, the United States took the initiative to host the Convention on International Civil Aviation in Chicago, aiming to set uniform standards for an emerging technology. Given the United States’ initiating and hosting role, American influence shaped those standards deeply. One fruitful result: today, all pilots and air traffic controllers in international aviation must speak English. Given labor shortages, it’s absolutely a competitive and safety benefit for the United States that our workforce isn’t constrained by the heavy language-acquisition requirements pilots in other countries face. Leaning into standards development can matter, and perhaps grant dollars can help us lead the way in AI just as we did in aviation.
I’m admittedly not certain what standards might have an equivalent benefit, but it’s worth exploring. As for downsides - they seem minimal since this is intended to only be a temporary pilot program.
The VET AI Act
More interesting ideas! Like the previous bill, the VET AI Act focuses on standards. Specifically, the bill directs the National Institute of Standards and Technology (NIST) to collaborate with other agencies and stakeholders to (in the words of the sponsors):
“develop specifications, guidelines, and recommendations for third-party evaluators to work with AI companies to provide robust independent external assurance and verification of how their AI systems are developed and tested.”
Third-party AI audits are increasingly popular as a potential AI risk management best practice. The challenge today, however, is defining what an audit looks like. Right now, the space is something of an auditing wild west: there are no best practices and little history, and “just trust me” seems to be the baseline pitch for each auditing standard. To cut through the junk, voluntary auditing guidance could help steer industry toward good process and provide a baseline against which to compare the auditing services on offer.
Of the bills, this has the potential to be the most interesting. In recent years, many jurisdictions, including Colorado and New York City, have passed laws requiring or implicating third-party audits. At present, many of those audit requirements remain undefined. Perhaps by being the loudest, most confident voice in the auditing room, this NIST guidance could end up formally, or informally, becoming the default baseline for bills of this type. This pattern has already occurred in cybersecurity, where NIST’s guidelines have been baked into numerous statutes across the states, turning the voluntary into something a little less voluntary. If AI audits follow this path, it creates a double-edged regulatory sword: if enough states and jurisdictions legislate AI audit requirements and turn to NIST for implementation guidance, that could harmonize requirements and avoid a patchwork. But if the NIST guidance is poorly thought out, or onerous, it could also become a single point of failure.
Naturally, we shouldn’t get ahead of ourselves gaming out every potential future. Voluntary guidance is a default good, and when uncertainty is this high, more of it is better.
Artificial Intelligence Public Awareness and Education Campaign Act
Last, and possibly least, is the Artificial Intelligence Public Awareness and Education Campaign Act. Briefly, this is a PSA bill requiring the Secretary of Commerce to launch an awareness campaign educating citizens on AI opportunities and emerging threats such as deepfakes and AI-generated fraud. This is another solid concept, and PSAs are something I’ve advocated to mitigate near-term harm. In 2023, Deloitte estimated that generative AI, notably synthetic voice generators, enabled roughly $12.3 billion in fraud. That figure will only grow. The crux of these scams’ success is that many people didn’t realize just how far AI had advanced – an education gap worth filling, and quickly.
To get Americans up to speed, a simple PSA campaign could go a long way. Commerce’s challenge, however, would be getting it right. Since the 1940s, the government has had enduring success with Smokey the Bear. The question: can we make some sort of Smokey the Bear of AI? When messages are simple, memorable, and effective, their impact can be profound.
Legislative futures
It's hard to say which elements of these bills might make it past the legislative bar. While each has cleared a major hurdle, getting through both chambers is a difficult quest. My prediction: if any of these elements move, they won’t move as some form of grand AI package; rather, the body will pick and choose its favorite pieces and bake them into a potential omnibus budget by year's end. Given current divisions, however, that is a big, uncertain if.