4 Comments

Excellent perspective. This is why so many AI startups will accept the invitation to launch outside the U.S. More funding, fewer ridiculous and superfluous hurdles, and governments eager to welcome U.S. innovators will result in yet another export we may never get back. The government's attitude of 'caution' aligns with the ridiculous decision to close many U.S. public schools yesterday (4/8) to protect children from the 'dangerous' solar eclipse. What a huge missed opportunity for embracing science and community! This AI 'safety policy' will guarantee yet another big missed opportunity.


There are a lot of unexpected use cases we'll never get. There is a lot of creativity to unleash by letting the average Joe play with tech and see how it could work in his or her niche. This is lost by forcing everything through a planned process.


The rights impact and safety impact are so broadly defined that they could be stretched to cover almost anything.


Exactly. Especially rights-impacting. Because some of the named rights-impacting cases are super specific, translation for instance, I think many will fail to see just how broad the whole set is. My planned follow-up is to take one of the super specific things (machine translation) and illustrate what we risk by limiting *only* that. In translation's case: FEMA has so few Spanish translators that the DOJ said they violated the Civil Rights Act. Perhaps AI can fill the gap.
