4 Comments

Hi,

I've done some research on the number of "human brain equivalents" ("HBEs") added per year, from a historical perspective, and extrapolated the trend into the future. Here are the results:

https://markbahner.typepad.com/random_thoughts/2016/02/recalculating-worldwide-computing-power.html

There are some interesting results from that graph. I'll focus on the line based on a human brain being equal to 20 petaflops (20 quadrillion flops, or 20,000 teraflops). That's based on Ray Kurzweil's 2008 estimate that the human brain can perform about 20 quadrillion operations per second.

If we look at that line on the graph, the number of HBEs added was only *one* in 1993(!). That is, in 1993, all the computing power added worldwide that year amounted to just one human brain equivalent. By 2015 the figure had risen to only about 1 million HBEs added per year. Still tremendously small compared to global population growth of approximately 87 million people that year.

However, by 2026, the number is more like 1 billion (with a "b") human brain equivalents added. And by 2037, it's more like 1 trillion (with a "t"!) human brain equivalents added.
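
For anyone who wants to sanity-check the trend, here's a minimal sketch in Python (the milestones are read off the graph's 20-petaflop line, so treat them as rough):

```python
# Implied annual growth rate between milestones on the
# 20-petaflops-per-brain line (values read off the graph, so rough).
milestones = {1993: 1, 2015: 1e6, 2026: 1e9, 2037: 1e12}  # year -> HBEs added

years = sorted(milestones)
for y0, y1 in zip(years, years[1:]):
    factor = (milestones[y1] / milestones[y0]) ** (1 / (y1 - y0))
    print(f"{y0}-{y1}: ~{factor:.2f}x per year")
```

All three intervals imply roughly the same ~1.9x growth per year, i.e. the number of HBEs added annually doubling about every 13 months.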

Bottom line from that analysis: expect changes in the next 5 to 15 years to be spectacular. Nothing in human history will compare.

Is it a coherent position to expect AGI in a couple of years and to say that we can and should do nothing to delay it? The implications of AGI aren't well understood, and it will be an extremely powerful technology. If we think we only have a year or two, then the case for stretching out the timeline seems stronger rather than weaker: an extra year to figure out the consequences and prepare for them seems very valuable. Even if chips and open source continue to improve, the overall rate of improvement will be lower if we restrict the largest training runs; absent such restrictions, model scale would presumably act as a multiplier on the rate of improvement from those other sources, everything else being equal. I'm not necessarily a pause proponent, but the more imminent we think AGI is, the more a pause makes sense, even if it is only partially effective.
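
To make the multiplier claim concrete, here's a toy model (my own construction, with invented rates, not anything from the post): gains compound from both scale and non-scale sources, and capping scale lowers the product without stopping the rest.

```python
# Toy model of the "scale multiplies other improvements" claim.
# All rates are invented for illustration, not measured.
def capability_after(years, other_gain=1.3, scale_gain=1.5, scale_cap=None):
    """Relative capability after `years`, where non-scale sources (chips,
    algorithms, data) improve by `other_gain` per year and model scale
    grows by `scale_gain` per year, optionally capped at `scale_cap`."""
    other, scale = 1.0, 1.0
    for _ in range(years):
        other *= other_gain
        scale = min(scale * scale_gain, scale_cap or float("inf"))
    return other * scale  # scale acts as a multiplier on the other gains

print(capability_after(5))                 # unrestricted: ~28x
print(capability_after(5, scale_cap=2.0))  # largest runs capped at 2x: ~7x
```

Under these made-up numbers, capping the largest runs cuts the five-year gain by roughly 4x but doesn't stop progress, which is the shape of the argument for a partially effective pause.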

I agree with what Zvi said about your post when linking to it: there seems to be a missing mood here. Taking what you say into account and anticipating all the ways the future might change, shouldn't a high degree of concern be the mood? A deep fear of potentially extremely bad consequences, and an impassioned plea for caution?

And that's just contemplating human-level AGI. What if there is no 'sigmoid out' effect near human-level intelligence, and the intelligence of the machines just keeps climbing? Once past that inflection point of self-improvement, the graph looks really scary really fast if it doesn't hit a ceiling...
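
Here's a toy illustration of the two scenarios (the shapes are the point, not the numbers, which I've invented): self-improvement with a ceiling near human level flattens out, while the same per-step dynamic with no ceiling compounds.

```python
# Toy comparison: recursive self-improvement with and without a ceiling.
# Capability 1.0 = human level; the rates are invented for illustration.
def step(c, ceiling=None, rate=0.1):
    gain = rate * c                        # smarter systems improve faster
    if ceiling is not None:
        gain *= max(0.0, 1 - c / ceiling)  # gains vanish near the ceiling
    return c + gain

for ceiling in (1.5, None):
    c = 1.0
    for _ in range(20):
        c = step(c, ceiling)
    label = f"ceiling at {ceiling}x human" if ceiling else "no ceiling"
    print(f"{label}: capability ~{c:.1f} after 20 steps")
```

With a ceiling, the curve flattens just above human level; without one, the same dynamic compounds into the scary part of the graph.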

> independent companies such as Mixtral

Should be "Mistral". Mixtral is a model (family).

The name is punny, but it makes this kind of error more likely.
