Leopold Aschenbrenner
Tags: ai (2), ai-safety (2), existential-risk (2), ai-ethics (1), ai-governance (1), ai-regulation (1), ai-risk (1), future (1), research-policy (1)
Leopold Aschenbrenner votes For and says:
AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into 1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and th...
Unverified source (2024)
Leopold Aschenbrenner votes For and says:
As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. And so by 27/28, the endgame will...
Unverified source (2024)