Anthony Aguirre

Info
Physicist; Future of Life Institute cofounder
  • Should humanity build artificial general intelligence?
    Anthony Aguirre strongly disagrees and says:
    We don't have to do this. We have human-competitive AI, and there's no need to build AI with which we can't compete. We can build amazing AI tools without building a successor species. The notion that AGI and superintelligence are inevitable is a choice masquerading as fate. By imposing some hard, global limits, we can keep AI's general capability to approximately human level while still reaping the benefits of computers' ability to process data in ways we cannot, and automate tasks none of us wants to do. [...] Humanity must choose to close the Gates to AGI and superintelligence. To keep the future human. (2025)
  • Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably and strong public buy-in?
    Anthony Aguirre strongly agrees and says:
    Time is running out. The only thing likely to stop AI companies barreling toward superintelligence is for there to be widespread realization among society at all its levels that this is not actually what we want. That means building public will and scientific clarity first, and only then moving ahead on anything that would concentrate world-altering power in a machine. This isn’t a casual slowdown; it’s an affirmative choice about what kind of future we are consenting to. If there isn’t strong public buy‑in and broad scientific consensus that it can be done safely and controllably, then pressing on would be reckless engineering and reckless governance. The right move is to hold off until those conditions are met, and to make creating those conditions the priority. (2025)