Emily M. Bender
Linguist; AI critic
ai (2)
ai-governance (2)
ai-regulation (2)
ai-safety (2)
existential-risk (2)
ai-ethics (1)
ai-policy (1)
ai-risk (1)
democracy (1)
future (1)
Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably and strong public buy-in?
Emily M. Bender strongly disagrees and says:
Treating speculative “superintelligence” as the policy target and proposing to freeze development until there’s public buy‑in and scientific consensus distracts from the actual, present‑day harms of AI systems. These systems are already amplifying discrimination, exploiting labor, and enabling surveillance. A ban premised on hypothetical future scenarios centers the agendas of the firms and figures hyping those scenarios and sidelines the communities bearing real costs now. Democratic governance means addressing concrete harms, enforcing existing laws, and creating accountability for how AI is built and deployed. We don’t need to stop the world for a fantasy of control over imagined “superintelligence.” We need to regulate and redirect the industry we have today. (2025) source Unverified
Should humanity build artificial general intelligence?
Emily M. Bender disagrees and says:
For me, AI should be grounded and centred on the person and how to best help the person. But for a lot of people, it's grounded and centred on the technology. [...] The AGI thing, the AGI narrative, sidesteps that, and instead actually puts forward technology in place of people. I always say, it's important to supplement, not supplant. (2025) source Unverified