- Connor Leahy votes For and says:
  "The primary prerequisite to even considering starting to work on a safe ASI plan is to have a global ASI ban and powerful enforcement already in place. Unsafe ASI is vastly easier to build than controlled ASI, and is on the same tech path. [...] The ..."
  Unverified source (2026)
- Connor Leahy votes For and says:
  "The A.I. alignment field, the question of if we have superhuman intelligence, if we have superintelligence, if we have god-like A.I., how do we make that go well is a very, very important — and very importantly, this is a scientific problem. A scient..."
  Unverified source (2023)
- Connor Leahy votes Against and says:
  "All of these approaches are terrible. No one has a plan, no one is making any meaningful progress towards anything that even resembles alignment. [...] Alignment is way too hard, maybe impossible. [...] The product work was an attempt to make Conject..."
  Unverified source (2026)