Delegate: choose a list of delegates so your vote follows their majority, unless you vote directly.
ai-policy (4)
agi (3)
ai-governance (3)
ai-safety (3)
existential-risk (3)
ai (2)
ai-ethics (2)
ai-regulation (2)
ai-risk (2)
ethics (2)
future (2)
policy (2)
research-policy (2)
cern-for-ai (1)
public-interest-ai (1)
- Connor Leahy votes For and says: "The A.I. alignment field, the question of if we have superhuman intelligence, if we have superintelligence, if we have god-like A.I., how do we make that go well is a very, very important — and very importantly, this is a scientific problem. A scient..." (Unverified source, 2023)
- Connor Leahy votes For and says: "The primary prerequisite to even considering starting to work on a safe ASI plan is to have a global ASI ban and powerful enforcement already in place. Unsafe ASI is vastly easier to build than controlled ASI, and is on the same tech path. [...] The ..." (Unverified source, 2026)
- Connor Leahy votes Against and says: "If you think building AGI is okay, like Anthropic should just be allowed to build AGI or whatever. And then, okay, you should state that clearly. And then people who don't think that's okay, should also state that clearly. And they should have the co..." (Unverified source)
- Connor Leahy votes Against and says: "All of these approaches are terrible. No one has a plan, no one is making any meaningful progress towards anything that even resembles alignment. [...] Alignment is way too hard, maybe impossible. [...] The product work was an attempt to make Conject..." (Unverified source, 2026)