Scott Wiener
Delegate
Choose a list of delegates; your vote follows their majority unless you vote directly.
ai-governance (3)
ai-safety (3)
policy (3)
ai-regulation (2)
regulations (2)
research-policy (2)
ai-ethics (1)
ai-policy (1)
ethics (1)
ethics-in-research (1)
labor-rights (1)
law (1)
open-science (1)
open-source (1)
public-interest-ai (1)
Scott Wiener votes For and says:
"performing a safety evaluation [...] massively powerful AI models before releasing them [...]. [...] test their large models for catastrophic safety risk. We've worked [...] with open source advocates [...]." (Unverified source, 2024)

Scott Wiener votes For and says:
"[...] the bill also provides critical protections to workers who need to sound the alarm if something goes wrong in developing these highly advanced systems." (Unverified source, 2025)

Scott Wiener votes For and says:
"If you develop a model *today* [...] and that model causes harm of any scale, someone can try to sue you [...] potentially recover damages." (Unverified source, 2024)