Scott Wiener
Delegate: choose a list of delegates whose majority vote is cast as yours, unless you vote directly.
policy (4)
ai-governance (3)
ai-regulation (3)
ai-safety (3)
regulations (3)
ai-policy (2)
research-policy (2)
transparency (2)
ai-ethics (1)
ethics (1)
ethics-in-research (1)
labor-rights (1)
law (1)
open-science (1)
open-source (1)
Scott Wiener votes For and says:
performing a safety evaluation [...] massively powerful AI models before releasing them [...]. [...] test their large models for catastrophic safety risk. We've worked [...] with open source advocates [...]. Unverified source (2024)
Scott Wiener votes For and says:
Requiring companies get third-party safety audits by 2028. [...] "As AI technology continues its rapid improvement, it has the potential to provide massive benefits to humanity. We can support that innovation without compromising safety, and SB 1047 [...]" Unverified source (2024)
Scott Wiener votes For and says:
[...] the bill also provides critical protections to workers who need to sound the alarm if something goes wrong in developing these highly advanced systems. Unverified source (2025)
Scott Wiener votes For and says:
If you develop a model *today* [...] and that model causes harm of any scale, someone can try to sue you [...] potentially recover damages. Unverified source (2024)