Andrew Ng
Baidu; Stanford CS faculty; founded Coursera and Google Brain
Tags: ai-safety (4), ai-governance (2), ai-regulation (2), ai (1), ai-alignment (1), existential-risk (1), policy (1)
- Ban open source AI models capable of creating WMDs
- Build artificial general intelligence
Andrew Ng votes For and says:
I hope that we will reach AGI and ASI superintelligence someday, maybe within our lifetimes, maybe within the next few decades or hundreds of years, we’ll see how long it takes. Even AI has to obey the laws of physics, so I think physics will place l... (Unverified source, 2025)
AI poses an existential threat to humanity
Andrew Ng votes Against and says:
There’s also a lot of hype, that AI will create evil robots with super-intelligence. That’s an unnecessary distraction. (Unverified source, 2015)
Ban superintelligence development until safety consensus is reached
Andrew Ng votes Against and says:
The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I’m seeing many new applications in education, healthcare, food, … that’ll help many people. Improving GPT-4 will help. Lets balance the huge value AI is creatin... (Unverified source, 2023)