Andrew Ng
Former Chief Scientist at Baidu; Stanford CS faculty; co-founded Coursera and Google Brain
ai (3)
ai-governance (3)
ai-regulation (3)
ai-safety (3)
existential-risk (3)
ai-ethics (2)
ai-policy (2)
ai-risk (2)
ai-deployment (1)
democracy (1)
future (1)
public-interest-ai (1)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
- Should humanity build artificial general intelligence?
Andrew Ng agrees and says:
I hope that we will reach AGI and ASI superintelligence someday, maybe within our lifetimes, maybe within the next few decades or hundreds of years, we’ll see how long it takes. Even AI has to obey the laws of physics, so I think physics will place limitations, but I think the ceiling for how intelligent systems can get, and therefore what we can direct them to do for us will be extremely high. (2025) source Unverified
- Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in?
Andrew Ng disagrees and says:
The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I’m seeing many new applications in education, healthcare, food, … that’ll help many people. Improving GPT-4 will help. Lets balance the huge value AI is creating vs. realistic risks. (2023) source Unverified