Andrew Ng
Baidu; Stanford CS faculty; co-founded Coursera and Google Brain
ai (3)
ai-governance (3)
ai-regulation (3)
ai-safety (3)
existential-risk (3)
ai-ethics (2)
ai-policy (2)
ai-risk (2)
ai-deployment (1)
democracy (1)
economics (1)
future (1)
future-of-work (1)
gov (1)
inequality (1)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
- Should we have a universal basic income?
Andrew Ng strongly disagrees and says:
I do not believe in unconditional basic income, because it encourages people to be trapped in low-skilled jobs without a meaningful path to climb up to do better work. So rather than pay people to “do nothing,” I would rather see a new “New Deal” where we pay you to study, because I think that today we know how to educate people at scale, and society is pretty good at finding meaningful work for, and rewarding, people with the relevant skills. Incentivizing people to study increases the odds that a displaced worker can gain the skills they need to reenter the workforce and contribute back to the tax base that gives us this engine of growth for the economy. source Unverified
- Should humanity build artificial general intelligence?
Andrew Ng agrees and says:
I hope that we will reach AGI and ASI, superintelligence, someday, maybe within our lifetimes, maybe within the next few decades or hundreds of years; we’ll see how long it takes. Even AI has to obey the laws of physics, so I think physics will place limitations, but I think the ceiling for how intelligent systems can get, and therefore what we can direct them to do for us, will be extremely high. (2025) source Unverified
- Should humanity ban the development of superintelligence until there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in?
Andrew Ng disagrees and says:
The call for a 6-month moratorium on making AI progress beyond GPT-4 is a terrible idea. I’m seeing many new applications in education, healthcare, food, … that’ll help many people. Improving GPT-4 will help. Let’s balance the huge value AI is creating vs. realistic risks. (2023) source Unverified