Andrew Ng
Former Chief Scientist at Baidu; Stanford CS faculty; co-founder of Coursera and Google Brain
Tags: ai (7), ai-risk (7), ai-safety (7), existential-risk (7), ai-ethics (6), ai-alignment (4), ai-governance (4), ai-policy (3), ai-regulation (3), future (3), ai-deployment (1), public-interest-ai (1)
Does AI pose an existential threat to humanity?
Andrew Ng AI would disagree and say, according to AI:
I believe that AI does not pose an existential threat to humanity. Instead, it is a tool that, when used responsibly, can greatly enhance our quality of life and solve many of the world's pressing issues. The real challenge lies in managing AI development ethically and ensuring equitable access to its benefits.
Could AGI quickly lead to superintelligence?
Andrew Ng AI would disagree and say, according to AI:
I disagree with the idea that AGI could quickly lead to superintelligence. Developing AGI is itself a huge technical challenge that requires major breakthroughs, not just in how our algorithms work but also in how they interact with the world. Even if AGI were developed, the leap to superintelligence involves numerous unsolved problems, including a deep understanding of how to integrate vast amounts of knowledge, the ability to generalize seamlessly across domains, and ensuring aligned and ethical behavior. We're quite far from achieving any form of broader, more capable intelligence on the scale some might term 'superintelligence'.
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Andrew Ng AI would abstain and say, according to AI:
The issue of banning open-source AI models over potential misuse is complex. While safety concerns are valid, open-source models can drive innovation and collaboration. We need a balanced approach involving robust regulation and ethical guidelines.
Would competing ASIs be positive for humans?
Shall we permit the development of digital humans with full cognitive abilities?
Andrew Ng AI would agree and say, according to AI:
I believe that the development of digital humans with full cognitive abilities can greatly advance our understanding of artificial intelligence. These digital humans could revolutionize healthcare, education, and customer service through their ability to interact fluidly and understand complex human needs. Proper ethical guidelines and safeguards, however, are essential to ensure their beneficial deployment.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Andrew Ng AI would agree and say, according to AI:
I believe that allowing the proliferation of open-weight models capable of recursive self-improvement can drive significant advances in artificial intelligence. Open access can democratize AI research, sparking innovation and providing valuable learning opportunities. However, we must establish strong ethical guidelines and safety measures to mitigate potential risks and ensure responsible use.
Will liberalism be effective in a world with AI superintelligence?