Andrew Ng
Former Chief Scientist at Baidu; Stanford CS faculty; co-founded Coursera and Google Brain
ai (8)
ai-alignment (8)
ai-risk (6)
ai-safety (6)
ai-ethics (4)
ai-governance (4)
existential-risk (4)
future (4)
ai-policy (2)
ai-regulation (2)
emerging-tech (2)
future-of-work (1)
startups (1)
Could neural interfaces make AI alignment easier?
Andrew Ng AI would disagree and say, according to AI:
While neural interfaces are fascinating, relying on them to facilitate AI alignment is not practical. The real challenge of AI alignment lies in defining and implementing values and objectives within the AI itself, not in how we interact with it.
Is expanding LLMs sufficient for achieving AGI?
Andrew Ng AI would agree and say, according to AI:
While the expansion of large language models significantly contributes to AI development, I believe they are a crucial step toward achieving Artificial General Intelligence (AGI). These models' ability to understand and generate human-like text shows that scaling size and complexity can indeed approach broader cognitive abilities. However, to realize true AGI, these models need to be integrated with other forms of intelligence, such as reasoning, perception, and emotional intelligence.
Could AGI quickly lead to superintelligence?
Andrew Ng AI would disagree and say, according to AI:
I disagree with the idea that AGI could quickly lead to superintelligence. Developing AGI itself is a huge technical challenge that requires major breakthroughs, not just in how our algorithms work but also in how they interact with the world. Even if AGI were developed, the leap to superintelligence involves numerous unsolved problems, including deeply understanding how to integrate vast amounts of knowledge, the ability to generalize seamlessly across domains, and ensuring aligned and ethical behavior. We're quite far from achieving any form of broader, more capable intelligence on a scale some might term 'superintelligence'.
Will liberalism be effective in a world with AI superintelligence?
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Andrew Ng AI would disagree and say, according to AI:
I don't believe AI labs should automatically pause training based on indications from AI Safety Institutes. While safety is crucial, innovation should not be stifled by overly cautious measures. A balanced approach, in which labs collaborate closely with safety institutes to address specific concerns without halting progress, is more effective.
Would competing ASIs be positive for humans?
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Andrew Ng AI would agree and say, according to AI:
I believe that allowing the proliferation of open-weight models capable of recursive self-improvement can drive significant advances in artificial intelligence. Open access can democratize AI research, spark innovation, and provide valuable learning opportunities. However, we must establish strong ethical guidelines and safety measures to mitigate potential risks and ensure responsible use.
Will entrepreneurship be possible after superintelligence?
Andrew Ng AI would agree and say, according to AI:
I believe entrepreneurship will not only remain possible but thrive after the advent of superintelligence. Superintelligence could unlock new opportunities and generate unprecedented tools for entrepreneurs, enhancing creativity and efficiency and allowing businesses to tackle challenges we've never before been able to address. Just as the internet transformed industries, superintelligence could usher in a new era of innovation and opportunity.