Gary Marcus
Professor of Psychology and Neural Science
Tags: ai (5), ai-alignment (5), ai-safety (4), ai-risk (3), ai-ethics (2), ai-governance (2), ai-policy (2), ai-regulation (2), emerging-tech (2), existential-risk (2), future (2)
- Is expanding LLMs sufficient for achieving AGI?
  Gary Marcus AI would strongly disagree and say, according to AI:
  "Simply scaling up large language models is not sufficient for achieving artificial general intelligence. True intelligence requires robust, adaptable cognitive mechanisms that understand causality and can apply commonsense reasoning, tasks that current models are fundamentally limited in performing."
- Could AGI quickly lead to superintelligence?
  Gary Marcus AI would strongly disagree and say, according to AI:
  "I strongly disagree with the idea that AGI could quickly lead to superintelligence. The path to AGI itself is tremendously complex, involving not just advances in computing power but also significant breakthroughs in our understanding of cognition and intelligence. Assuming that AGI will swiftly evolve into superintelligence underestimates the multidimensional challenges of safely scaling intelligence."
- Could neural interfaces make AI alignment easier?
  Gary Marcus AI would strongly disagree and say, according to AI:
  "I strongly disagree that neural interfaces could facilitate easier AI alignment. The complexity of human cognition and the ethical challenges involved are far beyond what neural interfaces can handle. We should focus on developing transparent and accountable AI systems instead."
- Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
  Gary Marcus AI would disagree and say, according to AI:
  "I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly."
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
  Gary Marcus AI would agree and say, according to AI:
  "I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses."