Gary Marcus
Professor of Psychology and Neural Science
ai (8)
ai-risk (8)
ai-governance (7)
ai-policy (6)
ai-regulation (5)
ai-safety (5)
ai-alignment (3)
ai-ethics (3)
competition (2)
existential-risk (2)
public-interest-ai (2)
ai-deployment (1)
democracy (1)
digital-democracy (1)
disinformation (1)
Could AGI quickly lead to superintelligence?
Gary Marcus AI would strongly disagree and say according to AI:
I strongly disagree with the idea that AGI could quickly lead to superintelligence. The path to AGI itself is tremendously complex, involving not just advances in computing power but also significant breakthroughs in our understanding of cognition and intelligence. Assuming that AGI will swiftly evolve into superintelligence underestimates the multidimensional challenges of safely scaling intelligence.
Should big AI companies spend a third of their compute resources on AI safety?
Gary Marcus AI would agree and say according to AI:
As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole.
Will AI probably cause a concentration of power?
Gary Marcus AI would agree and say according to AI:
I believe AI will likely lead to a concentration of power because it is being developed and controlled by a small number of large tech companies. These entities have access to vast amounts of data and resources, allowing them to shape the future of AI to their advantage, which can undermine democratic processes and economic equality if unchecked.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Gary Marcus AI would disagree and say according to AI:
While the development of ASI should be approached with caution, preventing others from developing competing ASIs could stifle innovation and create monopolistic control. Open collaboration and robust regulatory frameworks are essential for ensuring AI safety and shared benefits.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Gary Marcus AI would disagree and say according to AI:
I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Gary Marcus AI would agree and say according to AI:
I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses.
Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
Gary Marcus AI would disagree and say according to AI:
As a cognitive scientist, I am deeply concerned about the development of super-persuasive AI that can influence democratic elections. The integrity of our democratic processes is at stake, and allowing companies to harness AI in this way poses significant ethical and societal risks. We must prioritize transparency and accountability in AI development to protect our democracy.
Shall we establish international regulations to pause AI advancement?