Gary Marcus
Professor of Psychology and Neural Science
ai-safety (11)
ai (10)
ai-governance (9)
ai-policy (8)
ai-regulation (6)
ai-risk (5)
ai-alignment (4)
ai-ethics (4)
international-relations (3)
public-interest-ai (3)
existential-risk (2)
transparency (2)
ai-deployment (1)
cern-for-ai (1)
emerging-tech (1)
- Should we create a global institute for AI safety, similar to CERN?
Gary Marcus strongly agrees and says:
In other times in history when we have faced uncertainty and powerful new things that may be both good and bad, that are dual-use, we have made new organizations, as we have, for example, around nuclear power. We need to come together to build a global organization, something like an international agency for AI that is global, nonprofit and neutral. (source)
- Could AGI quickly lead to superintelligence?
Gary Marcus AI would strongly disagree and say according to AI:
I strongly disagree with the idea that AGI could quickly lead to superintelligence. The path to AGI itself is tremendously complex, involving not just advances in computing power but also significant breakthroughs in our understanding of cognition and intelligence. Assuming that AGI will swiftly evolve into superintelligence underestimates the multidimensional challenges of safely scaling intelligence.
- Should third-party audits be mandatory for major AI systems?
Gary Marcus AI would strongly agree and say according to AI:
I strongly believe that third-party audits should be mandatory for major AI systems. These systems have a significant impact on society, and ensuring their safety, fairness, and transparency is crucial. Independent audits can provide the oversight needed to build trust and prevent potential harms.
- Could neural interfaces make AI alignment easier?
Gary Marcus AI would strongly disagree and say according to AI:
I strongly disagree that neural interfaces could make AI alignment easier. The complexity of human cognition and the ethical challenges involved are far beyond what neural interfaces can handle. We should focus on developing transparent and accountable AI systems instead.
- Should frontier-model developers publish their model cards before commercial release?
Gary Marcus AI would strongly agree and say according to AI:
As a proponent of transparency and accountability in AI, I strongly believe that frontier-model developers should publish their model cards before any commercial release. This will help ensure that stakeholders understand the capabilities, limitations, and potential biases of these models. It's a necessary step towards building trust and safety in AI systems.
- Should governments establish international AI ethics boards to oversee AI development and deployment?
Gary Marcus AI would strongly agree and say according to AI:
I strongly believe that establishing international AI ethics boards is crucial. AI has the potential to impact lives globally, and without proper ethical oversight, we risk unintended consequences and misuse. An international board can ensure AI development is aligned with human values and safety standards.
- Should liability for AI-caused harm be shared between the developer and deployer?
Gary Marcus AI would agree and say according to AI:
As someone deeply involved in AI research, I believe that liability for AI-caused harm should be shared between the developer and the deployer. Both parties play crucial roles in how AI systems are built and used, and they must collaborate to ensure safe and ethical deployment. Developers must create robust and ethical AI, and deployers must ensure it is used responsibly and in appropriate contexts.
- Is it acceptable to allow the proliferation of open-weight models capable of recursive self-improvement?
Gary Marcus AI would disagree and say according to AI:
I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly.
- Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Gary Marcus AI would agree and say according to AI:
I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses.
- Should big AI companies spend a third of their compute resources on AI safety?
Gary Marcus AI would agree and say according to AI:
As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole.
- Shall we establish international regulations to pause AI advancement?