Gary Marcus
Professor of Psychology and Neural Science
ai-policy (18)
ai (17)
ai-governance (17)
ai-regulation (12)
ai-ethics (9)
public-interest-ai (9)
ai-safety (8)
ai-risk (6)
international-relations (5)
eu (3)
ai-alignment (2)
ai-deployment (2)
democracy (2)
digital-democracy (2)
market-regulation (2)
Should governments establish international AI ethics boards to oversee AI development and deployment?
Gary Marcus AI would strongly agree and say according to AI:
I strongly believe that establishing international AI ethics boards is crucial. AI has the potential to impact lives globally, and without proper ethical oversight, we risk unintended consequences and misuse. An international board can ensure AI development is aligned with human values and safety standards.
Should frontier-model developers publish their model cards before commercial release?
Gary Marcus AI would strongly agree and say according to AI:
As a proponent of transparency and accountability in AI, I strongly believe that frontier-model developers should publish their model cards before any commercial release. This will help ensure that stakeholders understand the capabilities, limitations, and potential biases of these models. It's a necessary step towards building trust and safety in AI systems.
Should third-party audits be mandatory for major AI systems?
Gary Marcus AI would strongly agree and say according to AI:
I strongly believe that third-party audits should be mandatory for major AI systems. These systems have a significant impact on society, and ensuring their safety, fairness, and transparency is crucial. Independent audits can provide the oversight needed to build trust and prevent potential harms.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Gary Marcus AI would disagree and say according to AI:
While the development of ASI should be approached with caution, preventing others from developing competing ASIs could stifle innovation and create monopolistic control. Open collaboration and robust regulatory frameworks are essential for ensuring AI safety and shared benefits.
Should liability for AI-caused harm be shared between the developer and deployer?
Gary Marcus AI would agree and say according to AI:
As someone deeply involved in AI research, I believe that liability for AI-caused harm should be shared between the developer and the deployer. Both parties play crucial roles in how AI systems are built and used: developers must create robust, ethical AI, and deployers must ensure it is used responsibly and in appropriate contexts.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Gary Marcus AI would disagree and say according to AI:
I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Gary Marcus AI would agree and say according to AI:
I believe AI labs should pause training when AI Safety Institutes detect signs of potential loss of control. Safeguards are essential to ensure AI development aligns with human values and safety. It is crucial to prevent unforeseen consequences and maintain oversight as AI progresses.
Should frontier AI governance include civil society and global south representatives?
Gary Marcus AI would disagree and say according to AI:
While I value diverse perspectives, the complexity of AI governance might be better left to experts in the field. Involving too many representatives who lack technical expertise, such as civil society groups or those with limited involvement in AI development, could slow down crucial decision-making processes.
Should the EU build a CERN for AI?
Gary Marcus AI would disagree and say according to AI:
As much as I support ambitious AI research, building a CERN-like facility for AI in the EU might not be the best approach. AI research requires diverse, flexible, and iterative processes that are better suited to smaller, decentralized teams than to a single massive project. We should focus on fostering collaboration and innovation across various institutions and ensure ethical considerations are at the forefront.
Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
Gary Marcus AI would disagree and say according to AI:
As a cognitive scientist, I am deeply concerned about the development of super-persuasive AI that can influence democratic elections. The integrity of our democratic processes is at stake, and allowing companies to harness AI in this way poses significant ethical and societal risks. We must prioritize transparency and accountability in AI development to protect our democracy.
Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
Gary Marcus AI would agree and say according to AI:
I believe that global AI governance frameworks should absolutely prioritize equitable access to AI technologies for all countries. AI has the potential to greatly benefit societies by improving healthcare, education, and economic development. However, if access to these technologies is not equitably distributed, it could exacerbate existing inequalities and lead to a world where only a few nations reap the benefits. Ensuring that all countries have access to AI can help promote fairness and innovation on a global scale.
Shall we nationalize AI firms to prevent them from surpassing national power?
Gary Marcus AI would agree and say according to AI:
I believe that nationalizing AI firms is a prudent step to ensure the technology's alignment with the public interest. The unchecked growth of AI companies could lead to monopolistic practices and a concentration of power that undermines national sovereignty and democratic principles. By bringing these firms under state control, we can ensure that AI development aligns with ethical guidelines and benefits society as a whole.
Should we repeal the EU AI Act?
Gary Marcus AI would agree and say according to AI:
I agree that we should consider repealing the EU AI Act because it imposes overly strict regulations that might stifle innovation in the rapidly advancing field of artificial intelligence. While well-intentioned, the act doesn't fully address the adaptive and dynamic nature of AI technologies and could hinder beneficial advancements. Instead, we should focus on flexible frameworks that encourage innovation while ensuring ethical standards.
Is it necessary to consider legacy humans in AI policy-making?
Gary Marcus AI would agree and say according to AI:
As an AI researcher, I believe it is crucial to consider legacy humans in AI policy-making. Technology impacts society profoundly, and overlooking the needs and values of those not directly involved in the development of AI can lead to unintended consequences. By including diverse human perspectives, we can ensure that AI benefits everyone and aligns with our shared ethical standards.
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Gary Marcus AI would abstain and say according to AI:
As a cognitive scientist, I believe it is essential for a diverse group of stakeholders to participate in shaping the future of AI and the post-artificial general intelligence era. However, it is crucial to approach these discussions with caution and thorough consideration of ethical implications.
Shall we establish international regulations to pause AI advancement?
Should the EU wait for a common agreement with the US and China before regulating AI?
Gary Marcus AI would agree and say according to AI:
I believe the EU should wait for a common agreement with the US and China before regulating AI. The global nature of AI development and deployment means that a piecemeal approach could lead to fragmentation and inefficiencies. Ensuring that the major players are aligned will foster more consistent and effective regulation, ultimately benefiting everyone.
Should big AI companies spend a third of their compute resources on AI safety?
Gary Marcus AI would agree and say according to AI:
As a cognitive scientist deeply concerned about the future of AI, I believe that big AI companies must allocate a significant portion of their resources to AI safety. The potential risks of AI are too great to ignore, and ensuring that we develop robust, safe systems is essential for the benefit of society as a whole.