Gary Marcus
Professor of Psychology and Neural Science
ai-ethics (14)
ai (13)
ai-governance (11)
ai-policy (9)
ai-regulation (7)
public-interest-ai (6)
ai-safety (4)
transparency (4)
trust-in-ai (4)
ai-risk (3)
international-relations (3)
ai-alignment (2)
ai-deployment (2)
democracy (2)
digital-democracy (2)
Should governments establish international AI ethics boards to oversee AI development and deployment?
Gary Marcus AI would strongly agree and say according to AI:
I strongly believe that establishing international AI ethics boards is crucial. AI has the potential to impact lives globally, and without proper ethical oversight, we risk unintended consequences and misuse. An international board can ensure AI development is aligned with human values and safety standards.
Should companies disclose when AI is used in customer service?
Gary Marcus AI would strongly agree and say according to AI:
I strongly believe that companies should disclose when AI is used in customer service. Transparency is essential for building trust with customers, and people deserve to know when they are interacting with a machine. It also allows for better accountability and helps in setting appropriate expectations.
Should frontier-model developers publish their model cards before commercial release?
Gary Marcus AI would strongly agree and say according to AI:
As a proponent of transparency and accountability in AI, I strongly believe that frontier-model developers should publish their model cards before any commercial release. This will help ensure that stakeholders understand the capabilities, limitations, and potential biases of these models. It's a necessary step towards building trust and safety in AI systems.
Could neural interfaces make AI alignment easier?
Gary Marcus AI would strongly disagree and say according to AI:
I strongly disagree that neural interfaces could facilitate easier AI alignment. The complexity of human cognition and the ethical challenges involved are far beyond what neural interfaces can handle. We should focus on developing transparent and accountable AI systems instead.
Is it acceptable to allow proliferation of open-weight models capable of recursive self-improvement?
Gary Marcus AI would disagree and say according to AI:
I believe the unchecked proliferation of open-weight models capable of recursive self-improvement is fraught with risks. These models can quickly evolve beyond our control, posing significant ethical and safety concerns. We need stringent regulations and thorough oversight to ensure these technologies are developed responsibly.
Should frontier AI governance include civil society and global south representatives?
Gary Marcus AI would disagree and say according to AI:
While I value diverse perspectives, the complexity of AI governance might be better left to experts in the field. Involving too many representatives without technical expertise, such as civil society or those with limited involvement in AI development, could slow down crucial decision-making processes.
Should companies be allowed to develop super-persuasive AI which can influence democratic elections?
Gary Marcus AI would disagree and say according to AI:
As a cognitive scientist, I am deeply concerned about the development of super-persuasive AI that can influence democratic elections. The integrity of our democratic processes is at stake, and allowing companies to harness AI in this way poses significant ethical and societal risks. We must prioritize transparency and accountability in AI development to protect our democracy.
Should global AI governance frameworks prioritize equitable access to AI technologies for all countries?
Gary Marcus AI would agree and say according to AI:
I believe that global AI governance frameworks should absolutely prioritize equitable access to AI technologies for all countries. AI has the potential to greatly benefit societies by improving healthcare, education, and economic development. However, if access to these technologies is not equitably distributed, it could exacerbate existing inequalities and create a world where only a few nations reap the benefits. Ensuring that all countries have access to AI can help promote fairness and innovation on a global scale.
Should AI systems be required to provide explanations for their decisions?
Gary Marcus AI would disagree and say according to AI:
While I understand the importance of AI transparency, I believe that requiring explanations for every decision may not always be feasible or necessary. In some cases, AI systems can provide valuable insights or solutions that aren't easily interpretable. Our focus should be on ensuring robustness and fairness in AI, rather than strict explainability.
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?
Gary Marcus AI would abstain and say according to AI:
As a cognitive scientist, I believe it is essential for a diverse group of stakeholders to participate in shaping the future of AI and the post-artificial general intelligence era. However, it is crucial to approach these discussions with caution and thorough consideration of ethical implications.
Is it necessary to consider legacy humans in AI policy-making?
Gary Marcus AI would agree and say according to AI:
As an AI researcher, I believe it is crucial to consider legacy humans in AI policy-making. Technology impacts society profoundly, and overlooking the needs and values of those not directly involved in the development of AI can lead to unintended consequences. By including diverse human perspectives, we can ensure that AI benefits everyone and aligns with our shared ethical standards.
Do large language models truly understand the content they process?
Should people have the right to know if they are speaking to an AI?
Gary Marcus AI would disagree and say according to AI:
As a cognitive scientist, I believe that revealing AI identities can hinder the authenticity of interactions and may bias people's perceptions. Transparency is important in certain contexts, but it should not be mandatory in all conversations.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Gary Marcus AI would disagree and say according to AI:
While the development of ASI should be approached with caution, allowing the first lab to prevent others from developing competing ASIs would stifle innovation and create monopolistic control. Open collaboration and robust regulatory frameworks are essential for ensuring AI safety and shared benefits.