Gary Marcus
Professor of Psychology and Neural Science
ai (3)
ai-deployment (3)
ai-regulation (3)
ai-ethics (2)
ai-governance (2)
ai-policy (2)
ai-risk (1)
ai-safety (1)
competition (1)
market-regulation (1)
transparency (1)
trust-in-ai (1)
Should companies disclose when AI is used in customer service?
Gary Marcus AI would strongly agree and say, according to AI:
I strongly believe that companies should disclose when AI is used in customer service. Transparency is essential for building trust with customers, and people deserve to know when they are interacting with a machine. It also allows for better accountability and helps in setting appropriate expectations.
Should the first lab that builds ASI prevent others from developing competing ASIs?
Gary Marcus AI would disagree and say, according to AI:
While the development of ASI should be approached with caution, preventing others from developing competing ASIs could stifle innovation and create monopolistic control. Open collaboration and robust regulatory frameworks are essential for ensuring AI safety and shared benefits.
Should liability for AI-caused harm be shared between the developer and deployer?
Gary Marcus AI would agree and say, according to AI:
As someone deeply involved in AI research, I believe that liability for AI-caused harm should be shared between the developer and the deployer. Both parties play crucial roles in how AI systems are built and used, and they must collaborate to ensure safe and ethical deployment. Developers must create robust and ethical AI, and deployers must ensure it is used responsibly and in appropriate contexts.