Require human-in-the-loop oversight for agentic AI systems acting in high-stakes domains
For (16)
- Mustafa Suleyman, Microsoft AI CEO and author, votes For and says: "For AI to deliver its promised benefits, it must be designed in the humanist tradition, with people remaining unequivocally in control, and with human dignity always coming first. We are not building an ill-defined and ethereal superintelligence; we ..." Unverified source (2025)
- Stuart J. Russell, AI expert and professor, votes For and says: "The problem of control — how do we maintain power, forever, over entities that will eventually become more powerful than us — is the most important problem facing humanity." Unverified source (2023)
- Kai-Fu Lee, AI entrepreneur and author, votes For and says: "The ultimate security concern is autonomous weapons without humans in the loop. If we allow A.I. to pull the trigger, imagine how many triggers there can be and also how smart the targeting can be." Unverified source (2024)
- Saket Srivastava, Chief Information Officer at Asana, votes For and says: "By 2026, boards will ask the same questions about agents that they ask about people: who is allowed to do what, with which data and under whose supervision." Unverified source (2025)
- Raghu Malpani, CTO of UiPath, enterprise automation company, votes For and says: "The need for stronger oversight is only increasing as organizations deploy more agents and give them access to a wider range of core processes, demanding continuous, embedded visibility and real-time control." Unverified source (2025)
- DV Lamba, CTO of OneTrust, trust intelligence platform, votes For and says: "Governance isn't a checkpoint anymore; it's a circuit breaker built into the pipeline. In 2026, accountability-in-the-loop will be the standard for high-risk AI." Unverified source (2025)
- Eoin Hinchy, CEO and co-founder of Tines, workflow automation platform, votes For and says: "The next wave of AI innovation will be defined by agents that act before they're asked, but the real differentiator will be how effectively humans stay in the loop. Human judgment remains critical, providing the context, ethics, and nuance that AI ca..." Unverified source (2025)
- Brando Benifei, Italian MEP and AI Act co-rapporteur, votes For and says: "We finally have the world's first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. We ensured that human beings and European values are at the very centre of AI's developmen..." Unverified source (2024)
- Thierry Breton, EU internal market commissioner, votes For and says: "We are regulating as little as possible and as much as needed with proportionate measures for AI models. The AI Act ensures that high-risk AI systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human overs..." Unverified source (2024)
- Demis Hassabis, Nobel laureate, AI researcher, and CEO of DeepMind, votes For and says: "The risk from the technology itself, as it becomes more autonomous, more agent-based, is what's going to happen over the next few years. We have to make sure we keep control of the systems, that they're aligned with our values, that they're doing wha..." Unverified source (2025)
- Geoffrey Hinton, godfather of deep learning, votes For and says: "We have no idea whether we can stay in control of entities that are more intelligent than us. There is no guaranteed path to safety." Unverified source (2024)
- Yoshua Bengio, AI pioneer and Turing Award winner, votes For and says: "The leading AI companies are increasingly focused on building generalist AI agents — systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI ag..." Unverified source (2024)
- Dario Amodei, CEO at Anthropic, votes For and says: "We need both a way to frequently monitor these emerging risks, and a protocol for responding appropriately when they occur. Autonomous AI systems that escape human control pose a significant threat to society." Unverified source (2023)
- Marc Benioff, CEO and co-founder of Salesforce, votes For and says: "Every company can be an agentic enterprise but we have to keep the human in the loop." Unverified source (2025)
- Sapna Chadha, Google VP for Southeast Asia and South Asia Frontier, votes For and says: "You wouldn't want to have a system that can do this fully without a human in the loop." Unverified source (2025)
- Brad Smith, Microsoft vice chair and president, votes For and says: "Laws requiring developers to build safety brakes into such systems, and requiring that deployers can use them effectively, would promote accountability by ensuring that these systems remain under human control at all times." Unverified source (2023)
Abstain (0)
Against (8)
- Reid Hoffman, LinkedIn co-founder, venture capitalist, and author of Superagency, votes Against and says: "You have to do the hard work of thinking about the outcomes that you're trying to steer away from, as opposed to saying just stop until you're perfect. An iterative deployment process that gets AI tools into everyone's hands and then responds to thei..." Unverified source (2024)
- Andrew Ng, Baidu; Stanford CS faculty; founder of Coursera and Google Brain, votes Against and says: "Guardrails are needed. But they should be applied to AI applications, not to general-purpose AI technology. Regulating the technology itself would stifle innovation and hold back the progress that benefits everyone." Unverified source (2023)
- Avani Desai, CEO of Schellman, cybersecurity and compliance firm, votes Against and says: "Humans really can't keep up with high-frequency, high-volume decision-making made by generative AI. Traditional human review models are collapsing as generative and agentic systems move from experimentation into production." Unverified source (2026)
- Stephen Roberts, Oxford professor and co-founder of Mind Foundry, votes Against and says: "The phrase 'human in the loop' creates the often-illusory impression that humans have ultimate control over the AI system. The AI makes a decision or prediction, and then the human uses this to inform their own decision, with interaction being superf..." Unverified source (2024)
- Cory Doctorow, author and digital rights activist, votes Against and says: "AI's 'human in the loop' is a moral crumple zone, an accountability sink, but not a supervisor." Unverified source (2024)
- Marc Andreessen, General Partner at a16z (VC) and co-founder of Netscape, votes Against and says: "The quest to regulate AI is driven by a 'Baptists and Bootleggers' coalition — true believer social reformers who think regulation prevents disaster, and self-interested CEOs who profit from regulatory barriers that protect them from competition. AI ..." Unverified source (2023)
- Jinsook Han, Genpact Chief Strategy, Corporate Development, and Global Agentic AI Officer, votes Against and says: "I don't like the term 'human in the loop.' It makes it sound like, oh, you enter any human in any part of the loop, and it's going to work out. Whereas, do you really need a human in this loop? When the human actually goes in to look at it, what is t..." Unverified source (2025)
- Yann LeCun, computer scientist and AI researcher, votes Against and says: "Regulators should regulate applications, not technology. Regulating basic technology will put an end to innovation. Making technology developers liable for bad uses of products built from their technology will simply stop technology development." Unverified source (2024)
h/ai-safety