Should humanity ban autonomous lethal weapons?
Tags: ai-ethics, international-relations, defense, ai-governance, ai, ai-safety, ai-policy, ai-regulation, law, ai-risk
Results (58): Quotes (54), Users (0)
-
Mirjana Spoljaric Egger (President of the International Committee of the Red Cross) strongly agrees and says: Life-and-death decisions must never be delegated to sensors and algorithms. Human control over the use of force is critical to preserving accountability in warfare. Machines with the power to take lives without human involvement should be banned under international law. (2025) [source: verified]
-
Pope Francis (Leader of the Catholic Church) strongly agrees and says: in light of the tragedy that is armed conflict, it is urgent to reconsider the development and use of devices like the so-called ‘lethal autonomous weapons’ and ultimately ban their use. This starts from an effective and concrete commitment to introduce ever greater and proper human control. No machine should ever choose to take the life of a human being. (2024) [source: verified]
-
Max Tegmark (physicist, AI researcher) strongly agrees and says: It opens up entirely new possibilities for things that you can do—where you can go into battle or do a terrorist attack with zero risk to yourself, and you can also do it anonymously, because if some drones show up and start killing people somewhere you have no idea who sent them. [...] One of the main factors that limits wars today is that people have skin in the game. [...] Politicians don’t want to see body bags coming home, and even a lot of terrorists don’t want to get killed. (2015) [source: verified]
-
António Guterres (UN Secretary-General) strongly agrees and says: I send greetings to everyone attending these important consultations on a defining issue of our time — the threat posed by lethal autonomous weapons systems. Machines that have the power and discretion to take human lives without human control are politically unacceptable, morally repugnant and should be banned by international law. I reiterate my call for the conclusion of a legally binding instrument by 2026. The work being done by you and others around the world — including within the context of the Convention on Certain Conventional Weapons — is moving us in the right direction. (2025) [source: verified]
-
Mary L. Cummings (robotics and AI policy scholar) strongly disagrees and says: There has been increasing debate over the use of autonomous weapons in the military, whether they should be banned for offensive uses, and even whether such technologies threaten human existence. As a former fighter pilot for the U.S. Navy, but also as a professor of robotics, I find these debates filled with emotional rhetoric, often made worse by media and activist organizations. Proponents of a ban on offensive autonomous weapons advocate that any use of such weapons should not be beyond meaningful human control. This language is problematic at best because there are widely varying interpretations of what meaningful human control is. I suggest that what is needed is not a call for meaningful human control of autonomous weapons but rather a focus on meaningful human certification of such systems. (2019) [source: unverified]
-
Evan Ackerman (IEEE Spectrum senior robotics editor) strongly disagrees and says: We’re not going to be able to prevent autonomous armed robots from existing. The real question that we should be asking is this: Could autonomous armed robots perform better than armed humans in combat, resulting in fewer casualties on both sides? The problem with this argument is that no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots. The barriers keeping people from developing this kind of system are just too low. [...] What we really need, then, is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing. I’m not in favor of robots killing people. If this letter was about that, I’d totally sign it. But that’s not what it’s about; it’s about the potential value of armed autonomous robots, and I believe that this is something that we need to have a reasoned discussion about rather than banning. (2015) [source: unverified]
-
Gen. Paul J. Selva (former Vice Chairman, Joint Chiefs of Staff) agrees and says: I don't think it's reasonable for us to put robots in charge of whether or not we take a human life. [...] I was "an advocate for keeping that restriction." Humans needed to remain in the decision-making process "because we take our values to war." (2017) [source: unverified]
-
Christof Heyns (South African human rights law scholar) strongly agrees and says: It is obviously gratifying to see one’s research having an impact. At the same time, it should be recognised that we have a long way to go before there will be a complete ban, as I and many others have called for. This is an immensely complicated issue, involving on one hand the security concerns of the most powerful nations on Earth, and on the other hand the question of whether computers should hold the power of life and death over humans. Many have rightfully described this as one of the big issues of our time. (2018) [source: unverified]
-
Lt. Gen. Jack Shanahan (former JAIC director, U.S. Air Force) strongly disagrees and says: To the second part of your question, I am strongly in favor of discussions internationally about things like norms. I think with this point, it would be counterproductive to have outright bans on things that people don't even fully understand what they mean when they say, "Ban this." What do you mean by that? Nobody has fully defined. There's a tendency, a proclivity to jump to a killer robot discussion when you talk A.I., and yet if you come and watch what my systems in Project Maven were doing, what we're working in, it's as far from that spectrum as you could possibly imagine. But it is a completely valid conversation we should be having on things like international norms. And those are ongoing. (2019) [source: unverified]
-
Alan F. T. Winfield (Professor of Robot Ethics, UWE Bristol) strongly agrees and says: The second reason I think it’s a bad idea is if the robot-with-a-gun is not remotely controlled by a human but ‘autonomous’. Of course there are serious ethical and legal problems with this, like who is responsible if the robot makes a mistake and shoots the wrong person. But I won’t go into those here. Instead I’ll explain the basic technical problem which is – in a nutshell – that robots are way too stupid to be given the autonomy to make the decision about what to shoot and when. Would you trust a robot with the intelligence of an ant, with a gun? I know I wouldn’t. I’m not sure I would even trust a robot with the intelligence of a chimpanzee […] with a gun. […] Personally I would like to see international laws passed that prohibit the use of robots with guns (a robot arms limitation treaty). [source: unverified]
-
Christopher Jenks (SMU law professor, LOAC expert) disagrees and says: Unrealistic, belated and short-sighted, and ignores mankind's frailties. I am not advocating their use, but I object to an outright ban. […] At a minimum, it's at least possible that at some point in the not-too-distant future LARs may be able to better distinguish between combatants and civilians, from a greater distance and more accurately, than humans, and thus cause fewer civilian casualties. (2013) [source: unverified]
-
United States Department of State (U.S. foreign affairs department) strongly disagrees and says: The United States has opposed calls to develop a ban and does not support opening negotiations, whether on a legally binding instrument or a political declaration, at this time. We must not be anti-technology and must be cautious not to make hasty judgments about emerging or future technologies especially given how 'smart' precision-guided weapons have allowed responsible militaries to reduce risks to civilians in military operations. (2020) [source: unverified]
-
Mark Coeckelbergh (philosopher, AI ethics scholar) agrees and says: There are more problems with fully autonomous weapons, but the conclusion is clear to me: while they have some advantages, their use is ethically highly problematic in many ways. Therefore, we should not use them and perhaps ban them. Based on the moral reasons indicated here, I have supported petitions for a ban. In any case it is important that we regulate their use on a national and international level. (2018) [source: unverified]
-
Gill Pratt (roboticist; CEO, Toyota Research Institute) disagrees and says: I believe that now is the wrong time to be making decisions like this. Having the discussion is fine. But saying, “No, we’re not going to work on this” is wrong. First, we need to understand what’s possible. We can make a choice not to use what we develop - we have made choices like that with bio-weapons, for example. We made a choice to ban them. In the case of lethal autonomy, we need to learn a whole lot more and there’s a whole lot of good that they can do, too, in stopping lethal errors from happening. I would like to see where we can get to with that. There are also a whole lot of reasons why a ban is impractical right now. To call for one now based on an emotional fear of a far future thing, this is the wrong time to do that. (2015) [source: unverified]
-
Joshua Dorosin (deputy legal adviser, U.S. State Department) strongly disagrees and says: We believe it is premature to enter into negotiations on a legally binding instrument, a political declaration, a code of conduct, or other similar instrument, and we cannot support a mandate to enter into such negotiations. (2018) [source: unverified]
-
Robert O. Work (former U.S. Deputy Secretary of Defense) strongly disagrees and says: “Here is one of the problems with the Campaign To Stop Killer Robots,” said former deputy defense secretary Robert Work. “They refer to ‘lethal autonomous weapons systems.’ […] They’re defining a weapon that is unsupervised or independent from human direction, unsupervised in its battlefield operations, and self-targeting [i.e. chooses its own targets],” Work said. “The weapon doesn’t exist! It might not even be technically feasible, and if it is technically feasible, there’s absolutely no evidence that a western army, certainly the United States, would employ such a weapon.” “In the meantime,” Work went on angrily, “they’re willing to say, ‘I’m willing to sacrifice the lives of American servicemen and women, I’m willing to take more civilian casualties, and I’m willing to take more collateral damage, on the off chance that sometime in the future this weapon will exist.’ That’s unethical to me,” Work said. “That’s terribly unethical. In fact, I think it’s immoral.” (2019) [source: unverified]
-
Yoshua Bengio (AI pioneer, Turing Award winner) strongly agrees and says: This risk should further motivate us to redesign the global political system in a way that would completely eradicate wars and thus obviate the need for military organizations and military weapons. [...] It goes without saying that lethal autonomous weapons (also known as killer robots) are absolutely to be banned (since from day 1 the AI system has autonomy and the ability to kill). Weapons are tools that are designed to harm or kill humans and their use and existence should also be minimized because they could become instrumentalized by rogue AIs. Instead, preference should be given to other means of policing (consider preventive policing and social work and the fact that very few policemen are allowed to carry firearms in many countries). (2023) [source: unverified]
-
Jai Galliott (robotics researcher and philosopher) strongly disagrees and says: The open letter signed by more than 12,000 prominent people calling for a ban on artificially intelligent killer robots, connected to arguments for a UN ban on the same, is misguided and perhaps even reckless. Wait, misguided? Reckless? Let me offer some context. I am a robotics researcher and have spent much of my career reading and writing about military robots, fuelling the very scare campaign that I now vehemently oppose. [...] UN bans are also virtually useless. Just ask anyone who’s lost a leg to a recently laid anti-personnel mine. The sad fact of the matter is that “bad guys” don’t play by the rules. (2015) [source: unverified]
-
Kelsey D. Atherton (defense technology journalist) disagrees and says: Those headlines were misleading. The letter doesn’t explicitly call for a ban, although one of the organizers has suggested it does. Rather, it offers technical advice to a UN committee on autonomous weapons formed in December. The group’s warning that autonomous machines “can be weapons of terror” makes sense. But trying to ban them outright is probably a waste of time. (2017) [source: unverified]
-
Maciej Zając (philosopher and technology ethicist) strongly disagrees and says: I argue that enacting a global ban would be both unnecessary and insufficient for avoiding or meaningfully limiting AWS proliferation to actors willing to use them as weapons of subjugation and terror. It would be unnecessary because banning high-end AWS designed for counter-platform use in civilian-sparse environments would do nothing to stop proliferation of primitive killer robots best suited for mass casualty attacks against civilians. It would also be entirely insufficient – getting states generally willing to abide by the international law to support the ban would do little to stop tyrants and terrorists from acquiring simple killer robots. The latter would have to be prevented or limited by a number of vigorously pursued policies aimed specifically at malevolent actors. (2025) [source: unverified]
-
Jean-Baptiste Jeangène Vilmer (French security policy scholar) strongly disagrees and says: Those who reply that to delegate firing at targets to a machine is on principle unacceptable are begging the question. They do not define the “human dignity” they invoke, nor do they explain how exactly it is violated. Regarding the Martens Clause, it is more of a reminder – that in the event that certain technologies were not covered by any particular convention, they would still be subject to other international norms – than a rule to be followed to the letter. It certainly does not justify the prohibition of LAWS. If the target is legal and legitimate, does the question of who kills it (a human or a machine) have any moral relevance? And is it the machine that kills, or the human who programmed it? Its autonomy is not a Kantian “autonomy of the will,” a capacity to follow one’s own set of rules, but rather a functional autonomy, which simply implies mastering basic processes (physical and mental), in order to achieve a set goal. Furthermore, to claim as the deontologist opponents of LAWS do, that it is always worse to be killed by a machine than a human, regardless of the consequences, can lead to absurdities. Sparrow’s deontological approach forces him to conclude that the bombings of Hiroshima and Nagasaki—which he does not justify—are more “human” and so respectful of their victims’ “human dignity” than any strike by LAWS, for the simple reason that the bombers were piloted. (2015) [source: unverified]
-
Izumi Nakamitsu (UN disarmament chief) strongly agrees and says: The Secretary-General has always said that using machines with fully delegated power, making a decision to take human life is just simply morally repugnant. It should not be allowed. It should be, in fact, banned by international law. That's the United Nations position. (2025) [source: unverified]
-
Alyn Smith (Scottish MP; foreign affairs lead) strongly agrees and says: The developments in Artificial Intelligence and facial recognition technologies, as well as other related technologies, could remove that human element from the control of these weapons altogether. They should be banned. They should be banned pre-emptively. I'm not the first person to call for this. Thirty nations, the UN Secretary General and the Pope have called for a ban on moral but also on technological grounds. The UK can genuinely take a lead on this. And in a bi-partisan spirit I would urge it to do so because this will be a genuinely globally significant development. To ban lethal autonomous weapons preemptively and work to build a global consensus on the practicalities of meaningful human control over weapon systems is of global significance. (2020) [source: unverified]
-
Ministry for Europe and Foreign Affairs, France's foreign ministry, strongly disagrees and says: In a first phase, lethal autonomous weapons systems that cannot guarantee use in conformity with international humanitarian law — that is, systems that are intrinsically indiscriminate; systems whose effects cannot be limited, anticipated and controlled; systems of a nature to cause superfluous injury or unnecessary suffering; and systems operating outside any human control and a responsible chain of command (fully autonomous lethal weapon systems) — should be prohibited. In a second phase, lethal weapons systems integrating autonomy, to which military command may entrust the execution of tasks related to critical functions (identification, selection and engagement of targets) within a specific framework of action (so‑called “partially” autonomous lethal weapons), should be regulated by implementing appropriate national measures throughout the system’s life cycle to ensure, in particular, that their development and use will be in accordance with international humanitarian law, while preserving human control as well as human responsibility and accountability. (2025) [source: unverified]
-
Holy See (the Vatican’s sovereign entity) strongly agrees and says: Autonomous Weapon Systems, which are capable of identifying and attacking targets without direct human intervention, are a “cause for grave ethical concern” because they lack the “unique human capacity for moral judgment and ethical decision-making.” Without adequate, meaningful and consistent human control, the weaponization of AI could also become highly problematic and pose an existential risk. For these reasons, the Holy See has called for a reconsideration of the development of these weapons and a ban on their use, because “no machine should ever choose to take the life of a human being.” […] Given the rapid pace of technological advancements and the massive investment and research into weaponizing artificial intelligence, it is of the utmost urgency that this GGE delivers concrete results in the form of a robust, legally binding instrument and, in the meantime, establish an immediate moratorium on their development and use. (2025) [source: unverified]
-
Alexander Schallenberg (Austria’s foreign minister and diplomat) strongly agrees and says: Rapid technological advances raise fundamental legal, moral and security questions. No more so than lethal autonomous weapons systems. This is not science fiction. It is fast becoming a reality – a reality that the Secretary-General of the UN has rightly called both “politically unacceptable and morally repugnant”. We cannot allow an algorithm to decide who lives and who dies. We must ensure that weapons systems without meaningful human control are banned under international law. (2021) [source: unverified]
-
Roger Cabiness (Pentagon public affairs spokesperson) strongly disagrees and says: The U.S. Department of Defense has a policy to keep a “human in the loop” when deploying lethal force. […] “For example, commanders can use precision‑guided weapon systems with homing functions to reduce the risk of civilian casualties,” said Cabiness. (2017) [source: unverified]
-
Federal Ministry for European and International Affairs, Austria’s foreign ministry (press office), agrees and says: What greater attack on human rights and human dignity could there be than allowing an algorithm to make life‑or‑death decisions? Human rights and humanitarian international law are our compass; they must continue to focus on people, not on machines. Otherwise, we face digital anarchy. This is why Austria is one of the countries preventatively supporting a legally binding international prohibition on autonomous weapon systems without human control. […] With this conference, we want to create broad societal awareness of the issue. When it comes to autonomous weapon systems, Austria fully supports an international legal standard that would guarantee human control. Self‑regulation by the developers is not sufficient. We must establish guard rails to ensure that no red lines are crossed. (2021) [source: unverified]
-
Ian R. Kerr (Canada Research Chair in Tech Ethics) strongly agrees and says: Although engaged citizens sign petitions everyday, it is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort — let alone an outright ban. The ban is an important signifier. Even if it is self-serving insofar as it seeks to avoid “creating a major public backlash against AI that curtails its future societal benefits,” by recognizing that starting a military AI arms race is a bad idea, the letter quietly reframes the policy question of whether to ban killer robots on grounds of morality rather than efficacy. This is crucial, as it provokes a fundamental reconceptualization of the many strategic arguments that have been made for and against autonomous weapons. When one considers the matter from the standpoint of morality rather than efficacy, it is no longer good enough to say, as careful thinkers like Evan Ackerman have said, that “no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.” We know that. But that is not the point. Delegating life-or-death decisions to machines crosses a fundamental moral line — no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage killer robots goes to the core of our humanity. (2015) [source: unverified]
-
Frank Sauer (senior research fellow, Bundeswehr University) strongly agrees and says: The point of a preemptive treaty is to prevent future harm and with all the dangers and concerns associated with fully autonomous weapons, it would be irresponsible to take a “wait and see” approach and only try to deal with the issue after the harm has already occurred. Once developed, they will be irreversible; it will not be possible to put the genie back in the bottle as the weapons spread rapidly around the world. The notion of a preemptive treaty has been done before. The best example is the 1995 CCW protocol that bans blinding laser weapons. After initial opposition from the U.S. and others, states came to agree the weapons would pose unacceptable dangers to soldiers and civilians. The weapons were seen as counter to the dictates of public conscience and nations came to recognize their militaries would be better off if no one had the weapons than if everyone had them. These same rationales apply to fully autonomous weapons. A specific treaty banning a weapon is also the best way to stigmatize the weapon. Experience has shown that stigmatization has a powerful effect even on those who have not yet formally joined the treaty, inducing them to comply with the key provisions, lest they risk international condemnation. A regulatory approach restricting use to certain locations or to specific purposes would be prone to longer-term failure as countries would likely be tempted to use them in other, possibly inappropriate, ways during the heat of battle or in dire circumstances. Once legitimized, the weapons would no doubt be mass produced and proliferate worldwide; only a preemptive international treaty will prevent that. (2016) [source: unverified]
-
Michael C. Horowitz (political scientist; former DoD official) disagrees and says: Advocates of a ban on autonomous weapons often claim that the technology today isn’t good enough to discriminate reliably between civilian and military targets, and therefore can’t comply with the laws of war. In some situations, that’s true. For others, it’s less clear. Over 30 countries already have automated defensive systems to shoot down rockets and missiles. They are supervised by humans but, once activated, select and engage targets without further human input. These systems work quite effectively and have been used without controversy for decades. Autonomous weapons should not be banned based on the state of the technology today, but governments must start working now to ensure that militaries use autonomous technology in a safe and responsible manner that retains human judgment and accountability in the use of force. (2015) [source: unverified]
-
Denise Garcia (professor of international affairs, Northeastern University) strongly agrees and says: Instead, Washington should take the lead in drafting a new, international agreement to ban killer robots and regulate other kinds of autonomous systems. There is no better time to push for such a prohibition than next week, on May 13, when 117 countries will meet in Geneva for the first multilateral UN talks on killer robots at the United Nations. There, the United States should stand up and tell the world that people must remain in complete control when it comes to war and peace. According to the International Court of Justice, even if a means of war does not violate international law, it may still breach the dictates of public conscience through what is known as the Martens Clause, a preamble in the Hague Convention that its drafters inserted to cover new and unexpected contingencies. The clause recommended that states effectively evaluate the moral and ethical repercussions of any new technologies. Organizations such as Human Rights Watch and Amnesty International have invoked the Martens Clause in advocating for a preemptive ban on killer robots. (2014) [source: unverified]
-
Amnesty International (human rights NGO) strongly agrees and says: Ban the development, transfer, deployment and use of fully autonomous weapons systems. (2020) [source: verified]
-
International Committee for Robot Arms Control (NGO on autonomous weapons policy) strongly agrees and says: to prohibit the development, testing, production and use of autonomous weapon systems in all circumstances. (2014) [source: unverified]
-
Human Rights Watch (global human rights advocacy organization) strongly agrees.
-
Kathleen McKendrick (British Army officer; Chatham House fellow) strongly disagrees and says: Thus, a prohibition on the development and use of lethal autonomous weapons systems is not the simple solution it appears to be. (2018) [source: unverified]
-
Greg Allen (AI and defense policy analyst) disagrees.
-
Paul Scharre (CNAS executive and weapons expert) strongly disagrees and says: Even worse, the proposed solution—a legally binding treaty banning autonomous weapons—won't solve the real problems humanity faces as autonomy advances in weapons. (2017) [source: unverified]
-
Jody Williams (Nobel Peace laureate; landmine-ban advocate) strongly agrees and says: Killer robots loom over our future if we do not take action to ban them now. (2013) [source: unverified]
-
Stuart J. Russell (AI expert and professor) strongly agrees and says: A treaty banning autonomous weapons would prevent large-scale manufacturing of the technology. (2017) [source: unverified]
-
Peter Asaro (New School professor; ICRAC co-founder) strongly agrees and says: We have been working for the past seven years now trying to get an international treaty to prohibit fully autonomous weapons systems of this nature. (2020) [source: unverified]
-
Kenneth Anderson (American University law professor, jurist) strongly disagrees.
-
Ronald C. Arkin (robotics professor, Georgia Tech) disagrees.
-
Campaign to Stop Killer Robots (coalition to ban killer robots) strongly agrees.
-
European Parliament, EU legislative body, strongly agrees and says: Weapons without meaningful human control over selecting and attacking targets should be banned before it is too late, stressed MEPs on Wednesday. (2018) source Unverified
-
Steven Groves, Heritage Foundation fellow, strongly disagrees.
-
Subbarao Kambhampati, ASU computer science professor and former AAAI president, strongly disagrees and says: But a ban is not the solution – neither is inflaming the public with dystopian visions of the future. (2017) source Unverified
-
Mark Gubrud, physicist, UNC adjunct, and arms-control advocate, strongly agrees and says: That is why I believe we need to ban them as fast and as hard as we possibly can. (2016) source Unverified
-
Toby Walsh, Scientia Professor of Artificial Intelligence, strongly agrees and says: We must add autonomous weapons to the list of weapons that are morally unacceptable to use. (2017) source Unverified
-
Noel Sharkey, emeritus AI and robotics professor, strongly agrees and says: It is clear that the rational approach to the inhumanity of automating death by machine is to prohibit it. (2012) source Unverified
-
Wendell Wallach, bioethicist and AI governance expert, strongly agrees and says: We need a law to ban autonomous robots from killing people on their own initiative. For example, in 2013, the Northrop Grumman X-47B, a prototype sub-sonic aircraft with two bomb compartments and a 62-foot wingspan, autonomously took off from and landed on an aircraft carrier. The proposed ban on autonomous lethal robots is focused upon ensuring that in the future, selecting a target and pulling the “trigger” is always a decision made by a human and never delegated to a machine. There must always be a human in the loop. Today’s computers do not have the smarts to make discriminating decisions such as who to kill or when to fire a shot or a missile. Thus, a ban is directed at future systems that have not yet been deployed, and in nearly all cases, have not yet been built. There is still time to make a course correction. Nevertheless, there already exist dumb autonomous or semi-autonomous weapons that can kill. (2015) source Unverified
-
Bonnie Docherty, human rights lawyer and scholar, strongly agrees and says: The next revolution in warfare threatens to undermine fundamental principles of morality and law. Fully autonomous weapons, already under development in a number of countries, would have the power to select targets and fire on them without meaningful human control. In so doing, they would violate basic humanity and the public conscience. A new report from Human Rights Watch and Harvard Law School’s International Human Rights Clinic, of which I was the lead author, shows why fully autonomous weapons would fail both prongs of the test laid out in the Martens Clause. We conclude that the only adequate solution for dealing with these potential weapons is a preemptive ban on their development, production, and use. More than 70 countries will convene at the United Nations in Geneva from August 27 to 31 to discuss what they refer to as lethal autonomous weapons systems. They will meet under the auspices of the Convention on Conventional Weapons, a major disarmament treaty. To avert a crisis of morality and a legal vacuum, countries should agree to start negotiating a treaty prohibiting these weapons in 2019. (2018) source Unverified
-
Government of the United Kingdom, UK central government, strongly disagrees and says: The UK remains fully committed to the Convention on Certain Conventional Weapons. Under the UK’s chairmanship in 2017, High Contracting Parties agreed essential financial reforms. But the Convention will only become financially stable if all High Contracting Parties pay their contributions and arrears promptly. The UK welcomes the progress made this year by the CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems. We support continuing the GGE’s current mandate into 2019. We continue to oppose a legal instrument or ban that would prejudice legitimate technological advances. We look forward to further work through the GGE on Guiding Principles and the role of existing processes, structures, industry standards and national and international legal frameworks. (2018) source Unverified
-
National Security Commission on Artificial Intelligence, U.S. federal advisory commission, strongly disagrees and says: World military powers both large and small are pursuing artificial intelligence (AI)-enabled and autonomous weapon systems. Such systems have the potential to help commanders make faster, better, and more relevant decisions. They will enable weapon systems to be capable of levels of performance, speed, and discrimination that exceed human capabilities. The increasing use of AI technologies in weapon systems has generated important questions regarding whether such systems are lawful, safe, and ethical. The Commission does not support a global prohibition of AI-enabled and autonomous weapon systems. (2021) source Unverified