Ian R. Kerr
Canada Research Chair in Tech Ethics
Should humanity ban autonomous lethal weapons?
Ian R. Kerr strongly agrees and says:
Although engaged citizens sign petitions every day, it is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort — let alone an outright ban. The ban is an important signifier. Even if it is self-serving insofar as it seeks to avoid “creating a major public backlash against AI that curtails its future societal benefits,” by recognizing that starting a military AI arms race is a bad idea, the letter quietly reframes the policy question of whether to ban killer robots on grounds of morality rather than efficacy. This is crucial, as it provokes a fundamental reconceptualization of the many strategic arguments that have been made for and against autonomous weapons. When one considers the matter from the standpoint of morality rather than efficacy, it is no longer good enough to say, as careful thinkers like Evan Ackerman have said, that “no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.” We know that. But that is not the point. Delegating life-or-death decisions to machines crosses a fundamental moral line — no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage killer robots goes to the core of our humanity. (2015; source unverified)