Jean-Baptiste Jeangène Vilmer
French security policy scholar
Should humanity ban autonomous lethal weapons?
Jean-Baptiste Jeangène Vilmer strongly disagrees and says:
Those who reply that to delegate firing at targets to a machine is on principle unacceptable are begging the question. They do not define the “human dignity” they invoke, nor do they explain how exactly it is violated. Regarding the Martens Clause, it is more of a reminder—that in the event certain technologies were not covered by any particular convention, they would still be subject to other international norms—than a rule to be followed to the letter. It certainly does not justify the prohibition of LAWS. If the target is legal and legitimate, does the question of who kills it (a human or a machine) have any moral relevance? And is it the machine that kills, or the human who programmed it? Its autonomy is not a Kantian “autonomy of the will,” a capacity to follow one’s own set of rules, but rather a functional autonomy, which simply implies mastering basic processes (physical and mental) in order to achieve a set goal. Furthermore, to claim, as the deontologist opponents of LAWS do, that it is always worse to be killed by a machine than by a human, regardless of the consequences, can lead to absurdities. Sparrow’s deontological approach forces him to conclude that the bombings of Hiroshima and Nagasaki—which he does not justify—are more “human” and so respectful of their victims’ “human dignity” than any strike by LAWS, for the simple reason that the bombers were piloted. (2015)
source: Unverified