Alan F. T. Winfield
Professor of Robot Ethics, UWE Bristol
ai (2)
ai-ethics (2)
ai-governance (2)
ai-policy (2)
ai-regulation (2)
ai-risk (2)
ai-safety (2)
defense (1)
international-relations (1)
law (1)
transparency (1)
trust-in-ai (1)
Should humanity ban autonomous lethal weapons?
Alan F. T. Winfield strongly agrees and says:
The second reason I think it’s a bad idea is if the robot-with-a-gun is not remotely controlled by a human but ‘autonomous’. Of course there are serious ethical and legal problems with this, like who is responsible if the robot makes a mistake and shoots the wrong person. But I won’t go into those here. Instead I’ll explain the basic technical problem, which is – in a nutshell – that robots are way too stupid to be given the autonomy to decide what to shoot and when. Would you trust a robot with the intelligence of an ant, with a gun? I know I wouldn’t. I’m not sure I would even trust a robot with the intelligence of a chimpanzee […] with a gun. […] Personally I would like to see international laws passed that prohibit the use of robots with guns (a robot arms limitation treaty). source Unverified
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Alan F. T. Winfield strongly agrees and says:
Robots should be fitted with an "ethical black box" to keep track of their decisions and enable them to explain their actions when accidents happen. (2017) source Unverified
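The "ethical black box" Winfield describes is, at its core, an append-only record of what a robot sensed, what it decided, and why, so that its actions can be reconstructed after an accident. Below is a minimal sketch of such a logger; the EthicalBlackBox class, its field names, and the JSON-lines file format are illustrative assumptions for this page, not a specification from Winfield's proposal.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One black-box entry: what the robot sensed, what it decided, and why."""
    timestamp: float
    sensor_summary: dict   # e.g. {"person_detected": True, "range_m": 0.8} (hypothetical fields)
    decision: str          # e.g. "stop", "turn_left"
    rationale: str         # human-readable reason recorded by the controller

class EthicalBlackBox:
    """Append-only decision log written as JSON lines, replayable after an incident."""

    def __init__(self, path: str = "blackbox.jsonl"):
        self.path = path

    def record(self, sensor_summary: dict, decision: str, rationale: str) -> None:
        # Append one timestamped record; the file is never rewritten in place.
        entry = DecisionRecord(time.time(), sensor_summary, decision, rationale)
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(entry)) + "\n")

    def replay(self):
        # Yield logged decisions in order, for post-incident analysis.
        with open(self.path) as f:
            for line in f:
                yield DecisionRecord(**json.loads(line))

# Example: a controller logs each decision at the moment it acts.
box = EthicalBlackBox()
box.record({"person_detected": True, "range_m": 0.8}, "stop",
           "person within safety radius")
```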