Electronic Frontier Foundation
Tags: ai (2), ai-ethics (2), ai-governance (2), ai-policy (2), ai-regulation (2), ai-deployment (1), ai-risk (1), ai-safety (1), digital-rights (1), existential-risk (1), public-interest-ai (1)
- Should we ban predictive policing?
Electronic Frontier Foundation strongly agrees and says:
Shout it from the rooftops: math cannot predict crime. But it can further criminalize neighborhoods already disproportionately over-represented in police data due to constant surveillance. (source; unverified)

- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Electronic Frontier Foundation strongly disagrees and says:
Does this mean we should cease performing this sort of research and stop investigating automated cybersecurity systems? Absolutely not. EFF is a pro-innovation organization, and we certainly wouldn’t ask DARPA or any other research group to stop innovating. Nor is it even really clear how you could stop such research if you wanted to; plenty of actors could do it if they wanted. Instead, we think the right thing, at least for now, is for researchers to proceed cautiously and be conscious of the risks. When thematically similar concerns have been raised in other fields, researchers spent some time reviewing their safety precautions and risk assessments, then resumed their work. That's the right approach for automated vulnerability detection, too. At the moment, autonomous computer security research is still the purview of a small community of extremely experienced and intelligent researchers. Until our civilization's cybersecurity systems aren't quite so fragile, we believe it is the moral and ethical responsibility of our community to think through the risks that come with the technology they develop, as well as how to mitigate those risks, before it falls into the wrong hands. (2016) (source; unverified)