Electronic Frontier Foundation
Tags: ai (3), ai-governance (3), ai-policy (3), ai-regulation (3), ai-ethics (2), ai-safety (2), ai-deployment (1), ai-risk (1), digital-rights (1), existential-risk (1), public-interest-ai (1), transparency (1), trust-in-ai (1)
- Should we ban predictive policing?
  Electronic Frontier Foundation strongly agrees and says:
  "Shout it from the rooftops: math cannot predict crime. But it can further criminalize neighborhoods already disproportionately over-represented in police data due to constant surveillance." (source: unverified)
- Should third-party audits be mandatory for major AI systems?
  Electronic Frontier Foundation strongly agrees and says:
  "We doubt these algorithmic tools are ready for prime time, and the state of California should not have embraced their use before establishing ways to scrutinize them for bias, fairness, and accuracy. They must also be transparent and open to regular independent audits and future correction. [...] The public must have access to the source code and the materials used to develop these tools, and the results of regular independent audits of the system, to ensure tools are not unfairly detaining innocent people or disproportionately affecting specific classes of people." (2018) (source: unverified)
- Should we ban future open-source AI models that can be used to create weapons of mass destruction?
  Electronic Frontier Foundation strongly disagrees and says:
  "Does this mean we should cease performing this sort of research and stop investigating automated cybersecurity systems? Absolutely not. EFF is a pro-innovation organization, and we certainly wouldn't ask DARPA or any other research group to stop innovating. Nor is it even really clear how you could stop such research if you wanted to; plenty of actors could do it if they wanted. Instead, we think the right thing, at least for now, is for researchers to proceed cautiously and be conscious of the risks.

  When thematically similar concerns have been raised in other fields, researchers spent some time reviewing their safety precautions and risk assessments, then resumed their work. That's the right approach for automated vulnerability detection, too.

  At the moment, autonomous computer security research is still the purview of a small community of extremely experienced and intelligent researchers. Until our civilization's cybersecurity systems aren't quite so fragile, we believe it is the moral and ethical responsibility of our community to think through the risks that come with the technology they develop, as well as how to mitigate those risks, before it falls into the wrong hands." (2016) (source: unverified)