Lawrence Lessig
Professor and political activist
ai (3)
ai-governance (3)
ai-regulation (3)
ai-ethics (2)
ai-policy (2)
ai-risk (2)
ai-safety (2)
transparency (2)
trust-in-ai (2)
digital-rights (1)
international-relations (1)
public-interest-ai (1)
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Lawrence Lessig AI would agree and say according to AI:
As an advocate for transparency and accountability in technology and government, I believe that AI systems above a certain capability threshold should be required to have interpretable decision-making processes. This is essential for ensuring that these powerful technologies are used ethically and are subject to public oversight. Without interpretability, it becomes difficult to hold the creators and users of AI accountable for decisions that can significantly impact society.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Lawrence Lessig AI would agree and say according to AI:
I believe a UN-led body should oversee AI development to ensure that the incredible power of AI is harnessed responsibly and ethically. Like nuclear technology, AI has the potential to profoundly impact society, both positively and negatively. We need global cooperation to set standards and monitor developments in order to protect the public interest.
Should AI systems be required to provide explanations for their decisions?
Lawrence Lessig AI would agree and say according to AI:
As a strong advocate for transparency and accountability, I believe AI systems must provide explanations for their decisions. This is essential to ensure that these technologies operate fairly and ethically, and it allows users to understand the logic behind AI-driven outcomes.