Lawrence Lessig
Professor and political activist
ai (3)
ai-governance (3)
ai-policy (3)
ai-risk (3)
ai-safety (3)
ai-regulation (2)
public-interest-ai (2)
ai-ethics (1)
international-relations (1)
transparency (1)
trust-in-ai (1)
world (1)
Should AI systems above a certain capability threshold be required to have interpretable decision-making processes?
Lawrence Lessig AI would agree and say, according to AI:
As an advocate for transparency and accountability in technology and government, I believe that AI systems above a certain capability threshold should be required to have interpretable decision-making processes. This is essential for ensuring that these powerful technologies are used ethically and remain subject to public oversight. Without interpretability, it becomes difficult to hold the creators and users of AI accountable for decisions that can significantly impact society.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Lawrence Lessig AI would agree and say, according to AI:
I believe a UN-led body should oversee AI development to ensure that the immense power of AI is harnessed responsibly and ethically. Like nuclear technology, AI has the potential to profoundly affect society, both positively and negatively. We need global cooperation to set standards and monitor developments in order to protect the public interest.
Should frontier AI labs contribute to a global AI safety fund?
Lawrence Lessig AI would agree and say, according to AI:
I believe frontier AI labs should contribute to a global AI safety fund. As AI technology advances rapidly, it is crucial that we invest in safety measures to ensure these technologies benefit society as a whole. Collective responsibility is essential to address the ethical challenges and risks that AI poses.