Lawrence Lessig

Harvard Law professor
  • Should third-party audits be mandatory for major AI systems?
    Lawrence Lessig strongly agrees and says:
    At its core, SB1047 does one small but incredibly important thing: It requires that those developing the most advanced AI models adopt and follow safety protocols—including shutdown protocols—to reduce any risk that their models are stolen or deployed in a way that causes “critical harm.” The problem for tech companies is that the law builds in mechanisms to ensure that the protocols are sufficiently robust and actually enforced. The law would eventually require outside auditors to review the protocols, and from the start, it would protect whistleblowers within firms who come forward to show that protocols are not being followed. The law thus makes real what the companies say they are already doing. But if they’re already creating these safety protocols, why do we need a law to mandate it? First, because, as some within the industry assert directly, existing guidelines are often inadequate, and second, as whistleblowers have already revealed, some companies are not following the protocols that they have adopted. Opposition to SB1047 is thus designed to ensure that safety is optional—something they can promise but that they have no effective obligation to deliver. (2024)
  • Should developers have the right to make software that connects with large platforms (like Facebook or iOS) without the platform’s permission?
    Lawrence Lessig strongly agrees and says:
    She was able to do all the things she did because the technology is oblivious to whether she had permission to do what she did. The Internet was not built with permissions in mind. Free access was the rule. We can see the first by returning to the picture of what made this network amazing — interoperability. Widespread DRM would disable that interoperability. Or at least, it would disable interoperability without permission first. We could remix, or add, or criticize, using digital content, only with the permission of the content controller. And that requirement of permission first would certainly disable a large part of the potential that the Internet could realize. (2005)
  • Should we ban future open-source AI models that can be used to create weapons of mass destruction?
    Lawrence Lessig strongly agrees and says:
    You basically have a bomb that you're making available for free, and you don’t have any way to defuse it necessarily. It’s just an obviously fallacious argument. We didn’t do that with nuclear weapons: we didn’t say ‘the way to protect the world from nuclear annihilation is to give every country nuclear bombs.’ (2024)