Stuart J. Russell

Info
AI Expert and Professor
Location: United States
  • Should we ban future open-source AI models that can be used to create weapons of mass destruction?
    And a second point is about liability. And it's not completely clear where exactly the liability should lie. But to continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone decided to take that enriched uranium and buy several pounds of it and make a bomb, we say that some liability should reside with the company that decided to sell the enriched uranium. They could put advice on it saying, "Do not use more than," you know, "three ounces of this in one place," or something. But no one's going to say that that absolves them from liability. So, I think those two are really important. And the open source community has got to start thinking about whether they should be liable for putting stuff out there that is ripe for misuse. (2023) source Verified
  • Should humanity ban autonomous lethal weapons?
    Stuart J. Russell strongly agrees and says:
    A treaty banning autonomous weapons would prevent large-scale manufacturing of the technology. (2017) source Unverified
  • Should third-party audits be mandatory for major AI systems?
    This committee has discussed ideas such as third-party testing, a new national agency, and an international coordinating body, all of which I support. Here are some more ways to “move fast and fix things”: Eventually, we will develop forms of AI that are provably safe and beneficial, which can then be mandated. Until then, we need real regulation and a pervasive culture of safety. (2023) source Unverified
  • Should humanity build artificial general intelligence?
    “If we pursue [our current approach], then we will eventually lose control over the machines. But we can take a different route that actually leads to AI systems that are beneficial to humans,” said Russell. “We could, in fact, have a better civilization.” “You should not deploy systems whose internal principles of operation you don’t understand, that may or may not have their own internal goals that they are pursuing, and that you claim show ‘sparks of AGI.’ […] If we believe we have sparks of AGI, that’s a technology that could completely change the face of the earth and civilization,” said Russell. “How can we not take that seriously?” (2023) source Unverified