Stuart J. Russell
AI Expert and Professor
Wikipedia
Location: United States
ai (4)
ai-safety (4)
ai-ethics (3)
ai-governance (3)
ai-policy (3)
ai-regulation (3)
ai-risk (3)
existential-risk (2)
ai-deployment (1)
defense (1)
international-relations (1)
law (1)
public-interest-ai (1)
transparency (1)
trust-in-ai (1)
Does AI pose an existential threat to humanity?
Stuart J. Russell strongly agrees and says:
Developing strong AI would be the biggest event in human history, but we need to make sure it's not the last event in human history. source Unverified
Should we ban future open-source AI models that can be used to create weapons of mass destruction?
Stuart J. Russell agrees and says:
And a second point is about liability. And it's not completely clear where exactly the liability should lie. But to continue the nuclear analogy, if a corporation decided they wanted to sell a lot of enriched uranium in supermarkets, and someone decided to take that enriched uranium and buy several pounds of it and make a bomb, we say that some liability should reside with the company that decided to sell the enriched uranium. They could put advice on it saying, "Do not use more than," you know, "three ounces of this in one place," or something. But no one's going to say that that absolves them from liability. So, I think those two are really important. And the open source community has got to start thinking about whether they should be liable for putting stuff out there that is ripe for misuse. (2023) source Verified
Should we ban autonomous lethal weapons?
Stuart J. Russell strongly agrees and says:
A treaty banning autonomous weapons would prevent large-scale manufacturing of the technology. (2017) source Unverified
Should third-party audits be mandatory for major AI systems?
Stuart J. Russell agrees and says:
This committee has discussed ideas such as third-party testing, a new national agency, and an international coordinating body, all of which I support. Here are some more ways to “move fast and fix things”: Eventually, we will develop forms of AI that are provably safe and beneficial, which can then be mandated. Until then, we need real regulation and a pervasive culture of safety. (2023) source Unverified