Is open-source AI potentially more dangerous than closed-source AI?
Yoshua Bengio, AI pioneer and Turing Award winner, strongly agrees and says: "we need to increase governance and we should consider limiting access to the large-scale generalist AI systems that could be weaponized, which would mean that the code and neural net parameters would not be shared in open-source and some of the important engineering tricks to make them work would not be shared either. Ideally this would stay in the hands of neutral international organizations (think of a combination of IAEA and CERN for AI) that develop safe and beneficial AI systems that could also help us fight rogue AIs."
Related topics:

ai-risk

ai-safety

ai-ethics
- Should we ban predictive policing?
- Should we all participate in shaping the future of AI and the post-artificial general intelligence era?

ai-governance
- Will Europe face mass unemployment without ownership of AGI and robots?
- Should we create a global institute for AI, similar to CERN?
- Should we repeal the EU AI Act?

ai