
Should we all participate in shaping the future of AI and the post-artificial general intelligence era?

Results (36): Quotes (32)
  • strongly agrees and says:
    the voices of all stakeholders should be taken into account [...] It is not the responsibility of a few but of the entire human family. (2024) source Unverified
  • agrees and says:
    AI does not respect borders. [...] the risk that humanity could lose control of AI through the kind of AI sometimes referred to as 'super intelligence'. (2023) source Unverified
  • strongly agrees and says:
    should be ensured throughout the life cycle of AI systems [...] by promoting active participation of all individuals or groups. (2021) source Unverified
  • agrees and says:
    It is a race that no company or country can win by itself. (2025) source Unverified
  • agrees and says:
    I worry about it. Sometimes, I think we’re going to wind up building better AI at some point no matter what I say and that we should prepare for what we’re going to do about it. I think that the concerns with over-empowered, mediocre AI are pretty serious and need to be dealt with no matter what. I signed that letter on a pause. I don’t expect that it’s going to happen. But I think that we as a society should be considering these things. I think we should be considering them even in conjunction with our competitors. But the geopolitical reality is probably that people will not. We have to prepare for that contingency as well. Sooner or later, we will get to artificial general intelligence and we should be figuring out what we’re going to do when we get there. (2023) source Unverified
  • strongly agrees and says:
    One gap in our current structure that we do want to fill is representative governance. We don’t think that AGI should be just a Silicon Valley thing. We’re talking about world-altering technology. And so, how do you get the right representation and governance in there? This is actually a really important focus for us, and something we really want broad input on. (2019) source Unverified
  • strongly disagrees and says:
    The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing. Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. […] Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter. (2023) source Unverified
  • agrees and says:
    I think that a useful bird’s eye view of it all is actually helpful. […] It’s something broadly like our intelligence that put us in this unique position. We’re talking about creating something that knocks us out of that position. So, we lose what was unique about us in controlling our potential and our future. Basically, you can look at my 10% as, there’s about a 50% chance that we create something that’s more intelligent than humanity this century. And then there’s only an 80% chance that we manage to survive that transition, being in charge of our future. If you put that together, you get a 10% chance that’s the time where we lost control of the future in a negative way. (2020) source Unverified
  • strongly agrees and says:
    Well, there’s been lots of talk about AI disrupting the job market and also enabling new weapons, but very few scientists talk seriously about what I think is the elephant in the room. What will happen, once machines outsmart us at all tasks? What’s kind of my hallmark as a scientist is to take an idea all the way to its logical conclusion. Instead of shying away from that question about the elephant, in this book, I focus on it and all its fascinating aspects because I want to prepare the reader to join what I think is the most important conversation of our time. […] I think if we succeed in building machines that are smarter than us in all ways, it’s going to be either the best thing ever to happen to humanity or the worst thing. I’m optimistic that we can create a great future with AI, but it’s not going to happen automatically. It’s going to require that we really think things through in advance, and really have this conversation now. That’s why I’ve written this book. (2017) source Unverified
  • agrees and says:
    AI is advancing so rapidly that some experts believe AGI could emerge before the end of this decade, hence it is time to begin serious deliberations about it. National governments and multilateral organizations like the European Union, OECD and UNESCO have identified values and principles for artificial narrow intelligence and national strategies for its development. But little attention has been given to identifying how to establish beneficial initial global governance of artificial general intelligence (AGI). It is likely to take 10, 20, or more years to create and ratify an international AGI agreement on the beneficial initial conditions for AGI and establish a global AGI governance system to enforce and oversee its development and management. This is important for governments to get right from the outset. The initial conditions for AGI will determine if the next step in AI – artificial super intelligence (ASI) – will evolve to benefit humanity or not. (2023) source Unverified
  • strongly agrees and says:
    Are we committed to holding on to our place at the top of the evolutionary pyramid, or will we allow the emergence of AI systems that are smarter and more capable than we can ever be? The coming wave of technologies threatens to fail faster and on a wider scale than anything witnessed before. This situation needs worldwide, popular attention. It needs answers, answers that no one yet has. Containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible. Citizen assemblies offer a mechanism for bringing a wider group into the conversation. One proposal is to host a lottery to choose a representative sample of the population to intensively debate and come up with proposals for how to manage these technologies. [...] Change happens when people demand it. [...] Anyone anywhere can make a difference. (2023) source Unverified
  • strongly agrees and says:
    Most experts view the arrival of AGI as a historical and technological turning point, akin to the splitting of the atom or the invention of the printing press. [...] What is certain is that creating AGI is the explicit aim of the leading AI companies, and they are moving towards it far more swiftly than anyone expected. As everyone at the dinner understood, this development would bring significant risks for the future of the human race. As I considered the world he might grow up in, I gradually shifted from shock to anger. It felt deeply wrong that consequential decisions potentially affecting every life on Earth could be made by a small group of private companies without democratic oversight. Did the people racing to build the first real AGI have a plan to slow down and let the rest of the world have a say in what they were doing? And when I say they, I really mean we, because I am part of this community. (2023) source Unverified
  • Nick Bostrom
    Philosopher; 'Superintelligence' author; FHI founder
    strongly agrees and says:
    Yudkowsky has proposed that a seed AI be given the final goal of carrying out humanity’s “coherent extrapolated volition” (CEV) [...] Yudkowsky sees CEV as a way for the programmers to avoid arrogating to themselves the privilege or burden of determining humanity’s future. By setting up a dynamic that implements humanity’s coherent extrapolated volition—as opposed to their own volition, or their own favorite moral theory—they in effect distribute their influence over the future to all of humanity. One parameter is the extrapolation base: Whose volitions are to be included? We might say “everybody,” but this answer spawns a host of further questions. (2014) source Unverified
  • Yuval Noah Harari
    Israeli historian and professor at the Hebrew University of Jerusalem
    agrees and says:
    AI, however, is fundamentally different: It is an agent; it can write its own books and decide which ideas to disseminate. It can even create entirely new ideas on its own, something that has never been done before in history. We humans have never faced a superintelligent agent before. The key point is that we humans are stakeholders in society. [...] we must always remember that AI is not human or even organic to begin with. That is why we should proceed more carefully and more slowly. We must allow ourselves time to adapt, time to discover, and correct our mistakes. (2025) source Unverified
  • strongly agrees and says:
    My priorities are making sure that everyone’s voice is heard in how these AI models are built. source Unverified
  • strongly agrees and says:
    It seems that everyone should be involved in decisions about whether and how this happens. (2025) source Unverified
  • strongly agrees and says:
    In modern democracies, free peoples may not justifiably be subjected to social and political powers [...] over which they have no say. (2023) source Unverified
  • strongly agrees and says:
    the answer of who should be leading these discussions is everyone. source Unverified
  • strongly agrees and says:
    work with marginalized communities during the design, development, deployment but also governance of these systems (2019) source Unverified
  • strongly agrees and says:
    We are working with governments, industry and civil society to ensure that AI becomes a tool of opportunity, inclusion and progress for all people. (2025) source Unverified
  • strongly agrees and says:
    the governance of superhuman AI should be made by a broad and representative group of stakeholders (2023) source Unverified
  • strongly agrees and says:
    If AI is going to change the world, we need everyone from all walks of life to have a role in shaping this change. (2025) source Unverified
  • agrees and says:
    And if we are going to successfully manage this transition, we are going to have to have a societal conversation about how we manage this. (2016) source Unverified
  • agrees and says:
    Second, we need to work with others. The EU now has an AI office to govern the most powerful AI models. The US and the UK have established equivalent structures and many more will follow suit. It is imperative to connect these initiatives into a strategic triangle and then into a network, and that will allow us to stay on top of any unforeseen developments in AI that might require a coordinated response. Third, we need to play on the global scene in accordance with our weight, not just by making the rules, but also by exporting them. We need to put our entire diplomatic and political clout in promoting the European model of AI governance. This is a strategic priority for the next mandate, alongside reducing our strategic dependencies and increasing our resilience – because the future is AI-fuelled and we must continue to shape it. One final thought. We haven’t yet witnessed the full power of AI. The existing capabilities have already surpassed anything we could have imagined a decade ago, at speeds we have never thought possible, and this will continue exponentially. For lack of a better term, artificial general intelligence, or AGI, is something we need to prepare for. Any AI possessing an intelligence that is superior to that of a human being will open up infinite possibilities, but at the same time, will raise never before encountered ethical, moral, and yes, even existential questions. We have in the act a first answer to those questions, but we must be prepared to cater our governance for quantum leaps that we know AI could do. I will conclude not as a politician, but as a parent. This regulation makes me feel more confident about the future of my children, and I am humbled to have played a part in shaping it together with you, all those present in this Chamber. (2024) source Unverified
  • disagrees and says:
    Maybe. I mean, maybe we have to evolve to something else that's better, I don’t know. Like, there's some problems with democracy too. It's not a panacea by any means. I think it was Churchill who said that it's the least-worst form of government, something like that. Maybe there's something better. I can tell you what's going to happen technologically. I think if we do this right, we should end up with radical abundance. If we fix a few of the root-node problems, as I call them. And then there's this political philosophy question. I think that is one of the things people are underestimating. I think we're going to need a new philosophy of how to live. (2025) source Unverified
  • strongly agrees and says:
    We aim to develop mechanisms that empower human stakeholders to express their intent clearly and supervise AI systems effectively - even in complex situations, and as AI capabilities scale beyond human capabilities. Decisions about how AI behaves and what it is allowed to do should be determined by broad bounds set by society, and evolve with human values and contexts. AI development and deployment must have human control and empowerment at its core. We create transparent, auditable, and steerable models by integrating explicit policies and “case law” into our model training process. We facilitate transparency and democratic input by inviting public engagement in policy formation and incorporating feedback from various stakeholders. Our democratic inputs to AI grant was one example for exploring possible ways of democratic process to decide AI model behavior. Another example is our publishing of the Model Spec, making explicit the tradeoffs and decisions that went into shaping it, and inviting public inputs for future versions. (2025) source Unverified
  • strongly agrees and says:
    I see AI systems improving nearly every industry and area of our lives when used properly. Humans must be kept in the loop with regard to decisions involving people’s lives, quality of life, health and reputation, and humans must be ultimately responsible for all AI decisions and recommendations (not the AI system). Quantum computing will likely evolve to improve computing power, but people are what will make AI systems ethical. Humans must remain in the loop with all AI systems and retain ultimate control over these systems. AI systems will only be ethical when humans prioritize that work and continuously monitor the system to ensure those ethics are being maintained even as the system evolves. AI systems are nowhere near sentience, and even when they are, even humans need monitoring over their actions when they are significant with regard to life, health and reputation. AI systems created by humans will be no better at ethics than we are – and, in many cases, much worse as they will struggle to see the most important aspects. The humanity of each individual and the context in which significant decisions must always be considered. (2021) source Unverified
  • strongly agrees and says:
    AI will have significant, far-reaching economic and societal impacts. Technology shapes the lives of individuals, how we interact with one another, and how society as a whole evolves. We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest. [...] No single individual, company, or even country should dictate these decisions. AGI should benefit all of humanity and be shaped to be as inclusive as possible. [...] The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. This grant represents a step to establish democratic processes for overseeing AGI and, ultimately, superintelligence. (2023) source Unverified
  • Elon Musk
    Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal
    strongly agrees and says:
    The least bad solution to the AGI control problem that I can think of is to give every verified human a vote. source Unverified
  • strongly agrees and says:
    People need to have agency, the ability to influence this. They need, we need to sort of jointly be architects of the future. source Unverified
  • strongly agrees and says:
    Human feedback for open source LLMs needs to be crowd-sourced, Wikipedia style. It is the only way for LLMs to become the repository of all human knowledge and cultures. source Unverified
  • strongly agrees and says:
    How AI gets built is currently decided by a small group of technologists. As this technology is transforming our lives, it should be in all of our interest to become informed and engaged. source Unverified