
Should humanity build artificial general intelligence?

Results (50): Quotes (49) · Users (0)
  • Pedro Domingos
    Professor of computer science at UW and author of 'The Master Algorithm'
    agrees and says:
    One is that artificial general intelligence is still very far away. We’ve made a lot of progress in AI, but there’s far, far more still to go. It’s a very long journey. We’ve come a thousand miles, but there’s a million more to go. So a lot of the talk that we hear, as if AI is just around the corner—human-level, general intelligence is just around the corner—really is missing a knowledge of the history of AI and a knowledge of how hard the problem is. We know now that this is a very hard problem. In the beginning, the pioneers underestimated how hard it was, and people who are new to the field still do that. That’s one aspect. The other aspect, which is more subtle but ultimately more important, is that even if AGI were around the corner, there’s still no reason to panic. We can have AI systems that are as intelligent as humans are; in fact, far more, and not have to fear them. People fear AI because when they hear “intelligence” they project onto the machine all these human qualities like emotions and consciousness and the will to power and whatnot, and they think AI will outcompete us as a species. That ain’t how it works. AI is just a very powerful tool, and as long as we... I can imagine hackers trying to create an evil AI, and we need a cyber police to deal with that. But short of that, think, for example, that you want to use AI to cure cancer. And this is, of course, a very real application. We want it to be as intelligent as possible, as with any other application. So the more intelligent we make the AI, the better off we are, provided that we stay in control. (2021) source Unverified
  • strongly agrees and says:
    Does this mean that machines will replace us? I don't feel that it makes much sense to think in terms of "us" and "them." I much prefer the attitude of Hans Moravec of Carnegie-Mellon University, who suggests that we think of those future intelligent machines as our own "mind-children." In the past, we have tended to see ourselves as a final product of evolution -- but our evolution has not ceased. Will robots inherit the earth? Yes, but they will be our children. We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called Evolution. Our job is to see that all this work shall not end up in meaningless waste. (1994) source Unverified
  • agrees and says:
    I think we can all agree already that we should aspire to build AGI that doesn't overpower us, but that empowers us. And think of the many various ways it can do that, whether that's, from my side of the world, autonomous vehicles. I'm personally from the camp that believes human-level intelligence is required to achieve something like vehicles that would actually be something we would enjoy using and being part of. So focusing on those, and then coming up with the obstacles, coming up with the ways that can go wrong, and solving those one at a time. (2018) source Unverified
  • strongly agrees and says:
    We can change ourselves, and we can also build new children who are properly suited for the new conditions. Robot children. "Not at all. Long before the cancer, I was already obsessively committed to robots for whatever neurotic reason. That was where I wanted to spend my energy." (1995) source Unverified
  • strongly agrees and says:
    I'm much more focused on the benefits to all of us. I am haunted by the fact that the industrial revolution didn't touch the parts of the world where I grew up until much later. So I am looking for the thing that may be even bigger than the industrial revolution, and really doing what the industrial revolution did for the West, for everyone in the world. So I'm not at all worried about AGI showing up, or showing up fast. Great, right? That means 8 billion people have abundance. That's a fantastic world to live in. I would say we should speed up the work that needs to be done to create these alignments. To align an AI model with the world, you have to align it in the world and not in some simulation. (2023) source Unverified
  • disagrees and says:
    For me, AI should be grounded and centred on the person and how to best help the person. But for a lot of people, it's grounded and centred on the technology. [...] The AGI thing, the AGI narrative, sidesteps that, and instead actually puts forward technology in place of people. I always say, it's important to supplement, not supplant. (2025) source Unverified
  • disagrees and says:
    This is going to be the most productive decade in the history of our species. But in order to truly reap the benefits of AI, we need to learn how to contain it. Paradoxically, part of that will mean collectively saying no to certain forms of progress. As an industry leader reckoning with a future that’s about to be ‘turbocharged’, I think we all need to play a role in shaping the technology—and that includes recognizing the lines we should not cross. (2023) source Unverified
  • disagrees and says:
    Because you're saying, well, one, these things are not climate-friendly to begin with, right, so how dare you even put it in the same camp. Two, it's okay to unsettle humanness, and if you actually listen to people like Indigenous scholars and Indigenous elders, that's actually not a bad sort of thing. Building a quote-unquote 'AGI' is not it, though. You are not decentering the human in a way that is actually righting our relationship with the Earth in any kind of meaningful way, and that's a thing that really grinds my gears. (2023) source Unverified
  • Sam Harris
    American author, philosopher, and neuroscientist
    agrees and says:
    It’s just a matter of the implications of continually making progress in building more and more intelligent machines. Any progress, it doesn’t have to be Moore’s law, it just has to be continued progress, will ultimately deliver us into relationship with something more intelligent than ourselves. [...] Given the power and value of intelligent machines, we will build more and more intelligent machines at almost any cost at this point, so a failure to do it would be a sign that something truly awful has happened. (2020) source Unverified
  • strongly disagrees and says:
    If we build it using anything remotely like modern methods, on anything remotely like the current understanding or lack of understanding that we have about AI, then yeah, building it anytime soon would be a death sentence. [...] When you're in a car careening towards a cliff, you don't say, let's talk about gravity risk. You say, we need to stop this. I think what the world needs is a global ban on superintelligence research and development. [...] Superintelligence doesn't exist yet. We don't need to make it. (2025) source Unverified
  • agrees and says:
    The definition of AGI is the same level as a human brain; that is AGI, artificial general intelligence. But people have different points of view on the definition of artificial superintelligence. How super? You know, ten times super? A hundred times super? My definition of ASI is 10,000 times smarter than the human brain. That is my definition of ASI, and that is coming in 2035, ten years from today: 10,000 times smarter. That’s my prediction. Both, both. We should be looking forward to that. Of course, we also have to be careful; we have to regulate. If such a superpower comes and there is no regulation, it could be super dangerous. (2024) source Unverified
  • Yuval Noah Harari
    Israeli historian and professor at the Hebrew University of Jerusalem
    agrees and says:
    […] there is enormous positive potential, otherwise we wouldn’t develop it. AI can invent new medicines. You can have AI doctors providing billions of people around the world with much, much better healthcare than what people receive today. I’m not saying, oh, we should stop all development of AI. No, the key question is how do we enable the positive potential of AI to flower while avoiding the really existential risks that this technology poses. (2024) source Unverified
  • strongly disagrees and says:
    So, given how hard it is to be fair, why should we build AI that needs us to be fair to it? […] And I’m recommending that if you specify something, if you say okay this is when you really need rights in this context, okay once we’ve established that, don’t build that, okay? If you have something that’s sort of a humanoid servant that you own, then the word for that is slave. […] And so I was trying to establish that look, we are going to own anything we build, and so therefore it would be wrong to make it a person, because we’ve already established that slavery of people is wrong and bad and illegal. (2017) source Unverified
  • Andrew Ng
    Baidu; Stanford CS faculty; founded Coursera and Google Brain
    agrees and says:
    I hope that we will reach AGI and ASI superintelligence someday, maybe within our lifetimes, maybe within the next few decades or hundreds of years, we’ll see how long it takes. Even AI has to obey the laws of physics, so I think physics will place limitations, but I think the ceiling for how intelligent systems can get, and therefore what we can direct them to do for us will be extremely high. (2025) source Unverified
  • Nick Bostrom
    Philosopher; 'Superintelligence' author; FHI founder
    strongly agrees and says:
    I think broadly speaking with AI, … rather than coming up with a detailed plan and blueprint in advance, we’ll have to kind of feel our way through this and make adjustments as we go along as new opportunities come into view. […] I think ultimately this transition to the superintelligence era is one we should do. It would be in itself an existential catastrophe if we forever failed to develop superintelligence. (2025) source Unverified
  • strongly agrees and says:
    Because ultimately I'm a Cosmist. I think humanity should build artilects - gods. To choose never to do such a thing, would be a tragic mistake, I feel. We are too limited to be much as humans, but artilects have no limits. They could be as magnificent as the laws of physics will allow them to be (maybe even beyond?). (1997) source Unverified
  • disagrees and says:
    The thought of open source AGI being released before we have worked out how to regulate these very powerful AI systems is really very scary. In the wrong hands technology like this could do a great deal of harm. It is so irresponsible for a company to suggest it. (2024) source Unverified
  • Ilya Sutskever
    AI researcher, co-founder and former chief scientist at OpenAI
    agrees and says:
    After almost a decade, I have made the decision to leave OpenAI. [...] I'm confident that OpenAI will build AGI that is both safe and beneficial. (2024) source Unverified
  • strongly agrees and says:
    Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence Inc. (2024) source Unverified
  • strongly disagrees and says:
    This is our nuclear moment. Rapid AI advancement represents one of history’s most consequential and dangerous technological shifts. We demand politicians and companies pause AGI development until international safety agreements are established. Join our global network standing for democratic oversight of AI. PauseAI Global unites concerned citizens—scientists, parents, students, workers, and community leaders—who believe transformative technologies require public input before they progress beyond human control. (2025) source Unverified
  • strongly agrees and says:
    Perhaps the most ambitious scientific quest in human history is the creation of general artificial intelligence, which roughly means AI that is as smart or smarter than humans. The dominant approach in the machine learning community is to attempt to discover each of the pieces required for intelligence, with the implicit assumption that some future group will complete the Herculean task of figuring out how to combine all of those pieces into a complex thinking machine. This paper describes another exciting path that ultimately may be more successful at producing general AI. The idea is to create an AI-generating algorithm (AI-GA), which automatically learns how to produce general AI. (2019) source Unverified
  • agrees and says:
    I don't like the phrase AGI. I prefer human-level intelligence, because human intelligence is not general. Internally, we call this AMI: advanced machine intelligence. We have a pretty good plan on how to get there. First, we are building systems that understand the physical world, which learn by watching videos. Second, we need LLMs to have persistent memory. Humans have a special structure in the brain that stores our working memory, our long-term memory, factual, episodic memory. We don't have that in LLMs. And the third most important thing is the ability to plan and reason. (2024) source Unverified
  • disagrees and says:
    Putting liability on users feels most incentive-compatible. While the link between how a model is developed and how it ends up being used is often unclear, the user decides exactly how the AI is used. Liability on users creates a strong pressure to do AI in what I consider the right way: focus on building mecha suits for the human mind, not on creating new forms of self-sustaining intelligent life. The former responds regularly to user intent, and so would not cause catastrophic actions unless the user wanted them to. The latter would have the greatest risk of going off and creating a classic "AI going rogue" scenario. (2025) source Unverified
  • Bill Joy
    Sun Microsystems cofounder; computer scientist
    strongly disagrees and says:
    How soon could such an intelligent robot be built? The coming advances in computing power seem to make it possible by 2030. And once an intelligent robot exists, it is only a small step to a robot species—to an intelligent robot that can make evolved copies of itself. [...] The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge. Yes, I know, knowledge is good, as is the search for new truths. [...] if open access to and unlimited development of knowledge henceforth puts us all in clear danger of extinction, then common sense demands that we reexamine even these basic, long-held beliefs. (2000) source Unverified
  • strongly agrees and says:
    The survival of man depends on the early construction of an ultraintelligent machine. In order to design an ultraintelligent machine we need to understand more about the human brain or human thought or both. In the following pages an attempt is made to take more of the magic out of the brain by means of a "subassembly" theory, which is a modification of Hebb's famous speculative cell-assembly theory. Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (1965) source Unverified
  • strongly agrees and says:
    I’m going to work on artificial general intelligence (AGI). [...] I should be working on it. (2019) source Unverified
  • agrees and says:
    I’m not that into slowing down right now. [...] Where the pause needs to happen is right around when you’re getting close to human-level intelligence. (2025) source Unverified
  • strongly disagrees and says:
    The development of full artificial intelligence (AI) could spell the end of the human race. (2014) source Unverified
  • disagrees and says:
    We’d be best off building AIs as tools rather than as agents or rivals. source Unverified
  • agrees and says:
    [...] we will want to do it, but it will be imperative to tweak our laws. source Unverified
  • Tyler Cowen
    Professor of Economics, George Mason University & author of Average is Over
    agrees and says:
    So when people predict a high degree of existential risk from AGI, I don’t actually think “arguing back” on their chosen terms is the correct response. Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely. Nonetheless I am still for people doing constructive work on the problem of alignment, just as we do with all other technologies, to improve them. I have even funded some of this work through Emergent Ventures. Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence? [...] So we should take the plunge. [...] We should take the plunge. We already have taken the plunge. We designed/tolerated our decentralized society so we could take the plunge. (2023) source Unverified
  • strongly agrees and says:
    Well, I’ll tell you anyway: personally, I’m not worried. I have been trying to build something capable of replacing me since the 1980s, because I consider that effort exciting and fascinating, and because I believe that it’s the natural next step for mankind. And in taking that next step we are giving a helping hand to the entire universe. Because the scale on which we are going to get Artificial Intelligence – AI machines that are simply much cleverer than people in every respect worth mentioning – means that we will no longer occupy the summit of creation, though I personally think that may not really be desirable anyway. Rather, we should be delighted to be part of such a gigantic process, a process leading to the universe taking its next great step toward a higher level of complexity. These new entities will of course begin spreading out into space, as the bulk of resources are to be found outside rather than inside this thin biosphere. And eventually they will be completely capable of colonizing the entire Milky Way – that’s entirely physically possible – in a way that humans would never be able to do. And that is something really amazing and wonderful and exciting. And that’s why I’m not worried at all, and believe that nobody else has reason to worry either. source Unverified
  • strongly agrees and says:
    The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future. We should be living in a much better world with AI, and now we can. Rather than allowing ungrounded panics around killer AI, “harmful” AI, job-destroying AI, and inequality-generating AI to put us on our back feet, we in the United States and the West should lean into AI as hard as we possibly can. We should seek to win the race to global AI technological superiority and ensure that China does not. In the process, we should drive AI into our economy and society as fast and hard as we possibly can, in order to maximize its gains for economic productivity and human potential. (2023) source Unverified
  • strongly disagrees and says:
    Trying to ‘build’ AGI is an inherently unsafe practice. Build well-scoped, well-defined systems instead. Don’t attempt to build a God. (2023) source Unverified
  • strongly disagrees and says:
    If you think building AGI is okay, like Anthropic should just be allowed to build AGI or whatever, then okay, you should state that clearly. And then people who don't think that's okay should also state that clearly. And they should have the conflict. They should. And I disagree with this. I am firmly, no, I don't think that Anthropic or any other private corporation should be building AGI. Period. Build CoEm systems. Let's also assume, furthermore, that we keep the systems secure and safe and they don't get immediately stolen by every unscrupulous actor in the world. Then you have very powerful systems with which you can do very powerful things. In particular, you can create great economic value. So one of the first things I would do with a CoEm system, if I had one, is produce massive amounts of economic value and trade with everybody. I would trade with everybody. I'd be like, look, whatever you want in life, I'll get it to you. In return, don't build AGI. source Unverified
  • disagrees and says:
    We need to change the target. It’s 100 times easier to look at something existing and think, “OK, can we substitute a machine or a human there?” The really hard thing is, “let’s imagine something that never existed before.” [...] But ultimately that second way is where most of the value comes from. [...] “We subsidize capital and we tax labor.” (2022) source Unverified
  • strongly disagrees and says:
    We haven’t lost until we have lost. We still have a great chance to do it right and we can have a great future. We can use narrow AI tools to cure aging, an important problem and I think we are close on that front. Free labor, physical and cognitive, will give us a lot of economic wealth to do better in many areas of society which we are struggling with today. People should try to understand the unpredictable consequences and existential risks of bringing AGI or superintelligent AI into the real world. Eight billion people are part of this experiment they never consented to – not just that they have not consented, they cannot give meaningful consent because nobody understands what they are consenting to. It’s not explainable, it’s not predictable, so by definition, it’s an unethical experiment on all of us. So, we should put some pressure on people who are irresponsibly moving too quickly on AI capabilities development to slow down, to stop, to look in the other direction, to allow us to only develop AI systems we will not regret creating. source Unverified
  • strongly agrees and says:
    AI is the pivotal technology that will allow us to meet the pressing challenges that confront us, including overcoming disease, poverty, environmental degradation, and all of our human frailties. We have a moral imperative to realize the promise of these new technologies while mitigating the peril. But it won’t be the first time we’ve succeeded in doing so. When I was growing up, most people around me assumed that nuclear war was almost inevitable. The fact that our species found the wisdom to refrain from using these terrible weapons shines as an example of how we have it in our power to likewise use emerging biotechnology, nanotechnology, and superintelligent AI responsibly. We are not doomed to failure in controlling these perils. Overall, we should be cautiously optimistic. While AI is creating new technical threats, it will also radically enhance our ability to deal with those threats. As for abuse, since these methods will enhance our intelligence regardless of our values, they can be used for both promise and peril. We should thus work toward a world where the powers of AI are broadly distributed, so that its effects reflect the values of humanity as a whole. (2024) source Unverified
  • strongly agrees and says:
    OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles: [...] We are committed to doing the research required to make AGI safe, and to driving the broad adoption of such research across the AI community. We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. (2018) source Unverified
  • agrees and says:
    The original vision of AI was re-articulated in 2002 via the term “Artificial General Intelligence” or AGI. This vision is to build “Thinking Machines” - computer systems that can learn, reason, and solve problems similar to the way humans do. […] While several large-scale efforts have nominally been working on AGI (most notably DeepMind), the field of pure focused AGI development has not been well funded or promoted. This is surprising given the fantastic value that true AGI can bestow on humanity. (2023) source Unverified
  • disagrees and says:
    “If we pursue [our current approach], then we will eventually lose control over the machines. But we can take a different route that actually leads to AI systems that are beneficial to humans,” said Russell. “We could, in fact, have a better civilization.” “You should not deploy systems whose internal principles of operation you don’t understand, that may or may not have their own internal goals that they are pursuing, and that you claim show ‘sparks of AGI.’ […] If we believe we have sparks of AGI, that’s a technology that could completely change the face of the earth and civilization. How can we not take that seriously?” (2023) source Unverified
  • agrees and says:
    Like, right now, if I’m in a really good mood, I take everything blissfully and calmly. If I’m in a bad mood and someone points out something I did that was wrong or stupid, there’s some anger that rises up inside me, like, “f*** you, why are you saying that?” I mean, I just tamp that down and let that flow pass before it takes hold of me, because I’m 56 years old, I’m not five years old, right? But we all have that, right? I mean, we evolved with that. You don’t have to program that into the AI, right? We should not do so. Now we could, if you want to build military robots. And if you’re building a corporate sales AI, you may build into it a motivation of “haha, I figured out how to extract all this guy’s money by selling him crap he doesn’t need.” I mean, you could build AI with that motivation, but we don’t have to; we could build an AGI with a motivational system to do what the average person would think is the most beneficial, loving, and compassionate thing in this situation. source Unverified
  • agrees and says:
    Yeah, I think those systems would be right on the boundary. So I think most emergent systems, cellular automata, things like that could be model-able by a classical system. You just sort of do a forward simulation of it and it’d probably be efficient enough. Of course there’s the question of things like chaotic systems where the initial conditions really matter and then you get to some uncorrelated end state. Now those could be difficult to model. So I think these are kind of the open questions, but I think when you step back and look at what we’ve done with the systems and the problems that we’ve solved, and then you look at things like Veo 3 on video generation sort of rendering physics and lighting and things like that, really core fundamental things in physics, it’s pretty interesting. I think it’s telling us something quite fundamental about how the universe is structured in my opinion. So in a way that’s what I want to build AGI for is to help us as scientists answer these questions like P equals NP. source Unverified
  • strongly disagrees and says:
    I actually do not think it's a foregone conclusion that we will build AGI [artificial general intelligence] machines that can outsmart us all. Not because we can't … but rather we might just not want to. So why are we obsessing about trying to build some kind of digital god that's going to replace us, if we instead can build all these super powerful AI tools that augment us and empower us? (2025) source Unverified
  • strongly agrees and says:
    Our long-term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit. We’re bringing our two major AI research efforts (FAIR and GenAI) closer together to support this. We’re currently training our next-gen model Llama 3, and we’re building massive compute infrastructure to support our future roadmap, which also includes 350k H100s by the end of this year — and overall almost 600k H100s equivalents of compute if you include other GPUs. (2024) source Unverified
  • strongly agrees and says:
    As the race to AGI intensifies, the national security state will get involved. The USG will wake from its slumber, and by 27/28 we’ll get some form of government AGI project. No startup can handle superintelligence. And so by 27/28, the endgame will be on. By 28/29 the intelligence explosion will be underway; by 2030, we will have summoned superintelligence, in all its power and might. Whoever they put in charge of The Project is going to have a hell of a task: to build AGI, and to build it fast; to put the American economy on wartime footing to make hundreds of millions of GPUs; [...] (2024) source Unverified
  • strongly disagrees and says:
    Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. Some of my friends have recently reported to me that when people outside the AI industry hear about extinction risk from Artificial General Intelligence for the first time, their reaction is “maybe we should not build AGI, then.” [...] Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future. (2023) source Unverified
  • strongly agrees and says:
    OpenAI is not a normal company and never will be. Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity. When we started OpenAI, we did not have a detailed sense for how we were going to accomplish our mission. We now see a way for AGI to directly empower everyone as the most capable tool in human history. If we can do this, we believe people will build incredible things for each other and continue to drive society and quality of life forward. [...] We believe this is the best path forward—AGI should enable all of humanity to benefit each other. It is time for us to evolve our structure. [...] We want to deliver beneficial AGI. (2025) source Unverified
  • strongly disagrees and says:
    We don't have to do this. We have human-competitive AI, and there's no need to build AI with which we can't compete. We can build amazing AI tools without building a successor species. The notion that AGI and superintelligence are inevitable is a choice masquerading as fate. By imposing some hard, global limits, we can keep AI's general capability to approximately human level while still reaping the benefits of computers' ability to process data in ways we cannot, and automate tasks none of us wants to do. [...] Humanity must choose to close the Gates to AGI and superintelligence. To keep the future human. (2025) source Unverified