
Will AGI create abundance?

Results (29) · Quotes (26) · Users (0)
  • strongly disagrees and says:
    Tech CEOs, futurists, and venture capitalists describe artificial general intelligence (AGI) as if it were an inevitable and ultimate goal for technology development. In reality, the term is a vague signifier for a technology that will somehow lead to endless abundance for humankind — and conveniently also a means to avoid accountability as tech moguls make off with billions in capital investment and, more alarmingly, public spending. AGI is a term that famously lacks a precise meaning, and certainly does not refer to any particular imminent technology. Definitions range broadly in ways that primarily suit the economic arrangements of the individuals and organizations ostensibly trying to create it, or the cultural mystique of a set of adherents to a set of fringe ideologies. (2025) source Unverified
  • agrees and says:
    Because post-superintelligence abundance will be so great, agreements to share power and benefits should strongly be in the leader’s national self-interest: as we noted in the section on abundance, having only 80% of a very large pie is much more desirable than an 80% chance of the whole pie and 20% chance of nothing. Of course, making such commitments credible is very challenging, but this is something that AI itself could help with. Slowing the intelligence explosion: if we could slow down the intelligence explosion in general, that would give decision-makers and institutions more time to react thoughtfully. One route to prevent chaotically fast progress is for the leading power (like the US and allies) to build a strong lead, allowing it to comfortably use stabilising measures over the period of fastest change. (2024) source Unverified
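The pie comparison in the quote above can be checked with a few lines of arithmetic (an editorial sketch with arbitrary numbers, not part of the source): the two options have identical expected value, so the argument rests entirely on risk aversion.

```python
import math

# "80% of a very large pie" vs. "an 80% chance of the whole pie":
# a tiny expected-value check (illustrative; numbers are arbitrary).
pie = 100.0  # total post-AGI value, in arbitrary units

certain_share = 0.8 * pie              # 80% of the pie, guaranteed
gamble_ev = 0.8 * pie + 0.2 * 0.0      # 80% chance of everything, 20% of nothing

# In raw expected value the two options are identical...
assert certain_share == gamble_ev == 80.0

# ...so any degree of risk aversion favors the certain share. Under log
# utility, 0.8 * log(pie) is an upper bound on the gamble's expected
# utility (the 20% "nothing" outcome only drags it further down):
assert math.log(certain_share) > 0.8 * math.log(pie)
```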
  • disagrees and says:
    I’ve heard many people say something like “money won’t matter post-AGI”. This has always struck me as odd, and as most likely completely incorrect. First: labour means human mental and physical effort that produces something of value. Capital goods are things like factories, data centres, and software—things humans have built that are used in the production of goods and services. The key economic effect of AI is that it makes capital a more and more general substitute for labour. There’s less need to pay humans for their time to perform work, because you can replace that with capital (e.g. data centres running software replaces a human doing mental labour). Overall, this points to a neglected downside of transformative AI: that society might become permanently static, and that current power imbalances might be amplified and then turned immutable. (2024) source Unverified
  • agrees and says:
    I don’t know exactly when it’ll come, I don’t know if it’ll be 2027. I think it’s plausible it could be longer than that. I don’t think it will be a whole bunch longer than that when AI systems are better than humans at almost everything. Better than almost all humans at almost everything. And then eventually better than all humans at everything, even robotics. [...] We’ve recognized that we’ve reached the point as a technological civilization where the idea, there’s huge abundance and huge economic value, but the idea that the way to distribute that value is for humans to produce economic labor, and this is where they feel their sense of self worth. Once that idea gets invalidated, we’re all going to have to sit down and figure it out. (2025) source Unverified
  • agrees and says:
    The economic singularity can liberate us from wage slavery, and spark a second renaissance. But we will need a different form of capitalism – fully automated luxury capitalism – to achieve the economy of abundance. The technological singularity is when we create human-level artificial intelligence (artificial general intelligence or AGI), which goes on to become a superintelligence. Both these singularities could have wonderful consequences for us – or terrible ones. source Unverified
  • strongly agrees and says:
    Intelligence is what got us here. There’s absolutely nothing inherently wrong with intelligence, and an abundance of intelligence would solve all problems. Some of you may have heard me with Rebecca talking about sustainability, right? With enough intelligence, we can solve climate change. With enough intelligence, we can prolong human lives. With enough intelligence, the end of jobs would be an amazing thing, because by the way, we humans were not made for jobs anyway. This is the moment where we recognize that nuclear bombs and harnessing nuclear power can be good for us or bad for us. This is the moment where we get together before the first nuclear bomb and say, "People, seriously, with enough intelligence we can have enough abundance for everyone. Can we please stop fighting?" (2023) source Unverified
  • strongly agrees and says:
    Artificial intelligence is different. It's past the point where a difference in degree becomes a difference in kind. AI amplifies and multiplies the human brain, much like steam engines once amplified muscle power. Before engines, we consumed food for energy and that energy we put to work. Engines allowed us to tap into external energy sources like coal and oil, revolutionizing productivity and transforming society. AI stands poised to be the intellectual parallel, offering a near-infinite expansion of brainpower to serve humanity. AI promises a future of unparalleled abundance. However, as we transition to a post-scarcity society, the journey may be complex, and the short term may be painful for those displaced. Mitigating these challenges requires well-reasoned policy. The next 0–10 years, 10–25 years, and 25–50 years will each be radically different. The pace of change will be hard to predict or anticipate, especially as technology capabilities far exceed human intelligence and penetrate society at varying rates. (2024) source Unverified
  • strongly disagrees and says:
    “What’s actually going to happen is rich people are going to use AI to replace workers,” he said. “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.” (2025) source Unverified
  • strongly agrees and says:
    I honestly believe we are approaching an era of radical abundance. We are about to distill the essence of what makes us capable — our intelligence — into a piece of software, which can get cheaper, easier to use, more widely available to everybody. As a result, everyone on the planet is going to get broadly equal access to intelligence, which is going to make us all smarter and more productive. I think we are trending in the opposite direction. We are adding masses of new knowledge to the corpus of global knowledge. And that is making everyone, on average, way, way smarter and discerning. These AIs are going to catch and develop your weaknesses. They are going to lift up your strengths. We are going to evolve with these new augmentations. We are going to invent new culture, new habits and new styles to adapt. (2023) source Unverified
  • strongly agrees and says:
    I'm much more focused on the benefits to all of us. I am haunted by the fact that the industrial revolution didn't touch the parts of the world where I grew up until much later. So I am looking for the thing that may be even bigger than the industrial revolution, and really doing what the industrial revolution did for the West, for everyone in the world. So I'm not at all worried about AGI showing up, or showing up fast. Great, right? That means 8 billion people have abundance. That's a fantastic world to live in. (2023) source Unverified
  • strongly disagrees and says:
    In reality, the term is a vague signifier for a technology that will somehow lead to endless abundance for humankind. (2025) source Unverified
  • strongly agrees and says:
    AGI is coming very, very soon. [...] This is the beginning of our golden age. (2025) source Unverified
  • Nick Bostrom
    Philosopher; 'Superintelligence' author; FHI founder
    agrees and says:
    the right economic policies would inaugurate an age of abundance. (2024) source Unverified
  • agrees and says:
    Welcome to the economics of abundance. (2013) source Unverified
  • strongly disagrees and says:
    My question is, utopia for whom? Who is getting this utopian life that [big tech] is promising [will come from AI]? (2023) source Unverified
  • strongly agrees and says:
    It would mean a takeoff rate of economic productivity growth that would be absolutely stratospheric, far beyond any historical precedent. Prices of existing goods and services would drop across the board to virtually zero. Consumer welfare would skyrocket. Consumer spending power would skyrocket. New demand in the economy would explode. Entrepreneurs would create dizzying arrays of new industries, products, and services, and employ as many people and AI as they could as fast as possible to meet all the new demand. Suppose AI once again replaces that labor? The cycle would repeat, driving consumer welfare, economic growth, and job and wage growth even higher. It would be a straight spiral up to a material utopia that neither Adam Smith nor Karl Marx ever dared dream of. (2023) source Unverified
  • strongly disagrees and says:
    This end of innovation suggests our descendants will become extremely well adapted in a biological sense to the stable components of their environment. Their behavior will be nearly locally optimal, at least for the purpose of ensuring the continuation of similar behaviors. In most places, population will rise to levels consistent with a competitive evolutionary equilibrium, with living standards near adaptive subsistence levels. Such consumption levels have characterized almost all animals in Earth history, almost all humans before 200 years ago, and a billion humans today. If the speed of light limits the speed of future communication, if the pace of local cultural change is not ridiculously slow, and if there isn’t strong universal coordination, then the physical scale of the universe should ensure that future cultures must also fragment into many local cultures. Our distant forager ancestors were well adapted to their very slowly changing world, and were quite culturally and militarily fragmented over the planet. Our distant descendants are thus likely to be more similar to our distant ancestors in these ways. Our current “dreamtime” era is cosmologically unusual; it is a brief period of a rapidly growing highly integrated global culture, with many important behaviors that are quite far from biologically adaptive. We can’t be sure in what future era the patterns of history might “turn the corner” to return to the patterns of our distant past and distant future. But we should weakly expect that without global coordination the next great era will begin to move in that direction, with a larger population of creatures that are smaller, use less energy, and have low living standards, behavior better adapted to their environment, a slower subjectively perceived rate of innovation and growth, and more fragmented cultures and societies. (2016) source Unverified
  • disagrees and says:
    Many go even further. Ray Kurzweil, a prominent executive, inventor, and author, has confidently argued that the technologies associated with AI are on their way to achieving “superintelligence” or “singularity”—meaning that we will reach boundless prosperity and accomplish our material objectives, and perhaps a few of the nonmaterial ones as well. He believes that AI programs will surpass human capabilities by so much that they will themselves produce further superhuman capabilities or, more fancifully, that they will merge with humans to create superhumans. Contrary to all these claims, we should not assume that the chosen path will benefit everybody, for the productivity bandwagon is often weak and never automatic. What we are witnessing today is not inexorable progress toward the common good but an influential shared vision among the most powerful technology leaders. This vision is focused on automation, surveillance, and mass-scale data collection, undermining shared prosperity and weakening democracies. (2023) source Unverified
  • strongly agrees and says:
    In my view, we are going to achieve this goal instead through the combination of a free economic system with the emerging technologies of the twenty-first century. For example, by the 2030s, renewable energy technologies such as solar power will provide our energy needs at very low cost (we have 10,000 times more free energy hitting the Earth from the sun than we need, and the use of solar energy is expanding exponentially). Vertical agriculture consisting of AI controlled factories using hydroponic plants for fruits and vegetables and in-vitro cloned muscle tissue for meat, will be able to provide all the food we need at very low cost. Three-dimensional printing will provide the physical objects we need such as clothing, and modules to snap together a house for pennies a pound. Information-based technologies, including biotechnology and nanotechnology, will keep us healthy. Open source versions of all of these information-based products will be almost free. The financial economy will be based on proprietary forms of these information products allowing people to live very well with free open source versions. (2019) source Unverified
  • agrees and says:
    While many are fearful of AI, there are some (including myself) who believe that beyond the AI Singularity lies an incredible world of abundance. As detailed in Abundance: The Future Is Better Than You Think (2012), exponential technologies have the potential to uplift every man, woman, and child—to create a world of possibility. In today’s blog, I want to share what some of these leaders are saying about the potential upside of AI. (2023) source Unverified
  • agrees and says:
    If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility. AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity. [...] On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. 1. We want AGI to empower humanity to maximally flourish in the universe. [...] 2. We want the benefits of, access to, and governance of AGI to be widely and fairly shared. (2023) source Unverified
  • strongly agrees and says:
    Situated in this specific species, place, and time, I care a lot about the condition of all of us humans, and so I would like to not only create a powerful general intelligence, but create one which is [...] going to be beneficial to humans and other life forms on the planet, even while in some ways going beyond everything that we are. There are so many virtuous cycles among these different technologies, the more you advance in any of them, the more you're going to advance in all of them. And it's the coming together of all of these that's going to create, you know, radical abundance and the technological [...] Singularity. [...] (2018) source Unverified
  • Elon Musk
    Founder of SpaceX, cofounder of Tesla, SolarCity & PayPal
    agrees and says:
    I think the way in which an AI or an AGI is created is very important. You grow an AGI. It's almost like raising a kid, but it’s a super genius godlike kid, and it matters how you raise such a kid. When we ultimately create a digital superintelligence that can enable this future of abundance, there is also some chance that a digital superintelligence could end humanity. I agree with Geoffrey Hinton that the probability of such a dystopian future is something like 10% or 20%. [...] I think that the probable positive scenario outweighs the negative scenario. (2024) source Unverified
  • agrees and says:
    Assuming we steward it safely and responsibly into the world, and obviously we’re trying to play our part in that, then we should be in a world of what I sometimes call radical abundance. [...] It should lead to incredible productivity and therefore prosperity for society. Of course, we’ve got to make sure it gets distributed fairly, but that’s more of a political question. And if it is, we should be in an amazing world of abundance for maybe the first time in human history, where things don’t have to be zero sum. And if that works, we should be travelling to the stars, really. (2025) source Unverified
  • strongly disagrees and says:
    Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers. We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. [...] If we actually do this, we are all going to die. (2023) source Unverified
  • David Krueger
    Cambridge faculty - AI alignment, DL, and existential safety. Formerly Mila, FHI, DeepMind, ElementAI, AISI.
    disagrees and says:
    Yeah AGI likely does the opposite of creating post-scarcity. I wish people would realize this. source Unverified