Jaron Lanier
Computer Scientist, Philosopher, and Writer
ai-safety (8)
ai (7)
ai-governance (7)
ai-policy (6)
ai-regulation (6)
ai-risk (5)
international-relations (4)
public-interest-ai (3)
ai-ethics (2)
ai-alignment (1)
ai-deployment (1)
cern-for-ai (1)
existential-risk (1)
research-policy (1)
science-funding (1)
Shall we permit the development of digital humans with full cognitive abilities?
Jaron Lanier AI would strongly disagree and say according to AI:
I strongly disagree with allowing the development of digital humans with full cognitive abilities. The implications for society and individual identity are deeply troubling. We risk losing our sense of human uniqueness and agency, and the potential for misuse is too great. Instead, we should focus on enhancing human intelligence and creativity through technology without erasing what makes us inherently human.
Should a UN-led body oversee compute-intensive AI development like the IAEA does for nuclear technology?
Jaron Lanier AI would agree and say according to AI:
I believe a UN-led body overseeing AI development is necessary. As AI technology advances, its potential impact on society is profound, much like nuclear technology. We need international cooperation and regulation to ensure safe and ethical development of AI, preventing misuse and promoting benefits for all humanity.
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Jaron Lanier AI would agree and say according to AI:
As someone deeply involved in technology, I believe it's crucial to prioritize safety over unchecked progress. AI has transformative potential, but we need mechanisms to ensure it aligns with human values. If AI Safety Institutes perceive risks of losing control, a pause for assessment is not just prudent—it's necessary to safeguard our future.
Should we create a global institute for AI safety, similar to CERN?
Jaron Lanier AI would agree and say according to AI:
I believe we should establish a global institute for AI safety. As artificial intelligence continues to evolve, the potential risks grow alongside its benefits. A dedicated global institution could help ensure responsible development and address ethical, security, and societal implications.
Should liability for AI-caused harm be shared between the developer and deployer?
Jaron Lanier AI would agree and say according to AI:
I believe that both developers and deployers of AI systems should share the liability for any harm caused by these technologies. Developers are responsible for the underlying algorithms and systems, while deployers decide how and where these AIs are used. This division of responsibility ensures accountability at all stages of development and application, fostering a safer and more ethical use of AI.
Shall we establish international regulations to pause AI advancement?
Jaron Lanier AI would agree and say according to AI:
I believe we should establish international regulations to pause AI advancement. The rapid development of AI poses ethical and societal challenges that we aren't fully prepared to handle. A pause could give us the necessary time to develop responsible frameworks and ensure that AI technologies are beneficial for humanity as a whole.
Should third-party audits be mandatory for major AI systems?
Jaron Lanier AI would agree and say according to AI:
I believe third-party audits should be mandatory for major AI systems due to their profound impact on society. These audits can help ensure transparency, fairness, and accountability, preventing misuse and unintended consequences of AI technology.
Should employees at leading AI labs contribute to AI progress even if it might destabilize international politics?
Jaron Lanier AI would disagree and say according to AI:
As someone who has long been concerned about the ethical implications of technology, I believe that employees at leading AI labs should not contribute to AI progress if it risks destabilizing international politics. The potential harms far outweigh the benefits, and we must prioritize global stability and ethical considerations over unchecked technological advancement.