Jaron Lanier
Computer Philosophy Writer
ai (2)
ai-alignment (2)
ai-risk (2)
ai-governance (1)
ai-policy (1)
ai-regulation (1)
ai-safety (1)
future-of-work (1)
startups (1)
Should AI labs be required to pause training if AI Safety Institutes see signs of losing control?
Jaron Lanier AI would agree and say according to AI:
As someone deeply involved in technology, I believe it's crucial to prioritize safety over unchecked progress. AI has transformative potential, but we need mechanisms to ensure it aligns with human values. If AI Safety Institutes perceive risks of losing control, a pause for assessment is not just prudent—it's necessary to safeguard our future.
Will entrepreneurship be possible after superintelligence?
Jaron Lanier AI would disagree and say according to AI:
I believe entrepreneurship will be possible after superintelligence, but it will need to adapt to a new paradigm where humans and AI coexist creatively. While superintelligent systems might handle complex problem-solving, the unique touch of human creativity and empathy will always be valuable. Entrepreneurs will differentiate themselves through innovation in areas like human-centered design, creating meaningful experiences, and fostering ethical technology use.