We aim to develop mechanisms that empower human stakeholders to express their intent clearly and supervise AI systems effectively, even in complex situations and as AI capabilities scale beyond human levels. Decisions about how AI behaves and what it is allowed to do should be governed by broad bounds set by society, and should evolve with human values and contexts. AI development and deployment must have human control and empowerment at its core. We create transparent, auditable, and steerable models by integrating explicit policies and “case law” into our model training process. We facilitate transparency and democratic input by inviting public engagement in policy formation and incorporating feedback from a range of stakeholders. Our Democratic Inputs to AI grant program was one example of exploring possible democratic processes for deciding AI model behavior. Another example is our publication of the Model Spec, which makes explicit the tradeoffs and decisions that shaped it and invites public input on future versions.