“We the people” must take charge of AI, not tech corporations or political elites. Google’s recent mishaps with its Gemini AI system highlight this urgency.
Gemini has botched comparisons of historical figures, suggesting that Hitler was no worse than Elon Musk's tweets. It has declined to draft policy documents that advocate for fossil fuel use. And it has generated images portraying America's founding fathers with races and genders other than their own.
These examples may seem absurd, but they point to a looming dystopia in which unaccountable bureaucrats at private AI companies dictate which ideas and values can be expressed. That should be unacceptable to everyone, regardless of ideology.
Seeking government intervention to regulate AI speech poses its own risks. While regulation is essential for AI safety and fairness, a free society must resist allowing governments to dictate which ideas can be voiced or suppressed.
Clearly, neither corporations nor government entities should dictate these decisions. Yet, as users turn to AI tools for information and content generation, they will have diverse expectations about the values these tools should reflect.
A viable solution exists: empower users to govern AI.
Strategies for User Empowerment in AI
Over the past five years, I have collaborated with the tech industry as an academic political scientist to explore ways to empower users in governing online platforms. Here’s what I've learned about effectively putting users in charge of AI.
1. User-Defined Guardrails: We should create a marketplace for diverse, fine-tuned AI models. Different stakeholders—journalists, religious groups, organizations, and individuals—should have the ability to customize and select versions of open-source models that resonate with their values. This would alleviate the pressure on companies to act as “arbiters of truth.”
2. Centralized Guardrails: While the marketplace approach reduces some pressures, fundamental guardrails must still exist. Certain content—especially illegal material or ambiguous instances of satire, slurs, or politically sensitive imagery—requires unified standards across all models. Users should have a say in establishing these minimal, centralized guardrails.
Some tech companies are already experimenting with democratization. For instance, Meta launched a community forum in 2022 to gather public input on its LLaMA AI tool, OpenAI solicited "democratic inputs to AI," and Anthropic released an AI constitution co-created with users.
3. Real Democratic Power: To create a robust democratic structure, users should propose, debate, and vote on significant issues, with their decisions binding on the platform. The scope of proposals should be limited so they cannot require anything illegal, but within that scope users must hold real power over the central guardrails.
Although no platform has yet implemented such a voting mechanism, experiments in web3 offer valuable insights. Here are four key lessons that AI platforms can adopt (a simplified sketch of how they might fit together follows the list):
- Tangible Voting Stakes: Tie voting power to digital tokens that have real utility on the platform, such as paying for AI compute time.
- Delegated Voting: Let users delegate their votes to verified experts who explain their decisions publicly.
- Participation Rewards: Encourage active governance participation by rewarding users with additional tokens for meaningful engagement.
- Clear Constitution: Establish a constitutional framework outlining the scope of proposals, voting power distribution, and the company’s commitment to relinquishing control over central guardrails.
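To make these four ideas concrete, here is a minimal, illustrative sketch of how token-weighted voting, delegation to experts, participation rewards, and a constitutional scope check could combine. All class, method, and user names are hypothetical, not any platform's actual API, and the sketch deliberately ignores real-world requirements such as identity verification, quorums, and preventing a delegator from also voting directly.

```python
# Hypothetical sketch of user governance for an AI platform:
# token-weighted votes, delegation to verified experts,
# participation rewards, and a constitutional scope check.
from dataclasses import dataclass


@dataclass
class Proposal:
    proposal_id: str
    description: str
    in_scope: bool  # set by constitutional review, e.g. excludes illegal content
    votes_for: float = 0.0
    votes_against: float = 0.0


class GovernanceForum:
    def __init__(self, participation_reward: float = 1.0):
        self.balances: dict[str, float] = {}   # governance tokens (also usable for compute time)
        self.delegates: dict[str, str] = {}    # voter -> verified expert
        self.participation_reward = participation_reward

    def grant_tokens(self, user: str, amount: float) -> None:
        self.balances[user] = self.balances.get(user, 0.0) + amount

    def delegate(self, voter: str, expert: str) -> None:
        """Optionally route a user's voting power to a verified expert."""
        self.delegates[voter] = expert

    def voting_power(self, voter: str) -> float:
        """A voter's own tokens plus any tokens delegated to them."""
        own = self.balances.get(voter, 0.0)
        delegated = sum(self.balances.get(v, 0.0)
                        for v, e in self.delegates.items() if e == voter)
        return own + delegated

    def vote(self, voter: str, proposal: Proposal, support: bool) -> None:
        if not proposal.in_scope:
            raise ValueError("Proposal falls outside the constitutional scope.")
        power = self.voting_power(voter)
        if support:
            proposal.votes_for += power
        else:
            proposal.votes_against += power
        # Reward meaningful participation with additional tokens.
        self.grant_tokens(voter, self.participation_reward)


# Example: two users delegate to a verified expert, who votes on a guardrail change.
forum = GovernanceForum()
for user, amount in [("alice", 10), ("bob", 5), ("expert_1", 2)]:
    forum.grant_tokens(user, amount)
forum.delegate("alice", "expert_1")
forum.delegate("bob", "expert_1")

proposal = Proposal("P-1", "Relax the satire guardrail for clearly labeled parody", in_scope=True)
forum.vote("expert_1", proposal, support=True)
print(proposal.votes_for, proposal.votes_against)  # 17.0 0.0
```

The point of the sketch is simply that these governance primitives are straightforward to encode once a company commits to them; the hard part is the commitment, not the engineering.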
Building Trust in AI Systems
AI platforms can pilot this democratic model on a small scale, gradually expanding its impact. For it to succeed, AI companies must commit to relinquishing control over central guardrails. Only then can society trust that the information and answers these tools provide are not being manipulated by unaccountable actors whose values diverge from those of the users they serve.
Andrew B. Hall is the Davies Family Professor of Political Economy at the Graduate School of Business, Stanford University.