OpenAI Launches Team to ‘Crowdsource’ Ideas for Governing Its AI Models

OpenAI has announced its intention to incorporate public ideas to ensure that its future AI models align with humanity’s values. To facilitate this process, the AI startup is establishing a new Collective Alignment team composed of researchers and engineers. This team will focus on creating a systematic approach to gathering public feedback on the behavior of OpenAI’s models and “encoding” it into the company’s products and services.

In a recent blog post, OpenAI stated, “We will continue collaborating with external advisors and grant teams, running pilots to integrate prototypes that will guide our models.” The company is actively recruiting research engineers from varied technical backgrounds to join the initiative.

The Collective Alignment team is a direct outcome of OpenAI's public program, launched in May, which aims to offer grants for experiments that establish a “democratic process” for determining the rules that AI systems should adhere to. OpenAI's goal for this program is to support individuals, teams, and organizations in developing proof-of-concept projects focused on governance and safety guardrails for AI technologies.

In its latest blog post, OpenAI highlighted the efforts of its grant recipients, whose projects ranged from video chat interfaces to platforms for crowdsourced audits of AI models and methodologies for mapping beliefs to refine model behavior. OpenAI also made all of the code used by these grant recipients public today, sharing brief summaries of each proposal alongside key takeaways.

While OpenAI has positioned this program as independent of its commercial aims, some critics find this perspective challenging to accept, especially given CEO Sam Altman’s prior criticisms of regulation in the EU and beyond. Along with OpenAI president Greg Brockman and chief scientist Ilya Sutskever, Altman has consistently argued that the rapid pace of AI innovation outstrips the capabilities of existing regulatory bodies, thereby emphasizing the importance of crowdsourced solutions.

Rival companies, including Meta, have accused OpenAI of attempting to secure “regulatory capture of the AI industry” by lobbying against open AI research and development. OpenAI denies these allegations and will likely point to the grant program and the formation of the Collective Alignment team as evidence of its commitment to transparency.

Meanwhile, OpenAI is facing increased scrutiny from policymakers globally. In the U.K., the company is currently under investigation due to its close partnership with investor Microsoft. Recently, OpenAI has also taken steps to mitigate regulatory risks in the EU regarding data privacy. It has established a Dublin-based subsidiary to limit the authority of certain privacy watchdogs within the European Union.

In an effort to address regulatory concerns, OpenAI announced it is collaborating with organizations to curb the potential misuse of its technology in elections. These initiatives include implementing clearer labeling for AI-generated images and developing techniques for identifying manipulated generated content.
