OpenAI's Six-Member Board to Determine ‘When We Achieve AGI’

According to OpenAI, its nonprofit board of directors will determine when the company has achieved "artificial general intelligence" (AGI), which OpenAI defines as a highly autonomous system that outperforms humans at most economically valuable work. Because the for-profit subsidiary is legally bound to uphold the nonprofit's mission, any system the board declares to be AGI will be excluded from the IP licenses and other commercial agreements with Microsoft, which apply only to pre-AGI technology.

The concept of AGI has no universally accepted definition, which raises questions about what a determination made by just six people would mean for OpenAI and the wider world, and about its potential impact on Microsoft, the company's largest investor.

OpenAI developer advocate Logan Kilpatrick addressed this topic in a recent thread on X, responding to Microsoft president Brad Smith's claims that OpenAI's nonprofit status enhances its credibility compared to Meta, which is shareholder-owned. This assertion came despite reports of OpenAI pursuing a valuation as high as $90 billion for existing shares.

Smith remarked, “Meta is owned by shareholders. OpenAI is owned by a nonprofit. Which would you trust more for your technology?”

Kilpatrick cited information from OpenAI's website, which details its nonprofit/capped-profit structure. As outlined there, the for-profit subsidiary, OpenAI Global, LLC, is "fully controlled" by the nonprofit and is permitted to generate profit, but it must pursue the nonprofit's mission.

Despite these constraints, OpenAI CEO Sam Altman has told Microsoft CEO Satya Nadella that he is excited about their partnership to develop AGI. In a Financial Times interview, Altman added that the collaboration was "working really well" and that he expected Microsoft to invest further, given the enormous ongoing compute requirements of AGI development.

From the outset, Microsoft agreed to "leave AGI technologies and governance to the nonprofit and humanity." An OpenAI spokesperson reiterated that the organization's goal is to build safe and beneficial AGI, governed by a board that draws on diverse expert perspectives in its decision-making.

The board currently comprises chairman Greg Brockman, chief scientist Ilya Sutskever, and CEO Sam Altman, alongside non-employee members Adam D'Angelo, Tasha McCauley, and Helen Toner, each of whom has ties to the Effective Altruism movement. OpenAI has previously drawn criticism for those ties, especially after scandals involving prominent figures such as Sam Bankman-Fried.

The spokesperson clarified that none of the board members identify as effective altruists, describing them instead as independent contributors focused on the safety and ethics of AI.

The decision-making process surrounding AGI has been described as "unusual." Legal expert Suzy Fulton commented that while it may seem unconventional for a board to make such a determination, it is consistent with the nonprofit's duty to put humanity's welfare ahead of shareholder interests. With independent members holding a majority of seats, OpenAI argues the structure keeps its mission first.

Other legal perspectives agree that while having the board decide when AGI has arrived is atypical, it is not legally impermissible, given the board's obligation to oversee critical, mission-related issues.

However, skepticism remains about the timeline for achieving AGI. Some argue that fixating on that goal distracts from addressing the immediate impacts of today's AI systems. Experts such as Merve Hickok point to a potential lack of diverse viewpoints within OpenAI and urge caution about taking its AGI mission at face value.

OpenAI's ambiguous definition of AGI makes the consequences of any declaration hard to pin down, even as its leadership envisions a future in which multiple AGIs coexist, each offering different perspectives.

What OpenAI's AGI pursuit means for Microsoft remains uncertain, given how the relationship is structured. Legal scholar Anthony Casey noted that OpenAI's dual-entity arrangement could breed conflicts: if the subsidiary's profit motives clashed with the nonprofit's mission, significant disputes could follow.

Capping profits is easier than resolving conflicts of interest, and it remains to be seen how this unique structure will navigate the complex landscape of AGI development and governance.
