AI Giants Make a Small Gesture Towards Supporting AI Safety Researchers

The Frontier Model Forum, an industry body focused on advanced "frontier" AI models such as GPT-4, has announced a $10 million commitment to a new research fund for developing tools to test and evaluate the most sophisticated AI systems.

Backed by Anthropic, Google, Microsoft, and OpenAI, the fund aims to support researchers at academic institutions, research organizations, and startups. Initial funding will come from the Forum itself and from its philanthropic partners: the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, former Google CEO Eric Schmidt, and Estonian billionaire Jaan Tallinn.

The fund will be administered by the Meridian Institute, a Washington, D.C.-based nonprofit, and the Forum says it will issue a call for proposals in the coming months. The Institute will be advised by a committee of external experts, experts from AI companies, and seasoned grantmakers, though neither the committee's membership nor its size has been disclosed.

"We anticipate additional contributions from other partners," the Frontier Model Forum said in a press release. The fund's primary aim is to support the development of new model evaluations and techniques for assessing potentially dangerous capabilities of frontier AI systems.

While a $10 million pledge is significant in absolute terms, it looks modest within the broader context of AI safety research, and especially against the commercial investments the Forum's members are making. Anthropic, for example, has raised billions of dollars from Amazon to build a next-generation AI assistant, on top of an earlier $300 million investment from Google. Microsoft has committed $10 billion to OpenAI, which generates more than $1 billion in annual revenue and is reportedly negotiating a share sale that could value it at $90 billion.

The fund is also small relative to other AI safety grant programs. Open Philanthropy, the grantmaking organization co-founded by Facebook co-founder Dustin Moskovitz, has directed roughly $307 million toward AI safety work, according to the blog LessWrong. The Survival and Flourishing Fund, backed largely by Tallinn, has contributed around $30 million to AI safety projects. And the U.S. National Science Foundation plans to spend $20 million on AI safety research over the next two years, partly underwritten by Open Philanthropy grants.

AI safety researchers are not necessarily training GPT-4-class models from scratch, but even smaller, less capable models can cost hundreds of thousands to millions of dollars to develop with today's technology. And that figure excludes other operating expenses, such as salaries for researchers, who tend to command high compensation.

The Frontier Model Forum hints at a larger funding initiative to come. Should that materialize, it could have a real impact on AI safety research, provided the fund's for-profit backers do not unduly shape the research agenda. As it stands, though, the initial commitment appears too small to drive substantial progress in the field.
