2023 Report: Generative AI Secures $25.2 Billion in Funding

Funding in the generative AI sector surged in 2023, with a sharp increase in capital flowing into the space. According to Stanford University’s 2024 AI Index report, funding for generative AI firms skyrocketed to $25.2 billion, a nearly eightfold increase over 2022. Generative AI now accounts for over a quarter of all private investment in artificial intelligence.

Last year marked a pivotal moment for major investments in the industry, with notable deals including Microsoft’s $10 billion investment in OpenAI, Cohere’s $270 million raise in June 2023, and Mistral’s $415 million funding round in December. The report highlights a contrasting trend, however: overall corporate spending on AI dropped 20% to $189.2 billion, driven largely by merger-and-acquisition activity falling 31.2% from the previous year. Nonetheless, AI remained a key topic, mentioned on nearly 80% of Fortune 500 earnings calls.

Investment patterns point to the U.S. as the dominant player, with $67.2 billion directed toward AI initiatives, nearly nine times the amount invested by China, which stood at $7.8 billion. At the same time, private investments in AI in China and the EU saw declines in 2023 compared to 2022, while U.S. spending rose by 22.1%.

The competitive landscape of AI is also influencing job market dynamics, with notable disparities in salaries for AI roles in different regions. For instance, a hardware engineer in the U.S. can expect an average salary of $140,000, significantly higher than the global average of $86,000. Similarly, cloud infrastructure engineers in the U.S. command an impressive average salary of $185,000, compared to the global average of $105,000.

In terms of investment sectors, 2023 saw the highest capital drawn toward AI infrastructure, research, and governance, amounting to $18.3 billion. This indicates a focus on large-scale applications like GPT-4 Turbo and Claude 3, driven by major players including OpenAI and Anthropic. Following closely, the natural language processing and customer support sector attracted $8.1 billion, as businesses increasingly seek solutions to optimize workflows through automation.

China led investment in facial recognition technology, spending $130 million to the U.S.’s $90 million, while semiconductor investment showed a closer race: $790 million in the U.S. versus $630 million in China. The uptick in semiconductor spending reflects global efforts to strengthen supply chains after the chip shortage that began in 2020.

As companies like OpenAI attract significant funding, they are also incurring larger expenses for model training, costs that escalated in 2023. The AI Index notes that training advanced AI models has become dramatically more expensive: OpenAI spent an estimated $78 million training GPT-4, and Google’s Gemini Ultra cost around $191 million. Earlier models were trained for far less; the original Transformer model cost about $900 to train in 2017, and Facebook’s RoBERTa Large cost roughly $160,000 in 2019.

Training expenses are not only financial; they also reflect far greater computational demands. Google’s 2017 Transformer model required around 7,400 petaFLOPs to train. By comparison, Gemini Ultra’s training required a staggering 50 billion petaFLOPs, amplifying concerns that the cost of such compute-intensive systems puts cutting-edge training out of reach for academic institutions.
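For a rough sense of the scale-up, the figures reported above imply the following growth factors (a back-of-the-envelope sketch in Python; the variable names are my own, and the inputs are the AI Index estimates cited in this article):

```python
# Training compute, in petaFLOPs, as reported by the AI Index.
transformer_petaflops = 7_400            # 2017 Transformer
gemini_ultra_petaflops = 50_000_000_000  # Gemini Ultra

compute_factor = gemini_ultra_petaflops / transformer_petaflops
print(f"Compute grew roughly {compute_factor:,.0f}x")  # ~6,756,757x

# Estimated training cost, in USD, as reported by the AI Index.
transformer_cost = 900           # 2017 Transformer
gemini_ultra_cost = 191_000_000  # Gemini Ultra

cost_factor = gemini_ultra_cost / transformer_cost
print(f"Cost grew roughly {cost_factor:,.0f}x")  # ~212,222x
```

In other words, training compute for a frontier model has grown by a factor of several million in six years, far outpacing even the dramatic growth in dollar cost.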

The landscape of AI innovation continues to be shaped predominantly by U.S. institutions, which lead in producing large-scale AI systems: the U.S. released 109 significant models in 2023, while China contributed 20. The report also underscores the growing trend of multimodal AI models that can process multiple forms of data, combining text, images, and audio, thereby widening their applications and capabilities.

“This year, we see more models able to perform across domains,” said Vanessa Parli, director of research programs at Stanford HAI. “An exciting frontier in AI research lies in merging these large language models with robotics and autonomous agents, facilitating more effective real-world applications for robots.”
