AI systems and large language models (LLMs) require extensive datasets for accuracy, but it’s crucial that they’re trained only on data they have the rights to use. Recent licensing agreements between OpenAI and media outlets like The Atlantic and Vox indicate a growing interest from both parties in establishing content licensing deals for AI training.
Human Native AI, a London-based startup, is developing a marketplace designed to facilitate these licensing arrangements between companies creating LLMs and those who own data they are willing to license.
The primary objective of Human Native AI is to assist AI companies in sourcing data for training their models while ensuring that rights holders are compensated and retain control over their content. Rights holders can upload their material free of charge, enabling connections with AI firms for sharing revenues or subscription agreements. Additionally, Human Native AI aids rights holders in pricing and preparing their content while monitoring for copyright infringements. The startup takes a share from each deal and charges AI companies for transaction and monitoring services.
James Smith, CEO and co-founder, drew inspiration for Human Native AI from his previous experience with Google’s DeepMind project, which faced challenges due to insufficient high-quality data for effective training. He observed similar hurdles encountered by other AI companies.
“It feels like we are in the Napster era of generative AI,” Smith remarked. “Can we transition to a more efficient model? Can we simplify content acquisition? Can we provide creators with control and compensation? I kept wondering, why isn’t there a marketplace?”
While discussing potential startup ideas with his friend Jack Galilee, an engineer at GRAIL, during a park outing with their kids, Smith was encouraged by Galilee to pursue the concept. Their collaboration led to the company's launch in April. The marketplace currently operates in beta, and Smith reports strong demand from both sides, with several partnerships set to be revealed soon. This week, Human Native AI announced a £2.8 million seed round led by British micro VCs LocalGlobe and Mercuri, which will be used to grow its team.
“I’m the CEO of a two-month-old company and have managed to secure meetings with CEOs of 160-year-old publishing houses,” Smith noted. “This illustrates significant demand in the publishing sector. Similarly, every discussion with major AI companies follows a consistent pattern.”
Although it’s still early in its development, Human Native AI appears to be addressing a critical gap in the emerging AI landscape. The top players in AI need extensive data for training, and facilitating smoother collaboration with rights holders while ensuring control over their content stands to benefit both parties significantly.
“Sony Music recently issued cease-and-desist letters to 700 AI companies,” Smith highlighted. “This reflects the scale of the market and the potential customer base among publishers and rights holders, possibly numbering in the thousands. It shows how urgently this market needs infrastructure.”
Furthermore, Human Native AI could prove beneficial for smaller AI companies that lack the resources to negotiate major licensing deals with prominent publications like Vox or The Atlantic. Smith expressed hope that the platform will support these smaller entities, noting that prominent deals thus far have primarily involved large AI players, and he aims to create a more equitable marketplace.
“One of the primary barriers to licensing content is the substantial upfront cost, which severely limits partnership opportunities,” Smith explained. “Our goal is to expand the number of potential buyers for your content while lowering entry barriers. We find that potential enormously exciting.”
Another notable aspect is the future value of the data collected by Human Native AI. Smith envisions a time when they can provide rights holders with insights to help them set appropriate pricing based on historical transactions on the platform.
The timing of Human Native AI's launch is fortuitous, with the European Union evolving its AI Act and potential future regulations in the U.S. making it increasingly important for AI companies to source their data ethically and maintain transparent documentation.
“We are optimistic about the future of AI and its capabilities, but we must ensure that our industry operates responsibly and doesn’t undermine the sectors that have brought us to this juncture,” Smith concluded. “That would ultimately harm society. We need to ensure we create avenues for all stakeholders to participate. We are AI optimists committed to supporting human interests.”