This Week in AI: OpenAI Faces Challenges in Retaining Top Talent

This week in the world of AI, OpenAI experiences another shift in leadership.

John Schulman, a crucial figure behind the creation of ChatGPT, OpenAI’s flagship AI chatbot, has departed to join rival Anthropic. Schulman shared his decision on X (formerly Twitter), citing a keen interest in AI alignment — the field dedicated to ensuring AI systems operate as intended — and a desire for more hands-on technical engagement. However, the timing of his exit, coinciding with OpenAI president Greg Brockman’s extended leave until the year's end, raises questions about whether this move was strategic.

On the same day Schulman announced his departure, OpenAI disclosed a significant change to its DevDay event: it will now feature a series of roadshows for developer engagement rather than a large-scale conference. A spokesperson indicated that OpenAI wouldn't unveil a new model during DevDay, hinting that progress on a successor to GPT-4o may be lagging. The delay of Nvidia’s Blackwell GPUs could further impact timelines.

Is OpenAI facing challenges ahead? With the current climate significantly less favorable than it was just a year ago, concerns are mounting. Ed Zitron, a public relations expert and tech commentator, recently detailed various hurdles OpenAI must navigate if it hopes to thrive. His article extensively discusses the increasing expectations placed on the company.

Reports suggest that OpenAI is set to lose $5 billion this year. To address rising expenditures from recruitment (AI researchers come with hefty price tags), model training, and large-scale operations, the company will need to secure substantial funding in the next 12 to 24 months. Microsoft, which is reportedly entitled to 49% of OpenAI's profits, seems like the most likely benefactor, especially given its close partnership with OpenAI despite the two competing in some areas. However, with Microsoft's capital expenditures rising 75% year-over-year (to $19 billion) as it anticipates AI returns, will it be willing to sink even more money into a high-risk long-term venture?

Despite these pressures, it's hard to imagine that OpenAI — the leading player in the AI industry — won't eventually find a way to raise the necessary funds. However, this financial lifeline might come attached to conditions that could alter the much-discussed capped-profit structure of the company.

To survive, OpenAI may need to drift further from its original mission and navigate uncharted waters — a reality that Schulman and his associates might find difficult to accept. With increasing skepticism from investors and enterprises alike, the entire AI sector, not just OpenAI, is at a pivotal crossroads.

News

- Apple’s AI Capabilities Face Challenges: Recently, Apple let users experience its new Apple Intelligence features through the iOS 18.1 developer beta. However, as Ivan points out, the Writing Tools feature struggles with sensitive topics like swearing and crime.

- Google Revamps Its Thermostat: Google has announced its first update to the Nest Learning Thermostat in nine years: the fourth-generation Nest Learning Thermostat, arriving 13 years after the original. The launch comes just ahead of the Made by Google 2024 event next week.

- X's Chatbot Misleads on Elections: Grok, the chatbot on X, has spread false information about Vice President Kamala Harris' eligibility for the 2024 U.S. presidential election. Five secretaries of state raised the issue in an open letter to Elon Musk, who runs Tesla, SpaceX, and X.

- YouTuber Initiates Class Action Against OpenAI: A YouTuber is pursuing a class-action lawsuit against OpenAI, claiming that the company used millions of transcripts from YouTube videos for training its generative AI models without consent or compensation for the creators.

- Surge in AI Advocacy: At the federal level, AI lobbying is intensifying, driven by the ongoing generative AI boom and an election year that could shape future regulations. The number of lobbying groups focused on AI rose from 459 in 2023 to 556 in the first half of 2024.

Research Paper of the Week

Open models like Meta's Llama family offer flexibility to developers but come with significant risks. While many are released with licensing restrictions and built-in safeguards, the potential for misuse, such as spreading misinformation, remains.

A collaborative team of researchers from Harvard and the Center for AI Safety proposed in a technical paper a "tamper-resistant" approach designed to maintain a model's positive features while mitigating undesirable behavior. Their experiments indicate effectiveness in blocking manipulative "attacks," albeit at a minor cost to the model's accuracy.

However, there’s a caveat: the method struggles to scale with larger models due to computational challenges that require optimization, as acknowledged by the researchers. Thus, while the preliminary results are promising, widespread implementation is unlikely in the near future.

Model of the Week

A new player has joined the image-generation arena, presenting a formidable challenge to established models like Midjourney and OpenAI's DALL-E 3.

Meet Flux.1, developed by Black Forest Labs, a startup founded by former Stability AI researchers who contributed to creating Stable Diffusion. Recently, Black Forest Labs announced a successful seed funding round, totaling $31 million, led by Andreessen Horowitz.

While the most advanced model, Flux.1 Pro, is available via API, two smaller variants, Flux.1 Dev and Flux.1 Schnell (meaning “fast” in German), are now accessible to developers on Hugging Face with minimal commercial restrictions. Reports from Black Forest Labs claim that these models can compete effectively with Midjourney and DALL-E 3, particularly in generating images with integrated text — a task that previous models have struggled to execute.

However, Black Forest Labs has been tight-lipped about the training data used for these models, raising concerns about copyright issues. The startup also hasn't detailed its strategies for preventing potential misuse of Flux.1, opting for a hands-off approach at this stage — a decision that warrants caution.

Grab Bag

Generative AI firms are increasingly invoking the fair use defense for training on copyrighted data without permission from the original creators. Suno, an AI music generation platform, for example, recently contended in court that it has the right to use songs from artists and labels without their consent.

Similarly, Nvidia is reportedly pursuing a massive video-generating model, codenamed Cosmos, using content from platforms like YouTube and Netflix. High-level managers believe this initiative will withstand legal scrutiny based on current interpretations of U.S. copyright law.

So, will the fair use doctrine shield companies like Suno, Nvidia, OpenAI, and Midjourney from legal repercussions? It remains to be seen, as these lawsuits will likely take considerable time to resolve. The generative AI landscape may change significantly before any legal precedent is established, potentially leading to either substantial payouts for creators or an unsettling acceptance that publicly shared work can serve as training data for generative AI enterprises.
