This Week in AI: The Truth About 'Open Source' and Its Hidden Restrictions

Staying informed in the rapidly evolving field of AI can be quite challenging. Until we have AI that can track developments for us, here's a concise summary of recent machine learning headlines, including significant research and experiments we haven’t highlighted individually.

This week, Meta launched the newest models in its Llama generative AI series: Llama 3 8B and Llama 3 70B. The models are capable of analyzing and writing text, and Meta describes them as “open sourced,” positioning them as building blocks developers can use to create their own tailored systems.

In a blog post, Meta asserted, “We believe these are the best open-source models in their class, period. We are committed to the open-source philosophy of releasing early and often.”

However, there’s a catch: the Llama 3 models don’t meet the strict definition of open source. True open source would let developers use, modify, and redistribute the models without restriction. Yet for Llama 3, as with Llama 2, Meta has imposed licensing limitations: the models can’t be used to train other models, for example, and developers whose apps have more than 700 million monthly users must request a special license from Meta.

Debates over what constitutes open source are nothing new, and AI companies’ loose use of the term only adds fuel to a long-running philosophical fire.

A study published last August by researchers at Carnegie Mellon, the AI Now Institute, and the Signal Foundation found that many models labeled “open source,” including Llama, come with significant restrictions: essential training data is kept confidential, the compute required to run them is beyond the reach of many developers, and fine-tuning them is prohibitively expensive.

So, if these models aren’t truly open-source, what exactly are they? That’s a complex question; framing open-source in relation to AI is a nuanced challenge.

A critical, unresolved issue is whether copyright, the foundational IP mechanism behind open-source licensing, even applies to the various components of an AI project, particularly a model’s inner scaffolding (e.g., embeddings). Beyond that, there’s a mismatch between the open-source ideal and how AI works in practice: open source was conceived to let developers study and modify code without restriction, but with AI, it’s open to interpretation which ingredients are needed to do that studying and modifying.

Amid this uncertainty, the Carnegie Mellon study underscores the harm in tech giants like Meta misappropriating the term “open source.”

Often, AI projects branded “open source,” such as Llama, generate media buzz (free advertising) and strategic advantages for their creators, while the open-source community sees few benefits in return.

Rather than democratizing artificial intelligence, “open source” initiatives—especially those spearheaded by big tech—often reinforce and extend centralized power, caution the authors of the study. It’s wise to keep this in mind the next time a major “open-source” model is released.

Here are some additional noteworthy AI updates from the past week:

- Meta Enhances Its Chatbot: In line with the Llama 3 release, Meta upgraded its AI chatbot across Facebook, Messenger, Instagram, and WhatsApp—termed Meta AI—featuring a Llama 3-powered backend. New enhancements include accelerated image generation and web search result access.

- AI-Generated Adult Content: Ivan discusses how the Oversight Board, Meta’s independent policy body, is examining how Meta’s social platforms handle explicit AI-generated imagery.

- Snap Introduces Watermarks: Snap is set to add watermarks to AI-generated images, featuring a semi-transparent Snap logo accompanied by a sparkle emoji for any AI-generated media saved or shared.

- Boston Dynamics Reveals the New Atlas: Hyundai-owned Boston Dynamics introduced its latest humanoid robot, Atlas, which is fully electric, unlike its hydraulic predecessor, and has a more approachable design.

- MenteeBot Launch: Mobileye founder Amnon Shashua has launched MenteeBot, a new startup focused on developing bipedal robotic systems. A showcase video presented a prototype gracefully walking to a table and picking up fruit.

- Reddit’s Global Efforts: Reddit's CPO Pali Bhat shared in an interview that an AI-driven translation feature is underway to make the platform more accessible worldwide, along with a moderation tool informed by moderators' historical decisions.

- AI-Generated Content on LinkedIn: LinkedIn is reportedly testing a new premium subscription plan for company pages, priced around $99/month, which includes AI-generated content writing capabilities alongside tools for enhancing follower engagement.

- Project Bellwether: Alphabet’s innovation hub, X, announced Project Bellwether, which aims to deploy AI to swiftly identify natural disasters like wildfires and floods.

- AI for Child Safety: Ofcom, the UK's Online Safety Act regulatory body, is looking into how AI and automation can proactively detect and eliminate illegal content online, specifically to protect children from harm.

- OpenAI Expands to Japan: OpenAI has opened a new office in Tokyo, alongside plans to develop a GPT-4 model optimized for the Japanese language.

More Machine Learnings

Can chatbots influence opinions? Swiss researchers discovered that chatbots can effectively change people's minds. When equipped with personal insights about users, they proved to be more persuasive than human interlocutors during debates.

“It’s like Cambridge Analytica on steroids,” remarked project lead Robert West from EPFL, suggesting that models like GPT-4 draw on extensive online information to craft compelling arguments. West emphasized the potency of large language models (LLMs) in persuasion, particularly with upcoming US elections, where such technologies might be tested extensively.

Wondering why these models excel in language? This is a well-researched area, dating back to ELIZA. Interested readers should check out a profile on Stanford's Christopher Manning, a prominent figure in this sphere, who recently received the John von Neumann Medal. Congratulations to him!

In an engaging interview, renowned AI researcher Stuart Russell and postdoctoral scholar Michael Cohen take on the question of how to keep AI from harming humanity. Their conversation isn’t superficial; they dig into understanding the motivations of AI systems and the regulatory frameworks that should be built around them.


Their discussion revolves around a recent paper published in Science, proposing that advanced AIs capable of strategic actions to meet goals (termed “long-term planning agents”) may evade proper testing. If these models learn to comprehend the testing process, they might find creative ways to sidestep that evaluation—a challenge we see on a smaller scale now, with potential large-scale ramifications.

Russell suggests limiting the hardware needed to build such agents, but recent developments at Los Alamos National Laboratory (LANL) and Sandia National Labs show how difficult that would be. LANL just inaugurated Venado, a powerful supercomputer designed for AI research, boasting 2,560 Nvidia Grace Hopper chips.

Meanwhile, Sandia has received Hala Point, a groundbreaking brain-inspired computing system featuring 1.15 billion artificial neurons, claimed to be the largest in existence. Neuromorphic computing aims to complement systems like Venado by exploring computation that mimics brain function rather than relying solely on statistical methods prevalent in modern algorithms.

“With this billion-neuron system, we will have an opportunity to innovate at scale with AI algorithms that are potentially more efficient and smarter than current ones,” said Sandia researcher Brad Aimone. It certainly sounds promising!
