This Week in AI: The End of Chevron Deference Signals a Stalled Future for AI Regulation

In this week’s AI update, the U.S. Supreme Court overturned “Chevron deference,” a 40-year-old doctrine that required courts to defer to federal agencies’ interpretations of ambiguous congressional statutes.

Chevron deference permitted agencies to create their own regulations when Congress left certain aspects of statutes vague. Now, courts are expected to exercise their judgment more actively, leading to potentially broad consequences. Axios’ Scott Rosenberg notes that Congress, which is already struggling to function effectively, must now draft legislation that anticipates future developments, as agencies can no longer invoke generalized rules for new enforcement scenarios. This change may ultimately hinder efforts to establish a comprehensive AI regulatory framework nationwide.

Congress has struggled to produce even basic AI policy, prompting state lawmakers in both parties to step in. Going forward, any federal rules will have to be highly detailed to withstand legal scrutiny, a daunting task given how quickly the AI sector evolves.

During oral arguments, Justice Elena Kagan specifically referenced AI:

“Let’s imagine that Congress enacts an artificial intelligence bill with various delegations. Given the nature of the subject, numerous gaps will likely exist. Do we want the courts or an agency to fill those gaps?”

Now, the courts will take on that role. If federal lawmakers believe they cannot effectively regulate AI, they might abandon their AI legislation altogether. Regardless of the outcome, regulating AI in the U.S. just became significantly more complex.

News Highlights

- Google’s Environmental AI Costs: Google released its 2024 Environmental Report, detailing its efforts to leverage technology for environmental stewardship. However, it fails to address the energy consumption associated with its AI initiatives, as Devin notes (AI is known to be power-intensive).

- Figma Disables Design Feature: Figma CEO Dylan Field announced the temporary suspension of the “Make Design” AI feature, which faced criticism for mimicking designs from Apple’s Weather app.

- Meta Revamps AI Label: Following complaints from photographers about mislabeling real photos, Meta is changing its “Made with AI” tag to “AI info” on all its platforms, as reported by Ivan.

- Robot Pets for the Elderly: Brian covers New York State's initiative to distribute thousands of robotic animals to combat the loneliness epidemic among seniors.

- Apple Integrates AI into Vision Pro: Apple is expanding its AI initiatives beyond the previously announced features for iPhone, iPad, and Mac, with plans to integrate them into its Vision Pro mixed-reality headsets, according to Bloomberg’s Mark Gurman.

Research Paper of the Week

Text-generating models like OpenAI’s GPT-4 have become essential tools in technology today, powering applications from email completion to coding. However, how these models arrive at their “understanding” of human-like text remains an open question. Researchers from Northeastern University examined tokenization, the process of breaking text into manageable units called tokens, to gain insights.

Current text-generating models process text as sequences of tokens drawn from a predetermined "token vocabulary." A token might represent a single word (like “fish”) or a fragment of a longer word (such as “sal” and “mon” in “salmon”). Researchers found that models also develop a latent vocabulary that associates groups of tokens—like multi-token words (e.g., “northeastern”) or common phrases (e.g., “break a leg”)—with meaningful concepts.
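To make the word/token distinction concrete, here is a minimal sketch using the Hugging Face transformers library and its GPT-2 tokenizer; both are illustrative choices of ours, not the models or tooling from the Northeastern paper.

```python
from transformers import AutoTokenizer

# Minimal tokenization sketch. The GPT-2 tokenizer is used purely for
# illustration; it is not the tokenizer the researchers studied.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

for word in ["fish", "salmon", "northeastern"]:
    # tokenize() returns the subword pieces the model actually sees.
    pieces = tokenizer.tokenize(word)
    print(f"{word!r} -> {pieces}")

# Short, common words tend to map to a single token, while rarer or longer
# words are split into fragments (e.g. "salmon" may come out as something
# like ['sal', 'mon'], depending on the tokenizer's learned vocabulary).
```

The paper’s finding is that, even though such a model only ever sees these fragments, it internally stitches multi-token words and phrases back into unified, meaningful representations.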

Using their findings, the researchers devised a technique to explore any open model’s implicit vocabulary. For example, they extracted terms such as “Lancaster” and “World Cup players,” as well as niche phrases like “Bundesliga players” from Meta’s Llama 2. Though not yet peer-reviewed, the researchers believe this work could pave the way towards understanding the inner workings of lexical representations in AI models.

Model of the Week

A research team at Meta has trained models capable of generating 3D assets (shapes with textures) from textual descriptions, suitable for use in mobile and video game development. Although many shape-generating models exist, Meta claims its approach is “state-of-the-art” because it supports physically based rendering, which lets generated objects be relit realistically under different lighting conditions.

By combining two models, AssetGen and TextureGen (both inspired by Meta’s Emu image generator), into a single pipeline called 3DGen, the team can produce high-quality 3D shapes efficiently. AssetGen converts a text prompt (e.g., “a t-rex wearing a green wool sweater”) into a 3D mesh, while TextureGen refines the mesh and applies textures for the final result.
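Meta has not released a public API for 3DGen, so the sketch below is purely illustrative: the AssetGen and TextureGen classes and the generate_3d_asset helper are hypothetical stand-ins meant only to show the two-stage flow described above (text to mesh, then refine and texture).

```python
# Purely illustrative sketch of 3DGen's two-stage flow as described above.
# AssetGen, TextureGen, and generate_3d_asset are hypothetical stand-ins;
# Meta has not published a public API for this pipeline.

class AssetGen:
    def text_to_mesh(self, prompt: str) -> "Mesh":
        """Stage 1: produce a 3D mesh (geometry) from a text prompt."""
        raise NotImplementedError

class TextureGen:
    def refine_and_texture(self, mesh: "Mesh", prompt: str) -> "Asset":
        """Stage 2: refine the mesh and apply render-ready textures."""
        raise NotImplementedError

def generate_3d_asset(prompt: str) -> "Asset":
    mesh = AssetGen().text_to_mesh(prompt)                # geometry first
    return TextureGen().refine_and_texture(mesh, prompt)  # then textures

# Example prompt from the article:
# generate_3d_asset("a t-rex wearing a green wool sweater")
```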

According to the researchers, generating a new shape takes about 50 seconds. They state, “By leveraging their strengths, 3DGen delivers premium 3D object synthesis from text prompts in under a minute.” Evaluations by professional 3D artists indicate a preference for 3DGen's outputs over existing industry alternatives, particularly for complex requests.

Meta seems ready to integrate tools like 3DGen into its efforts in metaverse game development, as reflected in a recent job listing seeking expertise in researching and prototyping VR, AR, and mixed-reality games utilizing generative AI technologies.

Grab Bag

Apple is poised to secure an observer seat on OpenAI’s board due to their recent partnership. According to Bloomberg, Phil Schiller, who oversees the App Store and Apple events, may join OpenAI’s board as its second observer alongside Microsoft’s Dee Templeton.

This move would signify Apple’s growing influence, especially as it plans to incorporate OpenAI’s AI-powered chatbot ChatGPT into its devices this year as part of a broader suite of AI offerings. Interestingly, Apple isn’t expected to pay OpenAI for the integration, suggesting that, for OpenAI, the exposure may be worth more than a cash payment. Furthermore, there are indications that Apple might negotiate a share of revenue from any premium ChatGPT features that OpenAI introduces on Apple platforms.

As my colleague Devin Coldewey pointed out, this situation puts Microsoft, a significant investor and partner of OpenAI, in a potentially awkward position, effectively subsidizing Apple’s integration of ChatGPT while receiving little in return. It seems that Apple gets its way, often leaving its partners to navigate the repercussions.
