Staying abreast of the rapidly evolving landscape of AI can be overwhelming. Until a dedicated AI can handle it for you, here's a concise summary of key developments in the machine learning arena, including significant research and experiments that we didn’t spotlight independently.
This Week in AI: d-Matrix, a pioneering AI chip startup, has secured $110 million in funding to advance what it calls a “first-of-its-kind” inference compute platform. d-Matrix asserts that its innovative technology enables the execution of AI models more cost-effectively than traditional GPU-based solutions.
“D-Matrix is the company that will make generative AI commercially viable,” enthused Sasha Ostojic, partner at Playground Global and a d-Matrix backer.
Whether d-Matrix can fulfill this ambitious claim remains to be seen. However, the excitement surrounding it, and similar startups like NeuReality and Tenstorrent, highlights growing acknowledgment within the tech industry of a critical shortage of AI hardware. As generative AI goes mainstream, demand for the chips that power these models, chiefly those supplied by Nvidia, is outpacing supply.
Recently, Microsoft cautioned its shareholders about possible interruptions to Azure AI services if it can't secure enough AI chips, specifically GPUs, for its data centers. Nvidia’s top AI chips are reportedly sold out until 2024, driven largely by extreme demand from major international tech firms such as Baidu, ByteDance, Tencent, and Alibaba.
In response, tech giants like Microsoft, Amazon, and Meta are investing heavily in the development of proprietary next-gen AI chips for inference tasks. For companies lacking such resources, hardware from startups like d-Matrix might represent a crucial alternative.
In an optimistic scenario, d-Matrix and its peers could serve as an equalizing force, creating more opportunities for startups in the generative AI sector and beyond. A recent analysis from AI research firm SemiAnalysis argues the AI landscape is bifurcating into a “GPU rich” camp (largely established players like Google and OpenAI) and a “GPU poor” camp, which mostly comprises European startups and government-affiliated supercomputers like France’s Jules Verne.
The issue of inequity permeates the AI field, from annotators who label data essential for training generative AI models to the biases that often emerge in these models. Hardware could become yet another point of disparity. However, optimism remains, as new AI techniques and architectures could also help balance the scales. Accessible, economical AI inference chips promise to be a vital component of this solution.
Other Noteworthy AI Developments:
- Imbue Secures $200 Million: Formerly known as Generally Intelligent, the AI research lab Imbue has raised $200 million in a Series B funding round, pushing its valuation over $1 billion. Launched last October, Imbue aims to explore the fundamental aspects of human intelligence that machines currently lack.
- eBay Introduces AI Listing Generation: eBay is debuting a new AI tool that creates product listings from a single photo, a feature that could save marketplace sellers significant time. However, initial user feedback indicates that the quality of this generative AI tool may be lacking.
- Anthropic Debuts Claude Pro: Anthropic, co-founded by former OpenAI employees, has launched its first premium subscription service for Claude 2, its text-analyzing AI chatbot.
- OpenAI Announces Developer Conference: OpenAI will hold its inaugural developer conference, OpenAI DevDay, on November 6, featuring a keynote and breakout sessions led by its technical team. The event will showcase new tools and encourage idea exchanges.
- Zoom Enhances AI Capabilities: In a move to stay competitive, Zoom has rebranded several of its AI-driven features, including the generative AI assistant previously known as Zoom IQ.
- Are AI Models Destined to Hallucinate?: Large language models, like OpenAI’s ChatGPT, are known to fabricate information. Experts weigh in on whether this inherent flaw will always persist.
- Prosecutors Tackle AI-Enabled Child Exploitation: Attorneys general from all 50 U.S. states and four territories are urging Congress to act against AI-facilitated child sexual abuse. Although generative AI is a neutral technology in itself, it can be exploited in harmful ways.
- Artisse Generates AI Photos: A newly launched tool, Artisse, enables users to create AI-generated images of themselves by uploading multiple selfies. While similar services exist, Artisse claims to enhance both input options and the realism of the generated images, set against imaginative backdrops.
Exciting Advances in Machine Learning:
A recently developed AI-driven high-speed drone has outpaced human world champions in navigating a series of gates at speeds up to 100 km/h. Trained in simulation to avoid costly early-stage crashes, the model computed its flight path in real time and beat the best human lap time by half a second.
Additionally, Osmo published a groundbreaking paper in Science on quantifying scents, likening its Principal Odor Map (POM) to a three-dimensional color model, which could revolutionize scent mapping in commercial fragrance synthesis.
In wildlife research, biologists at Imperial College London used machine learning on nearly 36,000 hours of audio collected across Costa Rica to track wildlife behavior. This analysis, which would have traditionally required decades, revealed that howler monkeys are particularly sensitive to environmental changes and human presence.
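The appeal of this approach is that a model can scan tens of thousands of hours of audio for candidate animal calls far faster than a human listener. As a rough illustration only (the Imperial College team's actual pipeline is not described here, and the function name, frequency band, and threshold below are hypothetical), a minimal version of the idea flags windows of a recording where energy in a call-like frequency band rises well above the recording's background level:

```python
import numpy as np

def flag_call_windows(audio, sr, band=(300, 2000), win=1024, hop=512, thresh_db=10.0):
    """Return onset times (seconds) of windows whose in-band energy
    rises well above the recording's median level -- a crude stand-in
    for the classifiers used to scan long field recordings."""
    freqs = np.fft.rfftfreq(win, d=1.0 / sr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    energies = []
    for start in range(0, len(audio) - win + 1, hop):
        frame = audio[start:start + win] * np.hanning(win)  # taper frame edges
        spec = np.abs(np.fft.rfft(frame)) ** 2              # power spectrum
        energies.append(spec[in_band].sum())
    energies = np.array(energies)
    floor = np.median(energies) + 1e-12                     # background estimate
    db_above = 10.0 * np.log10(energies / floor + 1e-12)
    return np.flatnonzero(db_above > thresh_db) * hop / sr

# Toy check: 3 s of faint noise with a 1 kHz "call" in the middle second.
sr = 8000
t = np.arange(3 * sr) / sr
audio = 0.01 * np.random.default_rng(0).standard_normal(len(t))
audio[sr:2 * sr] += 0.5 * np.sin(2 * np.pi * 1000 * t[sr:2 * sr])
times = flag_call_windows(audio, sr)  # flagged onsets cluster around 1-2 s
```

A production system would replace the energy threshold with a trained classifier per species, but the economics are the same: hours of compute substitute for what would otherwise be years of manual annotation.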
Microsoft's AI for Good Research Lab is also utilizing similar methodologies to sift through vast amounts of data related to conservation efforts. By quantifying the effects of deforestation, these projects present valuable data to support environmental policies.
In the medical domain, researchers at Yale have discovered that machine learning models can analyze heart ultrasounds to detect severe aortic stenosis, a dangerous heart condition. Speedy and accurate diagnosis has the potential to save lives by alerting non-specialist care providers when further consultation is warranted.
Finally, a critical examination from ChinaTalk analyzed Baidu's latest large language model, Ernie. Operating primarily in Chinese, the model reflects the country's strict AI regulations, showing biases in handling sensitive topics while also demonstrating the ability to refrain from engaging in discussions deemed risky.
AI continues to evolve, making waves across multiple domains, and this week highlights just a few glimpses of its potential and challenges.