This Week in AI: Gen Z's Mixed Reactions to Artificial Intelligence

This week, we dive into the diverse perspectives of Gen Z on artificial intelligence—an age group frequently spotlighted in mainstream media.

Recent Samsung surveys of more than 5,000 Gen Z respondents across France, Germany, South Korea, the U.K., and the U.S. reveal intriguing insights into their views on AI and technology. Notably, nearly 70% identified AI as a trusted tool for both work-related tasks (like summarizing documents and conducting research) and personal projects, including inspiration and brainstorming.

However, a separate study from EduBirdie, a professional essay-writing service, finds a contrasting sentiment: over a third of Gen Zers who utilize OpenAI's chatbot, ChatGPT, and similar AI platforms at work express guilt about their usage. Many respondents worry that reliance on AI might erode their critical thinking abilities and stifle their creativity.

It’s important to approach these surveys critically. Samsung has a stake in portraying AI positively, given that it develops and sells AI-powered products. EduBirdie, for its part, competes in a market significantly disrupted by AI writing assistants, which may color its framing of AI technology.

Caveats aside, Gen Z—while not rejecting AI outright—appears keenly aware of its implications, often more so than previous generations. A study from the National Society of High School Scholars indicates that 55% of Gen Z believes AI will yield more negative than positive societal effects over the next decade, particularly concerning personal privacy.

This demographic's opinions carry weight. According to NielsenIQ, Gen Z is on track to become the wealthiest generation ever, with a spending potential projected to reach $12 trillion by 2030, surpassing baby boomers' spending by 2029.

As some AI startups allocate more than 50% of their revenue to essential resources such as hosting and software (according to accounting firm Kruze), it’s evident that addressing Gen Z’s concerns about AI is a prudent business strategy. However, whether these concerns can be alleviated remains to be seen as technological, ethical, and legal hurdles persist. Nonetheless, attempting to engage with these concerns can only be beneficial.

News

OpenAI partners with Condé Nast: OpenAI has forged a deal with Condé Nast—the publisher behind renowned publications like The New Yorker, Vogue, and Wired—to feature stories from its outlets in OpenAI's ChatGPT and SearchGPT platforms, as well as to train its AI on Condé Nast's content.

AI's impact on water resources: The surge in AI has escalated the demand for data centers, significantly increasing water consumption. Virginia, known for housing the world's largest concentration of data centers, reported a near two-thirds spike in water usage from 2019 to 2023, climbing from 1.13 billion to 1.85 billion gallons, according to the Financial Times.

Gemini Live and Advanced Voice Mode reviews: Google and OpenAI have each launched an AI-driven, voice-focused chat experience—Gemini Live and OpenAI’s Advanced Voice Mode. Both offer realistic voices and let users interrupt the AI mid-response.

Trump and Taylor Swift deepfakes: Recently, former President Donald Trump shared a collection of memes on Truth Social that appeared to show Taylor Swift and her fans endorsing his candidacy. As my colleague Amanda Silberling notes, such AI-generated images could have significant consequences in the political arena, especially as new legislation begins to take shape.

The debate surrounding California's SB 1047: The California bill SB 1047, aimed at preventing AI-related real-world disasters, continues to face notable opposition. Recently, Congresswoman Nancy Pelosi labeled the bill “well-intentioned” but “ill-informed.”

Research paper of the week

The transformer model, introduced by a team of Google researchers in 2017, has emerged as the leading architecture for generative AI. Transformer-based systems include OpenAI’s video-generating model Sora, image generators like the latest Stable Diffusion and Flux, and text-generating models like Anthropic's Claude and Meta’s Llama.

Recently, Google has adopted transformer technology to enhance YouTube Music recommendations. According to a new blog post from Google Research, the system evaluates user actions (like interrupting a track) and other signals to suggest related songs. Google claims this transformer-based recommender significantly reduced music skip rates and increased overall listening time—a win for the tech giant.
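Google's post doesn't publish implementation details, but the core idea—letting each candidate track attend over a user's recent listening actions to produce a relevance score—can be illustrated with a toy sketch. Everything below is invented for illustration (the embeddings are random and the model is a single untrained attention step, not Google's system):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding size; real systems use learned, much larger embeddings

# Hypothetical vectors: the user's last few listening actions (plays,
# skips, interruptions) and a handful of candidate tracks to rank.
history = rng.normal(size=(5, d))     # 5 recent action embeddings
candidates = rng.normal(size=(3, d))  # 3 candidate track embeddings

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# One attention step: each candidate attends over the history,
# yielding a context vector; its score is similarity to that context.
attn = softmax(candidates @ history.T / np.sqrt(d))  # shape (3, 5)
context = attn @ history                             # shape (3, d)
scores = (candidates * context).sum(axis=1)          # one score per candidate
ranking = np.argsort(-scores)                        # best candidate first
```

In a production recommender, the history and candidate embeddings would be learned jointly with the attention layers, and signals like "skipped quickly" would be encoded as features alongside the track itself.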

Model of the week

While not entirely new, OpenAI's GPT-4o is my model of the week, primarily because OpenAI has recently enabled fine-tuning it on custom datasets.

On Tuesday, OpenAI publicly announced fine-tuning for GPT-4o, allowing developers to use proprietary data to tailor the model's responses and have it follow domain-specific instructions. Fine-tuning isn’t without limitations, but OpenAI emphasizes its potential to meaningfully improve model performance on specialized tasks.
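To make this concrete, fine-tuning starts with a JSONL training file in OpenAI's documented chat-messages format (one JSON object per line, each holding a `messages` list of system/user/assistant turns). The helper and example rows below are hypothetical; only the field names follow the documented schema:

```python
import json

# Invented example pairs; in practice you would use your own proprietary data.
examples = [
    {"prompt": "Summarize: Q3 revenue rose 12% year over year.",
     "completion": "Revenue grew 12% YoY in Q3."},
    {"prompt": "Summarize: The outage lasted four hours on Monday.",
     "completion": "A four-hour outage occurred Monday."},
]

def to_chat_record(prompt, completion, system="You are a concise summarizer."):
    """Wrap one prompt/completion pair in the chat-messages schema."""
    return {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]
    }

# One JSON object per line—this string is what gets written to a .jsonl file.
jsonl = "\n".join(
    json.dumps(to_chat_record(e["prompt"], e["completion"])) for e in examples
)
```

Once uploaded, the file ID is passed to the fine-tuning jobs endpoint along with the base model name; the resulting fine-tuned model is then used like any other model ID in chat completions.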

Grab bag

In another development in the realm of generative AI, a class-action lawsuit has arisen against Anthropic. A group of authors and journalists has accused the company of committing “large-scale theft” by training its AI chatbot Claude on pirated e-books and articles.

The plaintiffs claim that Anthropic has built a multi-billion-dollar business by using hundreds of thousands of copyrighted works without permission. They argue that lawful copies contribute essential compensation to authors and creators, which pirated materials do not.

Typically, AI models train on data from publicly available websites and datasets, while companies argue that fair use protects their data-scraping practices. However, copyright holders disagree and are increasingly pursuing legal action against these practices.

This new lawsuit accuses Anthropic of using "The Pile," an open dataset that includes Books3, a large collection of pirated e-books. Anthropic has since confirmed to Vox that "The Pile" was part of Claude's training data.

The plaintiffs are seeking unspecified damages and an injunction to prevent Anthropic from exploiting their works unlawfully.
