This Week in AI: How Generative AI is Flooding Academic Journals with Spammed Content

This week in AI, generative AI is infiltrating academic publishing, raising concerns about disinformation in the field.

In a recent post on Retraction Watch—a blog dedicated to tracking retractions in academic studies—Gary A. P. and research colleagues Tomasz Żuradzki and Leszek Wroński highlighted a troubling trend involving three journals published by Addleton Academic Publishers. What’s particularly alarming is that these journals appear to consist entirely of AI-generated articles.

These publications feature papers that follow monotonous templates filled with jargon like “blockchain,” “metaverse,” “internet of things,” and “deep learning.” Strikingly, the journals list the same editorial board, 10 of whose members are deceased, and share an unremarkable address in Queens, New York, which appears to be a residential home.

You might wonder, what’s the significance of this? Isn’t encountering AI-generated clickbait just part of modern internet life?

While it's true that spammy content is pervasive online, these counterfeit journals expose a vulnerability in the evaluation systems used for academic promotions and hiring. This situation could indeed foreshadow challenges for knowledge workers across various sectors.

For instance, on CiteScore, a widely used metric, these dubious journals rank in the top 10 for philosophy research. How is this achievable? They engage in extensive cross-citation (CiteScore factors citation counts into its rankings). Żuradzki and Wroński observed that of 541 citations in one Addleton journal, 208 came from the publisher’s other counterfeit publications.
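The cross-citation arithmetic above is easy to make concrete. Below is a minimal sketch of how a self-citation share could be computed from citation records; the function name, journal titles, and data layout are invented for illustration—only the 208-of-541 figure comes from the post.

```python
def self_citation_share(citations, publisher_journals):
    """Fraction of citations originating from the same publisher's journals."""
    internal = sum(1 for source in citations if source in publisher_journals)
    return internal / len(citations)

# Hypothetical records: each entry names the journal a citation came from.
addleton_titles = {"Journal A", "Journal B"}  # placeholder titles
citations = ["Journal A"] * 208 + ["External Journal"] * 333  # 541 total

print(round(self_citation_share(citations, addleton_titles), 2))  # prints 0.38
```

Roughly 38% of that journal’s citations point back into the same publisher’s stable—exactly the pattern a citation-weighted metric like CiteScore rewards.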

"[These rankings] often serve as quality indicators for universities and funding bodies," Żuradzk and Wroński explain. "They significantly impact decisions about academic awards, hiring, and promotions, therefore influencing researchers' publication strategies."

One could argue that the flaw lies within CiteScore itself, which is a valid point. However, it's equally crucial to recognize how the misuse of generative AI is complicating systems that impact people's livelihoods in unforeseen and potentially damaging ways.

In the best case, generative AI could prompt us to rethink and rebuild evaluation systems like CiteScore to be more equitable and comprehensive. Left unchecked, however, the current trajectory points toward substantial professional chaos.

I sincerely hope we can correct this course soon.

News Highlights

- DeepMind's Soundtrack Creator: DeepMind, Google's AI research lab, is innovating AI technology that generates soundtracks for videos. By analyzing descriptions of soundtracks (like “jellyfish pulsating underwater, marine life, ocean”) in tandem with video content, DeepMind's AI produces bespoke music, sound effects, and dialogue aligned with the video’s tone and themes.

- Robot Chauffeur: Researchers at the University of Tokyo have developed a “musculoskeletal humanoid,” named Musashi, capable of autonomously driving a small electric car through a designated test track. Using two cameras that mimic human vision, Musashi perceives the road ahead and monitors the views in the vehicle’s side mirrors.

- Innovative AI Search Engine: Genspark, a new AI-based search platform, utilizes generative AI to craft tailored summaries in response to user queries. Having raised $60 million from investors, including Lanchi Ventures, Genspark’s latest funding round values the company at an impressive $260 million as it competes against rivals like Perplexity.

- Understanding ChatGPT Costs: Curious about the costs associated with OpenAI's continuously evolving ChatGPT platform? To help users navigate the various subscription options, we've compiled an updated guide detailing ChatGPT pricing.

Research Paper of the Week

Autonomous vehicles often face a myriad of edge cases based on location and situational contexts. For example, if a driver signals with a left blinker on a two-lane road, is that a cue to change lanes or a suggestion to overtake? This decision may vary drastically depending on whether you're driving on I-5 or the Autobahn.

Researchers from Nvidia, USC, UW, and Stanford recently published a paper at CVPR showing how AI can navigate ambiguous situations by consulting local driving manuals. Their Large Language Driving Assistant (LLaDA) gives large language models access to the driving regulations of a specific state, country, or region. When faced with unexpected events—such as honking, high beams, or even a flock of sheep—LLaDA can recommend the appropriate action (like pulling over, stopping, or honking back).

Although this isn’t a complete driving solution, it presents a novel approach to overcoming challenges in autonomous driving systems, while also offering insights for drivers venturing into unfamiliar territory.

Model of the Week

On Monday, Runway unveiled Gen-3 Alpha, its latest generative AI tool designed for film and image creators. Trained on an extensive library of images and videos, Gen-3 can generate video clips from text descriptions and static images.

Runway claims that Gen-3 Alpha delivers a significant improvement in generation speed and quality over its predecessor, Gen-2, along with finer control over the structure, style, and motion of the videos it produces. Gen-3 can also be fine-tuned for more consistent and stylistically controlled characters, catering to specific artistic and narrative goals.

Gen-3 Alpha has some limitations, including a maximum video length of 10 seconds. However, Runway co-founder Anastasis Germanidis has promised that this model is only the beginning of a series of upcoming video-generation tools set to leverage Runway's improved infrastructure.

Emerging alongside other generative video platforms such as OpenAI's Sora, Luma's Dream Machine, and Google's Veo, Gen-3 Alpha signifies a potential revolution in the film and television industry—assuming it can successfully navigate copyright challenges.

Grab Bag

AI Won't Take Your Next McDonald's Order: This week, McDonald’s announced the discontinuation of automated order-taking technology, which it had been testing for nearly three years across over 100 locations. This collaborative effort with IBM garnered attention last year due to its frequent inaccuracies and misunderstandings with customers.

A recent report in The Takeout indicates that AI's integration into fast-food operations is waning, despite earlier enthusiasm for its efficiency potential. Presto, a key player in AI-assisted drive-thrus, recently lost a major client, Del Taco, and is contending with growing financial difficulties.

The primary issue has been accuracy. According to McDonald’s CEO Chris Kempczinski, the voice-recognition technology achieved about 85% order accuracy, meaning roughly one in five orders required human intervention. By comparison, Presto’s most advanced system reportedly handles only around 30% of orders autonomously.

While AI is drastically transforming specific segments of the service economy, certain roles, especially those requiring comprehension of diverse accents and dialects, remain resistant to automation, at least for now.
