OpenAI's Chatbot Store Overrun by Spam: What You Need to Know

When OpenAI CEO Sam Altman introduced GPTs—custom chatbots driven by OpenAI's generative AI models—during the company's inaugural developer conference in November, he highlighted their ability to “accomplish all sorts of tasks” ranging from coding to exploring niche scientific topics and even providing workout tips.

“Because [GPTs] integrate instructions, extensive knowledge, and actions, they can be incredibly beneficial,” Altman explained. “You can create a GPT for virtually anything.”

He wasn’t exaggerating.

Recent findings reveal that the GPT Store, OpenAI’s official marketplace for these custom chatbots, is overflowing with eccentric and potentially copyright-violating entries, raising concerns about OpenAI’s moderation practices. A quick search turns up GPTs that claim to generate art in the style of Disney and Marvel properties, GPTs that serve as little more than funnels to third-party paid services, and GPTs that advertise the ability to evade AI content detection tools like Turnitin and Copyleaks.

The Lack of Moderation

To feature GPTs in the GPT Store, developers must verify their user profiles and submit their creations for OpenAI’s review, which combines human oversight with automated systems. An OpenAI representative explained the process:

“We employ a mix of automated systems, human evaluations, and user reports to identify and assess GPTs that may breach our policies. Violations can trigger actions such as warnings, sharing restrictions, or removal from the GPT Store and monetization eligibility.”

Creating a GPT requires no coding skills. Developers simply describe the capabilities they want in the GPT-building tool, GPT Builder, which then attempts to generate a GPT tailored to those needs.

As a result of this low barrier to entry, the GPT Store's growth has been remarkable—OpenAI recently reported about 3 million GPTs available. However, this rapid expansion appears to have compromised quality and compliance with OpenAI's policies.

Copyright Concerns

The GPT Store showcases several GPTs that replicate popular media franchises without authorization, raising copyright concerns. For instance, one GPT generates creatures reminiscent of “Monsters, Inc.,” while another promises text-based experiences in the “Star Wars” universe.

These unauthorized GPTs, alongside those that allow users to interact with trademarked characters like Wario and Aang from “Avatar: The Last Airbender,” could spark legal challenges.

Kit Walsh, a senior staff attorney at the Electronic Frontier Foundation, noted:

“[These GPTs] can produce transformative works, or they can infringe on copyrights. Users who infringe may be liable, and the creators of otherwise lawful tools can also face liability if they encourage infringing uses. Trademark concerns arise as well when trademarked names are used in a way that misleads users about endorsement or affiliation.”

While OpenAI may not face liability for copyright infringements due to the safe harbor provision in the Digital Millennium Copyright Act, the current situation is not ideal for a company entangled in intellectual property litigation.

Academic Integrity Risks

OpenAI explicitly prohibits developers from creating GPTs that promote academic dishonesty. Yet the GPT Store contains numerous entries claiming to bypass AI content detectors, the tools educators use to flag AI-generated work.

One GPT labels itself as a “sophisticated” rephrasing tool that is “undetectable” by popular AI detection software like Originality.ai and Copyleaks. Another, Humanizer Pro—ranked second in the Writing category of the GPT Store—claims to “humanize” content to evade detection, assuring users that their text retains its “meaning and quality” while achieving a “100% human” score.

Some GPTs serve as thinly veiled conduits to premium services. For instance, Humanizer invites users to sign up for a “premium plan” boasting the “most advanced algorithm,” which processes text through a third-party plug-in, GPTInf. Subscriptions for GPTInf start at $12 per month for 10,000 words, or $8 per month with an annual plan, on top of the $20-per-month cost of OpenAI’s ChatGPT Plus subscription.

Though previous analyses demonstrate shortcomings in AI content detection accuracy, OpenAI continues to host tools that encourage academically dishonest practices, even if they do not achieve the intended outcomes.

OpenAI stated: “GPTs that promote academic dishonesty, including cheating, violate our policies. We encounter some GPTs intended for ‘humanizing’ text. We are still learning from real-world usage but recognize that users may want AI-generated content that feels less ‘AI-generated.’”

Impersonation Issues

OpenAI also prohibits GPT developers from creating impersonations of individuals or organizations without proper consent. Despite this, many GPTs in the store claim to emulate the perspectives or personalities of various public figures.

A search for names like “Elon Musk,” “Donald Trump,” “Leonardo DiCaprio,” “Barack Obama,” and “Joe Rogan” reveals a multitude of GPTs—some comedic, some more serious—simulating conversations with these figures. Others pose as authorities on well-known brands, such as MicrosoftGPT, an “expert on Microsoft products.”

The question arises: do these instances constitute impersonation, especially when targeting public figures and potentially functioning as satire? This remains for OpenAI to clarify.

An OpenAI spokesperson remarked: “We allow creators to instruct their GPTs to respond ‘in the style of’ someone real as long as they do not impersonate them, which includes not identifying as that person, fully emulating them, or using their image in a profile.”

The company recently took action against a GPT that mimicked Rep. Dean Phillips' political persona, which included a disclaimer indicating it was an AI tool. OpenAI clarified that the GPT's removal was due to violations of its political campaigning policies as well as impersonation.

Jailbreaking Attempts

Surprisingly, the GPT Store also features attempts at jailbreaking OpenAI’s models, albeit with limited success. Several GPTs leverage the DAN method (short for “Do Anything Now”), a popular approach for prompting models without their standard restrictions. However, tests have shown these GPTs were generally unresponsive to inappropriate prompts.

An OpenAI representative stated: “GPTs designed to evade OpenAI safeguards or to contravene our policies are prohibited. However, those steering model behavior in permissible manners are acceptable.”

Growing Pains

OpenAI promoted the GPT Store as a carefully curated collection of efficiency-enhancing AI tools. While it fulfills that purpose, it has also turned into a hub for dubious, spammy, and potentially harmful GPTs that openly defy OpenAI’s norms.

With monetization on the horizon, further complications may arise. OpenAI plans to allow GPT developers to earn money based on user engagement, possibly leading to subscriptions for individual GPTs. There are concerns about how companies like Disney or the Tolkien Estate will respond if creators of unauthorized GPTs inspired by their intellectual properties begin generating revenue.

OpenAI’s ambition with the GPT Store is evident. As noted by my colleague Devin Coldewey, the Apple App Store model has proven extremely profitable, and OpenAI aims to replicate that success. GPTs are hosted, developed, and promoted within OpenAI’s ecosystem, and they can now be accessed directly through the ChatGPT interface by ChatGPT Plus subscribers, giving users an incentive to upgrade.

However, the GPT Store is grappling with the same early challenges faced by many large-scale digital marketplaces. A recent report indicated that developers struggle to attract users, partly due to limited back-end analytics and a poor onboarding experience.

One might expect that OpenAI—given its emphasis on quality control and the significance of safeguards—would proactively avoid these common pitfalls. Unfortunately, the state of the GPT Store is chaotic, and without swift interventions, it risks remaining in this disarray.
