ChatGPT is credited as the author or co-author of more than 200 books in Amazon’s Kindle Store, according to recent reports. The true number of AI-written books is likely far higher, since Amazon does not require authors to disclose whether they used AI. The surge of machine-generated content, which has followed the release of OpenAI’s free chatbot, is raising ethical concerns across the literary market.
Brett Schickler, a salesman from Rochester, NY, exemplifies the trend. He self-published a children’s book, The Wise Little Squirrel: A Tale of Saving and Investing, using AI for both the text and the illustrations. Priced at $2.99 for the digital edition and $9.99 for the print version, the book has earned him less than $100 since its January release. Schickler says he spent only a few hours on the project, working from prompts such as “write a story about a dad teaching his son about financial literacy.”
Other AI-generated titles in the Kindle Store include the children’s story The Power of Homework, the poetry collection Echoes of the Universe, and the sci-fi narrative Galactic Pimp: Vol. 1. Mary Rasenberger, executive director of the Authors Guild, cautions that an influx of such books could disrupt the market, stressing that authors and platforms must be transparent about how books are created if the store is to avoid being flooded with low-quality content.
Meanwhile, Clarkesworld Magazine has temporarily closed short-story submissions after a sharp rise in suspected AI-generated stories. Editor Neil Clarke said that 38% of this month’s submissions had been flagged as spam. Rejecting them has been straightforward, but Clarke worries that the growing volume will force changes to how the magazine operates. Clarkesworld prohibits works that are “written, co-written or assisted by AI” and has banned more than 500 users for submitting suspect content.
Beyond the ethical questions, problems of misinformation and plagiarism are emerging. Many AI systems, including ChatGPT and Microsoft’s Bing AI, are known to “hallucinate,” confidently presenting false information as fact. These models are also trained on human-created writing, often without the original authors’ knowledge or consent, and occasionally reproduce phrases from existing works verbatim.
Last year, tech publication CNET used an in-house AI model to generate at least 73 economics explainers. The effort drew scrutiny for its lack of transparency and for numerous factual errors, and after issuing substantial corrections CNET paused its use of the model. Despite the setback, one of its sister sites has already resumed experimenting with AI-generated content, underscoring the ongoing tension between innovation and responsible publishing.