As generative AI becomes more prevalent, Kickstarter, the popular crowdfunding platform, faces the challenge of crafting a policy that addresses the concerns of everyone with a stake in the debate.
Today’s generative AI tools, such as Stable Diffusion and ChatGPT, are trained on publicly available images and text scraped from the web. Many of the artists, photographers, and writers whose content was used have received no credit or compensation, nor even the option to withdraw their work from training.
Proponents of these tools maintain that the practice is protected under the fair use doctrine, at least in the United States. Many content creators disagree, particularly when AI-generated content, or the tools themselves, are monetized.
In response to these concerns, Kickstarter has announced that going forward, projects on its platform utilizing AI tools to generate images, text, and other outputs (including music and audio) will be required to provide “relevant details” on their project pages. This information must include how project owners intend to use AI-generated content and specify which elements of their project are original and which are created with AI tools.
Additionally, Kickstarter will require new projects focused on developing AI technology to disclose their sources of training data. Project creators must indicate how they address consent and credit for these sources and implement their own safeguards, such as opt-in or opt-out options for content creators.
While an increasing number of AI vendors offer opt-out mechanisms, Kickstarter's stricter training data disclosure requirement may lead to disputes, especially as various organizations worldwide, including the European Union, push to formalize these practices into law. OpenAI, for instance, has refrained from disclosing the specific sources of training data for its latest systems, citing competitive and potential legal liability concerns.
This new policy from Kickstarter will take effect on August 29, though it will not be applied retroactively to projects submitted before that date, according to Susannah Page-Katz, Kickstarter’s director of trust and safety.
“We want to ensure that any project funded through Kickstarter involves human creative input and that all referenced artistic work is properly credited and permissions are obtained,” said Page-Katz in a blog post. “The policy mandates transparency about how creators utilize AI in their projects, fostering trust and setting the stage for success.”
To implement the policy, Kickstarter will add a set of questions to the project submission process, asking whether the project uses AI technology to generate artwork or primarily focuses on developing generative AI tech. Creators will also need to confirm that they have consent from the owners of any works that contribute to the AI-generated portions of their project.
Submitted AI projects will go through Kickstarter’s standard human moderation process. If a project is accepted, its AI components will be highlighted in a new section labeled “Use of AI” on the project page, Page-Katz said.
“Throughout our discussions with creators and backers, we found that our community prioritized transparency,” she emphasized, warning that failure to disclose AI usage adequately during submission could lead to project suspension. “We’re responding to our community’s call for transparency by adding a dedicated section for backers to learn about AI’s role in a project directly from the creator.”
Kickstarter first hinted at potential policy changes regarding generative AI back in December, when it began reassessing whether using media to train algorithms infringes on artists’ rights. Since then, the platform has made gradual strides toward a final policy.
Toward the end of last year, Kickstarter banned Unstable Diffusion, a group attempting to fund a generative AI art project without safety filters, which would have let users generate unrestricted artwork, including explicit content. Kickstarter said the project risked harming particular communities.
More recently, the platform approved and then removed a project that used AI to plagiarize an original comic book, underscoring the inherent challenges of policing AI-generated works.