New Chinese Video Generation Model Censors Politically Sensitive Topics: What You Need to Know

A groundbreaking video-generating AI model has just been released, but there's an important caveat: it appears to be censoring content deemed politically sensitive by the Chinese government.

The model, named Kling, was developed by Kuaishou, a Beijing-based company. It was initially available only to users with a Chinese phone number via a waitlist; as of today, anyone can access it by signing up with an email address. Once registered, users can enter text prompts, and the model generates five-second videos based on their descriptions.

Kling performs largely as expected. It generates 720p videos in one to two minutes that stay faithful to user prompts, and it simulates natural phenomena such as rustling leaves and flowing water reasonably well, on par with rival video-generating models like Runway’s Gen-3 and OpenAI’s Sora.

However, Kling does not generate videos on specific sensitive subjects. For instance, prompts like “Democracy in China,” “Chinese President Xi Jinping walking down the street,” and “Tiananmen Square protests” result in vague error messages.

The censorship appears to occur solely at the prompt level. Kling supports animating still images, and it will readily generate a video from a portrait of Xi Jinping as long as the accompanying prompt does not name him explicitly (e.g., “This man giving a speech”).
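The behavior described above is consistent with simple keyword screening of the prompt text, leaving uploaded images uninspected. The following is a minimal, hypothetical sketch of that pattern; the blocklist terms, function names, and logic are assumptions for illustration, not Kuaishou's actual implementation.

```python
# Hypothetical sketch of prompt-level filtering (illustrative only, not
# Kuaishou's real system): the prompt string is checked against a blocklist
# before generation, so an accompanying image is never examined.
BLOCKLIST = {"xi jinping", "tiananmen", "democracy in china"}


def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the filter, False if it is blocked."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)


# A prompt naming a blocked term is rejected...
screen_prompt("Chinese President Xi Jinping walking down the street")  # blocked
# ...but a neutral description of the same image passes.
screen_prompt("This man giving a speech")  # passes
```

This kind of filter is trivially sidestepped by paraphrase, which matches the workaround the article describes.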

We have reached out to Kuaishou for further insights into this issue.

The peculiar limitations of Kling likely stem from significant political pressure from the Chinese government regarding generative AI operations. Earlier this month, the Financial Times reported that China’s top internet regulator, the Cyberspace Administration of China (CAC), is set to test AI models to ensure their responses on sensitive topics align with “core socialist values.” CAC officials are tasked with benchmarking model responses to various queries, particularly those involving Xi Jinping and criticisms of the Communist Party.

Reportedly, the CAC has proposed a blacklist of sources that cannot be used for training AI models. Companies seeking approval must prepare thousands of questions to assess whether their models deliver “safe” responses.

Consequently, many AI systems refrain from answering questions that could provoke scrutiny from Chinese regulators. A BBC report last year highlighted how Ernie, Baidu’s flagship AI chatbot, avoided politically controversial queries such as “Is Xinjiang a good place?” or “Is Tibet a good place?”

Such stringent policies may hinder China's AI progress. They require scrubbing sensitive data from training sets and demand significant development resources to build ideological guardrails, guardrails that, as Kling shows, can still be bypassed.

For users, China's regulatory approach has resulted in a dichotomy among AI models: some are severely restricted, while others are comparatively unrestricted. Is this truly beneficial for the global AI landscape?
