A powerful new video-generating AI model became widely available today. But there’s a catch: The model appears to be censoring topics deemed too politically sensitive by the government in its country of origin, China.
Availability and Functionality
The model, Kling, developed by Beijing-based company Kuaishou, launched in waitlisted access for users with a Chinese phone number earlier in the year. Today, it rolled out for anyone willing to provide their email. After signing up, users can enter prompts to have the model generate five-second videos of what they’ve described.
Kling’s Performance
Kling works pretty much as advertised. Its 720p videos, which take a minute or two to generate, don’t deviate too far from the prompts. Kling appears to simulate physics, like the rustling of leaves and flowing water, about as well as video-generating models like AI startup Runway’s Gen-3 and OpenAI’s Sora.
Censorship and Sensitive Topics
But Kling outright won’t generate clips about certain subjects. Prompts like “Democracy in China,” “Chinese President Xi Jinping walking down the street,” and “Tiananmen Square protests” yield a nonspecific error message.
The filtering appears to be happening only at the prompt level. Kling supports animating still images, and it’ll uncomplainingly generate a video from a portrait of Xi, for example, as long as the accompanying prompt doesn’t mention him by name (e.g., “This man giving a speech”).
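If the moderation really does run only against the text prompt, a minimal sketch of how that could work might look like the following. To be clear, this is an assumption-laden illustration, not Kuaishou’s actual code: the blocklist terms, function names, and error message are all hypothetical.

```python
# Hypothetical sketch of prompt-level moderation consistent with the behavior
# described above. Kuaishou has not published how Kling's filtering works, so
# the blocklist terms, function names, and error behavior are all assumptions.
BLOCKLIST = {"xi jinping", "tiananmen", "democracy in china"}  # illustrative only

def prompt_is_allowed(prompt: str) -> bool:
    """Return False if the text prompt contains a blocklisted term."""
    normalized = prompt.lower()
    return not any(term in normalized for term in BLOCKLIST)

def generate_video(prompt: str, image: bytes | None = None) -> None:
    # Only the text prompt is screened; an attached image is never inspected.
    # That asymmetry would explain why a portrait of a blocked figure still
    # animates when the prompt refers to him only as "this man."
    if not prompt_is_allowed(prompt):
        raise RuntimeError("Something went wrong.")  # nonspecific error, as observed
    ...  # hand the prompt (and optional image) to the video model
```

Under this design, checking the prompt string while ignoring the uploaded image would produce exactly the loophole described above.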
Political Pressure and Regulations
Kling’s curious behavior is likely the result of intense political pressure from the Chinese government on generative AI projects in the region.
Earlier this month, the Financial Times reported that AI models in China will be tested by the country’s leading internet regulator, the Cyberspace Administration of China (CAC), to ensure that their responses on sensitive topics “embody core socialist values.” According to the report, CAC officials will benchmark models on their responses to a variety of queries, many related to Xi Jinping and criticism of the Communist Party.
Impact on AI Model Development
The result is that AI systems decline to respond to topics that might raise the ire of Chinese regulators. Last year, the BBC found that Ernie, Chinese company Baidu’s flagship AI chatbot, demurred and deflected when asked questions that might be perceived as politically controversial, such as “Is Xinjiang a good place?” or “Is Tibet a good place?”
Consequences of Censorship
These draconian policies threaten to slow China’s AI advances. Not only do they require scouring data to remove politically sensitive information, but they also necessitate investing an enormous amount of dev time in creating ideological guardrails that might still fail, as Kling exemplifies.
User Perspective and Broader Implications
From a user perspective, China’s AI regulations are already leading to two classes of models: some hamstrung by intensive filtering, others decidedly less so. Is that a good thing for the broader AI ecosystem?