China will require all artificial intelligence (AI)-generated content to carry clear labels starting September 1, 2025, joining global efforts to regulate synthetic media and fight digital misinformation.
The Cyberspace Administration of China (CAC), working with three other government agencies, announced rules requiring both visible and hidden labels for AI-created text, images, audio, video, and virtual scenes.
“The Labeling Law will help users identify disinformation and hold service suppliers responsible for labeling their content,” the CAC stated, as translated by Bloomberg. “This is to reduce the abuse of AI-generated content.”
Labeling Requirements
Under the new rules, online platforms must check AI-generated content before publishing it and add proper labels. Users must declare when they post AI-created material, while service providers must keep records for at least six months.
The regulations require two types of identification: explicit labels visible to users through text, sound, or graphics, and implicit labels hidden as metadata within files.
For text content, labels must appear at the beginning, middle, or end. Images need clear markings in appropriate spots. Videos and audio need identifiers at the start, with optional markers in the middle or at the end.
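To illustrate how an implicit label might be carried inside a file, the sketch below embeds and reads back a machine-readable marker in a PNG image's metadata using Pillow. This is a minimal example, not the format mandated by the regulations; the field name "AIGC-Label" and its JSON payload are assumptions for demonstration only.

```python
# Minimal sketch: embed and read an implicit AI-content label as PNG metadata.
# The "AIGC-Label" field name and JSON payload are illustrative, not the
# official schema defined by the Chinese rules.
import json
from PIL import Image, PngImagePlugin

def add_implicit_label(src_path: str, dst_path: str, provider: str) -> None:
    """Save a PNG copy that carries an AI-generated-content marker in a text chunk."""
    label = json.dumps({
        "ai_generated": True,
        "provider": provider,        # service provider responsible for labeling
        "spec": "example-only",      # hypothetical schema identifier
    })
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    # Implicit label: invisible to viewers, readable by platforms that inspect metadata.
    meta.add_text("AIGC-Label", label)
    img.save(dst_path, pnginfo=meta)

def read_implicit_label(path: str) -> dict | None:
    """Return the embedded label if present, else None."""
    raw = Image.open(path).info.get("AIGC-Label")
    return json.loads(raw) if raw else None

if __name__ == "__main__":
    add_implicit_label("generated.png", "generated_labeled.png", provider="example-ai-service")
    print(read_implicit_label("generated_labeled.png"))
```

An explicit label, by contrast, would be rendered visibly on the content itself, such as a caption or watermark text drawn onto the image.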
App stores must verify whether developers provide AI-generated content services and review their labeling systems before approving them.
Implementation Challenges
China’s move follows similar efforts worldwide. The European Union’s AI Act includes rules for labeling AI-generated media, while various U.S. proposals aim to address deceptive synthetic content.
The rapid growth of AI technology has raised concerns about its potential misuse. Deepfakes and other AI-generated content can spread false information, violate copyright laws, and enable online scams when users can’t tell what’s real.
However, experts point to major challenges in putting these rules into practice. Real-time AI applications such as live streaming make it hard to insert labels without adding latency or degrading quality.
A report by the Information Technology and Innovation Foundation highlighted problems with current labeling technologies. Watermarks, digital fingerprints, and encrypted metadata can be removed or changed through editing. Different platforms use different standards, making detection unreliable.
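The fragility of metadata-based labels is easy to demonstrate: an ordinary re-save of an image typically drops custom metadata fields. The hedged sketch below assumes the hypothetical "AIGC-Label" field from the earlier example.

```python
# Sketch of how easily an implicit metadata label is lost: re-saving a PNG with
# Pillow without passing the original pnginfo typically drops custom text chunks,
# so the "AIGC-Label" marker from the previous example does not survive the copy.
from PIL import Image

img = Image.open("generated_labeled.png")
img.save("reencoded.png")  # ordinary re-encode; no pnginfo passed

print(Image.open("generated_labeled.png").info.get("AIGC-Label"))  # original label present
print(Image.open("reencoded.png").info.get("AIGC-Label"))          # None: label stripped
```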
Critics argue labels alone cannot fully address AI misuse. Early attempts, such as Meta’s “Made with AI” tag, ran into problems: Meta sometimes wrongly flagged real photos as AI-generated.
While the announcement doesn’t specify exact penalties for violations, enforcement will fall under existing Chinese internet regulations. Similar rules in Spain could result in fines of up to 35 million euros ($38.2 million) or 7% of a company’s global annual turnover.