“Metadata” also serves as an invisible digital labeling system embedded within files, enabling effective tracking and verification. The regulatory move responds to the rapid surge in AI adoption across China, which has prompted government agencies to call for more stringent oversight.
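The idea of an invisible, file-embedded label can be illustrated with a short sketch. The snippet below is a simplified illustration only: the keyword `ai-generated` and the label text are invented here for demonstration, not the format mandated by the Chinese regulations, and production provenance systems are considerably more elaborate. It embeds a provenance label into a PNG file as a standard `tEXt` chunk using only the Python standard library, and reads such labels back.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"  # fixed 8-byte PNG file signature

def make_chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble a PNG chunk: 4-byte length, 4-byte type, data, CRC-32 over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data +
            struct.pack(">I", zlib.crc32(ctype + data)))

def add_label(png: bytes, keyword: str, text: str) -> bytes:
    """Insert a tEXt metadata chunk immediately after the IHDR header chunk."""
    if not png.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    # Signature (8 bytes) + IHDR chunk (4 length + 4 type + 13 data + 4 CRC) = 33
    head, rest = png[:33], png[33:]
    payload = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    return head + make_chunk(b"tEXt", payload) + rest

def read_labels(png: bytes) -> dict:
    """Walk the chunk list and collect every tEXt key/value pair."""
    pos, labels = len(PNG_SIG), {}
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, value = png[pos + 8:pos + 8 + length].partition(b"\x00")
            labels[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # advance past length + type + data + CRC
    return labels
```

Because the label lives inside the file rather than on screen, it survives reposting as long as the file itself is untouched; this is also why the metadata-scrubbing services discussed below target exactly these embedded fields.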
Data from the China Internet Network Information Center (CNNIC) reveals that the number of Generative AI users in China reached 515 million as of June 2025. This represents a staggering increase of 266 million users compared to December 2024, effectively doubling the user base in a span of just six months.
Tangible solutions
Chinese regulatory bodies are concerned that synthetic images, videos, and audio clips could mislead the public or be weaponized for fraud. As Generative AI content continues to saturate social media, mandatory labeling has emerged as a strategy that restores transparency without stifling technological innovation. Short-video platforms such as Douyin, the Chinese counterpart of TikTok, and the video-sharing app Kuaishou have already implemented notification systems that prompt users to disclose whether their content was generated by artificial intelligence. Meanwhile, the online audio platform Ximalaya has integrated warnings, in both spoken and written form, to inform its audience.
Following four months of regulatory enforcement, major AI content generation platforms, including Doubao, DeepSeek, Qwen, and Yiyan, have applied “AI-generated” labels to more than 150 billion pieces of synthetic content spanning text, imagery, audio, and video. Concurrently, leading social media platforms have implemented explicit on-screen disclosures for more than 220 million AI-generated items.
A research team from Xi’an Jiaotong University observed that since these regulations took effect, user vigilance toward unfamiliar content has increased by nearly 40 percent. Furthermore, embedded metadata has enabled regulatory bodies to swiftly identify the specific generative tools used and trace dissemination paths, accelerating accountability measures. In one cross-border investigation into AI-generated fake news, for example, the time required to trace the source was cut from an average of 72 hours to just 12.
New challenges
As visible labeling becomes more prevalent, efforts to circumvent these markers have risen in tandem. Evasion services designed to strip AI-generated identifiers are now being openly advertised, ranging from basic tools priced at a mere 9.9 yuan (approximately 45 baht) to sophisticated, bespoke modification services costing thousands of yuan. Experts observe that these methods have evolved from simple image cropping into complex, multi-layered processes involving metadata scrubbing, repetitive file format conversions, and cross-platform reposting. Such tactics ensure that content flagged on one platform can bypass detection on another.
A group of experts and observers further noted that penalties for violating labeling mandates remain ambiguous, while the format of the markers themselves lacks a unified standard.
Regarding this issue, Jiang Yanshuang, assistant researcher at the Institute of Social Education and Development at Beijing Normal University, pointed out that the oversight technologies most platforms currently use remain fragile. She called for urgent steps to establish standardized AI-labeling technologies and clearer technical requirements tailored to specific platforms and content types, measures she considers essential to closing the regulatory loopholes caused by technical inconsistencies.

What is the status of Thailand’s AI regulation?
In June 2025, the Ministry of Digital Economy and Society (MDES), through the Electronic Transactions Development Agency (ETDA), unveiled the Draft AI Legal Principles, which emphasize the regulation of high-risk applications. The principles push for new standards to ensure that AI use remains responsible and does not violate rights.
Dr. Sak Segkhoonthod, Senior Advisor at the Electronic Transactions Development Agency (ETDA), explained that the draft Artificial Intelligence law is designed around three key considerations: (1) unlocking legal issues that act as barriers to the development or application of AI; (2) establishing clear promotional measures, such as grants, tax reductions, and incentives; and (3) providing protection, or AI governance.
The core takeaway from studying international AI law enforcement is that establishing AI governance is not limited to legal frameworks alone. It can be implemented across various levels, ranging from Guidelines and Best Practices to Soft Law and Hard Law. The intensity of oversight must be appropriately calibrated to the specific nuances of each issue, while simultaneously accounting for the rapid pace of technological change. Consequently, enacting a law that covers every risk at every level, a “one-size-fits-all” approach, may not be the most suitable strategy for Thailand. The primary challenge lies in regulating and overseeing AI without hindering technological development.
Nonetheless, the “Draft AI Legal Principles” will encompass several key areas: establishing fundamental principles for the legal system; implementing necessary legal promotion measures, such as Text and Data Mining and Regulatory Sandboxes; and defining risk management frameworks, particularly for High-Risk AI applications. The draft also focuses on promoting responsible AI use through context-appropriate oversight mechanisms, empowering sector-specific regulators to issue flexible and timely control or promotional measures, and designating organizations to support the implementation of the legislation.
This draft law is not merely a legal instrument, but a means of striking a “balance” between leveraging AI technology for social benefit and protecting the safety, rights, and human dignity of individuals in the digital age. It further aims to ensure that citizens from all sectors can participate in shaping the future of AI in Thailand.
While Thailand’s AI law has not yet been fully finalized, the use of AI to generate content for disseminating fake news remains subject to existing laws. According to information from the Royal Thai Police (RTP), creating trends or soliciting likes through AI-generated false imagery and data that causes damage, incites public alarm, or spreads falsehoods is a criminal offense punishable by imprisonment for both creators and sharers.
Sources: Xinhua, cybernewscentre, ETDA and the Royal Thai Police