TL;DR
- Korea to mandate visible AI labels on synthetic ads beginning January 2026.
- Fake-expert and deepfake ads surged, prompting strict disclosure and enforcement rules.
- Platforms must enforce labeling, provide standardized tools, and prevent label deletion.
- New penalties include punitive damages up to five times actual harm.
South Korea is moving decisively to rein in the rapid spread of AI-generated advertising, unveiling a sweeping set of rules that will require clear and permanent labels on synthetic content starting January 2026. The mandate responds to a sharp rise in misleading digital promotions using deepfake doctors, fabricated experts, and AI-generated celebrity likenesses, particularly in the food and pharmaceutical categories.
Government data underscores the acceleration. The Ministry of Food and Drug Safety (MFDS) identified nearly 97,000 AI-generated food and drug ads in 2024, a dramatic climb from roughly 59,000 in 2021. Regulators say these ads frequently present fabricated medical authority figures or manipulated testimonials, often pushing unapproved products or overstating health benefits. Existing platform-level guidelines have failed to contain the problem, prompting the government to implement enforceable national requirements.
Mandatory Labels Across the Content Lifecycle
At the center of the new policy is a strict disclosure rule: any person or entity involved in producing, editing, or uploading AI-generated images or videos must label the material as synthetic. The obligation applies across the entire content pipeline, closing loopholes that previously allowed creators or intermediaries to avoid responsibility.
Crucially, the government is also imposing a ban on removing or altering AI labels, a measure intended to prevent shell accounts, offshore advertisers, or automated tools from stripping out disclosures before ads are distributed. The rule draws directly from Korea’s broader AI Framework Act, which requires visible labeling for synthetic content that is “indistinguishable from reality.”
The Korea Communications Commission (KCC) has separately recommended that platforms adopt visible watermarks and digital markers, including standardized AI logos, to ensure that users can immediately recognize synthetic material across websites, social feeds, and video platforms.
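As a concrete illustration of what a visible marker might look like in practice, the sketch below stamps a disclosure badge onto an ad image using the Pillow library. The label text, placement, and styling are assumptions; the KCC's standardized logo format is not specified here.

```python
# Illustrative only: stamps a visible "AI-generated" badge onto an ad image.
# The actual standardized KCC logo/format has not been published; the text,
# position, and styling below are assumptions.
from PIL import Image, ImageDraw, ImageFont

LABEL = "AI-generated"  # placeholder for the standardized disclosure text

def stamp_ai_label(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGBA")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()

    # Measure the label, then draw an opaque badge in the bottom-right
    # corner so the disclosure stays readable against any background.
    left, top, right, bottom = draw.textbbox((0, 0), LABEL, font=font)
    pad = 6
    w, h = right - left, bottom - top
    x0 = img.width - w - 2 * pad
    y0 = img.height - h - 2 * pad
    draw.rectangle([x0, y0, img.width, img.height], fill=(0, 0, 0, 255))
    draw.text((x0 + pad, y0 + pad), LABEL, fill=(255, 255, 255, 255), font=font)

    img.convert("RGB").save(dst_path)

stamp_ai_label("ad_creative.png", "ad_creative_labeled.jpg")
```

A baked-in pixel overlay like this is one way to satisfy a "no label removal" rule, since stripping it would require re-editing the image itself rather than deleting a metadata field.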
Platforms Share Enforcement Duties
Online platforms, ranging from Korea’s own Naver and Kakao to global giants such as YouTube, TikTok, and Instagram, will be required to ensure creators follow disclosure rules and must offer standardized labeling tools and notifications. Authorities expect platforms to integrate automated checks capable of identifying unlabeled AI-generated content before and after upload.
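To illustrate the kind of automated check regulators envision, here is a minimal, hypothetical upload-gate sketch in Python. The metadata fields (`declared_synthetic`, `has_visible_label`) and the detector stub are assumptions for illustration, not any platform's actual schema.

```python
# A minimal sketch of a platform-side upload gate, under assumed field names.
# "declared_synthetic" and "has_visible_label" are hypothetical metadata flags;
# a real platform would define its own schema and pair this with ML detection.
from dataclasses import dataclass

@dataclass
class AdCreative:
    creative_id: str
    declared_synthetic: bool   # creator's disclosure at upload time
    has_visible_label: bool    # result of a label-presence check on the asset

def detector_flags_synthetic(creative: AdCreative) -> bool:
    """Placeholder for an ML deepfake/synthetic-media detector."""
    return False  # assumption: wired to a real detection service in practice

def review_upload(creative: AdCreative) -> str:
    # Declared synthetic but missing the required visible label: reject.
    if creative.declared_synthetic and not creative.has_visible_label:
        return "reject: synthetic content declared but no visible label"
    # Undeclared content that a detector flags: hold for human review.
    if not creative.declared_synthetic and detector_flags_synthetic(creative):
        return "hold: suspected unlabeled AI content, route to human review"
    return "accept"
```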
If an advertisement features a digitally generated doctor, nutritionist, clinician, or domain expert, it must be explicitly labeled as a virtual human. Without that disclosure, regulators will deem the ad deceptive and subject to enforcement actions.
Penalties for violations will escalate beginning in 2026, including punitive damages of up to five times the financial harm caused, one of the toughest AI-advertising liability provisions in the world. Officials say the harsher liability structure is intended to deter repeat offenders, especially those operating large-scale ad farms built on synthetic media.
Compliance and Industry Implications
Korea’s approach broadly aligns with frameworks emerging in the EU and China, but it is more granular. Both regions have mandated transparency for AI-generated content, yet South Korea’s rules stand out for assigning specific labeling duties to every participant in the content lifecycle and for prohibiting label removal outright.
The crackdown also opens a significant opportunity for ad-tech, compliance, and deepfake-detection vendors. With South Korea’s online advertising market estimated at USD 6.5 billion, regulators expect platforms to adopt C2PA metadata standards, automated audit tools, real-time detection systems, and verification APIs that can scale to high volumes of AI-generated media.
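For a sense of how a compliance tool might use C2PA provenance data, the sketch below parses an exported manifest (for example, JSON produced by the open-source c2patool CLI) for the IPTC trainedAlgorithmicMedia source type that C2PA uses to mark AI-generated media. Exact manifest shapes vary by generator, so treat the field paths as assumptions.

```python
# A sketch of checking a C2PA manifest for an AI-generation marker.
# Assumes the manifest store has already been exported to JSON (e.g. via the
# c2patool CLI); field paths follow C2PA/IPTC conventions, but exact shapes
# vary by generator, so this is illustrative rather than definitive.
import json

AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def manifest_declares_ai(manifest_json: str) -> bool:
    store = json.loads(manifest_json)
    # Walk every manifest's assertions, looking for a c2pa.actions entry
    # whose action carries the IPTC "trained algorithmic media" source type.
    for manifest in store.get("manifests", {}).values():
        for assertion in manifest.get("assertions", []):
            if assertion.get("label") == "c2pa.actions":
                for action in assertion.get("data", {}).get("actions", []):
                    if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                        return True
    return False
```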
Platforms with major reach in Korea, including YouTube (over 43 million users), Instagram (23.6 million users), and TikTok (7.18 million users), will likely integrate detection services capable of flagging AI-altered faces, voices, or avatars before content goes live. Startups specializing in synthetic media integrity may find fertile ground as advertisers prepare for the 2026 enforcement deadline.
Legislative Updates Ahead
The government plans further revisions to advertising and consumer-protection laws in 2026 to reinforce the new mandate and close enforcement gaps. Officials say the overarching goal is to balance innovation with safety, ensuring AI tools can continue to advance while preventing their misuse in sectors like health and wellness, where misleading claims can have real-world consequences.
As synthetic media becomes increasingly sophisticated, and increasingly abused, South Korea is positioning itself as an early and assertive regulator, setting standards that could influence global norms for AI transparency in advertising.