Key Highlights
- Britain experienced approximately eight million deepfake incidents last year, nearly four times the volume recorded in 2023
- Online betting platforms saw fraud escalate by 73% from 2022 through 2024, with synthetic media bypassing verification systems
- A 2025 assessment determined British law enforcement lacks sufficient resources to combat AI-driven fraud operations
- Internal Meta documents revealed approximately $16 billion in 2024 advertising revenue originated from fraudulent schemes and prohibited products
- Regulatory measures targeting deepfakes under Britain’s Online Safety Act are progressing, though enforcement powers over fraudulent advertisements won’t arrive before 2027
Britain confronts an escalating wave of synthetic media fraud powered by artificial intelligence, and regulatory authorities are scrambling to respond. Evidence continues to mount that deepfake-based scams have evolved into organized, large-scale operations, hitting the online betting sector especially hard.
Approximately eight million deepfake incidents occurred across Britain last year, nearly four times the volume documented in 2023, according to data from the Home Office’s Accelerated Capability Environment.
Research compiled in 2026 for the AI Incident Database characterized this fraudulent activity as having achieved “industrial” proportions. Fred Heiding, a Harvard University scholar examining AI-powered fraud schemes, cautioned that “the worst is yet to come.”
Digital betting operators have suffered particularly severe impacts. Data from Gambling IQ, an industry analytics firm, revealed that fraud targeting this sector increased 73% throughout the 2022-2024 period.
Fraudsters exploit deepfake technology to circumvent identity verification protocols and execute large-scale promotional abuse across betting websites. The technology enables criminals to fabricate convincing impersonations through sophisticated audio duplication and synthetic video generation.
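To make the promotional-abuse mechanism concrete, here is a minimal illustrative sketch in Python. Everything in it is hypothetical (the record fields, the `flag_promo_abuse` helper, and the threshold are inventions for illustration, not any operator’s actual system): once deepfakes defeat face- and document-based checks, operators may fall back on correlating residual signals, such as payment fingerprints, across signups.

```python
from collections import defaultdict

# Hypothetical signup records: deepfake-assisted fraud rings often reuse
# underlying payment or device details across "different" identities.
signups = [
    {"account": "a1", "payment_hash": "p9", "device_id": "d1"},
    {"account": "a2", "payment_hash": "p9", "device_id": "d2"},
    {"account": "a3", "payment_hash": "p9", "device_id": "d1"},
    {"account": "a4", "payment_hash": "p4", "device_id": "d7"},
]

def flag_promo_abuse(records, threshold=2):
    """Flag payment fingerprints appearing on more than `threshold`
    signups -- a crude proxy for coordinated bonus abuse."""
    by_payment = defaultdict(list)
    for r in records:
        by_payment[r["payment_hash"]].append(r["account"])
    return {h: accts for h, accts in by_payment.items() if len(accts) > threshold}

print(flag_promo_abuse(signups))  # {'p9': ['a1', 'a2', 'a3']}
```

The point of the sketch is that such correlation only works when identities leave a shared trace; synthetic media is attractive to fraudsters precisely because it erodes the upstream identity checks these fallbacks depend on.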
Police Resources Prove Insufficient
Research published in 2025 by the Alan Turing Institute concluded that British law enforcement remains “inadequately equipped to deal with AI-fuelled fraud.” Joe Burton, Professor of Security and Protection Science at Lancaster University, authored the assessment.
Burton’s assessment was blunt. “AI-enabled crime is already causing serious personal and social harm and big financial losses,” he stated.
He advocated for providing police forces with enhanced capabilities to dismantle criminal networks. Without such improvements, he cautioned, illicit applications of AI technology will proliferate unchecked.
The UK Gambling Commission currently places primary responsibility for crime prevention on platform operators, who must develop and implement their own anti-fraud protocols and safeguards.
Yet with AI technology advancing at breakneck speed, betting platforms cannot tackle the challenge alone. Many AI-facilitated gambling scams originate entirely outside the boundaries of regulated services.
Social networking platforms serve as primary distribution channels for these fraudulent schemes. Algorithmic systems can inadvertently magnify deceptive material by prioritizing user engagement above factual accuracy.
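To illustrate that amplification mechanism in the abstract (this is not any platform’s actual ranking system; the `Post` fields and scores below are invented for the example), consider a toy feed ranker: sorting purely by engagement surfaces the scam, while discounting by an estimated trust score demotes it.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    engagement: float  # hypothetical interactions per impression
    trust: float       # hypothetical 0..1 accuracy/provenance estimate

feed = [
    Post("Deepfake celebrity endorses unlicensed casino", engagement=0.12, trust=0.05),
    Post("Regulator warning about deepfake casino scams", engagement=0.03, trust=0.95),
]

# Engagement-only ranking: the scam wins because it provokes more interaction.
print([p.title for p in sorted(feed, key=lambda p: p.engagement, reverse=True)])

# Trust-weighted ranking: low-provenance content is demoted.
print([p.title for p in sorted(feed, key=lambda p: p.engagement * p.trust, reverse=True)])
```

The problem is structural: any objective that rewards raw interaction will, absent some counterweight, boost material engineered to provoke clicks.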
During November 2025, Reuters disclosed that Meta’s proprietary internal records indicated roughly 10% of its 2024 revenue stream—approximately $16 billion—derived from advertisements connected to fraudulent operations and prohibited merchandise.
Just last week, Reuters found that Meta had failed to remove fraudulent content from its British platforms on more than 1,000 occasions within a seven-day span. The scams included unlicensed digital casinos employing deepfake technology to lure victims.
Legislative Response Remains Sluggish
Ofcom has begun developing regulatory frameworks governing deepfakes pursuant to the 2023 Online Safety Act and the 2025 Data Use and Access Act. However, the watchdog’s published guidance exposes significant gaps in existing oversight mechanisms.
Certain AI conversational systems fall entirely outside regulatory jurisdiction. Because they operate as self-contained environments, they do not meet the statutory definitions of search services or user-to-user platforms.
Although the Online Safety Act began taking effect in March 2025, the power to tackle paid fraudulent advertising has been postponed until at least 2027. That leaves enforcement reliant on voluntary cooperation from corporations such as Meta.
Neither the Financial Conduct Authority nor Ofcom currently has direct jurisdiction over these advertisements. Material generated without external inputs, including synthetic imagery and video, frequently escapes oversight unless it meets particular criteria.
The burden of combating deepfake fraud thus remains concentrated on platforms and individual users, even though the technological infrastructure enabling these threats lies beyond their control.