News In Brief Business and Economy

Meta Platforms Urged to Strengthen Oversight of AI-Generated Fake Videos

11 Mar 2026 · 5 min read

News Synopsis

Meta Platforms is facing increasing pressure to strengthen its oversight of artificial intelligence-generated content after its own advisory body warned that fake AI videos are spreading rapidly across social media platforms.

The company’s 21-member Oversight Board raised concerns about the growing presence of misleading AI-generated media on Meta’s platforms, which include Facebook, Instagram and WhatsApp.

The board criticised Meta for failing to label an AI-generated video that falsely claimed to show widespread damage in Haifa, Israel, caused by Iranian forces.

The advisory group warned that the rise of such misleading content during global conflicts could undermine public trust in online information and make it increasingly difficult for users to differentiate between authentic content and fabricated media.

Oversight Board Raises Alarm Over Fake AI Content

Concerns Over “Proliferation” of AI-Generated Videos

The Oversight Board said the increasing spread of AI-generated videos online poses a serious challenge for social media platforms.

According to the board, Meta needs to take stronger steps to tackle the "proliferation" of AI-generated misinformation.

It warned that the growth of such content, particularly during geopolitical crises, could damage the credibility of information circulating online.

The board said that the rising number of manipulated videos linked to military conflicts had "challenged the public's ability to distinguish fabrication from fact ... risking a general distrust of all information."

Controversial AI Video Triggered the Review

Fake Video Depicted Damage in Haifa

The case that prompted the board’s review involved a video uploaded to Facebook last June by an account based in the Philippines that described itself as a news outlet.

The video falsely portrayed extensive damage in the Israeli city of Haifa during tensions involving Iran.

Despite several complaints from users pointing out that the video was generated using artificial intelligence and depicted events that never occurred, Meta did not remove or label the content.

The video eventually garnered almost 1 million views before the case reached the Oversight Board.

Video Part of Larger Wave of AI Misinformation

According to a news agency's analysis conducted at the time, the Haifa video was part of a broader trend of AI-generated content appearing online during the conflict.

Multiple fake videos — some promoting pro-Israel narratives and others supporting pro-Iran viewpoints — circulated widely across social media platforms.

Together, the misleading videos accumulated at least 100 million views, highlighting the rapid spread of AI-generated misinformation.

Oversight Board Criticises Meta’s Moderation Approach

Delay in Responding to Complaints

The Oversight Board noted that Meta did not initially take action despite receiving several user complaints about the video.

It was only after a Facebook user appealed directly to the board that Meta responded to the concerns.

When the company eventually reviewed the case, it argued that the video did not require labeling or removal because it did not "directly contribute to the risk of imminent physical harm."

Board Says Current Threshold Is Too High

The advisory body rejected Meta’s reasoning, stating that the company’s standards for labeling AI content are insufficient.

According to the board, the threshold Meta requires before labeling such content is too high, particularly when dealing with misinformation linked to armed conflicts.

The board concluded that the video amounted to high-risk misinformation and should have carried a "high risk AI label."

Calls for Stronger AI Labeling Policies

Current System Relies on User Disclosure

At present, Meta largely depends on users to disclose whether the content they upload has been generated using artificial intelligence.

If creators fail to provide that information, the company typically waits for users to report the content before moderation teams review it.

The Oversight Board said this reactive approach is not sufficient given the speed at which AI content spreads online.

Need for More Proactive Moderation

The board argued that Meta should proactively identify and label fake AI videos.

It said the company should be labeling AI-generated content "much more frequently."

The current system, the board added, was "neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict where there is heightened engagement on the platform."

Meta Responds to Oversight Board Recommendations

Meta said it would comply with the board’s ruling regarding the Haifa video and label the content within seven days.

The company also stated that it would apply the board’s recommendations in similar cases in the future.

In its response, Meta said the guidance would apply when the platform encounters "identical" content that appears "in the same context" as the video reviewed by the board.

Growing Debate Over AI Misinformation on Social Media

The controversy highlights broader concerns about the role of generative AI technologies in spreading misinformation online.

AI tools are becoming increasingly capable of creating realistic videos, images and audio recordings that can mislead viewers.

Technology companies around the world are now facing pressure from regulators, policymakers and civil society groups to develop stronger safeguards to detect and label such content.

As AI technology becomes more widely accessible, experts warn that the risk of manipulated media influencing public opinion during elections, conflicts and major global events will continue to increase.

Conclusion

The Oversight Board’s criticism underscores the growing challenges that social media platforms face in managing AI-generated misinformation. As generative AI tools become more powerful and widely available, fake videos and manipulated content can spread rapidly across digital platforms, especially during politically sensitive moments or armed conflicts.

The Haifa video controversy illustrates the limitations of relying primarily on user reporting and voluntary disclosure for AI content moderation. Experts argue that platforms like Meta will need to adopt more proactive monitoring systems and clearer labeling policies to maintain trust among users.

Going forward, the ability of technology companies to effectively detect, label and manage AI-generated media will likely play a critical role in shaping the future of online information integrity and digital communication.

TWN Exclusive