In a decisive move, Elon Musk has taken steps to curb the spread of artificial intelligence-generated videos of the conflict in the Middle East on the platform X. These AI-crafted videos have become a source of concern because of their potential to mislead audiences about real-world events.
In response, X has announced a strict policy targeting users who distribute such AI-generated war content without proper labeling. Effective immediately, those who violate this rule will face a 90-day suspension from X’s monetization program. This measure underscores the platform’s commitment to maintaining the integrity of information shared on its network.
Nikita Bier, the company’s head of product, emphasized the importance of this policy on Tuesday. He highlighted the ease with which today’s AI technologies can fabricate misleading content, posing a significant challenge to ensuring that users receive accurate and reliable information, especially during times of conflict.
Bier remarked, “During times of war, it is critical that people have access to authentic information on the ground.” This statement reflects the company’s dedication to promoting transparency and truthfulness in the digital age.
The introduction of this policy comes on the heels of heightened tensions in the region, following military actions by the US and Israel against Iran on Saturday. This escalation has led to a surge in misleading AI-generated posts flooding social media, further complicating the landscape of information access during the conflict.
AI war fakes flood social media after strikes
One fake video included shots of supposed Israeli soldiers weeping in fear, purportedly at an Iranian strike. That clip has more than 1.4 million views.
Another fabricated clip, viewed by more than 2.1 million people, showed Dubai’s Burj Khalifa completely engulfed in flames after supposedly being attacked by Iran.
A separate video posted on X claimed to show ‘Iranian missiles hit[ting] central Israel,’ with footage appearing to depict a massive blast on a building.
In reality, the clip was AI-generated — and it was marked as such by users on X.
The company said Tuesday that AI-made content would be flagged either through crowdsourced notes from users or via metadata and other signals indicating the use of generative AI tools.
Another video shared on X falsely claimed that Iranian ballistic missiles had obliterated ‘everything in their path’ in Tel Aviv.
The AI-generated footage showed what appeared to be a barrage of rockets raining down on the Mediterranean city.
Explosions and clouds of smoke could be seen in the distance, as the user apparently filming the footage zoomed in.
In another post, an attack on an unnamed Israeli airport was described and apparently captured on video.
However, the seemingly terrifying scenes were actually entirely fabricated by AI.
How to spot AI-generated war footage online
Some ways to spot whether a video has been generated by AI include low picture quality and very short durations, according to the BBC.
Some AI tools also rely on out-of-date information, which can surface in videos and depict locations inaccurately.
Strange textures or an almost airbrushed look can also be indicators of AI-generated content, per the Better Business Bureau.
Physical inconsistencies and unnatural shadows or lighting are also tells.
Strangely enough, typos can actually be an encouraging sign – because humans are likelier to make them than machines.
Musk has predicted that AI-made video is the future of content, even as his own platform seeks to combat misinformation propagated by the technology.
‘Most of what people consume in five or six years – maybe sooner than that – will be just AI-generated content,’ Musk said in October.
X rolls out ‘Made with AI’ labels for posts
Under the new guidelines, X users will need to add the ‘Made with AI’ label by pressing the menu on the post and selecting Add Content Disclosures.
The move was praised by the Trump administration.
‘This is a great complement to X’s community notes system, which results in less ‘reach’ (thus monetization) for content annotated as inaccurate,’ Sarah Rogers, the under secretary of state for public diplomacy, said.
Rogers added: ‘You don’t need a Ministry of Truth to incentivize truth online.’
The shift comes as the company continues to tighten its AI guardrails.
Last month, X announced that it would make tweaks to its AI tool Grok in order to prevent overly sexualized photos from being created.
Grok had previously come under fire for posting about antisemitic tropes and claims of white genocide.