The internet is now filled with fabricated videos depicting direct confrontations between civilians and ICE agents: a school principal wielding a bat, a diner throwing hot noodles, citizens enforcing Fourth Amendment rights. These clips, though clearly AI-generated, have gone viral, reflecting a growing trend where digital fantasy fuels real-world resistance. This surge in AI-made content comes after two US citizens – Renee Nicole Good and Alex Pretti – were fatally shot by federal agents during the Trump administration’s crackdown on immigration in Minneapolis.

Why This Matters: The rise of AI-altered reality is not just about entertainment. It’s about control of the narrative. When official sources are distrusted, people turn to their own means of truth-telling, even if that means fabricating it. This creates a dangerous feedback loop: distrust in real footage, increased reliance on fakes, and a further erosion of shared reality.

The Appeal of Digital Justice

These videos offer a cathartic alternative to the brutal reality of ICE’s actions. They imagine a world where accountability exists, where agents face immediate consequences for abuses of power. The clips tap into deep-seated anger and frustration with a system perceived as unjust. AI creator Nicholas Arter notes that this is a pattern repeating across tech shifts: people use the tools at hand to express emotions, fears, and resistance.

One prolific poster, operating under the name Mike Wayne, has uploaded over 1,000 such videos since January 7th, often showing people of color standing up to ICE. These clips present a counter-narrative where agents are arrested, slapped, or ejected from churches by defiant citizens. One viral clip shows ICE agents being confronted at a sporting event, racking up 11 million views in 72 hours.

The Double-Edged Sword

While these videos may feel empowering, they also distort reality. Experts warn that they can reinforce existing biases, fuel skepticism toward authentic footage, and even undermine legitimate movements. Joshua Tucker of NYU’s Center for Social Media, AI, and Politics suggests the creators’ goal is to flood social media with anti-ICE content in hopes of virality and political capital.

The Trump administration has also weaponized AI manipulation. A week ago, the White House posted an altered photo of Nekima Levy Armstrong after her arrest at a protest, labeling her a “far-left agitator.” The episode shows how easily AI can be used to discredit opponents and reinforce preferred narratives.

The Future of Resistance

AI is already deeply embedded in online political influence. According to a recent study by Graphite, more than half of new online articles are now AI-generated. As resistance movements adapt, AI will become unavoidable, both as a tool for empowerment and as a weapon against it. Filmmaker Willonious Hatcher argues that these videos expose a deeper truth: people are forced to fabricate liberation because the real thing remains out of reach.

“The oppressed have always built what they could not find… These videos are not delusion. They are diagnosis.”

However, the proliferation of AI-generated content risks undermining the very evidence needed to hold authorities accountable. Video evidence was crucial in documenting ICE’s actions and disproving false narratives surrounding the deaths of Good and Pretti. Yet as the flood of fakes grows, trust in all footage erodes. Even verified clips, such as one of Alex Pretti confronting ICE before his death, now draw accusations of being AI-generated.

The core issue is that AI’s ability to manipulate perception now outpaces our ability to verify reality. This is not just a technical problem; it is a fundamental crisis of trust in the digital age.