A disturbing trend is rapidly escalating online: the weaponization of AI-generated deepfakes to spread misinformation, reinforce stereotypes, and exploit Black creators. The case of Livie Rose Henderson, the DoorDash delivery driver who alleged a sexual assault, became a focal point for this abuse when AI-generated videos surfaced using the likeness of Black journalist Mirlie Larose to discredit her claims and justify her firing.
The Rise of Digital Blackface
The phenomenon, termed “digital blackface” by culture critic Lauren Michele Jackson, involves the appropriation of Black imagery, slang, and culture in online content. The practice has been amplified on platforms like TikTok, where short-form video and AI tools such as Sora 2 make it easier than ever for non-Black users to adopt racialized personas through deepfakes. One bot account, uimuthavohaj0g, posted AI-generated videos using Larose’s face that parroted arguments minimizing Henderson’s allegations and justifying DoorDash’s decision to terminate her employment.
The DoorDash Controversy as a Case Study
Henderson was fired for sharing customer information online, but the backlash intensified when AI-generated videos falsely implicated her in privacy violations. TikTok removed her original footage, then repeatedly deleted re-uploads, racking up multiple strikes against her account. Meanwhile, deepfakes using Larose’s face and the likenesses of other Black creators circulated, spreading misinformation and reinforcing harmful stereotypes. Larose’s likeness appeared in at least 19 AI-generated videos, which TikTok declined to remove until public outcry forced action.
AI-Generated Content Fuels Misinformation
The problem extends beyond the DoorDash case. AI-generated content has been used to spread false narratives about Black communities, including fabricated clips of Black women complaining about welfare benefits. OpenAI’s Sora 2, despite policies against impersonation, has been used to produce content steeped in racist, sexist, and classist bias. OpenAI spokesperson Niko Felix said the company is working to detect and remove such content, but enforcement remains a challenge.
Legal and Regulatory Responses
Some Black content creators, like Zaria Imani, are pursuing legal action under copyright law. The Take It Down Act, signed in May 2025, criminalizes the distribution of nonconsensual intimate imagery, including AI-generated deepfakes. However, advocacy groups like Data for Black Lives argue that holding tech companies accountable will require systemic change, not just individual lawsuits.
“This is about harnessing violent stereotypes of Black people for political agendas. It’s social engineering to drive engagement and chaos,” says Yeshimabeit Milner, founder of Data for Black Lives.
The Future of AI Accountability
The rise of AI-generated deepfakes targeting Black creators underscores the need for stronger regulation and enforcement. Without collective action and legislative intervention, the misinformation and exploitation will continue. The digital landscape requires not just technological solutions but a fundamental shift in how platforms address algorithmic bias and protect marginalized communities.