Watch out for AI fakes and misinformation in the wake of ICE shooting
MINNEAPOLIS — The fatal shooting of 37-year-old Renee Good by a federal agent in Minneapolis on Jan. 7 has already spurred a flood of online misinformation that may be shared unwittingly by people with good intentions.
Here are three examples and some tips for telling fake videos and images from real ones:
Washington Post reporter Drew Harwell published a video about AI-generated images, spreading widely on social media, that purport to “remove” the agent’s mask. As Harwell notes, AI tools have no idea what the agent’s face actually looks like; they are simply guessing.
A second AI-generated image purports to show the agent speaking to the victim.
“People pass along crap like this on social media hours after a news event all the time,” Harwell said. “It makes it so much harder for people to understand what’s going on.”
An AI-generated image making the rounds on Facebook and elsewhere purports to show a huge protest spanning many blocks in downtown Minneapolis at dusk, with U.S. Bank Stadium in the background. Upon closer examination, anyone familiar with the geography of the city will recognize glaring inaccuracies in the image, including buildings that don’t exist and gibberish on signs.
It’s true that thousands of people gathered at a vigil for Good near the scene of the shooting Wednesday evening before marching to Lake Street in south Minneapolis, but the crowd never made its way downtown. There was a march in downtown Minneapolis earlier in the day, well before sunset.
Our photographer at the scene captured what the vigil really looked like Wednesday night.
Shortly after Good was identified as the victim Wednesday, an image of a young woman with tousled hair and bright red lipstick, purported to be Good, spread widely across social media platforms.
An actual photo of Good originated in a 2020 Facebook post from the English Department at Old Dominion University in Virginia, which named Good (formerly Renee Macklin) as the winner of a poetry prize.
The misidentified woman was a runner-up in the contest.
In the heat of a breaking news event, it can be tempting to share dramatic information or images quickly without checking first to make sure they are real. Here’s what to look for to identify AI-generated images or other forms of misinformation.
Identify the source: Where did this image, video or information come from? Is it being shared by a reputable news source or by a random social media account? If you can’t easily identify the source or authenticity of an image or video, don’t share it.
AI technology is improving, but it makes mistakes: When looking at a photo or video, do you see things that are out of place? AI often has trouble including coherent and legible text in an image. Obvious misspellings or objects appearing where they don’t belong are clear signs an image was produced by AI. If something looks wrong, don’t share it.
Consider the perspective: AI-generated videos are often cinematic in appearance, with camera angles that are obviously different from videos taken by a person holding a cellphone.
Follow these rules for sharing information in a breaking news situation: OnTheMedia.org has produced a Breaking News Consumer’s Handbook with simple rules for what to share, or not share, when a big story breaks. The rules mostly amount to being skeptical, looking for the original source and being wary of things that appear to be designed to make you angry.
________
©2026 The Minnesota Star Tribune. Visit at startribune.com. Distributed by Tribune Content Agency, LLC.