Facebook flagged and removed emergency wildfire information as 'spam'

Facebook is reportedly taking down important safety information as wildfire season spreads across the western United States.

An investigation by The Washington Post found dozens of instances since June of the tech giant flagging and removing posts in areas with a total of 20 active fires. Posts often contained links to or screenshots from Watch Duty, a nonprofit fire safety app that provides live alerts of wildfire movements. Daily warnings to residents of fire-prone California towns from federal and state agencies, such as Cal Fire, were also removed, including posts made on dedicated fire update pages.

Many of these takedowns affect small-town community pages, vital resources for live updates and information sharing. According to the publication, volunteer responders, fire and sheriff departments, news stations, and disaster nonprofit workers have also been hit with takedowns across multiple states.

Facebook’s automatic explanation for the removals implies the posts were taken down not for spreading misinformation, but for violating community spam guidelines: “It looks like you tried to get likes, follows, shares or video views in a misleading way,” the removal notices told users. The Post found that most of the targeted takedowns involved external links, even links to official government sources. Another problem: none of the users were notified that their posts had been removed.


Facebook has long faced blowback for its role in spreading deceptive information, and it now works with third-party experts to combat political and public health misinformation, including by spotlighting reputable sources. As the platform’s misinformation policy reads: “We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm…We focus on slowing the spread of hoaxes and viral misinformation, and directing users to authoritative information.” But its Community Guidelines, under which these posts were removed, are broader. “We seek to prevent abusive tactics, such as spreading deceptive links to draw unsuspecting users in through misleading functionality or code, or impersonating a trusted domain,” the platform writes. “Online spam is a lucrative industry.”


Across the site, posts frequently slip through the platform’s moderation filters or are unfairly taken down and later reinstated, but emergency and fire workers told the Post that the problem has escalated in recent months, and in this case it could threaten lives. Facebook spokesperson Erin McPike told the publication that the company is “investigating this issue and working quickly to address it.”
