Whistleblower Says Facebook Is Allowing Spread of ISIS Propaganda
Facebook is auto-generating and helping to promote Islamic State-created propaganda even as CEO Mark Zuckerberg touts the gains his company has made taking extremist content offline, a whistleblower alleges in a complaint filed Thursday with the Securities and Exchange Commission.
The social media company is likely violating securities laws prohibiting companies from misleading shareholders and the public, according to a petition filed by the National Whistleblower Center. The complaint includes a study showing that Facebook's auto-generation feature produced videos for ISIS terrorists detailing their exploits over the past year.
One video begins with a photo of the black flags of jihad and then flashes highlights of a year of social media posts from a user calling himself “Abdel-Rahim Moussa, the Caliphate.” It then shows plaques of anti-Semitic verses and a picture of men carrying more jihadi flags as they burn the American flag.
One profile of an al-Qaida-affiliated group listed the user’s employer as Facebook.
The video ends with Facebook’s familiar salutation, “Thanks for being here, from Facebook,” before flashing the company’s thumbs-up image.

Researchers monitored the Facebook pages of users in 2018 who affiliated themselves with groups the U.S. has designated as terrorist organizations.
Nearly 38 percent of the posts with symbols of extremist groups were removed, their research showed. Much of the banned content cited in the study — an execution video and images of severed heads — remained on the platform as of May, media reports show.
The complaint comes as Zuckerberg claims his company has made a big dent in ISIS material.
“In areas like terrorism, for al-Qaida and ISIS-related content, now 99 percent of the content that we take down in the category our systems flag proactively before anyone sees it,” he said during an earnings call in April. Zuckerberg added: “That’s what really good looks like.”
The researchers involved in the project argue that many more extremist profiles likely dot the platform. “I mean, that’s just stretching the imagination to beyond incredulity,” Amr Al Azm, one researcher, told an AP reporter. “If a small group of researchers can find hundreds of pages of content by simple searches, why can’t a giant company with all its resources do it?”
Facebook said it’s working on the problem. “After making heavy investments, we are detecting and removing terrorism content at a far higher success rate than even two years ago,” a company representative said in a statement. “We don’t claim to find everything and we remain vigilant in our efforts against terrorist groups around the world.”
Al Azm’s researchers in Syria looked at 63 profile accounts that liked the auto-generated page for Hay’at Tahrir al-Sham, an al-Qaida group affiliated with al-Nusra Front.
Researchers confirmed that 31 of the profiles matched real people in Syria. Experts believe Facebook’s algorithmic tools are not up to the task of effectively moderating the company’s massive platform, which counts more than 2 billion monthly users.
Facebook’s artificial intelligence system is failing, according to Hany Farid, a digital forensics expert at the University of California, Berkeley, who advises the Counter-Extremism Project.
“The whole infrastructure is fundamentally flawed,” he told the AP. “And there’s very little appetite to fix it because what Facebook and the other social media companies know is that once they start being responsible for material on their platforms, it opens up a whole can of worms.” He’s not the only one warning people about ineffective AI systems.
Emily Williams, a data scientist and founder of Whole-Systems Enterprises, for one, argues that Facebook’s lack of transparency about the shortcomings of its AI and deep-learning tools makes it difficult for conservatives to understand why and how their content is being throttled.
Conservatives, meanwhile, argue that Facebook is targeting them because of politics. President Donald Trump’s social media director Dan Scavino Jr., for instance, was temporarily blocked in March from making public Facebook comments. The president then later told his Twitter followers that he is looking into complaints that big-tech companies are targeting conservatives.
Williams believes that Facebook’s algorithm likely has about a 70 percent success rate, meaning that roughly 30 percent of the time the company’s moderators are nixing conservatives who share provocative content that the Silicon Valley company does not actually prohibit.
A version of this article appeared on The Daily Caller News Foundation website.