Social media companies have been accused of failing to act on user reports of anti-Semitism after new research found that no action was taken over more than 80% of posts containing such abuse.
The Centre for Countering Digital Hate (CCDH) collected and reported 714 posts containing anti-Jewish hate, which it said had been viewed more than 7.3 million times across Facebook, Instagram, TikTok, Twitter and YouTube.
But the organisation said 84% of the reported posts were not acted upon, with Facebook performing worst by failing to act on 89% of the harmful content reported to it.
The CCDH accused the platforms of failing to enforce their own rules and allowing their sites to become “safe places to spread racism and propaganda against Jews”.
According to the report, 80% of posts containing Holocaust denial and 70% of posts identified as neo-Nazi were not acted upon despite clearly being in breach of platform rules around hateful content.
It said Instagram, TikTok and Twitter also allowed anti-Semitic hashtags to be used, with those used in the posts identified by the CCDH gaining more than 3.3 million impressions.
TikTok removes just 5% of accounts that directly racially abuse Jewish users, the figures showed.
CCDH chief executive Imran Ahmed said: “Our research concludes that the platforms are failing to remove hateful and anti-Semitic content even after it is specifically reported and flagged.
“Our methodology sidesteps debates about algorithms and claims by the companies about automated hate removal that they refuse to have independently verified.
“Instead, we measured the effectiveness of the platforms’ opposition to anti-Semitism by assessing what they do with user reports of anti-Jewish hatred.
“We believe this sample to be a fraction of the anti-Semitic content hosted on major platforms and endemic to Big Tech’s failure to address the hatred that its platforms host.
“Platforms must aggressively remedy their moderation systems, which have been proven to be insufficient, and governments must find ways to hold platforms accountable for their failures to act.”
Mr Ahmed said platforms must remove all groups and hashtags linked to anti-Semitism and close accounts that send abuse to Jewish people.
He also called for better training of moderators to more effectively find and remove anti-Semitism and for governments to take firmer action against companies which fail to protect their users from online abuse.
In response, Facebook, which also owns Instagram, said it had made progress on fighting anti-Semitism but that “our work is never done”.
“These reports do not account for the fact that we have taken action on 15 times the amount of hate speech since 2017, the prevalence of hate speech is decreasing on our platform and, of the hate speech we remove, 97% was found before someone reported it to us,” a company spokesman said.
“Hate has no place on our platform, and, given the alarming rise in anti-Semitism around the world, we have and will continue to take significant action through our policies by removing harmful stereotypes about Jewish people and content that denies or distorts the Holocaust, while educating people about it with authoritative information.”
A Twitter spokesman said: “We strongly condemn anti-Semitism in any form. We’re working to make Twitter a safer place for online engagement, and to that end improving the speed and scale of our rule enforcement is a top priority for us.
“We recognise that there’s more to do, and we’ll continue to listen and integrate stakeholders’ feedback in these ongoing efforts.”
TikTok said: “TikTok condemns anti-Semitism and does not tolerate hate speech.
“We work aggressively to combat hate by proactively removing accounts and content that violate our policies and redirecting searches for hateful ideologies to our community guidelines.
“Hateful behaviour is incompatible with TikTok’s creative and inclusive environment, and we are adamant about continually improving how we protect our community.”
YouTube has been contacted for comment.