The European Union is on the brink of forcing Facebook, YouTube, Twitter and other social media companies to block videos with hate speech.
On Tuesday, ministers on the European Council – which sets the EU’s political direction and priorities – announced that they’ve approved a set of proposals that would require such companies to block videos that promote terrorism, incite hatred, or contain toxic, violent content.
The European Parliament needs to agree the Council’s proposals before they become law, but it sounds like this issue is being put on the fast track.
If the proposals do become law, so-called “on-demand services” – a category that includes video-sharing platforms such as YouTube and social media services such as Twitter and Facebook – will be treated the same as TV broadcasters and held to the same rules.
One EU diplomat told Reuters that livestreaming such as Facebook Live would be exempt. The new law would apply just to videos stored on a platform.
Andrus Ansip, vice-president of the European Commission’s Digital Single Market initiative, said in a statement:
It is essential to have one common set of audiovisual rules across the EU and avoid the complication of different national laws. We need to take into account new ways of watching videos, and find the right balance to encourage innovative services, promote European films, protect children and tackle hate speech in a better way.
Europe has been threatening to take action over illegal content on social media for years.
In December 2015, Facebook, Twitter, and Google agreed with Germany’s demands and pledged to delete hate speech from their services within 24 hours in order to fight a rising tide of online racism in the wake of the country’s influx of refugees.
A 24-hour turnaround may sound quick, but Facebook, for one, had been laying the groundwork for months before that.
In September 2015, under pressure from Germany, the company launched a hate-speech task force.
In fact, before it even sat down with German justice minister Heiko Maas in September 2015, Facebook had agreed to do three things in the wake of the previous month’s anti-immigration violence:
Partner with FSM, a German non-profit that works with multimedia service providers.
Start the hate-speech task force, working with nonprofits, companies, and government officials, including Maas.
Establish a campaign to promote “counter speech” in Germany, drawing in experts from the UK and Scandinavia to develop ways to combat racism and xenophobia through discussions on social media.
In other words, the tools to battle hate speech are known. That doesn’t mean the companies have done a good job of removing illegal speech, though. A year ago, three French anti-racism groups declared that they would file legal complaints against Facebook, Twitter and YouTube for failing to remove hateful posts aimed at the Black, Jewish and LGBT communities.
Unlike in the US, where all three of the big video-disseminating platforms are based, hate speech is a crime in France and other EU countries.
It’s illegal to deny the Holocaust, to justify terrorism or to spread racist, anti-Semitic or homophobic messages. French law requires websites to take down such material and to tell authorities about it. Nonetheless, the French anti-racism groups said that “only a small minority” of hate speech was removed during a five-week social media survey, during which they tracked 586 examples of illegal content.
A year later, it’s still bad.
Earlier this month, British MPs threatened Facebook, Twitter and other social media companies with huge fines over their failure to remove hate speech, extremist content and child abuse material, calling their failure to do so “a disgrace”.
Yvette Cooper, chair of the Commons Home Affairs committee, said in response to a report by the committee into hate speech and extremism online:
These are among the biggest, richest and cleverest companies in the world… this isn’t beyond them to solve and yet they are failing to do so.
Germany has now approved a plan to fine social networks up to €50m per post for not taking down clearly illegal hate speech after 24 hours or more ambiguous material after a week.
One assumes that Mark Zuckerberg, for one, is not loving the news out of the EU. In the furore over hate speech and fake news, the Facebook CEO has repeatedly insisted, for years, that Facebook is “a tech company, not a media company”.
Facebook would far prefer to be seen as an impartial platform that doesn’t get its hands dirty by vetting content, but that attitude clearly hasn’t impressed Europe.