In 2018, the EU approved a series of questionable directives regarding Internet neutrality, which sparked fears of a growing censorship campaign within the bloc.
As early as March, the European Commission released a statement telling social media companies to remove illegal online terrorist content within an hour of notification. Companies that failed to comply risked facing EU-wide legislation. This ultimatum was part of a new set of recommendations applying to all forms of supposedly “illegal content” online. This content ranges “from terrorist content, incitement to hatred and violence, child sexual abuse material, counterfeit products and copyright infringement.”
The March 2018 recommendation followed a September 2017 Communication, which gave citizens and companies six months to respond to the proposed action.
The European Commission presented the new recommendations in the following manner:
“… The Commission has taken a number of actions to protect Europeans online – be it from terrorist content, illegal hate speech or fake news… we are continuously looking into ways we can improve our fight against illegal content online. Illegal content means any information which is not in compliance with Union law or the law of a Member State, such as content inciting people to terrorism, racist or xenophobic, illegal hate speech, child sexual exploitation… What is illegal offline is also illegal online.”
“Illegal hate speech” is then broadly defined by the European Commission as “incitement to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.”
The voluntary removal of such content was then reinforced by the introduction of legislation making it mandatory. On September 12th, 2018, the European Commission said the following in a press release:
“The Commission has already been working on a voluntary basis with a number of key stakeholders – including online platforms, Member States and Europol – under the EU Internet Forum in order to limit the presence of terrorist content online. In March, the Commission recommended a number of actions to be taken by companies and Member States to further step up this work. Whilst these efforts have brought positive results, overall progress has not been sufficient.”
According to the press release, the new rules will include draconian fines issued to internet companies who fail to live up to the new legislation:
“Member States will have to put in place effective, proportionate and dissuasive penalties for not complying with orders to remove online terrorist content. In the event of systematic failures to remove such content following removal orders, a service provider could face financial penalties of up to 4% of its global turnover for the last business year.”
In May 2016, the European Commission and Facebook, Twitter, YouTube, and Microsoft agreed on a “Code of Conduct on countering illegal hate speech online” (Google+ and Instagram also joined the Code of Conduct in January 2018). The Code of Conduct commits the social media companies to review and remove, within 24 hours, content that is deemed to be “illegal hate speech.” According to the Code of Conduct, when companies receive a request to remove content, they must “assess the request against their rules and community guidelines and, where applicable, national laws on combating racism and xenophobia…” In other words, the social media giants act as voluntary censors on behalf of the European Union.
The press releases suggest that this Code of Conduct is to be enshrined in legislation.
The outcry against these actions is not a defense of “hate speech,” but a reaction to the fluid definition the term seems to be given. Critics fight the censorship not because they support “hate speech” or want to further a “terrorist agenda,” but because such an unclear definition could allow the shaping of the media narrative and impose a form of censorship on alternative, independent, and conservative media outlets.
Another EU directive that caused a stir is the Copyright Directive. Few pieces of legislation have polarized Europe this much in recent years. Critics said the vote heralded the death of the internet, while supporters congratulated themselves for saving the livelihoods of starving artists and giving US tech giants a poke in the eye.
Much of the outrage has been over two parts of the directive: Articles 11 and 13. But their intent, when described by supporters anyway, is pretty benign. Article 11 simply gives publishers the right to ask for paid licenses when their news stories are shared by online platforms, while Article 13 says that online platforms are liable for content uploaded by users that infringes copyright.
Both measures attempt to redress an imbalance at the core of the contemporary web: big platforms like Facebook and Google make huge amounts of money providing access to material made by other people, while those making the content (like music, movies, books, journalism, and more) get an ever-shrinking slice of the pie.
Article 11 is the so-called “link tax,” which gives publishers the right to ask for paid licenses when online platforms share their stories. The obvious target is aggregators like Google News, but opponents worry the law could have broader applications, since it is not clear what counts as a commercial platform.
The far bigger headache is Article 13, dubbed the “upload filter” by critics. It says that platforms “storing and giving access to large amounts of works and other subject-matter uploaded by their users” are liable for copyright infringement committed by users. (Meaning they can be sued by rights holders.) So, platforms and copyright holders must “cooperate in good faith” to stop this infringement from happening in the first place.
The difficulty, again, is working out what this provision actually means and how it might be enforced.
All of this follows the GDPR (General Data Protection Regulation), which forced many websites to change their terms of service to comply with EU rules or risk fines and loss of access to the bloc’s users. Many websites outside the EU have chosen to block visitors from EU countries rather than comply with the legislation.