The European Union (EU) has ordered tech firms to remove illegal content from their platforms, according to a Reuters report. Companies that fail to comply with this directive face severe legal consequences.
The EU’s Order to Remove Illegal Content
The recent conflict between Hamas, the militant Islamist group, and Israel, including Israel's airstrikes on Gaza, has generated a wave of false information about the situation, including distressing images of violence, mislabeled videos, and manipulated images.
In a letter to Elon Musk, the owner of X (formerly Twitter), EU industry chief Thierry Breton noted that users are spreading illegal content and misinformation on the platform, particularly since the recent Middle East violence.
Breton also wrote to Meta CEO Mark Zuckerberg, highlighting the need for strict adherence to EU regulations on his company's social media platforms.
In both letters, Breton demanded that the companies report within 24 hours on how they intend to combat such harmful content on their platforms.
The European Commission (EC), the EU's executive arm, has also issued a directive on the matter, stating that all social media platforms are now obligated to prevent the sharing of harmful content related to the Hamas attack.
The EU’s Commitment to Tackling Disinformation
The European Union has also set up a special operations centre staffed by specialists, including speakers of Hebrew and Arabic, to track and respond to the rapidly escalating situation.
An EU Commission spokesperson emphasized the Commission's commitment to enforcing the Digital Services Act (DSA) and to closely overseeing implementation of the Terrorist Content Online (TCO) Regulation.
Companies found violating the DSA can be fined up to 6% of their global annual revenue, and repeated infractions could see them barred from operating in Europe altogether.
This obligation aligns with the EU's broader push against disinformation on social media, outlined in a September 26, 2023 announcement in which several companies reported on their compliance.
Google, for instance, blocked EUR 31 million in advertising from reaching disinformation actors within the European Union during the first half of 2023.
Google also served 20,441 political ads across the EU, worth almost EUR 4.5 million, while rejecting 141,823 political ads for failing identity-verification checks.
Meta, for its part, applied fact-checking labels to over 40 million pieces of content on Facebook and more than 1.1 million on Instagram. When users encountered content bearing a fact-checking label, 95% chose not to click through to it.
Additionally, a substantial share of users (37% on Facebook and 38% on Instagram) refrained from sharing fact-checked content after receiving a warning.
Lastly, on TikTok, nearly 30% of users decided against sharing content marked as 'unverified.' The platform also enforced its misinformation policy by removing 140,635 videos, which had amassed over 1 billion combined views.