WEF Calls for Imprisonment of Social Media Owners Who Allow ‘Non-Mainstream’ Content on Their Platforms


The World Economic Forum (WEF) has called on governments worldwide to begin imprisoning social media owners who allow ‘non-mainstream’ content on their platforms.

Klaus Schwab’s right-hand man Yuval Noah Harari says people like Elon Musk must be held liable for so-called ‘misinformation’ on X.

Harari’s argument challenges the longstanding defense used by social media companies: freedom of speech. “They always try to protect themselves by appealing to freedom of speech… the companies need to take responsibility for their algorithm,” Harari declared. He insists that platforms must be held accountable not just for user-generated content but for the ways their algorithms amplify or suppress specific messages.

This represents a significant shift in focus, moving beyond content creators and aiming directly at the mechanisms of content distribution. Harari’s stance is clear: “If the algorithm is writing it, then you’re definitely liable for what your algorithm is writing.” This brings into focus the growing role of artificial intelligence (AI) in moderating online content.

Targeting Elon Musk’s Vision

Harari’s comments appear to be a direct shot at Elon Musk, who purchased X with the aim of promoting an open flow of information. Musk has long championed the idea that information should be free, and that a thriving democracy depends on the free exchange of ideas, even controversial ones. Harari dismisses this viewpoint, saying, “Information isn’t truth. Most information in the world is junk… If you think that you just flood the world with information and the truth will float up—no, it will sink.”

Harari suggests that Musk’s hands-off approach—allowing all information to flow freely—creates an environment where truth is overwhelmed by noise. He argues that institutions like newspapers, universities, and courts are necessary to sift through information and determine what is reliable. He draws a comparison to an editor at a major newspaper, noting that just as an editor is responsible for what is placed on the front page, social media companies should be held to the same standard when their algorithms promote content.

A Threat to Free Speech?

The crux of Harari’s argument is that content moderation should no longer be an afterthought and that platform owners should not be able to hide behind claims of free speech to avoid accountability. But critics see a different story unfolding. This shift in thinking could easily lead to an assault on free speech, especially if platforms like X are punished for allowing controversial or non-mainstream views to be seen.

Harari’s remarks highlight a broader concern about what counts as “reliable” information and who gets to decide. If platforms are held liable for content their algorithms push, this could effectively stifle speech that does not align with the views of those in power—especially non-WEF, non-establishment voices. By placing legal responsibility on the owners of these platforms, Harari is advocating for a new level of control over online discourse, one that many see as a threat to the very principle of free speech.

The implications of Harari’s call for liability are significant. For instance, if a platform like X is held legally responsible for content deemed unreliable, it could be forced to adopt extreme censorship measures, fearing litigation. This is particularly concerning as we head into a politically charged season, where platforms like X are vital tools for public discourse.

Harari references the long-standing debate over Section 230 in the U.S., which protects platforms from being held responsible for the content their users post. This has been a cornerstone of Internet speech protections. However, Harari seems to believe that the rise of AI and algorithms complicates this dynamic, warranting a new approach where platforms are responsible not just for hosting content but also for how it’s distributed.

By singling out Elon Musk and platforms like X, Harari is essentially advocating for a system where freedom of speech is secondary to algorithmic responsibility. This would open the door to legal repercussions for those who don’t comply with WEF or establishment-approved narratives. This move could have profound consequences for the future of free speech on the internet, especially in the context of AI-driven censorship.

A Chilling Precedent

Harari’s call for liability could have far-reaching consequences, especially for free speech. While addressing misinformation may seem reasonable on the surface, holding platforms like X accountable for their algorithms will likely lead to extreme censorship—something we already witnessed between 2020 and 2023 regarding COVID-19 and potential treatments.

There are at least two major concerns: Who decides what constitutes ‘reliable’ information, and is it more harmful for a republic to encounter potentially incorrect information than to have state regulations controlling what can be said in public forums? If platforms face legal repercussions for amplifying non-mainstream content, they may suppress dissenting voices to avoid litigation. This could turn the internet into a space where gatekeepers, rather than users, control discourse—a dangerous precedent in the digital age.