
Legislate Against Online Incitement Now

The question of how societies should legislate against incitement has long been a contentious one.
I previously argued that governments ought to adopt laws directly targeting incitement in all its forms, whether it concerns violence, hate, or discrimination. The reasoning is simple: incitement corrodes public trust, destabilizes social cohesion, and, when left unchecked, often escalates into real-world harm. Yet, the idea is far from new. Politicians and lawmakers in various contexts have debated it for decades, only to stumble over the same obstacles: the danger of interfering with basic democratic rights and the difficulty of drawing sharp, consistent legal boundaries.

These obstacles are real, but they need not be insurmountable. To move this debate forward, we should turn our attention to the legal texts most people already live under every day without much notice: the written terms of use created by major social media platforms. These rules — defining violations such as hate speech, harassment, incitement, or disinformation — already perform a quasi-legal function. They serve as contractual obligations binding billions of users worldwide, with immediate consequences for breaches ranging from content removal to account termination. In many ways, they have become informal global standards of acceptable behavior online. Even though some of these safeguards have been weakened under new corporate ownership, their structure and language remain important building blocks that legislators could adapt to the level of public law.

If drafted carefully, legislation inspired by these terms could distinguish between different forms and degrees of incitement, thus reducing the risks of overreach while still addressing harms. A tiered approach would prove particularly effective. The most severe cases — direct calls to violence, whether against individuals, groups, or institutions — should unquestionably be treated as criminal offenses. By contrast, other forms of harmful behavior, such as hate speech, harassment, or broad discriminatory incitement, could be handled under civil or administrative law. This structure allows both proportionality and flexibility, recognizing the difference between immediate threats to safety and corrosive but less direct harms.

Any such framework, however, must contain rigorous safeguards for free expression. The legitimacy of the law will depend on its ability to protect satire, journalism, artistic freedom, parody, and legitimate political debate. Independent review panels could be established to prevent politicized abuse, while clear and accessible appeal processes would provide citizens with recourse against unreasonable enforcement. The guiding principle must be that laws against incitement are not meant to curtail debate or silence unpopular views, but rather to protect societies against speech that directly undermines the democratic freedoms they depend upon.

International cooperation would also be indispensable. The nature of online platforms means incitement easily crosses borders, amplifying its harms across continents within moments. A harmonized legal framework, modeled on the terms of use that platforms already apply globally and informed by regional cooperation, could align national legislation. Such a model would not be unprecedented: the European Union’s General Data Protection Regulation (GDPR) has already influenced laws far beyond Europe’s borders, raising global standards for user privacy. In the same way, legislation targeting incitement could provide a legal framework broad enough to operate internationally, while still leaving room for national courts and panels to adapt implementation to local contexts.

Finally, transparency must be emphasized as a dual obligation shared by both governments and platforms. Laws in this domain will only succeed if their enforcement is consistent, predictable, and shielded from hidden agendas. Governments should be required to publicly document their enforcement actions, and social media companies should be held to similar standards when moderating and removing content. Transparent oversight, coupled with independent monitoring, would ensure both accountability and credibility.

The stakes are high. Incitement in its many forms has become inseparable from the digital landscape, and the patchwork of current responses — some overcautious, others dangerously lax — has failed to protect democratic societies from polarization and harm. By borrowing from the frameworks already embedded in platform terms of use, legislatures have an opportunity to construct laws that are at once practical, enforceable, and respectful of rights. Framed with proportionality, strengthened by safeguards, and coordinated through international cooperation, such legislation could finally bring clarity and responsibility to an area of law that has too long remained unresolved.
