Day after day, tech companies – be they social media networks or chat services – moderate and remove large quantities of problematic content. The sheer mass of such material makes responses on their part necessary, for the sake of their members and other readers. One might say that the necessity of moderation now meets the zeitgeist in more and more countries of the world.


On the other hand, individual contributions containing hate speech, misinformation, or disinformation may not reach a large audience after all. Some even go unnoticed. Is it still justified, then, to remove problematic but little-noticed content?

Equal Standards

The answer is yes. For one, there is the principle of equal treatment: similarly grave contributions must meet the same standards of response. Secondly, the author of such content – whether an individual or a group – may not be adequately flagged over time if their output is not classified as problematic early on. Thirdly, there is the issue of spontaneous virality. A questionable or worse contribution may suddenly gain wide exposure, whether through user reactions and shares or – in an age of automated dissemination and ever more capable artificial intelligence – through bots amplifying it. Hence, a timely reaction on the part of the hosting platforms is of the essence where risky content is concerned, even at a small scale.

A Deontological Necessity

Fourth, there are legal considerations: text, audio, pictures such as memes, and videos fall under legal regimes – for instance in the European Union (EU) – that require tech companies to respond through moderation, whether automatic, manual, or a mix of both. There is thus a legal deontology obliging companies to treat content on a case-by-case basis, yet under similar considerations, so that harm – offense, harassment and threats, as well as the spread of false narratives and the belief in them – is avoided as far as possible. This has become self-evident for many if not most major tech companies, while smaller platforms pool their resources.

Finally, even a small audience can be harmed by dangerous social media posts or chat messages. Even when not offended or worse, individual readers, viewers, or listeners – as well as mid-sized or large audiences – may try to escape the effect of such content by avoiding social media altogether, which renders them victims as well, albeit silent ones.

Conclusion

In short, since harmful messages can produce undesirable effects on a large scale or a small one, they must be treated with equal rigor, as such content has the potential to unfold a myriad of undesired political, cultural, or religious consequences.

Thorsten Koch, MA, PgDip
Policyinstitute.net
20 May 2024
