The Social Effects of Artificial Intelligence-Based Content Filtering Algorithms, with Special Regard to Social Media Platforms

Keywords: Artificial Intelligence, Content Moderation, Freedom of Expression, Social Media, Machine Learning

Abstract

Online communication spaces, especially social media platforms, have become the dominant channels of information flow, yet they are increasingly exposed to extremist ideologies and to illegal, sometimes disturbing, content. To address these challenges, platforms rely ever more heavily on AI-based algorithms to filter and moderate content. The purpose of this study is to provide a comprehensive picture of the reasons behind the spread of automated content moderation, its technical solutions, and its social effects. The study discusses in detail the legal environment that allowed these systems to take off, presents the most widely used technical methods, and analyzes how they affect freedom of speech, social discourse, and political polarization. The results highlight that although artificial intelligence can significantly increase the effectiveness of moderation, its use carries serious challenges and risks that affect democratic values and the functioning of social media. The study also makes recommendations on regulation and human oversight to ensure the protection of free speech and the preservation of social diversity in the online space.

References

Aczél, Petra – Veszelszki, Ágnes (eds.): Deepfake: a valótlan valóság. Budapest, Gondolat, 2023.

Apodaca, Tomas – Uzcátegui-Liggett, Natasha: How Automated Content Moderation Works (Even When It Doesn’t). The Markup, March 2, 2024. https://tinyurl.com/f6f8j9v6

Ardia, David S.: Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity under Section 230 of the Communications Decency Act. Loyola of Los Angeles Law Review, Vol. 43. (2010)

Barrett, Paul M.: Who Moderates the Social Media Giants? A Call to End Outsourcing. NYU Stern Center for Business and Human Rights, June 2020. https://tinyurl.com/4t2zuwfs

Chadha, Anupama – Kumar, Vaibhav – Kashyap, Sonu – Gupta, Mayank: Deepfake: An Overview. In: Proceedings of Second International Conference on Computing, Communications, and Cyber-Security. Springer, Singapore, 2021. https://doi.org/10.1007/978-981-16-0733-2_39

Chaffey, Dave: Global social media statistics research summary. Smart Insights, May 1, 2024. https://tinyurl.com/sm74a5za

Chandrasekharan, Eshwar – Samory, Mattia – Jhaver, Shagun – Charvat, Hannah – Bruckman, Amy – Lampe, Cliff – Eisenstein, Jacob – Gilbert, Eric: The Internet's Hidden Rules: An Empirical Study of Reddit Norm Violations at Micro, Meso, and Macro Scales. Proceedings of the ACM on Human-Computer Interaction, Vol. 2., No. CSCW (2018) Article 32. https://doi.org/10.1145/3274301

Chen, Feng – Wang, Liqin – Hong, Julie – Jiang, Jiaqi – Zhou, Li: Unmasking bias in artificial intelligence: a systematic review of bias detection and mitigation strategies in electronic health record-based models. Journal of the American Medical Informatics Association, Vol. 31., No. 5. (2024) https://doi.org/10.1093/jamia/ocae060

Ciolli, Anthony: Chilling Effects: The Communications Decency Act and the Online Marketplace of Ideas. University of Miami Law Review, Vol. 63. (2008) https://doi.org/10.2139/ssrn.1101910

Colacci, Michael – Huang, Yu Qing – Postill, Gemma – Zhelnov, Pavel – Fennelly, Orna – Verma, Amol – Straus, Sharon – Tricco, Andrea C.: Sociodemographic bias in clinical machine learning models: a scoping review of algorithmic bias instances and mechanisms. Journal of Clinical Epidemiology, Vol. 178., 111606. (2025) https://doi.org/10.1016/j.jclinepi.2024.111606

Datar, Mayur – Immorlica, Nicole – Indyk, Piotr et al.: Locality-sensitive hashing scheme based on p-stable distributions. In: Proceedings of the Twentieth Annual Symposium on Computational Geometry. New York, NY, ACM, 2004. https://doi.org/10.1145/997817.997857

Davenport, Thomas H. – Beck, John C.: The Attention Economy. Ubiquity, May 2001. https://doi.org/10.1145/376625.376626

Davis, Antigone – Rosen, Guy: Open-Sourcing Photo- and Video-Matching Technology to Make the Internet Safer. Meta, August 1, 2019. https://tinyurl.com/4p8e2d2f

Delacroix, Sylvie: Beware of “algorithmic regulation”. SSRN Electronic Journal, February 1, 2019. https://doi.org/10.2139/ssrn.3327191

Devlin, Jacob – Chang, Ming-Wei – Lee, Kenton – Toutanova, Kristina: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minneapolis, Minnesota, Association for Computational Linguistics, 2019. https://doi.org/10.48550/arXiv.1810.04805

Faddoul, Marc: COVID-19 is triggering a massive experiment in algorithmic content moderation. Brookings, April 28, 2020. https://tinyurl.com/2p732haw

Farid, Hany: An Overview of Perceptual Hashing. Journal of Online Trust and Safety, Vol. 1., No. 1. (2021) https://doi.org/10.54501/jots.v1i1.24

Gillespie, Tarleton: Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven, CT, Yale University Press, 2019. https://doi.org/10.12987/9780300235029

Gorwa, Robert – Binns, Reuben – Katzenbach, Christian: Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, Vol. 7. No. 1. (2020) https://doi.org/10.1177/2053951719897945

Gosztonyi, Gergely – Lendvai, Gergely Ferenc: Twitter kontra Taamneh és Gonzalez kontra Google, avagy ki a felelős az online platformokra feltöltött tartalomért? Magyar Jog, 2023/10.

Gosztonyi, Gergely – Lendvai, Gergely: Deepfake és dezinformáció – Mit tehet a jog a mélyhamisítással készített álhírek ellen? Médiakutató, 2024/1. https://doi.org/10.55395/MK.2024.1.3

Gosztonyi, Gergely: A platformszolgáltatók felelősségének új szabályozása az európai uniós digitális szolgáltatásokról szóló rendelet alapján. Pro Futuro, 2023/3.

Gosztonyi, Gergely: Cenzúra Arisztotelésztől a Facebookig. Budapest, Gondolat, 2022. https://doi.org/10.58528/JAP.2023.15-1.171

Gosztonyi, Gergely: Human and Technical Aspects of Content Regulation. Erdélyi Jogélet, Vol. 2., No. 4. (2022) https://doi.org/10.47745/ERJOG.2021.04.01

Grimmelmann, James: The Virtues of Moderation. Yale Journal of Law & Technology, Vol. 17. (2015) https://doi.org/10.31228/osf.io/qwxf5

Haider, Syed Ali – Borna, Sahar – Gomez-Cabello, Cesar A. – Pressman, Sophia M. – Haider, Clifton R. – Forte, Antonio Jorge: The Algorithmic Divide: A Systematic Review on AI-Driven Racial Disparities in Healthcare. Journal of Racial and Ethnic Health Disparities, December 18, 2024, online ahead of print. https://doi.org/10.1007/s40615-024-02237-0

Halfaker, Aaron – Riedl, John: Bots and cyborgs: Wikipedia’s immune system. Computer, Vol. 45., No. 3. (2012) https://doi.org/10.1109/MC.2012.82

Hartmann, David – Wang, Sonja – Pohlmann, Lena – Berendt, Bettina: A systematic review of echo chamber research: comparative analysis of conceptualizations, operationalizations, and varying outcomes. Journal of Computational Social Science, Vol. 8., Article No. 52. (2025) https://doi.org/10.1007/s42001-025-00381-z

He, Qinglai – Hong, Yili – Raghu, T. S.: Platform Governance with Algorithm-Based Content Moderation: An Empirical Study on Reddit. Information Systems Research, Vol. 36., No. 2. (2024) https://doi.org/10.1287/isre.2021.0036

Hoitsma, Ferry – Nápoles, Guillermo – Güven, Çiğdem et al.: Mitigating implicit and explicit bias in structured data without sacrificing accuracy in pattern classification. AI & Society, Vol. 40. (2024) https://doi.org/10.1007/s00146-024-02003-0

Holt, Kris: Meta’s Oversight Board Raises Concerns Over Automated Moderation of Hate Speech. Engadget, January 23, 2024. https://tinyurl.com/b39t8uuk

Hunter, Richard J. Jr. – Lozada, Hector R. – Shannon, John H.: Distributor vs. Publisher vs. Provider: That Is the High-Tech Question: But is an Extension of Liability the Answer? International Journal of Education and Social Science, Vol. 8., No. 1. (2021)

Imre, Melinda: Az internet-szolgáltatók felelősségének szabályozása a szerzői jogot sértő tartalmak tekintetében – Az amerikai, a közösségi és a magyar szabályozás bemutatása. Iustum Aequum Salutare, 2006/1–2.

Johnson, Ash – Castro, Daniel: Overview of Section 230: What It Is, Why It Was Created, and What It Has Achieved. Information Technology & Innovation Foundation, February 22, 2021. https://tinyurl.com/mr2w953n

Karabulut, Dogus – Ozcinar, Cagri – Anbarjafari, Gholamreza: Automatic content moderation on social media. Multimedia Tools and Applications, Vol. 82. (2023) https://doi.org/10.1007/s11042-022-11968-3

Klayman, Joshua: Varieties of Confirmation Bias. Psychology of Learning and Motivation, Vol. 32. (1995) https://doi.org/10.1016/S0079-7421(08)60315-1

Koltay, András – Nyakas, Levente (eds.): Magyar és európai médiajog. Budapest, Wolters Kluwer, 2017. https://doi.org/10.55413/9789632956305

Kosseff, Jeff: A User's Guide to Section 230, and a Legislator's Guide to Amending It (or Not). Berkeley Technology Law Journal, Vol. 37., No. 2. (2022) https://doi.org/10.15779/Z38VT1GQ97

Kosseff, Jeff: The Twenty-Six Words That Created the Internet. Ithaca, NY, Cornell University Press, 2019. https://doi.org/10.7591/9781501735783

Kubin, Emily – Sikorski, Christian von: The Role of (Social) Media in Political Polarization: A Systematic Review. Annals of the International Communication Association, Vol. 45., No. 3. (2021) https://doi.org/10.1080/23808985.2021.1976070

Lessig, Lawrence: What Things Regulate Speech. In: Code: And Other Laws of Cyberspace. New York, Basic Books, 1999.

Li, Guangli – Gomez, Randy – Nakamura, Keisuke – He, Bo: Human-Centered Reinforcement Learning: A Survey. IEEE Transactions on Human-Machine Systems, Vol. 49., No. 4. (2019) 337–349. https://doi.org/10.1109/THMS.2019.2912447

Llansó, Emma – Hoboken, Joris van – Leerssen, Paddy – Harambam, Jaron: Artificial Intelligence, Content Moderation, and Freedom of Expression. Transatlantic Working Group on Content Moderation Online and Freedom of Expression, February 26, 2020. https://tinyurl.com/4tcdtfpr

Lymn, Tom – Bancroft, Jessica: The use of algorithms in the content moderation process. Responsible Technology Adoption Unit Blog, August 5, 2021. https://tinyurl.com/yt9x75a6

Muraközi, Gergely: A szerzői jog és az internet – Az internet technikai megvalósítása a szerzői jog tükrében. Jogi Fórum, n.d. https://www.jogiforum.hu/files/publikaciok/drMurakozi-A_szerzoi_jog_es_az_internet(jf).pdf

Nasteski, Vladimir: An Overview of the Supervised Machine Learning Methods. HORIZONS.B, Vol. 4. (2017) https://doi.org/10.56726/IRJMETS51366

New rules to protect your rights and activity online in the EU. European Commission, February 16, 2024. https://tinyurl.com/yc44sre3

Papp, János Tamás: A közösségi média szabályozása a demokratikus nyilvánosság védelmében. Budapest, Wolters Kluwer, 2022.

Papp, János Tamás: Ajánlórendszerek és szűrőbuborékok. In: Koltay, András (ed.): A vadnyugat vége? Tanulmányok az Európai Unió platformszabályozásáról. Budapest, Wolters Kluwer, 2024. https://doi.org/10.59851/9789632586328_12

Pariser, Eli: Did Facebook’s Big Study Kill My Filter Bubble Thesis? Wired, 2015. május 7. https://www.wired.com/2015/05/did-facebooks-big-study-kill-my-filter-bubble-thesis/

Pariser, Eli: The Filter Bubble: What the Internet Is Hiding from You. New York, Penguin Press, 2011. https://doi.org/10.3139/9783446431164

Partridge, Matthew: Great frauds in history: Jordan Belfort and Stratton Oakmont. MoneyWeek, August 7, 2019. https://tinyurl.com/3tv7wcy4

Prodigy Communications Corporation History. FundingUniverse, n.d. https://tinyurl.com/mr3zs69v

Prummer, Anja: Micro-targeting and polarization. Journal of Public Economics, Vol. 188. (2020) 104210. https://doi.org/10.1016/j.jpubeco.2020.104210

Saxena, A. K.: Beyond the Filter Bubble: A Critical Examination of Search Personalization and Information Ecosystems. International Journal of Intelligent Automation and Computing, Vol. 2., No. 1. (2019)

Seering, Joseph – Wang, Tian – Youn, Joon – Kaufman, Geoff: Moderator engagement and community development in the age of algorithms. New Media & Society, Vol. 21., No. 7. (2019) https://doi.org/10.1177/1461444818821316

Softness, Nicole: Terrorist Communications: Are Facebook, Twitter, and Google Responsible for the Islamic State’s Actions? SIPA Journal of International Affairs, Vol. 70. No. 1. (2017) 201–215.

Strickland, Ruth Ann: Telecommunications Act of 1996 (1996). Free Speech Center, January 1, 2009. https://firstamendment.mtsu.edu/article/telecommunications-act-of-1996/

Tsesis, Alexander: Social Media Accountability for Terrorist Propaganda. Fordham Law Review, Vol. 86. (2017)

Tumber, Howard – Waisbord, Silvio (eds.): The Routledge Companion to Media Disinformation and Populism. 1st edition. London, Routledge, 2021. https://doi.org/10.4324/9781003004431-1

Üveges, István: A mesterséges intelligencia közösségi médiában történő alkalmazásának társadalmi és politikai következményei. In: Kovács, Zoltán (ed.): A mesterséges intelligencia és egyéb felforgató technológiák hatásainak átfogó vizsgálata. Budapest, Katonai Nemzetbiztonsági Szolgálat, 2023.

Volpe, Benjamin: From Innovation to Abuse: Does the Internet Still Need Section 230 Immunity? Catholic University Law Review, Vol. 68. (2019)

Wendehorst, Christiane: Bias in Algorithms: Artificial Intelligence and Discrimination. Luxembourg, Publications Office of the European Union, 2022. https://tinyurl.com/5t5evc6h

Ződi, Zsolt: Platformjog. Budapest, Ludovika, 2023.

Published
2025-10-21