Identifying the good and the bad: Using machine learning to moderate user commentary on news

Studies on user commentary posted to news sites regularly show a chasm between deliberative ideals and users' actual practices. Moderation is a promising lever, but given limited resources, outlets mostly focus on policing and banning "bad" commentary rather than encouraging "good" posts. This study seeks to contribute to the still small body of research exploring criteria to identify the quality of user contributions. We discuss previously tested criteria and propose additional ones borrowed from deliberation theory and (in-)civility research. We tested the usefulness of (a combination of) these criteria on a body of 980 comments from the Austrian online news outlet Der Standard. Our results show that while text classification remains delicate, machine learning yields better results for categories indicating "good" commentary. We validated our approach and classification grid through guided interviews with two members of Der Standard's community management team and critically reflect on our propositions.
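The abstract does not specify the features or classifiers used. As a purely illustrative sketch of the general approach it describes, the snippet below trains a simple per-criterion comment classifier (TF-IDF features with logistic regression in scikit-learn); the comments, labels, and quality criterion shown are hypothetical and not drawn from the study's data.

```python
# Illustrative sketch only: one common baseline for classifying comments
# against a single quality criterion (e.g., "provides reasons/evidence").
# The paper's actual feature set, classifiers, and labels are not shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report

# Hypothetical toy data: 1 = comment exhibits the "good" criterion, 0 = it does not.
comments = [
    "The article ignores the official statistics published last year.",
    "This is nonsense, typical lying press.",
    "I disagree, but the budget figures cited in paragraph 3 support the author.",
    "Whoever believes this is an idiot.",
]
labels = [1, 0, 1, 0]

# TF-IDF unigrams/bigrams feeding a logistic-regression classifier.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.5, random_state=42, stratify=labels
)
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test), zero_division=0))
```

In practice, one such classifier would be trained per category in the classification grid, so that "good" and "bad" indicators can be detected independently rather than as two ends of a single scale.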

Haim, M., Heinzel, I., Lankheit, S., Niagu, A.-M., & Springer, N. (2019, May). Identifying the good and the bad: Using machine learning to moderate user commentary on news. Paper presented at the 69th Annual Conference of the International Communication Association, Washington, DC.