Content analysis is one of the main methods communication scholars regularly employ (Walter et al., 2018). It is also one of the few methods developed by our discipline (Loosen & Scholl, 2012). However, the method is undergoing fundamental changes: Visual, auditory, and audiovisual data from various online channels, often collected via automated approaches (Jünger et al., 2022), have become increasingly important. This has introduced challenges for sampling and analyzing relevant data (Ho, 2020; Jünger et al., 2022; Mahl et al., 2022). At the same time, concepts and methods from computer science (e.g., natural language processing, machine learning) are increasingly being incorporated into media and communication studies (Baden et al., 2022; Hase et al., 2022). This has transformed content analysis by introducing new data, methods, and research processes and by extending existing ones (Bachl & Scharkow, 2017; Günther & Quandt, 2016; Nelson, 2020; Waldherr et al., 2019).
Consequently, this long-institutionalized method faces several methodological debates concerning quality criteria and standards (Casas & Williams, 2022; Krippendorff, 2018, 2021; Lacy et al., 2015; Wirth et al., 2015). Discussing (the lack of) such criteria and standards, especially but not solely for computational advances (Domahidi et al., 2019; Geise & Waldherr, 2022; Haim, 2022), scholars criticize that, to date, “everyone brings the practices and standards from their original field” (Theocharis & Jungherr, 2021: p. 12). As such, “a lack of currently established standards […] can jeopardize the scholarship scrutiny which is essential in assuring additive science and replicability” (van Atteveldt et al., 2019: p. 3). Given its long-standing expertise with the method, communication science can play a central role in establishing these standards for modern content analysis.
This Special Issue focuses on changes and challenges concerning quality criteria and standards for the method of qualitative, quantitative, and computational content analysis. This includes validity (e.g., Chan & Sältzer, 2020; Song et al., 2020)—that is, whether results correspond to some external truth and how to approximate such—and the four “R’s,” which capture how changes in data or methods influence results: reliability, to estimate whether repeated measures of the same data yield similar results; reproducibility, to assess whether others obtain the same results based on the same data and methods; robustness, to estimate to what extent results change when different methods are applied to the same data (Denny & Spirling, 2018; Pipal et al., 2022; Wilkerson & Casas, 2017); and replicability, to evaluate whether conclusions hold when the same methods are applied to different data.
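To make the reliability criterion concrete, the following minimal Python sketch (an illustrative example with hypothetical coder data, not part of any submitted study) computes percent agreement and Cohen's kappa for two coders who categorized the same units:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of units on which two coders assigned the same category."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    # Expected chance agreement from each coder's marginal distribution.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten news items into topic categories.
coder_a = ["politics", "sports", "politics", "economy", "sports",
           "politics", "economy", "sports", "politics", "economy"]
coder_b = ["politics", "sports", "economy", "economy", "sports",
           "politics", "economy", "politics", "politics", "economy"]

print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.2f}")  # 0.80
print(f"Cohen's kappa:     {cohens_kappa(coder_a, coder_b):.2f}")       # 0.70
```

In practice, measures such as Krippendorff's alpha, which generalizes to multiple coders, various levels of measurement, and missing data, are often preferred (Krippendorff, 2018).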
We invite contributions that systematically reflect upon methodological shifts in and for content analysis, as well as contributions that seek to advance quality criteria and standards in this regard. Ultimately, this Special Issue will showcase work that further develops quantitative, qualitative, or computational content analysis.
Haim, M., Hase, V., Schindler, J., Bachl, M., & Domahidi, E. (2023). Validity and the four “R’s”: (Re-)establishing standards for content analysis. Studies in Communication and Media.