Algorithmic auditing through agent-based experiments
- Peer-Reviewed Presentations
- 2021
- English
A prominent approach to algorithmic auditing is agent-based testing (Haim, 2020). In this approach, online human behavior is emulated entirely through tools originally developed for software testing, such as automated browsers. To audit algorithms, these emulations are typically embedded in experimental research designs that test algorithmic reactions under varying conditions. However, experimental designs employing agent-based testing entail considerable peculiarities compared to conventional experimental designs. They usually (1) constitute reactive field experiments without any guarantee that every contingency is controlled for, (2) must comply with the demands of responsible web scraping, (3) draw on entirely unknown populations, and (4) do not yield normally distributed data, thereby violating the assumptions of common statistical analyses. This contribution provides a systematic review of N = 89 publications from the social and information sciences that report experiments using agent-based testing. After an overview of whether and how these publications address the aforementioned peculiarities and pitfalls, the contribution derives suggestions on how to employ agent-based testing to audit algorithms.
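The experimental logic sketched above can be illustrated in miniature. The following hedged sketch is not the authors' method: all names and the simulated measurement are hypothetical, and a real audit would drive an automated browser (e.g. via Selenium) instead of the placeholder `run_agent` function. It shows randomized assignment of emulated agents to conditions, notes where a polite delay would throttle requests in the spirit of responsible scraping, and compares conditions with a distribution-free permutation test, since the resulting measurements cannot be assumed to be normally distributed (peculiarity 4).

```python
import random
import statistics

def run_agent(condition, rng):
    # Placeholder for a scripted browsing session: a real audit would
    # steer an automated browser and record the algorithmic output
    # shown to this agent. Here we simulate a skewed, non-normal
    # measurement (exponentially distributed).
    base = 1.0 if condition == "treatment" else 0.5
    return rng.expovariate(1.0 / base)

def permutation_test(a, b, rng, n_iter=5000):
    # Distribution-free alternative to a t-test: shuffle the pooled
    # measurements and count how often a mean difference at least as
    # large as the observed one arises by chance.
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(perm_a) - statistics.mean(perm_b)) >= observed:
            hits += 1
    return hits / n_iter

rng = random.Random(42)
conditions = ["treatment", "control"] * 20
rng.shuffle(conditions)  # randomized assignment of agents to conditions

results = {"treatment": [], "control": []}
for condition in conditions:
    # A real agent would pause here (e.g. time.sleep) between requests
    # to comply with responsible web scraping.
    results[condition].append(run_agent(condition, rng))

p = permutation_test(results["treatment"], results["control"], rng)
print(f"permutation-test p-value: {p:.3f}")
```

The permutation test stands in for the broader point of peculiarity (4): because agent-generated measurements rarely meet normality assumptions, nonparametric or resampling-based analyses are the safer default.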
Haim, M. & Unkel, J. (4/2021). Algorithmic auditing through agent-based experiments. Presented at the 65th Annual Conference of the DGPuK, Zurich.