Like Human, Like Algorithm: Responses to Algorithmic Discrimination Among Individuals From Protected Classes
British Journal of Management, Volume 37, Issue 2, April 2026
Published online 13 April 2026

Abstract

Algorithms, now common in business practice, often discriminate against members of protected classes (e.g. racial minorities). Previous research suggests that individuals, including members of protected classes, may under some circumstances not respond negatively to discriminatory algorithms; other evidence suggests the opposite. Given this conflicting evidence, there is an opportunity to understand how and when protected class members respond to businesses whose algorithms make predictions or decisions that result in discrimination. Drawing on an empirical package comprising one secondary data study and four experiments, our research demonstrates that when algorithms are perceived to engage in human-like social categorization, they elicit more negative responses from members of protected classes. This effect holds across several algorithm features, including nonrepresentative training data, proxy classification rules and non-statistical classification rules. These findings extend the literature on algorithmic discrimination and business ethics, offering suggestions to mitigate algorithmic discrimination and improve societal well-being.