Does language bias GenAI academic evaluation in humanities and social sciences? A mixed‐methods study based on Chinese‐language HSS papers
Journal of the Association for Information Science and Technology
Published online on April 01, 2026
Abstract
As generative AI (GenAI) systems are increasingly deployed in cross-language research evaluation, whether GenAI evaluates multilingual scholarship without language-induced bias remains unclear. This study examines language bias patterns in GenAI evaluation of humanities and social sciences (HSS) research across models and disciplines. Using a within-subjects design, 1150 expert-selected papers from 23 disciplines were evaluated by GPT-4o and DeepSeek-V3 in Chinese and English. Results reveal opposite language biases depending on model type: GPT-4o favors English (Cohen's d = 1.10), while DeepSeek-V3 favors Chinese (Cohen's d = −0.87), persisting across all disciplines. Thematic analysis reveals a systematic decoupling between scores and evaluative reasoning: both models generate more critical comments for English papers, yet arrive at opposite scores through different rhetorical strategies—GPT-4o tends to moderate its positive assessments of Chinese papers while DeepSeek-V3 amplifies them. This decoupling suggests that bias is embedded in the multi-layered pathways through which models generate and aggregate evaluations. This study provides controlled evidence that language bias in GenAI evaluation is bidirectional and model-dependent, with scores not directly reflecting evaluative justifications. The findings have implications for designing fairer multilingual academic evaluation systems and for understanding the limitations of GenAI as scholarly evaluation infrastructure.