
Complex questions and quality answers: Comparing ChatGPT and Gemini as research collaborators


Journal of the Association for Information Science and Technology, EarlyView


Abstract

AI chatbots are increasingly popular, but how they handle complex questions, and how this affects the quality of their answers, remains underexplored. This study examined whether chatbots such as ChatGPT and Gemini provide high‐quality answers to users' questions. Using ChatGPT 4o‐mini and Gemini 1.5 Flash, we analyzed 84 authentic library reference questions of varying complexity and type to determine whether the LLMs provided accurate, complete responses, offered support for further assistance, and handled different difficulty levels and question types. Our analyses demonstrated a strong, statistically significant association between question complexity (READ) levels and further assistance: as complexity increases, ChatGPT 4o‐mini provides more resources but still fails to give a complete answer. Gemini 1.5 Flash also showed a significant association between question type and completeness. We conclude that, compared with ChatGPT 4o‐mini, Gemini 1.5 Flash is sensitive to all question types, suggesting it can provide more consistently high‐quality answers. These findings suggest that understanding the relationship between question complexity and answer quality can help optimize LLMs for better information seeking. Because LLMs are continually updated, this study was limited to ChatGPT 4o‐mini and Gemini 1.5 Flash; future research should evaluate newer LLMs and human responses using a comparative methodology.