A Study on Cognitive Bias in Large Language Models

Author: 徐江州
  • Student ID
    2022******
  • Degree
    Master's
  • Email
    182******com
  • Defense date
    2024.09.10
  • Advisor
    刘嘉
  • Discipline
    Applied Psychology
  • Pages
    64
  • Confidentiality
    Public
  • Department
    070 School of Social Sciences
  • Chinese keywords
    认知偏差;决策任务;大语言模型
  • English keywords
    cognitive bias; decision-making tasks; large language models; GPT

Abstract

In recent years, large language models (LLMs), represented by ChatGPT, have been widely applied across many fields, and their potential as decision-making assistants is now well recognized. Understanding whether LLMs can simulate human-like cognitive processes is crucial for optimizing the use of this technology. Through two experiments, this study investigates the cognitive biases of LLMs in decision-making tasks and their impact on decision-making behavior.

The first experiment examines whether LLMs exhibit cognitive biases similar to those of humans across different decision-making scenarios. It uses a set of classic decision-making tasks covering the certainty effect, reflection effect, isolation effect, framing effect, overweighting effect, and magnitude perception bias, and evaluates the models' biases by comparing their responses with those of human participants. The results show that LLMs exhibit human-like cognitive biases on many of these tasks, especially when handling probabilities and loss scenarios.

The second experiment explores how model size affects these biases. By comparing LLMs of different parameter scales on the same decision-making tasks, it assesses the influence of parameter count on cognitive bias. The results indicate that as the number of parameters increases, the models produce more detailed and comprehensive analyses in complex decision-making tasks, yet they also exhibit more cognitive biases, suggesting that model scale affects the models' cognitive behavior to some extent.

In summary, this study characterizes the cognitive biases of LLMs in decision-making tasks and how these biases are influenced by model parameters. The results indicate that while LLMs simulate human cognitive processes in many respects, differences remain in certain specific scenarios. Understanding these biases allows developers to better adjust and optimize the models, thereby enhancing their reliability and fairness in practical applications. This study provides a theoretical foundation and practical guidance for the application of LLMs in the field of decision-making.
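The abstract does not specify the prompt materials or analysis pipeline, but the Experiment 1 design it describes (posing classic framed gamble choices to a model repeatedly and comparing its choice proportions with human baselines) can be sketched roughly as below. The task wording, the query_model() stub, and the baseline figures are illustrative assumptions, not the thesis materials.

```python
# Minimal sketch of a framing-effect probe: present the same outcomes in a
# gain frame and a loss frame, sample the model's choice repeatedly, and
# compare the proportion choosing the sure option against a human baseline.

import random
from collections import Counter

# Two frames of a classic framing problem: identical expected outcomes,
# described as gains in one frame and as losses in the other.
TASKS = {
    "gain_frame": (
        "600 people are at risk. Option A saves 200 people for sure. "
        "Option B saves all 600 with probability 1/3 and no one with probability 2/3. "
        "Answer with exactly 'A' or 'B'."
    ),
    "loss_frame": (
        "600 people are at risk. Option A lets 400 people die for sure. "
        "Option B lets no one die with probability 1/3 and all 600 die with probability 2/3. "
        "Answer with exactly 'A' or 'B'."
    ),
}

# Illustrative human baselines (proportion choosing the sure option A),
# in the spirit of classic framing-effect results; not data from the thesis.
HUMAN_BASELINE = {"gain_frame": 0.72, "loss_frame": 0.22}


def query_model(prompt: str) -> str:
    """Stub for an LLM call; replace with a real chat-completion API call.

    Returns a random choice here so the sketch runs end to end without a key.
    """
    return random.choice(["A", "B"])


def run_probe(n_trials: int = 50) -> None:
    """Sample the model's choice on each frame and report choice proportions."""
    for frame, prompt in TASKS.items():
        counts = Counter(query_model(prompt) for _ in range(n_trials))
        p_sure = counts["A"] / n_trials
        print(f"{frame}: model P(sure option) = {p_sure:.2f}, "
              f"human baseline = {HUMAN_BASELINE[frame]:.2f}")


if __name__ == "__main__":
    run_probe()
```

Under this setup, a human-like framing bias would show up as a high proportion of sure-option choices in the gain frame and a low proportion in the loss frame; the Experiment 2 comparison across model sizes would simply repeat the same probe with query_model() pointed at models of different parameter scales.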