CoT4Rec: Revealing User Preferences Through Chain of Thought for Recommender Systems
Abstract
Large Language Models (LLMs) offer substantial advances for recommender systems through superior text analysis and decision-making support. However, integrating LLMs into recommender systems still suffers from uninterpretable item identifiers and a lack of transparency. To address these issues and fully leverage the capabilities of LLMs, we propose a chain-of-thought (CoT) based recommendation framework, CoT4Rec, which employs LLMs as data enhancers for user preference analysis. First, we design a CoT reasoning strategy that derives behaviorally aligned user preference features by clustering users' historical interactions. Second, we propose a two-stage recommendation model that not only makes full use of the world knowledge embedded in LLMs but also produces a logically transparent reasoning path. By placing a user preference analyzer early in the recommendation pipeline, the model analyzes users' historical interactions in depth, enhancing both the personalization and the transparency of the recommender system. CoT4Rec outperforms existing state-of-the-art models on recommendation tasks across four public datasets, with improvements ranging from 2.2% to 12.2%.