How do language models learn facts? Dynamics, curricula and hallucinations
Abstract
Large language models accumulate vast amounts of knowledge during pre-training, yet the dynamics governing this acquisition remain poorly understood. This work investigates the learning dynamics of language models on a synthetic factual recall task, uncovering three key findings. First, language models learn in three phases, with performance plateauing before they acquire precise factual knowledge; mechanistically, this plateau coincides with the formation of attention-based circuits that support recall. Second, the training data distribution significantly impacts learning dynamics, with imbalanced distributions shortening the plateau. Finally, hallucinations emerge at the same time as knowledge, and integrating new knowledge through fine-tuning is challenging, as fine-tuning quickly corrupts the model's existing parametric associative memories. Our results emphasize the importance of data distribution in knowledge acquisition and suggest novel data-scheduling strategies to accelerate neural network training.
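To make the experimental setting concrete, here is a minimal sketch of what a synthetic factual recall task with a controllable data distribution could look like. This is purely illustrative and not the paper's construction: the name-attribute-value format, the attribute choices, and the Zipf-like imbalance are all assumptions made for this example.

```python
# Hypothetical sketch of a synthetic factual recall setup (not the
# paper's actual construction). Each "individual" is assigned random
# attribute values, and training sentences are sampled under either a
# uniform or an imbalanced (Zipf-like) distribution over individuals.
import random

random.seed(0)

N_INDIVIDUALS = 1000
CITIES = [f"city_{i}" for i in range(50)]
JOBS = [f"job_{i}" for i in range(30)]

# Ground-truth knowledge base: individual -> attribute values.
facts = {
    f"person_{i}": {
        "birthplace": random.choice(CITIES),
        "occupation": random.choice(JOBS),
    }
    for i in range(N_INDIVIDUALS)
}
names = list(facts)

def sample_example(imbalanced: bool = False) -> str:
    """Draw one training sentence. Imbalanced sampling uses Zipf-like
    weights so a few individuals dominate the training stream."""
    if imbalanced:
        weights = [1.0 / (rank + 1) for rank in range(N_INDIVIDUALS)]
        name = random.choices(names, weights=weights, k=1)[0]
    else:
        name = random.choice(names)
    attr = random.choice(["birthplace", "occupation"])
    return f"{name}'s {attr} is {facts[name][attr]}."

print(sample_example())                 # uniform stream
print(sample_example(imbalanced=True))  # imbalanced stream
```

Because the ground-truth attribute values are held in a fixed table, factual recall accuracy can be measured exactly by querying the model for each individual's attributes and comparing against `facts`, which is what makes this kind of synthetic setup useful for studying learning dynamics.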