Learning curves theory for hierarchically compositional data with power-law distributed features

4 citations · ranked #1154 of 3340 papers in ICML 2025

Abstract

Recent theories suggest that Neural Scaling Laws arise whenever the task is linearly decomposed into units that are power-law distributed. Alternatively, scaling laws also emerge when data exhibit a hierarchically compositional structure, as is thought to occur in language and images. To unify these views, we consider classification and next-token prediction tasks based on probabilistic context-free grammars—probabilistic models that generate data via a hierarchy of production rules. For classification, we show that having power-law distributed production rules results in a power-law learning curve with an exponent depending on the rules’ distribution and a large multiplicative constant that depends on the hierarchical structure. By contrast, for next-token prediction, the distribution of production rules controls the fine details of the learning curve, but not the exponent describing the large-scale behaviour.
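As a rough illustration of the data model described in the abstract, the sketch below samples leaf sequences from a toy hierarchical probabilistic context-free grammar whose production-rule probabilities follow a Zipf (power) law. All names and parameter values here (depth L, branching s, vocabulary v, rules per nonterminal m, exponent a) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: depth L, branching factor s, vocabulary size v,
# number of production rules per nonterminal m, Zipf exponent a.
L, s, v, m, a = 3, 2, 8, 8, 1.5

# At each level, every one of the v nonterminals has m production rules,
# each expanding it into a tuple of s symbols of the level below.
rules = [
    rng.integers(v, size=(v, m, s))  # rules[l][nt, k] -> s child symbols
    for l in range(L)
]

# Power-law (Zipf-like) probabilities over the m rules of every nonterminal.
p = np.arange(1, m + 1, dtype=float) ** -a
p /= p.sum()

def sample(symbol: int, level: int) -> list[int]:
    """Recursively expand `symbol` down to the leaves (level 0 = observed tokens)."""
    if level == 0:
        return [symbol]
    k = rng.choice(m, p=p)                    # pick a production rule
    children = rules[level - 1][symbol, k]    # its s children one level down
    return [t for c in children for t in sample(c, level - 1)]

# Generate a small dataset of leaf sequences (length s**L), all rooted at class 0.
data = np.array([sample(0, L) for _ in range(1000)])
print(data.shape)  # (1000, s**L)
```

Because the rule probabilities are Zipf-distributed, some sub-sequences of leaves occur far more often than others; this frequency imbalance is the kind of structure whose effect on classification and next-token-prediction learning curves the paper analyses.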

Citation History: 0 citations on Jan 28, 2026; 4 citations on Feb 13, 2026.