Implicit In-Context Learning: Evidence from Artificial Language Experiments
1 citation · #294 of 418 papers in COLM 2025
Abstract
Humans acquire language through implicit learning, absorbing complex patterns without explicit awareness. While LLMs demonstrate impressive linguistic capabilities, it remains unclear whether they exhibit human-like pattern recognition during in-context learning at inference time. We adapted three classic artificial language learning experiments spanning morphology, morphosyntax, and syntax to systematically evaluate implicit learning at inference time in two state-of-the-art OpenAI models: gpt-4o and o3-mini. Our results reveal domain-specific alignment between models and human behavior: o3-mini aligns more closely with humans in morphology, while both models align in syntax.
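The evaluation paradigm described above — exposing a model to exemplars from an artificial language in-context and probing generalization on a held-out item — can be sketched as follows. This is a minimal illustration, not the paper's actual materials: the toy plural suffix "-ka", the nonce stems, and the prompt format are all invented for demonstration, and the live API call is left commented out.

```python
import random

# Hypothetical miniature artificial language: nonce stems take the
# suffix "-ka" when plural (a toy morphology rule, not from the paper).
STEMS = ["blicket", "dax", "wug", "toma"]

def make_exemplars(n=4, seed=0):
    """Build (singular, plural) pairs shown as in-context demonstrations."""
    rng = random.Random(seed)
    return [(s, s + "ka") for s in rng.sample(STEMS, n)]

def build_prompt(exemplars, probe):
    """Format demonstrations plus a held-out probe for the model to complete."""
    lines = [f"one {sg} -> two {pl}" for sg, pl in exemplars]
    lines.append(f"one {probe} -> two")
    return "\n".join(lines)

prompt = build_prompt(make_exemplars(), "fep")
print(prompt)

# The prompt would then be sent to a model under study, e.g.
# (requires the openai package and an API key):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-4o", messages=[{"role": "user", "content": prompt}]
# )
```

A human-alignment analysis would then compare the model's completions against human responses on the same probes, per linguistic domain.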