Optimal Dynamic Regret by Transformers for Non-Stationary Reinforcement Learning

0 citations · ranked #3347 of 5858 papers in NeurIPS 2025

Abstract

Transformers have demonstrated exceptional performance across a wide range of domains. While their ability to perform reinforcement learning in-context has been established both theoretically and empirically, their behavior in non-stationary environments remains less understood. In this study, we address this gap by showing that transformers can achieve nearly optimal dynamic regret bounds in non-stationary settings. We prove that transformers can approximate strategies designed for non-stationary environments, and that this approximation can be learned in the in-context learning setup. Our experiments further show that transformers can match or even outperform existing expert algorithms in such environments.
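
For readers unfamiliar with the term, the following is a minimal sketch of the standard definition of dynamic regret; it is not taken from the paper itself, and the symbols $f_t$, $x_t$, $\pi_t$, and $V_t$ are generic placeholders rather than the paper's notation.

```latex
% Dynamic regret over T rounds: cumulative loss measured against a
% time-varying comparator sequence rather than a single fixed action.
\[
  \mathrm{D\text{-}Reg}_T
    = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(x_t^*),
  \qquad x_t^* \in \arg\min_{x \in \mathcal{X}} f_t(x).
\]
% In episodic non-stationary RL, the analogous quantity compares the value
% of the learned policy \pi_t in episode t against the per-episode optimum:
\[
  \mathrm{D\text{-}Reg}_T
    = \sum_{t=1}^{T} \bigl( V_t^{*} - V_t^{\pi_t} \bigr).
\]
```

Because the comparator changes from round to round, sublinear dynamic regret is attainable only when the environment's total variation is bounded, which is why algorithms for non-stationary settings are typically analyzed against a variation budget.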

Citation History

Jan 25 – Feb 13, 2026: 0 citations recorded at every data point.