Abstract
Learning with noisy labels (LNL) methods have enabled the deployment of machine learning systems trained on imperfectly labeled data. However, these methods often struggle to identify noise in the presence of long-tailed (LT) class distributions, where the memorization effect becomes class-dependent. Conversely, LT methods are suboptimal under label noise, which hinders access to accurate label-frequency statistics. This study addresses long-tailed noisy data by bridging the methodological gap between LNL and LT approaches. We propose a direct solution, termed Robust Logit Adjustment, which estimates ground-truth labels through label refurbishment, thereby mitigating the impact of label noise. Simultaneously, our method incorporates the distribution of training-time corrected target labels into the LT technique of logit adjustment, providing class-rebalanced supervision. Extensive experiments on both synthetic and real-world long-tailed noisy datasets demonstrate the superior performance of our method.
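The core idea described above can be sketched as follows. This is a minimal, illustrative NumPy implementation, not the paper's actual method: the function names, the temperature parameter `tau`, and the choice to estimate the class prior as the mean of refurbished soft targets are all assumptions for exposition. Standard logit adjustment shifts each logit by the log of its class prior; here that prior is recomputed from the corrected (refurbished) targets rather than the noisy observed labels.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def estimate_prior(soft_targets):
    """Estimate the class distribution from training-time corrected
    (refurbished) soft targets instead of the noisy observed labels.
    soft_targets: (N, C) rows summing to 1."""
    p = soft_targets.mean(axis=0)
    return p / p.sum()

def logit_adjusted_ce(logits, soft_targets, class_prior, tau=1.0):
    """Cross-entropy with logit adjustment: add tau * log(prior) to the
    logits so that frequent (head) classes must win by a larger margin,
    while supervision comes from the refurbished soft targets."""
    adjusted = logits + tau * np.log(class_prior + 1e-12)
    log_probs = np.log(softmax(adjusted) + 1e-12)
    return -(soft_targets * log_probs).sum(axis=-1).mean()

# Toy usage: 4 samples, 3 classes, with refurbished one-hot targets.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
refurbished = np.eye(3)[[0, 0, 1, 2]]  # corrected labels as soft targets
prior = estimate_prior(refurbished)
loss = logit_adjusted_ce(logits, refurbished, prior)
```

In this sketch, the prior would be re-estimated as label refurbishment updates the targets over training, so the rebalancing term tracks the corrected rather than the observed class frequencies.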