GIO: Gradient Information Optimization for Training Dataset Selection

arXiv:2306.11670
11 citations
Ranked #1269 of 2297 papers in ICLR 2024

Abstract

It is often advantageous to train models on a subset of the available training examples, because the examples are of variable quality or because one would like to train with fewer examples without sacrificing performance. We present Gradient Information Optimization (GIO), a scalable, task-agnostic approach to this data selection problem that requires only a small set of (unlabeled) examples representing a target distribution. GIO begins from a natural, information-theoretic objective that is intractable in practice. Our contribution is in showing that it can be made highly scalable through a simple relaxation of the objective and a highly efficient implementation. In experiments with machine translation, spelling correction, and image recognition, we show that GIO delivers outstanding results with very small training sets. These findings are robust to different representation models and hyperparameters for GIO itself. GIO is task- and domain-agnostic and can be applied out-of-the-box to new datasets and domains. We open source a pip-installable implementation of the algorithm as "pip install grad-info-opt".
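The abstract describes selecting a training subset so that its distribution matches a small target set under an information-theoretic objective. The sketch below is only a rough illustration of that idea: it greedily grows a subset of a candidate pool so that a k-nearest-neighbor estimate of KL(target || subset) stays small. It is not the paper's gradient-based procedure and not the grad-info-opt API; the function names (knn_kl_divergence, greedy_select) and the synthetic data are assumptions made for the example.

```python
import numpy as np

def knn_kl_divergence(target, selected, k=1, eps=1e-10):
    # k-NN estimator of KL(target || selected) over embedding vectors
    # (Wang-Kulkarni-Verdu style estimator; k=1 by default).
    n, d = target.shape
    m = selected.shape[0]
    # Distance from each target point to its k-th nearest *other* target point.
    tt = np.linalg.norm(target[:, None, :] - target[None, :, :], axis=-1)
    np.fill_diagonal(tt, np.inf)
    rho = np.sort(tt, axis=1)[:, k - 1] + eps
    # Distance from each target point to its k-th nearest selected point.
    ts = np.linalg.norm(target[:, None, :] - selected[None, :, :], axis=-1)
    nu = np.sort(ts, axis=1)[:, k - 1] + eps
    return (d / n) * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))

def greedy_select(target, pool, budget, seed_size=5):
    # Naive greedy variant: at each step add the candidate whose inclusion
    # gives the lowest estimated KL(target || subset). Illustrative only;
    # GIO itself avoids this exhaustive scan via gradient steps in embedding space.
    rng = np.random.default_rng(0)
    chosen = list(rng.choice(len(pool), size=seed_size, replace=False))
    remaining = [i for i in range(len(pool)) if i not in chosen]
    while len(chosen) < budget and remaining:
        scores = [knn_kl_divergence(target, pool[chosen + [i]]) for i in remaining]
        best = remaining[int(np.argmin(scores))]
        chosen.append(best)
        remaining.remove(best)
    return chosen

if __name__ == "__main__":
    # Synthetic check: half the pool matches the target distribution, half does not.
    rng = np.random.default_rng(1)
    target = rng.normal(loc=0.0, scale=1.0, size=(50, 4))
    pool = np.vstack([
        rng.normal(loc=0.0, scale=1.0, size=(200, 4)),   # in-distribution candidates
        rng.normal(loc=5.0, scale=1.0, size=(200, 4)),   # off-distribution candidates
    ])
    subset = greedy_select(target, pool, budget=30)
    print(f"selected {len(subset)} examples; "
          f"{sum(i < 200 for i in subset)} come from the in-distribution half")
```

On synthetic data like this, the greedy KL criterion should favor the in-distribution half of the pool, which is the qualitative behavior the data selection objective is after; the released grad-info-opt package implements the scalable, gradient-based version described in the paper.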

Citation History

Jan 28, 2026: 0 citations
Feb 13, 2026: 11 citations