Antidistillation Sampling

arXiv:2504.13146 · 10 citations · #627 of 5858 papers in NeurIPS 2025

Abstract

Frontier models that generate extended reasoning traces inadvertently produce token sequences that can facilitate model distillation. Recognizing this vulnerability, model owners may seek sampling strategies that limit the effectiveness of distillation without compromising model performance. Antidistillation sampling provides exactly this capability. By strategically modifying a model's next-token probability distribution, antidistillation sampling poisons reasoning traces, rendering them significantly less effective for distillation while preserving the model's utility.
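The abstract describes perturbing the model's next-token probability distribution at sampling time. As a minimal sketch of that general idea (not the paper's actual method, which derives its adjustment from a distillation objective), one can subtract a hypothetical per-token "distillation usefulness" penalty from the logits before sampling; the names `antidistillation_sample`, `penalty`, and `eps` are illustrative assumptions:

```python
import numpy as np

def antidistillation_sample(logits, penalty, eps=0.5, rng=None):
    """Sample a token from a perturbed next-token distribution.

    `penalty` is a hypothetical per-token score estimating how useful
    each token would be to a distillation student; subtracting eps * penalty
    steers sampling away from distillation-friendly tokens, while a small
    eps keeps the perturbed distribution close to the original one,
    preserving most of the teacher's utility.
    """
    rng = rng or np.random.default_rng()
    adjusted = logits - eps * penalty       # poison the distribution
    adjusted = adjusted - adjusted.max()    # shift for numerical stability
    probs = np.exp(adjusted)
    probs /= probs.sum()
    token = rng.choice(len(probs), p=probs)
    return token, probs

# Toy usage: three-token vocabulary, middle token flagged as
# most useful for distillation and therefore down-weighted.
logits = np.array([1.0, 2.0, 0.5])
penalty = np.array([0.0, 3.0, 0.0])
token, probs = antidistillation_sample(logits, penalty, eps=1.0)
```

With `eps=0` this reduces to ordinary softmax sampling; larger `eps` trades more poisoning strength for a larger shift away from the model's original distribution.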

Citation History

Jan 25, 2026: 0
Jan 26, 2026: 0
Jan 28, 2026: 0
Feb 13, 2026: 10