Leave-one-out Distinguishability in Machine Learning

arXiv:2309.17310 · 17 citations · #1029 of 2297 papers in ICLR 2024 · 4 top authors · 4 data points

Abstract

We introduce an analytical framework to quantify the changes in a machine learning algorithm's output distribution following the inclusion of a few data points in its training set, a notion we define as leave-one-out distinguishability (LOOD). This is key to measuring data memorization and information leakage, as well as the influence of training data points in machine learning. We illustrate how our method broadens and refines existing empirical measures of memorization and privacy risks associated with training data. We use Gaussian processes to model the randomness of machine learning algorithms, and validate LOOD with extensive empirical analysis of leakage using membership inference attacks. Our analytical framework enables us to investigate the causes of leakage and where the leakage is high. For example, we analyze the influence of activation functions on data memorization. Additionally, our method allows us to identify queries that disclose the most information about the training data in the leave-one-out setting. We illustrate how optimal queries can be used for accurate reconstruction of training data.
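
The sketch below illustrates one plausible reading of the abstract, not the paper's exact formulation: the algorithm's output randomness is modeled by a Gaussian process posterior, and leave-one-out distinguishability at a set of query points is taken to be the KL divergence between the posterior predictive distributions computed with and without a single training point. The RBF kernel, the noise level, and the choice of KL as the divergence are illustrative assumptions.

```python
# Hedged sketch of leave-one-out distinguishability (LOOD) under a GP model.
# Assumption: LOOD(query) = KL( posterior with point || posterior without point ).
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_query, noise=1e-2, lengthscale=1.0):
    """Posterior predictive mean and covariance of a zero-mean GP at X_query."""
    K = rbf_kernel(X_train, X_train, lengthscale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_query, X_train, lengthscale)
    Kss = rbf_kernel(X_query, X_query, lengthscale)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ y_train
    cov = Kss - Ks @ K_inv @ Ks.T
    return mean, cov + 1e-8 * np.eye(len(X_query))  # jitter for numerical stability

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """KL( N(mu_p, cov_p) || N(mu_q, cov_q) ) for multivariate Gaussians."""
    d = len(mu_p)
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(cov_q_inv @ cov_p) + diff @ cov_q_inv @ diff
                  - d + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

def lood(X_train, y_train, idx, X_query):
    """Distinguishability at X_query from leaving out training point `idx`."""
    mask = np.arange(len(X_train)) != idx
    mu_in, cov_in = gp_posterior(X_train, y_train, X_query)                # point included
    mu_out, cov_out = gp_posterior(X_train[mask], y_train[mask], X_query)  # point left out
    return gaussian_kl(mu_in, cov_in, mu_out, cov_out)

# Toy usage: distinguishability is largest when querying near the left-out point.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
print(lood(X, y, idx=0, X_query=X[:1]))               # query at the left-out point
print(lood(X, y, idx=0, X_query=np.array([[3.0]])))   # query far from it
```

In this toy setting, maximizing the returned divergence over candidate query points corresponds to the abstract's idea of identifying the queries that disclose the most information about a given training point.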

Citation History

Jan 28, 2026: 0 citations
Feb 13, 2026: 17 citations (+17)