Quantifying Statistical Significance of Deep Nearest Neighbor Anomaly Detection via Selective Inference

1 citation · Ranked #2497 of 5,858 papers in NeurIPS 2025 · 6 authors · 6 data points

Abstract

In real-world applications, anomaly detection (AD) often operates without access to anomalous data, necessitating semi-supervised methods that rely solely on normal data. Among these methods, deep k-nearest neighbor (deep kNN) AD stands out for its interpretability and flexibility, leveraging distance-based scoring in deep latent spaces. Despite its strong performance, deep kNN lacks a mechanism to quantify uncertainty, an essential feature for critical applications such as industrial inspection. To address this limitation, we propose a statistical framework that quantifies the significance of detected anomalies in the form of p-values, thereby enabling control over the false positive rate at a user-specified significance level (e.g., 0.05). A central challenge lies in managing selection bias, which we tackle using Selective Inference, a principled method for conducting inference conditioned on data-driven selections. We evaluate our method on diverse datasets and demonstrate that it provides reliable AD well-suited for industrial use cases.
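The deep kNN scoring step described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it scores a test embedding by its mean distance to the k nearest normal embeddings and calibrates a naive empirical p-value against held-out normal scores. The synthetic embedding arrays, the function names (knn_anomaly_score, naive_p_value), and the calibration scheme are assumptions for illustration only; the paper's actual contribution is a selective-inference p-value that corrects for the selection bias this naive calibration ignores.

```python
# Minimal sketch of deep kNN anomaly scoring with a naive p-value calibration.
# Not the paper's method: the selective-inference correction for the
# data-driven selection of detected anomalies is NOT implemented here.
import numpy as np

def knn_anomaly_score(z_test, z_normal, k=5):
    """Mean Euclidean distance from z_test to its k nearest normal embeddings."""
    dists = np.linalg.norm(z_normal - z_test, axis=1)
    return np.sort(dists)[:k].mean()

def naive_p_value(score, calibration_scores):
    """Fraction of held-out normal scores at least as extreme as `score`
    (conformal-style empirical p-value; ignores selection bias)."""
    n = len(calibration_scores)
    return (1 + np.sum(calibration_scores >= score)) / (n + 1)

# Synthetic latent embeddings standing in for the output of a deep feature extractor.
rng = np.random.default_rng(0)
z_normal = rng.normal(size=(500, 32))   # embeddings of normal training data
z_calib = rng.normal(size=(200, 32))    # held-out normal embeddings for calibration
z_query = rng.normal(size=32) + 2.0     # a shifted (anomalous-looking) test point

calib_scores = np.array([knn_anomaly_score(z, z_normal) for z in z_calib])
score = knn_anomaly_score(z_query, z_normal)
p = naive_p_value(score, calib_scores)
print(f"anomaly score = {score:.3f}, naive p-value = {p:.3f}, reject at 0.05: {p < 0.05}")
```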

Citation History

Jan 26, 2026: 1
Jan 27, 2026: 1
Feb 3, 2026: 1
Feb 13, 2026: 1
Feb 13, 2026: 1
Feb 13, 2026: 1