Nicolas Papernot
21 papers · 743 total citations

Papers (21)
Data-Free Model Extraction (CVPR 2021, arXiv) - 222 citations
The Privacy Onion Effect: Memorization is Relative (NeurIPS 2022, arXiv) - 141 citations
Manipulating SGD with Data Ordering Attacks (NeurIPS 2021, arXiv) - 111 citations
Flocks of Stochastic Parrots: Differentially Private Prompt Learning for Large Language Models (NeurIPS 2023, arXiv) - 96 citations
Dataset Inference for Self-Supervised Models (NeurIPS 2022, arXiv) - 44 citations
Architectural Backdoors in Neural Networks (CVPR 2023, arXiv) - 34 citations
On the Limitations of Stochastic Pre-processing Defenses (NeurIPS 2022, arXiv) - 33 citations
Have it your way: Individualized Privacy Assignment for DP-SGD (NeurIPS 2023, arXiv) - 29 citations
Breach By A Thousand Leaks: Unsafe Information Leakage in 'Safe' AI Responses (ICLR 2025, arXiv) - 10 citations
Auditing Private Prediction (ICML 2024, arXiv) - 9 citations
Robust and Actively Secure Serverless Collaborative Learning (NeurIPS 2023, arXiv) - 5 citations
Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention (ICML 2025, arXiv) - 2 citations
The Fundamental Limits of Least-Privilege Learning (ICML 2024, arXiv) - 2 citations
Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research (NeurIPS 2025, arXiv) - 2 citations
Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings (ICML 2025, arXiv) - 1 citation
What Does It Take to Build a Performant Selective Classifier? (NeurIPS 2025, arXiv) - 1 citation
Leveraging Per-Instance Privacy for Machine Unlearning (ICML 2025, arXiv) - 1 citation
Washing The Unwashable: On The (Im)possibility of Fairwashing Detection (NeurIPS 2022) - 0 citations
In Differential Privacy, There is Truth: on Vote-Histogram Leakage in Ensemble Private Learning (NeurIPS 2022) - 0 citations
Training Private Models That Know What They Don't Know (NeurIPS 2023) - 0 citations
Position: Fundamental Limitations of LLM Censorship Necessitate New Approaches (ICML 2024) - 0 citations