Papers by Changsheng Wang
3 papers found
Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning
Changsheng Wang, Yihua Zhang, Jinghan Jia et al.
ICML 2025 · arXiv:2506.01339
11 citations
LLM Unlearning Reveals a Stronger-Than-Expected Coreset Effect in Current Benchmarks
Soumyadeep Pal, Changsheng Wang, James Diffenderfer et al.
COLM 2025 · arXiv:2504.10185
10 citations
The Fragile Truth of Saliency: Improving LLM Input Attribution via Attention Bias Optimization
Yihua Zhang, Changsheng Wang, Yiwei Chen et al.
NeurIPS 2025 · Spotlight