Papers by Mingye Zhu
3 papers found
Leveraging Importance Sampling to Detach Alignment Modules from Large Language Models
Yi Liu, Dianqing Liu, Mingye Zhu et al.
NeurIPS 2025 · arXiv:2505.19700 · 1 citation

Leveraging Robust Optimization for LLM Alignment under Distribution Shifts
Mingye Zhu, Yi Liu, Zheren Fu et al.
NeurIPS 2025 · arXiv:2504.05831 · 1 citation

On-the-fly Preference Alignment via Principle-Guided Decoding
Mingye Zhu, Yi Liu, Lei Zhang et al.
ICLR 2025 · arXiv:2502.14204 · 5 citations