"low-rank approximation" Papers

17 papers found

Demystifying Language Model Forgetting with Low-rank Example Associations

Xisen Jin, Xiang Ren

NeurIPS 2025 · arXiv:2406.14026
6 citations

Efficient Parametric SVD of Koopman Operator for Stochastic Dynamical Systems

Minchan Jeong, Jongha (Jon) Ryu, Se-Young Yun et al.

NeurIPS 2025 · arXiv:2507.07222
3 citations

HOT: Hadamard-based Optimized Training

Seonggon Kim, Juncheol Shin, Seung-taek Woo et al.

CVPR 2025 · arXiv:2503.21261

QERA: an Analytical Framework for Quantization Error Reconstruction

Cheng Zhang, Jeffrey T. H. Wong, Can Xiao et al.

ICLR 2025 · arXiv:2410.06040
11 citations

QSVD: Efficient Low-rank Approximation for Unified Query-Key-Value Weight Compression in Low-Precision Vision-Language Models

Yutong Wang, Haiyu Wang, Sai Qian Zhang

NeurIPS 2025 (spotlight) · arXiv:2510.16292
1 citation

Spectral Perturbation Bounds for Low-Rank Approximation with Applications to Privacy

Phuc Tran, Van Vu, Nisheeth K. Vishnoi

NeurIPS 2025 (oral) · arXiv:2510.25670
2 citations

SVDQuant: Absorbing Outliers by Low-Rank Component for 4-Bit Diffusion Models

Muyang Li, Yujun Lin, Zhekai Zhang et al.

ICLR 2025 · arXiv:2411.05007
98 citations

Unleashing High-Quality Image Generation in Diffusion Sampling Using Second-Order Levenberg-Marquardt-Langevin

Fangyikang Wang, Hubery Yin, Lei Qian et al.

ICCV 2025 · arXiv:2505.24222
3 citations

Debiased Distribution Compression

Lingxiao Li, Raaz Dwivedi, Lester Mackey

ICML 2024 · arXiv:2404.12290
4 citations

Fast White-Box Adversarial Streaming Without a Random Oracle

Ying Feng, Aayush Jain, David Woodruff

ICML 2024 · arXiv:2406.06808
3 citations

LoRAP: Transformer Sub-Layers Deserve Differentiated Structured Compression for Large Language Models

Guangyan Li, Yongqiang Tang, Wensheng Zhang

ICML 2024 · arXiv:2404.09695
8 citations

LQER: Low-Rank Quantization Error Reconstruction for LLMs

Cheng Zhang, Jianyi Cheng, George Constantinides et al.

ICML 2024 · arXiv:2402.02446
27 citations

LRANet: Towards Accurate and Efficient Scene Text Detection with Low-Rank Approximation

Yuchen Su, Zhineng Chen, Zhiwen Shao et al.

AAAI 2024 · arXiv:2306.15142
17 citations

On Computational Limits of Modern Hopfield Models: A Fine-Grained Complexity Analysis

Jerry Yao-Chieh Hu, Thomas Lin, Zhao Song et al.

ICML 2024 · arXiv:2402.04520
46 citations

Operator SVD with Neural Networks via Nested Low-Rank Approximation

Jongha (Jon) Ryu, Xiangxiang Xu, Hasan Sabri Melihcan Erol et al.

ICML 2024 · arXiv:2402.03655
9 citations

PELA: Learning Parameter-Efficient Models with Low-Rank Approximation

Yangyang Guo, Guangzhi Wang, Mohan Kankanhalli

CVPR 2024 · arXiv:2310.10700
10 citations

Reshape and Adapt for Output Quantization (RAOQ): Quantization-aware Training for In-memory Computing Systems

Bonan Zhang, Chia-Yu Chen, Naveen Verma

ICML 2024
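
Most of the entries above build on, compress with, or analyze variants of the same primitive: replacing a matrix by a nearby low-rank one. For reference, here is a minimal NumPy sketch of the classical truncated-SVD construction, which gives the best rank-r approximation in Frobenius and spectral norm (Eckart-Young). It is illustrative only and not the specific method of any paper listed; the function name and random test matrix are placeholders.

```python
import numpy as np

def truncated_svd_approx(A: np.ndarray, r: int) -> np.ndarray:
    """Best rank-r approximation of A (Eckart-Young) via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep only the r largest singular values and their singular vectors.
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))
A_r = truncated_svd_approx(A, r=10)
print(np.linalg.matrix_rank(A_r))        # 10
print(np.linalg.norm(A - A_r, "fro"))    # residual approximation error
```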