Paper "parameter-efficient fine-tuning" Papers
14 papers found
A Wander Through the Multimodal Landscape: Efficient Transfer Learning via Low-rank Sequence Multimodal Adapter
Zirun Guo, Xize Cheng, Yangyang Wu et al.
AAAI 2025 · arXiv:2412.08979 · 3 citations

CITI: Enhancing Tool Utilizing Ability in Large Language Models Without Sacrificing General Performance
Yupu Hao, Pengfei Cao, Zhuoran Jin et al.
AAAI 2025 · arXiv:2409.13202 · 5 citations

CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning
Quanmin Wei, Penglin Dai, Wei Li et al.
AAAI 2025 · arXiv:2502.10705 · 5 citations

From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers
Bharat Runwal, Tejaswini Pedapati, Pin-Yu Chen
AAAI 2025 · arXiv:2402.01911 · 8 citations

JEN-1 DreamStyler: Customized Musical Concept Learning via Pivotal Parameters Tuning
Boyu Chen, Peike Li, Yao Yao et al.
AAAI 2025 · arXiv:2406.12292 · 3 citations

One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models
Yutao Zhu, Zhaoheng Huang, Zhicheng Dou et al.
AAAI 2025 · arXiv:2405.19670 · 9 citations

Promptable Anomaly Segmentation with SAM Through Self-Perception Tuning
Hui-Yue Yang, Hui Chen, Ao Wang et al.
AAAI 2025 · arXiv:2411.17217 · 9 citations

RILQ: Rank-Insensitive LoRA-Based Quantization Error Compensation for Boosting 2-Bit Large Language Model Accuracy
Geonho Lee, Janghwan Lee, Sukjin Hong et al.
AAAI 2025 · arXiv:2412.01129 · 5 citations

A Multimodal, Multi-Task Adapting Framework for Video Action Recognition
Mengmeng Wang, Jiazheng Xing, Boyuan Jiang et al.
AAAI 2024 · arXiv:2401.11649 · 9 citations

ArtBank: Artistic Style Transfer with Pre-trained Diffusion Model and Implicit Style Prompt Bank
Zhanjie Zhang, Quanwei Zhang, Wei Xing et al.
AAAI 2024 · arXiv:2312.06135 · 49 citations

G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks
Anchun Gui, Jinqiang Ye, Han Xiao
AAAI 2024 · arXiv:2305.10329 · 31 citations

OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models
Changhun Lee, Jungyu Jin, Taesu Kim et al.
AAAI 2024 · arXiv:2306.02272 · 105 citations

Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models
Yiwen Tang, Ray Zhang, Zoey Guo et al.
AAAI 2024 · arXiv:2310.03059 · 34 citations

SAM-PARSER: Fine-Tuning SAM Efficiently by Parameter Space Reconstruction
Zelin Peng, Zhengqin Xu, Zhilin Zeng et al.
AAAI 2024 · arXiv:2308.14604 · 37 citations