Spotlight Papers: "vision transformers"
7 papers found
Polyline Path Masked Attention for Vision Transformer
Zhongchen Zhao, Chaodong Xiao, Hui Lin et al.
NeurIPS 2025 · Spotlight · arXiv:2506.15940
Vision Transformers Don't Need Trained Registers
Nicholas Jiang, Amil Dravid, Alexei Efros et al.
NeurIPS 2025 · Spotlight · arXiv:2506.08010
15 citations
Vision Transformers with Self-Distilled Registers
Zipeng Yan, Yinjie Chen, Chong Zhou et al.
NeurIPS 2025 · Spotlight · arXiv:2505.21501
4 citations
ERQ: Error Reduction for Post-Training Quantization of Vision Transformers
Yunshan Zhong, Jiawei Hu, You Huang et al.
ICML 2024 · Spotlight
Improving Interpretation Faithfulness for Vision Transformers
Lijie Hu, Yixin Liu, Ninghao Liu et al.
ICML 2024 · Spotlight · arXiv:2311.17983
12 citations
One Meta-tuned Transformer is What You Need for Few-shot Learning
Xu Yang, Huaxiu Yao, Ying Wei
ICML 2024 · Spotlight
Sample-specific Masks for Visual Reprogramming-based Prompting
Chengyi Cai, Zesheng Ye, Lei Feng et al.
ICML 2024 · Spotlight · arXiv:2406.03150
13 citations