"attention mechanisms" Papers

26 papers found

Balanced Token Pruning: Accelerating Vision Language Models Beyond Local Optimization

Kaiyuan Li, Xiaoyue Chen, Chen Gao et al.

NeurIPS 2025 · arXiv:2505.22038
4 citations

Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs

Qizhe Zhang, Aosong Cheng, Ming Lu et al.

ICCV 2025 · arXiv:2412.01818
45 citations

DISCO: Disentangled Communication Steering for Large Language Models

Max Torop, Aria Masoomi, Masih Eskandar et al.

NeurIPS 2025 · arXiv:2509.16820
1 citation

DriveGazen: Event-Based Driving Status Recognition Using Conventional Camera

Xiaoyin Yang, Xin Yang

AAAI 2025 · arXiv:2412.11753

Fast attention mechanisms: a tale of parallelism

Jingwen Liu, Hantao Yu, Clayton Sanford et al.

NeurIPS 2025 · arXiv:2509.09001

Making Text Embedders Few-Shot Learners

Chaofan Li, Minghao Qin, Shitao Xiao et al.

ICLR 2025 · arXiv:2409.15700
89 citations

Mamba as a Bridge: Where Vision Foundation Models Meet Vision Language Models for Domain-Generalized Semantic Segmentation

Xin Zhang, Robby T. Tan

CVPR 2025 (highlight) · arXiv:2504.03193
20 citations

Neural Fractional Attention Differential Equations

Qiyu Kang, Wenjun Cui, Xuhao Li et al.

NeurIPS 2025 (oral)

On the Role of Attention Heads in Large Language Model Safety

Zhenhong Zhou, Haiyang Yu, Xinghua Zhang et al.

ICLR 2025 · arXiv:2410.13708
43 citations

Rope to Nope and Back Again: A New Hybrid Attention Strategy

Bowen Yang, Bharat Venkitesh, Dwaraknath Gnaneshwar Talupuru et al.

NeurIPS 2025 · arXiv:2501.18795
20 citations

Scale-invariant attention

Ben Anson, Xi Wang, Laurence Aitchison

NeurIPS 2025 · arXiv:2505.17083
2 citations

Seeing the Trees for the Forest: Rethinking Weakly-Supervised Medical Visual Grounding

Huy Ta, Duy Anh Huynh, Yutong Xie et al.

ICCV 2025 (highlight) · arXiv:2505.15123
2 citations

Where, What, Why: Towards Explainable Driver Attention Prediction

Yuchen Zhou, Jiayu Tang, Xiaoyan Xiao et al.

ICCV 2025 (highlight) · arXiv:2506.23088
6 citations

Why Does the Effective Context Length of LLMs Fall Short?

Chenxin An, Jun Zhang, Ming Zhong et al.

ICLR 2025 · arXiv:2410.18745
42 citations

ZeroS: Zero-Sum Linear Attention for Efficient Transformers

Jiecheng Lu, Xu Han, Yan Sun et al.

NeurIPS 2025 (spotlight) · arXiv:2602.05230

Active Object Detection with Knowledge Aggregation and Distillation from Large Models

Dejie Yang, Yang Liu

CVPR 2024 · arXiv:2405.12509
9 citations

Attention-Challenging Multiple Instance Learning for Whole Slide Image Classification

Yunlong Zhang, Honglin Li, Yuxuan Sun et al.

ECCV 2024 · arXiv:2311.07125
66 citations

BiSHop: Bi-Directional Cellular Learning for Tabular Data with Generalized Sparse Modern Hopfield Model

Chenwei Xu, Yu-Chao Huang, Jerry Yao-Chieh Hu et al.

ICML 2024 · arXiv:2404.03830
26 citations

FreeDiff: Progressive Frequency Truncation for Image Editing with Diffusion Models

Wei Wu, Qingnan Fan, Shuai Qin et al.

ECCV 2024 · arXiv:2404.11895
11 citations

Improving Interpretation Faithfulness for Vision Transformers

Lijie Hu, Yixin Liu, Ninghao Liu et al.

ICML 2024 (spotlight) · arXiv:2311.17983
12 citations

Multi-Modal Latent Space Learning for Chain-of-Thought Reasoning in Language Models

Liqi He, Zuchao Li, Xiantao Cai et al.

AAAI 2024 · arXiv:2312.08762
37 citations

Pseudo-Label Calibration Semi-supervised Multi-Modal Entity Alignment

Luyao Wang, Pengnian Qi, Xigang Bao et al.

AAAI 2024 · arXiv:2403.01203
18 citations

RealViformer: Investigating Attention for Real-World Video Super-Resolution

Yuehan Zhang, Angela Yao

ECCV 2024 · arXiv:2407.13987
23 citations

Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer

Junyi Wu, Bin Duan, Weitai Kang et al.

CVPR 2024 · arXiv:2403.14552
16 citations

Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention

Zhen Qin, Weigao Sun, Dong Li et al.

ICML 2024 · arXiv:2405.17381
24 citations

VisFocus: Prompt-Guided Vision Encoders for OCR-Free Dense Document Understanding

Ofir Abramovich, Niv Nayman, Sharon Fogel et al.

ECCV 2024 · arXiv:2407.12594
6 citations