"self-attention mechanisms" Papers
9 papers found
Context-Aware Regularization with Markovian Integration for Attention-Based Nucleotide Analysis
Mohammad Saleh Refahi, Mahdi Abavisani, Bahrad Sokhansanj et al.
NeurIPS 2025 · arXiv:2507.09378
End-to-End HOI Reconstruction Transformer with Graph-based Encoding
Zhenrong Wang, Qi Zheng, Sihan Ma et al.
CVPR 2025 (highlight) · arXiv:2503.06012
1 citation
Selective induction Heads: How Transformers Select Causal Structures in Context
Francesco D'Angelo, Francesco Croce, Nicolas Flammarion
ICLR 2025 · arXiv:2509.08184
6 citations
SimpleTM: A Simple Baseline for Multivariate Time Series Forecasting
Hui Chen, Viet Luong, Lopamudra Mukherjee et al.
ICLR 2025 (oral)
14 citations
StyleKeeper: Prevent Content Leakage using Negative Visual Query Guidance
Jaeseok Jeong, Junho Kim, Youngjung Uh et al.
ICCV 2025 · arXiv:2510.06827
2 citations
Dissecting Multimodality in VideoQA Transformer Models by Impairing Modality Fusion
Ishaan Rawal, Alexander Matyasko, Shantanu Jaiswal et al.
ICML 2024 · arXiv:2306.08889
8 citations
Object-Oriented Anchoring and Modal Alignment in Multimodal Learning
Shibin Mei, Bingbing Ni, Hang Wang et al.
ECCV 2024
1 citation
PolySketchFormer: Fast Transformers via Sketching Polynomial Kernels
Praneeth Kacham, Vahab Mirrokni, Peilin Zhong
ICML 2024 · arXiv:2310.01655
23 citations
Removing Rows and Columns of Tokens in Vision Transformer enables Faster Dense Prediction without Retraining
Diwei Su, Cheng Fei, Jianxu Luo
ECCV 2024
2 citations