All Papers

34,598 papers found • Page 33 of 692

A Training-Free Sub-quadratic Cost Transformer Model Serving Framework with Hierarchically Pruned Attention

Heejun Lee, Geon Park, Youngwan Lee et al.

ICLR 2025 • arXiv:2406.09827 • 9 citations

A Training-free Synthetic Data Selection Method for Semantic Segmentation

Hao Tang, Siyue Yu, Jian Pang et al.

AAAI 2025 • paper • arXiv:2501.15201 • 5 citations

A Transfer Attack to Image Watermarks

Yuepeng Hu, Zhengyuan Jiang, Moyang Guo et al.

ICLR 2025 • arXiv:2403.15365 • 21 citations

A transfer learning framework for weak to strong generalization

Seamus Somerstep, Felipe Maia Polo, Moulinath Banerjee et al.

ICLR 2025 • 8 citations

A TRIANGLE Enables Multimodal Alignment Beyond Cosine Similarity

Giordano Cicchetti, Eleonora Grassucci, Danilo Comminiello

NEURIPS 2025 • arXiv:2509.24734 • 2 citations

A Trichotomy for List Transductive Online Learning

Steve Hanneke, Amirreza Shaeiri

ICML 2025

A Truncated Newton Method for Optimal Transport

Mete Kemertas, Amir-massoud Farahmand, Allan Jepson

ICLR 2025 • arXiv:2504.02067 • 3 citations

A Trusted Lesion-assessment Network for Interpretable Diagnosis of Coronary Artery Disease in Coronary CT Angiography

Xinghua Ma, Xinyan Fang, Mingye Zou et al.

AAAI 2025 • paper

AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples

Antonio Emanuele Cinà, Jérôme Rony, Maura Pintor et al.

AAAI 2025 • paper • arXiv:2404.19460 • 18 citations

Attack by Yourself: Effective and Unnoticeable Multi-Category Graph Backdoor Attacks with Subgraph Triggers Pool

Jiangtong Li, Dongyi Liu, Kun Zhu et al.

NEURIPS 2025 • arXiv:2412.17213 • 2 citations

Attack-inspired Calibration Loss for Calibrating Crack Recognition

Zhuangzhuang Chen, Qiangyu Chen, Jiahao Zhang et al.

AAAI 2025 • paper • 1 citation

Attack-in-the-Chain: Bootstrapping Large Language Models for Attacks Against Black-Box Neural Ranking Models

Yu-An Liu, Ruqing Zhang, Jiafeng Guo et al.

AAAI 2025 • paper • arXiv:2412.18770 • 8 citations

Attack on Prompt: Backdoor Attack in Prompt-Based Continual Learning

Trang Nguyen, Anh Tran, Nhat Ho

AAAI 2025 • paper • arXiv:2406.19753 • 2 citations

Attack via Overfitting: 10-shot Benign Fine-tuning to Jailbreak LLMs

Zhixin Xie, Xurui Song, Jun Luo

NEURIPS 2025 • arXiv:2510.02833 • 2 citations

Att-Adapter: A Robust and Precise Domain-Specific Multi-Attributes T2I Diffusion Adapter via Conditional Variational Autoencoder

Wonwoong Cho, Yan-Ying Chen, Matthew Klenk et al.

ICCV 2025 • highlight • arXiv:2503.11937

Attend and Enrich: Enhanced Visual Prompt for Zero-Shot Learning

Man Liu, Huihui Bai, Feng Li et al.

AAAI 2025 • paper • arXiv:2406.03032 • 1 citation

Attend to Not Attended: Structure-then-Detail Token Merging for Post-training DiT Acceleration

Haipeng Fang, Sheng Tang, Juan Cao et al.

CVPR 2025 • arXiv:2505.11707 • 6 citations

Attention as a Hypernetwork

Simon Schug, Seijin Kobayashi, Yassir Akram et al.

ICLR 2025 • arXiv:2406.05816 • 10 citations

Attention (as Discrete-Time Markov) Chains

Yotam Erel, Olaf Dünkel, Rishabh Dabral et al.

NEURIPS 2025 • arXiv:2507.17657 • 1 citation

Attention-based clustering

Rodrigo Maulen Soto, Pierre Marion, Claire Boyer

NEURIPS 2025 • arXiv:2505.13112 • 1 citation

Attention Bootstrapping for Multi-Modal Test-Time Adaptation

Yusheng Zhao, Junyu Luo, Xiao Luo et al.

AAAI 2025 • paper • arXiv:2503.02221 • 2 citations

Attention Distillation: A Unified Approach to Visual Characteristics Transfer

Yang Zhou, Xu Gao, Zichong Chen et al.

CVPR 2025 • arXiv:2502.20235 • 25 citations

Attention-Driven GUI Grounding: Leveraging Pretrained Multimodal Large Language Models Without Fine-Tuning

Hai-Ming Xu, Qi Chen, Lei Wang et al.

AAAI 2025 • paper • arXiv:2412.10840 • 11 citations

Attention-Imperceptible Backdoor Attacks on Vision Transformers

Zhishen Wang, Rui Wang, Lihua Jing

AAAI 2025 • paper

Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers

Shijie Chen, Bernal Jimenez Gutierrez, Yu Su

ICLR 2025 • arXiv:2410.02642 • 25 citations

Attention IoU: Examining Biases in CelebA using Attention Maps

Aaron Serianni, Tyler Zhu, Olga Russakovsky et al.

CVPR 2025 • arXiv:2503.19846 • 1 citation

Attention layers provably solve single-location regression

Pierre Marion, Raphaël Berthier, Gérard Biau et al.

ICLR 2025 • arXiv:2410.01537 • 11 citations

Attention-Level Speculation

Jack Cai, Ammar Vora, Randolph Zhang et al.

ICML 2025

Attention Mechanism, Max-Affine Partition, and Universal Approximation

Hude Liu, Jerry Yao-Chieh Hu, Zhao Song et al.

NEURIPS 2025 • arXiv:2504.19901 • 6 citations

Attention Mechanisms Perspective: Exploring LLM Processing of Graph-Structured Data

Guan Zhong, Likang Wu, Hongke Zhao et al.

ICML 2025 • arXiv:2505.02130 • 5 citations

Attention-Only Transformers via Unrolled Subspace Denoising

Peng Wang, Yifu Lu, Yaodong Yu et al.

ICML 2025 • arXiv:2506.03790 • 4 citations

Attention on the Sphere

Boris Bonev, Max Rietmann, Andrea Paris et al.

NEURIPS 2025 • arXiv:2505.11157 • 1 citation

AttentionPredictor: Temporal Patterns Matter for KV Cache Compression

Qingyue Yang, Jie Wang, Xing Li et al.

NEURIPS 2025 • oral • arXiv:2502.04077 • 4 citations

Attention Sinks: A 'Catch, Tag, Release' Mechanism for Embeddings

Stephen Zhang, Mustafa Khan, Vardan Papyan

NEURIPS 2025 • arXiv:2502.00919 • 3 citations

Attention to Neural Plagiarism: Diffusion Models Can Plagiarize Your Copyrighted Images!

Zihang Zou, Boqing Gong, Liqiang Wang

ICCV 2025 • 1 citation

Attention to the Burstiness in Visual Prompt Tuning!

Yuzhu Wang, Manni Duan, Shu Kong

ICCV 2025

Attention to Trajectory: Trajectory-Aware Open-Vocabulary Tracking

Yunhao Li, Yifan Jiao, Dan Meng et al.

ICCV 2025 • arXiv:2503.08145

Attention with Markov: A Curious Case of Single-layer Transformers

Ashok Makkuva, Marco Bondaschi, Adway Girish et al.

ICLR 2025 • arXiv:2402.04161 • 39 citations

Attention with Trained Embeddings Provably Selects Important Tokens

Diyuan Wu, Aleksandr Shevchenko, Samet Oymak et al.

NEURIPS 2025 • arXiv:2505.17282

Attention! Your Vision Language Model Could Be Maliciously Manipulated

Xiaosen Wang, Shaokang Wang, Zhijin Ge et al.

NEURIPS 2025 • arXiv:2505.19911 • 3 citations

Attentive Eraser: Unleashing Diffusion Model’s Object Removal Potential via Self-Attention Redirection Guidance

Wenhao Sun, Xue-Mei Dong, Benlei Cui et al.

AAAI 2025 • paper • arXiv:2412.12974 • 36 citations

Attraction Diminishing and Distributing for Few-Shot Class-Incremental Learning

Li-Jun Zhao, Zhen-Duo Chen, Yongxin Wang et al.

CVPR 2025 • 1 citation

Attractive Metadata Attack: Inducing LLM Agents to Invoke Malicious Tools

Kanghua Mo, Li Hu, Yucheng Long et al.

NEURIPS 2025 • arXiv:2508.02110 • 6 citations

AttriBoT: A Bag of Tricks for Efficiently Approximating Leave-One-Out Context Attribution

Fengyuan Liu, Nikhil Kandpal, Colin Raffel

ICLR 2025 • arXiv:2411.15102 • 15 citations

Attribute-based Visual Reprogramming for Vision-Language Models

Chengyi Cai, Zesheng Ye, Lei Feng et al.

ICLR 2025 • arXiv:2501.13982 • 5 citations

Attribute-formed Class-specific Concept Space: Endowing Language Bottleneck Model with Better Interpretability and Scalability

Jianyang Zhang, Qianli Luo, Guowu Yang et al.

CVPR 2025 • arXiv:2503.20301

Attribute Inference Attacks for Federated Regression Tasks

Francesco Diana, Othmane Marfoq, Chuan Xu et al.

AAAI 2025 • paper • arXiv:2411.12697 • 1 citation

Attribute-Missing Multi-view Graph Clustering

Bowen Zhao, Qianqian Wang, Zhengming Ding et al.

CVPR 2025 • 1 citation

Attributes Shape the Embedding Space of Face Recognition Models

Pierrick Leroy, Antonio Mastropietro, Marco Nurisso et al.

ICML 2025 • arXiv:2507.11372

Attributing Culture-Conditioned Generations to Pretraining Corpora

Huihan Li, Arnav Goel, Keyu He et al.

ICLR 2025 • arXiv:2412.20760 • 7 citations