"self-attention layers" Papers
7 papers found
Efficient Attention-Sharing Information Distillation Transformer for Lightweight Single Image Super-Resolution
Karam Park, Jae Woong Soh, Nam Ik Cho
AAAI 2025 · arXiv:2501.15774 · 10 citations
ResCLIP: Residual Attention for Training-free Dense Vision-language Inference
Jinhong Deng, Yuhang Yang, Wen Li et al.
CVPR 2025 · arXiv:2411.15851 · 11 citations
Understanding and Enhancing Safety Mechanisms of LLMs via Safety-Specific Neuron
Yiran Zhao, Wenxuan Zhang, Yuxi Xie et al.
ICLR 2025 · 29 citations
Grounded Text-to-Image Synthesis with Attention Refocusing
Quynh Phung, Songwei Ge, Jia-Bin Huang
CVPR 2024 · arXiv:2306.05427 · 162 citations
Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer
Jiwoo Chung, Sangeek Hyun, Jae-Pil Heo
CVPR 2024 (highlight) · arXiv:2312.09008 · 225 citations
Transformers, parallel computation, and logarithmic depth
Clayton Sanford, Daniel Hsu, Matus Telgarsky
ICML 2024 (spotlight) · arXiv:2402.09268 · 60 citations
Tuning-Free Inversion-Enhanced Control for Consistent Image Editing
Xiaoyue Duan, Shuhao Cui, Guoliang Kang et al.
AAAI 2024 · arXiv:2312.14611 · 12 citations