All Papers

34,598 papers found • Page 56 of 692

Can LLMs Really Learn to Translate a Low-Resource Language from One Grammar Book?

Seth Aycock, David Stap, Di Wu et al.

ICLR 2025 • arXiv:2409.19151
20 citations

Can LLMs Reason Over Non-Text Modalities in a Training-Free Manner? A Case Study with In-Context Representation Learning

Tianle Zhang, Wanlong Fang, Jonathan Woo et al.

NEURIPS 2025 • arXiv:2509.17552
2 citations

Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?

Egor Zverev, Sahar Abdelnabi, Soroush Tabesh et al.

ICLR 2025 • arXiv:2403.06833
48 citations

Can LLMs Solve Longer Math Word Problems Better?

Xin Xu, Tong Xiao, Zitong Chao et al.

ICLR 2025 • arXiv:2405.14804
26 citations

Can LLMs Understand Time Series Anomalies?

Zihao Zhou, Rose Yu

ICLR 2025 • arXiv:2410.05440
35 citations

Can LVLMs Obtain a Driver’s License? A Benchmark Towards Reliable AGI for Autonomous Driving

Yuhang Lu, Yichen Yao, Jiadong Tu et al.

AAAI 2025 • arXiv:2409.02914
17 citations

Can Machines Understand Composition? Dataset and Benchmark for Photographic Image Composition Embedding and Understanding

Zhaoran Zhao, Peng Lu, Anran Zhang et al.

CVPR 2025 (highlight)

Can MLLMs Absorb Math Reasoning Abilities from LLMs as Free Lunch?

Yijie Hu, Zihao Zhou, Kaizhu Huang et al.

NEURIPS 2025 • arXiv:2510.14387
3 citations

Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark

Yunzhuo Hao, Jiawei Gu, Huichen Wang et al.

ICML 2025 (oral) • arXiv:2501.05444
100 citations

Can Multi-Modal LLMs Provide Live Step-by-Step Task Guidance?

Apratim Bhattacharyya, Bicheng Xu, Sanjay Haresh et al.

NEURIPS 2025 • arXiv:2511.21998

Can NeRFs "See" without Cameras?

Chaitanya Amballa, Yu-Lin Wei, Sattwik Basu et al.

NEURIPS 2025

Can Neural Networks Achieve Optimal Computational-statistical Tradeoff? An Analysis on Single-Index Model

Siyu Chen, Beining Wu, Miao Lu et al.

ICLR 2025
2 citations

Cannot See the Forest for the Trees: Invoking Heuristics and Biases to Elicit Irrational Choices of LLMs

Haoming Yang, Ke Ma, Xiaojun Jia et al.

ICML 2025 • arXiv:2505.02862
4 citations

Can One Modality Model Synergize Training of Other Modality Models?

Jae-Jun Lee, Sung Whan Yoon

ICLR 2025

Canonical Rank Adaptation: An Efficient Fine-Tuning Strategy for Vision Transformers

Lokesh Veeramacheneni, Moritz Wolter, Hilde Kuehne et al.

ICML 2025

CanonSwap: High-Fidelity and Consistent Video Face Swapping via Canonical Space Modulation

Xiangyang Luo, Ye Zhu, Yunfei Liu et al.

ICCV 2025 • arXiv:2507.02691
6 citations

Can People's Brains Synchronize during Remote AR Collaboration?

Jaehwan You, Myeongul Jung, Kwanguk Kim

ISMAR 2025
2 citations

Can Performant LLMs Be Ethical? Quantifying the Impact of Web Crawling Opt-Outs

Dongyang Fan, Vinko Sabolčec, Matin Ansaripour et al.

COLM 2025
4 citations

Can Private Machine Learning Be Fair?

Joseph Rance, Filip Svoboda

AAAI 2025
1 citation

Can Reinforcement Learning Solve Asymmetric Combinatorial-Continuous Zero-Sum Games?

ICLR 2025 • arXiv:2502.01252
1 citation

Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective

Jiawei Huang, Bingcong Li, Christoph Dann et al.

ICML 2025 • arXiv:2502.19255
4 citations

Can Students Beyond the Teacher? Distilling Knowledge from Teacher’s Bias

Jianhua Zhang, Yi Gao, Ruyu Liu et al.

AAAI 2025 • arXiv:2412.09874
7 citations

Can Test-Time Scaling Improve World Foundation Model?

Wenyan Cong, Hanqing Zhu, Peihao Wang et al.

COLM 2025 • arXiv:2503.24320
7 citations

Can Text-to-Video Generation help Video-Language Alignment?

Luca Zanella, Massimiliano Mancini, Willi Menapace et al.

CVPR 2025 • arXiv:2503.18507
1 citation

Can Textual Gradient Work in Federated Learning?

Minghui Chen, Ruinan Jin, Wenlong Deng et al.

ICLR 2025 • arXiv:2502.19980
9 citations

Can the Perceived Capability of Your Virtual Avatar Enhance Exercise Performance?

Sen-Zhe Xu, Bosheng Huang, Zian Zhou et al.

ISMAR 2025

Can Transformers Do Enumerative Geometry?

Baran Hashemi, Roderic Corominas, Alessandro Giacchetto

ICLR 2025 • arXiv:2408.14915
9 citations

Can Transformers Learn Full Bayesian Inference in Context?

Arik Reuter, Tim G. J. Rudner, Vincent Fortuin et al.

ICML 2025 • arXiv:2501.16825
17 citations

Can Transformers Reason Logically? A Study in SAT Solving

Leyan Pan, Vijay Ganesh, Jacob Abernethy et al.

ICML 2025 • arXiv:2410.07432
11 citations

Can't Slow Me Down: Learning Robust and Hardware-Adaptive Object Detectors against Latency Attacks for Edge Devices

Tianyi Wang, Zichen Wang, Cong Wang et al.

CVPR 2025 • arXiv:2412.02171
3 citations

Can Video LLMs Refuse to Answer? Alignment for Answerability in Video Large Language Models

Eunseop Yoon, Hee Suk Yoon, Mark Hasegawa-Johnson et al.

ICLR 2025 • arXiv:2507.04976
4 citations

Can Watermarked LLMs be Identified by Users via Crafted Prompts?

Aiwei Liu, Sheng Guan, Yiming Liu et al.

ICLR 2025 • arXiv:2410.03168
12 citations

Can Watermarking Large Language Models Prevent Copyrighted Text Generation and Hide Training Data?

Michael-Andrei Panaitescu-Liess, Zora Che, Bang An et al.

AAAI 2025 • arXiv:2407.17417
20 citations

Can Watermarks be Used to Detect LLM IP Infringement For Free?

Zhengyue Zhao, Xiaogeng Liu, Somesh Jha et al.

ICLR 2025

Can We Achieve Efficient Diffusion Without Self-Attention? Distilling Self-Attention into Convolutions

ZiYi Dong, Chengxing Zhou, Weijian Deng et al.

ICCV 2025 • arXiv:2504.21292

Can We Get Rid of Handcrafted Feature Extractors? SparseViT: Nonsemantics-Centered, Parameter-Efficient Image Manipulation Localization Through Spare-Coding Transformer

Lei Su, Xiaochen Ma, Xuekang Zhu et al.

AAAI 2025 • arXiv:2412.14598
27 citations

Can We Ignore Labels in Out of Distribution Detection?

Hong Yang, Qi Yu, Travis Desell

ICLR 2025 • arXiv:2504.14704
1 citation

Can We Infer Confidential Properties of Training Data from LLMs?

Pengrun Huang, Chhavi Yadav, Kamalika Chaudhuri et al.

NEURIPS 2025 (spotlight) • arXiv:2506.10364
3 citations

Can We Predict Performance of Large Models across Vision-Language Tasks?

Qinyu Zhao, Ming Xu, Kartik Gupta et al.

ICML 2025 • arXiv:2410.10112
2 citations

Can We Talk Models Into Seeing the World Differently?

Paul Gavrikov, Jovita Lukasik, Steffen Jung et al.

ICLR 2025 • arXiv:2403.09193
17 citations

Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-Based Decision-Making Systems

Ruochen Jiao, Shaoyuan Xie, Justin Yue et al.

ICLR 2025 • arXiv:2405.20774
27 citations

CaO2: Rectifying Inconsistencies in Diffusion-Based Dataset Distillation

Haoxuan Wang, Zhenghao Zhao, Junyi Wu et al.

ICCV 2025
5 citations

CAP4D: Creating Animatable 4D Portrait Avatars with Morphable Multi-View Diffusion Models

Felix Taubner, Ruihang Zhang, Mathieu Tuli et al.

CVPR 2025 • arXiv:2412.12093
25 citations

CAPability: A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness

Zhihang Liu, Chen-Wei Xie, Bin Wen et al.

NEURIPS 2025 • arXiv:2502.14914
3 citations

Capability Instruction Tuning

Yi-Kai Zhang, De-Chuan Zhan, Han-Jia Ye

AAAI 2025

Capability Localization: Capabilities Can be Localized rather than Individual Knowledge

Xiusheng Huang, Jiaxiang Liu, Yequan Wang et al.

ICLR 2025 • arXiv:2502.20992
1 citation

Cape: Context-Aware Prompt Perturbation Mechanism with Differential Privacy

Haoqi Wu, Wei Dai, Wang Li et al.

ICML 2025 • arXiv:2505.05922
5 citations

CapeLLM: Support-Free Category-Agnostic Pose Estimation with Multimodal Large Language Models

Junho Kim, Hyungjin Chung, Byung-Hoon Kim

ICCV 2025 • arXiv:2411.06869
2 citations

CAP: Evaluation of Persuasive and Creative Image Generation

Aysan Aghazadeh, Adriana Kovashka

ICCV 2025 • arXiv:2412.10426
3 citations

CapeX: Category-Agnostic Pose Estimation from Textual Point Explanation

Matan Rusanovsky, Or Hirschorn, Shai Avidan

ICLR 2025 • arXiv:2406.00384
8 citations