All Papers

34,598 papers found • Page 55 of 692

CA-MLIF: Cross-Attention and Multimodal Low-Rank Interaction Fusion Framework for Tumor Prognostic Prediction

Yajun An, Jiale Chen, Huan Lin et al.

AAAI 2025 · paper

CAMO: Convergence-Aware Multi-Fidelity Bayesian Optimization

Wei Xing, Zhenjie Lu, Akeel Shah

NEURIPS 2025

Camouflage Anything: Learning to Hide using Controlled Out-painting and Representation Engineering

Biplab Das, Viswanath Gopalakrishnan

CVPR 2025

CamPoint: Boosting Point Cloud Segmentation with Virtual Camera

Jianhui Zhang, Yizhi Luo, Zicheng Zhang et al.

CVPR 2025
1 citation

CamSAM2: Segment Anything Accurately in Camouflaged Videos

Yuli Zhou, Yawei Li, Yuqian Fu et al.

NEURIPS 2025 · arXiv:2503.19730
4 citations

CAMSIC: Content-aware Masked Image Modeling Transformer for Stereo Image Compression

Xinjie Zhang, Shenyuan Gao, Zhening Liu et al.

AAAI 2025 · paper · arXiv:2403.08505
5 citations

CaMuViD: Calibration-Free Multi-View Detection

Amir Etefaghi Daryani, M. Usman Maqbool Bhutta, Byron Hernandez et al.

CVPR 2025
1 citation

Can3Tok: Canonical 3D Tokenization and Latent Modeling of Scene-Level 3D Gaussians

Quankai Gao, Iliyan Georgiev, Tuanfeng Wang et al.

ICCV 2025 · arXiv:2508.01464
2 citations

Can a Crow Hatch a Falcon? Lineage Matters in Predicting Large Language Model Performance

Takuya Tamura, Taro Yano, Masafumi Enomoto et al.

COLM 2025 · paper · arXiv:2504.19811
1 citation

Can Agent Fix Agent Issues?

Alfin Wijaya Rahardja, Junwei Liu, Weitong Chen et al.

NEURIPS 2025 · arXiv:2505.20749
3 citations

Can AI Inspire Biophilic Design in Immersive Virtual Reality Workspaces to Enhance Well-being?

Sara Romano, Luana Marangelli, Enricoandrea Laviola et al.

ISMAR 2025 · paper

Can a Large Language Model be a Gaslighter?

Wei Li, Luyao Zhu, Yang Song et al.

ICLR 2025 · arXiv:2410.09181
2 citations

Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill Learning

Chongyi Zheng, Jens Tuyls, Joanne Peng et al.

ICLR 2025 · arXiv:2412.08021
9 citations

Can A Society of Generative Agents Simulate Human Behavior and Inform Public Health Policy? A Case Study on Vaccine Hesitancy

Abe Bohan Hou, Hongru Du, Yichen Wang et al.

COLM 2025 · paper · arXiv:2503.09639
14 citations

Can Biologically Plausible Temporal Credit Assignment Rules Match BPTT for Neural Similarity? E-prop as an Example

Yuhan Helena Liu, Guangyu Robert Yang, Christopher Cueva

ICML 2025 · oral · arXiv:2506.06904

Cancer Survival Analysis via Zero-shot Tumor Microenvironment Segmentation on Low-resolution Whole Slide Pathology Images

Jiao Tang, Wei Shao, Daoqiang Zhang

NEURIPS 2025

Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence

Yuankai Luo, Lei Shi, Xiao-Ming Wu

ICML 2025 · arXiv:2502.09263
13 citations

Can Class-Priors Help Single-Positive Multi-Label Learning?

Biao Liu, Ning Xu, Jie Wang et al.

NEURIPS 2025 · arXiv:2309.13886
1 citation

Can Compressed LLMs Truly Act? An Empirical Evaluation of Agentic Capabilities in LLM Compression

Peijie Dong, Zhenheng Tang, Xiang Liu et al.

ICML 2025 · arXiv:2505.19433
10 citations

Can DBNNs Robust to Environmental Noise for Resource-constrained Scenarios?

Wendong Zheng, Junyang Chen, Husheng Guo et al.

ICML 2025

Can Dependencies Induced by LLM-Agent Workflows Be Trusted?

Yu Yao, Yiliao (Lia) Song, Yian Xie et al.

NEURIPS 2025

Can Diffusion Models Disentangle? A Theoretical Perspective

Liming Wang, Muhammad Jehanzeb Mirza, Yishu Gong et al.

NEURIPS 2025 · arXiv:2504.00220

Can Diffusion Models Learn Hidden Inter-Feature Rules Behind Images?

Yujin Han, Andi Han, Wei Huang et al.

ICML 2025 · arXiv:2502.04725
8 citations

Can DPO Learn Diverse Human Values? A Theoretical Scaling Law

Shawn Im, Sharon Li

NEURIPS 2025 · arXiv:2408.03459
8 citations

CanFields: Consolidating Diffeomorphic Flows for Non-Rigid 4D Interpolation from Arbitrary-Length Sequences

Miaowei Wang, Changjian Li, Amir Vaxman

ICCV 2025 · arXiv:2406.18582
1 citation

Can Generative AI Solve Your In-Context Learning Problem? A Martingale Perspective

Andrew Jesson, Nicolas Beltran-Velez, David Blei

ICLR 2025 · arXiv:2412.06033
1 citation

Can Generative Geospatial Diffusion Models Excel as Discriminative Geospatial Foundation Models?

Yuru Jia, Valerio Marsocci, Ziyang Gong et al.

ICCV 2025 · arXiv:2503.07890
5 citations

Can Generative Models Improve Self-Supervised Representation Learning?

Sana Ayromlou, Vahid Reza Khazaie, Fereshteh Forghani et al.

AAAI 2025 · paper · arXiv:2403.05966
3 citations

Can Generative Video Models Help Pose Estimation?

Ruojin Cai, Jason Y. Zhang, Philipp Henzler et al.

CVPR 2025 · highlight · arXiv:2412.16155
7 citations

Can In-context Learning Really Generalize to Out-of-distribution Tasks?

Qixun Wang, Yifei Wang, Xianghua Ying et al.

ICLR 2025 · arXiv:2410.09695
16 citations

Can Knowledge be Transferred from Unimodal to Multimodal? Investigating the Transitivity of Multimodal Knowledge Editing

Lingyong Fang, Xinzhong Wang, Depeng Wang et al.

ICCV 2025

Can Knowledge Editing Really Correct Hallucinations?

Baixiang Huang, Canyu Chen, Xiongxiao Xu et al.

ICLR 2025 · arXiv:2410.16251
29 citations

Can Knowledge-Graph-based Retrieval Augmented Generation Really Retrieve What You Need?

Junchi Yu, Yujie Liu, Jindong Gu et al.

NEURIPS 2025 · spotlight · arXiv:2510.16582
1 citation

Can Language Models Falsify? Evaluating Algorithmic Reasoning with Counterexample Creation

Shiven Sinha, Shashwat Goel, Ponnurangam Kumaraguru et al.

COLM 2025 · paper · arXiv:2502.19414
1 citation

Can Large Language Models Derive High-Level Cognition from Low-Level and Fragmented Foundational Information?

Yang Liu, Xiaoping Wang, Kai Lu

AAAI 2025 · paper

Can Large Language Models Help Multimodal Language Analysis? MMLA: A Comprehensive Benchmark

Hanlei Zhang, Zhuohang Li, Hua Xu et al.

NEURIPS 2025 · arXiv:2504.16427
2 citations

Can Large Language Models Integrate Spatial Data? Empirical Insights into Reasoning Strengths and Computational Weaknesses

Bin Han, Robert Wolfe, Anat Caspi et al.

COLM 2025 · paper · arXiv:2508.05009
1 citation

Can Large Language Models Master Complex Card Games?

Wei Wang, Fuqing Bie, Junzhe Chen et al.

NEURIPS 2025 · arXiv:2509.01328
2 citations

Can Large Language Models Understand Intermediate Representations in Compilers?

Hailong Jiang, Jianfeng Zhu, Yao Wan et al.

ICML 2025 · arXiv:2502.06854
1 citation

Can Large Language Models Understand Symbolic Graphics Programs?

Zeju Qiu, Weiyang Liu, Haiwen Feng et al.

ICLR 2025 · arXiv:2408.08313
29 citations

Can Large Multimodal Models Understand Agricultural Scenes? Benchmarking with AgroMind

Qingmei Li, Yang Zhang, Zurong Mai et al.

NEURIPS 2025 · arXiv:2505.12207
1 citation

Can Large Vision-Language Models Correct Semantic Grounding Errors By Themselves?

Yuan-Hong Liao, Rafid Mahmood, Sanja Fidler et al.

CVPR 2025 · arXiv:2404.06510
9 citations

CAN: Leveraging Clients As Navigators for Generative Replay in Federated Continual Learning

Xuankun Rong, Jianshu Zhang, Kun He et al.

ICML 2025

Can LLMs Correct Themselves? A Benchmark of Self-Correction in LLMs

Guiyao Tie, Zenghui Yuan, Zeli Zhao et al.

NEURIPS 2025 · arXiv:2510.16062
2 citations

Can LLM "Self-report"?: Evaluating the Validity of Self-report Scales in Measuring Personality Design in LLM-based Chatbots

Huiqi Zou, Pengda Wang, Zihan Yan et al.

COLM 2025 · paper

Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers

Chenglei Si, Diyi Yang, Tatsunori Hashimoto

ICLR 2025 · arXiv:2409.04109
285 citations

Can LLMs Handle WebShell Detection? Overcoming Detection Challenges with Behavioral Function-Aware Framework

Feijiang Han, Jiaming Zhang, Chuyi Deng et al.

COLM 2025 · paper · arXiv:2504.13811
6 citations

Can LLM Simulations Truly Reflect Humanity? A Deep Dive

Qian Wang, Zhenheng Tang, Bingsheng He

ICLR 2025

Can LLMs Obfuscate Code? A Systematic Analysis of Large Language Models into Assembly Code Obfuscation

Seyedreza Mohseni, Seyedali Mohammadi, Deepa Tilwani et al.

AAAI 2025 · paper · arXiv:2412.16135
6 citations

Can LLMs Outshine Conventional Recommenders? A Comparative Evaluation

Qijiong Liu, Jieming Zhu, Lu Fan et al.

NEURIPS 2025 · arXiv:2503.05493
4 citations