All Papers

34,598 papers found • Page 107 of 692

Does Refusal Training in LLMs Generalize to the Past Tense?

Maksym Andriushchenko, Nicolas Flammarion

ICLR 2025 • arXiv:2407.11969 • 69 citations

Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?

Yang Yue, Zhiqi Chen, Rui Lu et al.

NEURIPS 2025 (oral) • arXiv:2504.13837 • 540 citations

Does Representation Guarantee Welfare?

Jakob de Raaij, Ariel Procaccia, Alexandros Psomas

NEURIPS 2025

Does Safety Training of LLMs Generalize to Semantically Related Natural Prompts?

Sravanti Addepalli, Yerram Varun, Arun Suggala et al.

ICLR 2025 • arXiv:2412.03235 • 7 citations

Does SGD really happen in tiny subspaces?

Minhak Song, Kwangjun Ahn, Chulhee Yun

ICLR 2025 • arXiv:2405.16002 • 21 citations

Does Spatial Cognition Emerge in Frontier Models?

Santhosh Kumar Ramakrishnan, Erik Wijmans, Philipp Krähenbühl et al.

ICLR 2025 • arXiv:2410.06468 • 51 citations

Does Stochastic Gradient really succeed for bandits?

Dorian Baudry, Emmeran Johnson, Simon Vary et al.

NEURIPS 2025 (oral)

Does Thinking More Always Help? Mirage of Test-Time Scaling in Reasoning Models

Soumya Suvra Ghosal, Souradip Chakraborty, Avinash Reddy et al.

NEURIPS 2025 • arXiv:2506.04210 • 24 citations

Does Training with Synthetic Data Truly Protect Privacy?

Yunpeng Zhao, Jie Zhang

ICLR 2025 • arXiv:2502.12976 • 8 citations

Does VLM Classification Benefit from LLM Description Semantics?

Pingchuan Ma, Lennart Rietdorf, Dmytro Kotovenko et al.

AAAI 2025 • arXiv:2412.11917 • 2 citations

Does Your AI Agent Get You? A Personalizable Framework for Approximating Human Models from Argumentation-based Dialogue Traces

Yinxu Tang, Stylianos Loukas Vasileiou, William Yeoh

AAAI 2025 • arXiv:2502.16376 • 4 citations

Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma?

Tianyuan Qu, Longxiang Tang, Bohao Peng et al.

ICCV 2025 • arXiv:2503.12496 • 12 citations

DoF: A Diffusion Factorization Framework for Offline Multi-Agent Reinforcement Learning

Chao Li, Ziwei Deng, Chenxing Lin et al.

ICLR 2025 • 7 citations

DoF-Gaussian: Controllable Depth-of-Field for 3D Gaussian Splatting

Liao Shen, Tianqi Liu, Huiqiang Sun et al.

CVPR 2025 • arXiv:2503.00746 • 3 citations

DOF-GS: Adjustable Depth-of-Field 3D Gaussian Splatting for Post-Capture Refocusing, Defocus Rendering and Blur Removal

Yujie Wang, Praneeth Chakravarthula, Baoquan Chen

CVPR 2025 • 3 citations

DOF-Separation for 3D Manipulation in XR: Understanding Finger-Wrist Separation to Simultaneously Translate and Rotate Objects

Thorbjørn Mikkelsen, Qiushi Zhou, Mathias N. Lystbæk et al.

ISMAR 2025

DoGA: Enhancing Grounded Object Detection via Grouped Pre-Training with Attributes

Yang Liu, Feng Hou, Yunjie Peng et al.

AAAI 2025

DOGE: LLMs-Enhanced Hyper-Knowledge Graph Recommender for Multimodal Recommendation

Fanshen Meng, Zhenhua Meng, Ru Jin et al.

AAAI 2025

DOGR: Leveraging Document-Oriented Contrastive Learning in Generative Retrieval

Penghao Lu, Xin Dong, Yuansheng Zhou et al.

AAAI 2025 • arXiv:2502.07219

DOGR: Towards Versatile Visual Document Grounding and Referring

Yinan Zhou, Yuxin Chen, Haokun Lin et al.

ICCV 2025 • arXiv:2411.17125 • 2 citations

Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models

Javier Ferrando, Oscar Obeso, Senthooran Rajamanoharan et al.

ICLR 2025 • arXiv:2411.14257 • 85 citations

Do ImageNet-trained Models Learn Shortcuts? The Impact of Frequency Shortcuts on Generalization

Shunxin Wang, Raymond Veldhuis, Nicola Strisciuglio

CVPR 2025 • arXiv:2503.03519 • 2 citations

Do It Yourself: Learning Semantic Correspondence from Pseudo-Labels

Olaf Dünkel, Thomas Wimmer, Christian Theobalt et al.

ICCV 2025 • arXiv:2506.05312 • 5 citations

Do Language Models Agree with Human Perceptions of Suspense in Stories?

Glenn Matlin, Devin Zhang, Rodrigo Barroso Loza et al.

COLM 2025 • arXiv:2508.15794

Do Language Models Use Their Depth Efficiently?

Róbert Csordás, Christopher D Manning, Chris Potts

NEURIPS 2025 • arXiv:2505.13898 • 21 citations

Do Large Language Models Have a Planning Theory of Mind? Evidence from MindGames: a Multi-Step Persuasion Task

Jared Moore, Ned Cooper, Rasmus Overmark et al.

COLM 2025 • arXiv:2507.16196 • 1 citation

Do Large Language Models Truly Understand Geometric Structures?

Xiaofeng Wang, Yiming Wang, Wenhong Zhu et al.

ICLR 2025 • arXiv:2501.13773 • 9 citations

DOLLAR: Few-Step Video Generation via Distillation and Latent Reward Optimization

Zihan Ding, Chi Jin, Difan Liu et al.

ICCV 2025 • arXiv:2412.15689 • 8 citations

Do LLM Agents Have Regret? A Case Study in Online Learning and Games

Chanwoo Park, Xiangyu Liu, Asuman Ozdaglar et al.

ICLR 2025 • arXiv:2403.16843 • 36 citations

Do LLMs estimate uncertainty well in instruction-following?

Juyeon Heo, Miao Xiong, Christina Heinze-Deml et al.

ICLR 2025 • arXiv:2410.14582 • 16 citations

Do LLMs have Consistent Values?

Naama Rozen, Liat Bezalel, Gal Elidan et al.

ICLR 2025 • arXiv:2407.12878 • 8 citations

Do LLMs "know" internally when they follow instructions?

Juyeon Heo, Christina Heinze-Deml, Oussama Elachqar et al.

ICLR 2025 • arXiv:2410.14516 • 22 citations

Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness

Rongzhe Wei, Peizhi Niu, Hans Hao-Hsun Hsu et al.

NEURIPS 2025 • arXiv:2506.05735 • 9 citations

Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs

Siyan Zhao, Mingyi Hong, Yang Liu et al.

ICLR 2025 • arXiv:2502.09597 • 51 citations

Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with Question Answering

Patrick Fernandes, Sweta Agrawal, Emmanouil Zaranis et al.

COLM 2025 • arXiv:2504.07583 • 8 citations

DOLPHIN: A Programmable Framework for Scalable Neurosymbolic Learning

Aaditya Naik, Jason Liu, Claire Wang et al.

ICML 2025 • arXiv:2410.03348 • 7 citations

Do LVLMs Truly Understand Video Anomalies? Revealing Hallucination via Co-Occurrence Patterns

Menghao Zhang, Huazheng Wang, Pengfei Ren et al.

NEURIPS 2025

Domain2Vec: Vectorizing Datasets to Find the Optimal Data Mixture without Training

Mozhi Zhang, Howe Tissue, Lu Wang et al.

ICML 2025 • arXiv:2506.10952 • 3 citations

Domain-Adapted Diffusion Model for PROTAC Linker Design Through the Lens of Density Ratio in Chemical Space

Zixing Song, Ziqiao Meng, Jose Miguel Hernandez-Lobato

ICML 2025

Domain Adaptive Diabetic Retinopathy Grading with Model Absence and Flowing Data

Wenxin Su, Song Tang, Xiaofeng Liu et al.

CVPR 2025 • arXiv:2412.01203

Domain Adaptive Hashing Retrieval via VLM Assisted Pseudo-Labeling and Dual Space Adaptation

Jingyao Li, Zhanshan Li, Shuai Lü

NEURIPS 2025

Domain Adaptive Unfolded Graph Neural Networks

Zepeng Zhang, Olga Fink

AAAI 2025 • arXiv:2411.13137 • 1 citation

Domain-aware Category-level Geometry Learning Segmentation for 3D Point Clouds

Pei He, Lingling Li, Licheng Jiao et al.

ICCV 2025 • arXiv:2508.11265

DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation

Qiming Zhu, Jialun Cao, Yaojie Lu et al.

AAAI 2025 • arXiv:2408.13204 • 23 citations

Domain Generalizable Portrait Style Transfer

Xinbo Wang, Wenju Xu, Qing Zhang et al.

ICCV 2025 • arXiv:2507.04243 • 1 citation

Domain Generalization in CLIP via Learning with Diverse Text Prompts

Changsong Wen, Zelin Peng, Yu Huang et al.

CVPR 2025

Domain Generalized Medical Landmark Detection via Robust Boundary-Aware Pre-Training

Haifan Gong, Yu Lu, Xiang Wan et al.

AAAI 2025 • 3 citations

Domain Guidance: A Simple Transfer Approach for a Pre-trained Diffusion Model

Jincheng Zhong, XiangCheng Zhang, Jianmin Wang et al.

ICLR 2025 • arXiv:2504.01521 • 4 citations

Domain-Level Disentanglement Framework Based on Information Enhancement for Cross-Domain Cold-Start Recommendation

Nian Rong, Fei Xiong, Shirui Pan et al.

AAAI 2025

Domain-RAG: Retrieval-Guided Compositional Image Generation for Cross-Domain Few-Shot Object Detection

Yu Li, Xingyu Qiu, Yuqian Fu et al.

NEURIPS 2025 • arXiv:2506.05872 • 7 citations