All Papers
34,598 papers found • Page 107 of 692
Does Refusal Training in LLMs Generalize to the Past Tense?
Maksym Andriushchenko, Nicolas Flammarion
Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
Yang Yue, Zhiqi Chen, Rui Lu et al.
Does Representation Guarantee Welfare?
Jakob de Raaij, Ariel Procaccia, Alexandros Psomas
Does Safety Training of LLMs Generalize to Semantically Related Natural Prompts?
Sravanti Addepalli, Yerram Varun, Arun Suggala et al.
Does SGD really happen in tiny subspaces?
Minhak Song, Kwangjun Ahn, Chulhee Yun
Does Spatial Cognition Emerge in Frontier Models?
Santhosh Kumar Ramakrishnan, Erik Wijmans, Philipp Krähenbühl et al.
Does Stochastic Gradient really succeed for bandits?
Dorian Baudry, Emmeran Johnson, Simon Vary et al.
Does Thinking More Always Help? Mirage of Test-Time Scaling in Reasoning Models
Soumya Suvra Ghosal, Souradip Chakraborty, Avinash Reddy et al.
Does Training with Synthetic Data Truly Protect Privacy?
Yunpeng Zhao, Jie Zhang
Does VLM Classification Benefit from LLM Description Semantics?
Pingchuan Ma, Lennart Rietdorf, Dmytro Kotovenko et al.
Does Your AI Agent Get You? A Personalizable Framework for Approximating Human Models from Argumentation-based Dialogue Traces
Yinxu Tang, Stylianos Loukas Vasileiou, William Yeoh
Does Your Vision-Language Model Get Lost in the Long Video Sampling Dilemma?
Tianyuan Qu, Longxiang Tang, Bohao Peng et al.
DoF: A Diffusion Factorization Framework for Offline Multi-Agent Reinforcement Learning
Chao Li, Ziwei Deng, Chenxing Lin et al.
DoF-Gaussian: Controllable Depth-of-Field for 3D Gaussian Splatting
Liao Shen, Tianqi Liu, Huiqiang Sun et al.
DOF-GS: Adjustable Depth-of-Field 3D Gaussian Splatting for Post-Capture Refocusing, Defocus Rendering and Blur Removal
Yujie Wang, Praneeth Chakravarthula, Baoquan Chen
DOF-Separation for 3D Manipulation in XR: Understanding Finger-Wrist Separation to Simultaneously Translate and Rotate Objects
Thorbjørn Mikkelsen, Qiushi Zhou, Mathias N. Lystbæk et al.
DoGA: Enhancing Grounded Object Detection via Grouped Pre-Training with Attributes
Yang Liu, Feng Hou, Yunjie Peng et al.
DOGE: LLMs-Enhanced Hyper-Knowledge Graph Recommender for Multimodal Recommendation
Fanshen Meng, Zhenhua Meng, Ru Jin et al.
DOGR: Leveraging Document-Oriented Contrastive Learning in Generative Retrieval
Penghao Lu, Xin Dong, Yuansheng Zhou et al.
DOGR: Towards Versatile Visual Document Grounding and Referring
Yinan Zhou, Yuxin Chen, Haokun Lin et al.
Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models
Javier Ferrando, Oscar Obeso, Senthooran Rajamanoharan et al.
Do ImageNet-trained Models Learn Shortcuts? The Impact of Frequency Shortcuts on Generalization
Shunxin Wang, Raymond Veldhuis, Nicola Strisciuglio
Do It Yourself: Learning Semantic Correspondence from Pseudo-Labels
Olaf Dünkel, Thomas Wimmer, Christian Theobalt et al.
Do Language Models Agree with Human Perceptions of Suspense in Stories?
Glenn Matlin, Devin Zhang, Rodrigo Barroso Loza et al.
Do Language Models Use Their Depth Efficiently?
Róbert Csordás, Christopher D Manning, Chris Potts
Do Large Language Models Have a Planning Theory of Mind? Evidence from MindGames: a Multi-Step Persuasion Task
Jared Moore, Ned Cooper, Rasmus Overmark et al.
Do Large Language Models Truly Understand Geometric Structures?
Xiaofeng Wang, Yiming Wang, Wenhong Zhu et al.
DOLLAR: Few-Step Video Generation via Distillation and Latent Reward Optimization
Zihan Ding, Chi Jin, Difan Liu et al.
Do LLM Agents Have Regret? A Case Study in Online Learning and Games
Chanwoo Park, Xiangyu Liu, Asuman Ozdaglar et al.
Do LLMs estimate uncertainty well in instruction-following?
Juyeon Heo, Miao Xiong, Christina Heinze-Deml et al.
Do LLMs have Consistent Values?
Naama Rozen, Liat Bezalel, Gal Elidan et al.
Do LLMs "know" internally when they follow instructions?
Juyeon Heo, Christina Heinze-Deml, Oussama Elachqar et al.
Do LLMs Really Forget? Evaluating Unlearning with Knowledge Correlation and Confidence Awareness
Rongzhe Wei, Peizhi Niu, Hans Hao-Hsun Hsu et al.
Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs
Siyan Zhao, Mingyi Hong, Yang Liu et al.
Do LLMs Understand Your Translations? Evaluating Paragraph-level MT with Question Answering
Patrick Fernandes, Sweta Agrawal, Emmanouil Zaranis et al.
DOLPHIN: A Programmable Framework for Scalable Neurosymbolic Learning
Aaditya Naik, Jason Liu, Claire Wang et al.
Do LVLMs Truly Understand Video Anomalies? Revealing Hallucination via Co-Occurrence Patterns
Menghao Zhang, Huazheng Wang, Pengfei Ren et al.
Domain2Vec: Vectorizing Datasets to Find the Optimal Data Mixture without Training
Mozhi Zhang, Howe Tissue, Lu Wang et al.
Domain-Adapted Diffusion Model for PROTAC Linker Design Through the Lens of Density Ratio in Chemical Space
Zixing Song, Ziqiao Meng, Jose Miguel Hernandez-Lobato
Domain Adaptive Diabetic Retinopathy Grading with Model Absence and Flowing Data
Wenxin Su, Song Tang, Xiaofeng Liu et al.
Domain Adaptive Hashing Retrieval via VLM Assisted Pseudo-Labeling and Dual Space Adaptation
Jingyao Li, Zhanshan Li, Shuai Lü
Domain Adaptive Unfolded Graph Neural Networks
Zepeng Zhang, Olga Fink
Domain-aware Category-level Geometry Learning Segmentation for 3D Point Clouds
Pei He, Lingling Li, Licheng Jiao et al.
DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation
Qiming Zhu, Jialun Cao, Yaojie Lu et al.
Domain Generalizable Portrait Style Transfer
Xinbo Wang, Wenju Xu, Qing Zhang et al.
Domain Generalization in CLIP via Learning with Diverse Text Prompts
Changsong Wen, Zelin Peng, Yu Huang et al.
Domain Generalized Medical Landmark Detection via Robust Boundary-Aware Pre-Training
Haifan Gong, Yu Lu, Xiang Wan et al.
Domain Guidance: A Simple Transfer Approach for a Pre-trained Diffusion Model
Jincheng Zhong, XiangCheng Zhang, Jianmin Wang et al.
Domain-Level Disentanglement Framework Based on Information Enhancement for Cross-Domain Cold-Start Recommendation
Nian Rong, Fei Xiong, Shirui Pan et al.
Domain-RAG: Retrieval-Guided Compositional Image Generation for Cross-Domain Few-Shot Object Detection
Yu Li, Xingyu Qiu, Yuqian Fu et al.