All Papers

34,598 papers found • Page 106 of 692

DMesh++: An Efficient Differentiable Mesh for Complex Shapes

Sanghyun Son, Matheus Gadelha, Yang Zhou et al.

ICCV 2025 • arXiv:2412.16776 • 3 citations

DMF-Net: Image-Guided Point Cloud Completion with Dual-Channel Modality Fusion and Shape-Aware Upsampling Transformer

Aihua Mao, Yuxuan Tang, Jiangtao Huang et al.

AAAI 2025 • arXiv:2406.17319 • 6 citations

DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning Based on Constant-Overhead Linear Secret Resharing

Alexander Bienstock, Ujjwal Kumar, Antigoni Polychroniadou

ICML 2025 • arXiv:2410.16161

DMol: A Highly Efficient and Chemical Motif-Preserving Molecule Generation Platform

Peizhi Niu, Yu-Hsiang Wang, Vishal Rana et al.

NEURIPS 2025 • arXiv:2504.06312 • 1 citation

DMOSpeech: Direct Metric Optimization via Distilled Diffusion Model in Zero-Shot Speech Synthesis

Yinghao Li, Rithesh Kumar, Zeyu Jin

ICML 2025 (oral) • arXiv:2410.11097 • 5 citations

DMQ: Dissecting Outliers of Diffusion Models for Post-Training Quantization

Dongyeun Lee, Jiwan Hur, Hyounguk Shon et al.

ICCV 2025 • arXiv:2507.12933 • 2 citations

DMT-RoleBench: A Dynamic Multi-Turn Dialogue Based Benchmark for Role-Playing Evaluation of Large Language Model and Agent

Dingbo Yuan, Yipeng Chen, Guodong Liu et al.

AAAI 2025

DMWM: Dual-Mind World Model with Long-Term Imagination

Lingyi Wang, Rashed Shelim, Walid Saad et al.

NEURIPS 2025 (spotlight) • arXiv:2502.07591 • 6 citations

DNA-DetectLLM: Unveiling AI-Generated Text via a DNA-Inspired Mutation-Repair Paradigm

Xiaowei Zhu, Yubing Ren, Fang Fang et al.

NEURIPS 2025 (spotlight) • arXiv:2509.15550

DNAEdit: Direct Noise Alignment for Text-Guided Rectified Flow Editing

Chenxi Xie, Minghan Li, Shuai Li et al.

NEURIPS 2025 (spotlight) • arXiv:2506.01430 • 8 citations

DNF-Intrinsic: Deterministic Noise-Free Diffusion for Indoor Inverse Rendering

Rongjia Zheng, Qing Zhang, Chengjiang Long et al.

ICCV 2025 • arXiv:2507.03924 • 2 citations

DNF: Unconditional 4D Generation with Dictionary-based Neural Fields

Xinyi Zhang, Naiqi Li, Angela Dai

CVPR 2025 • arXiv:2412.05161 • 4 citations

DnLUT: Ultra-Efficient Color Image Denoising via Channel-Aware Lookup Tables

Sidi Yang, Binxiao Huang, Yulun Zhang et al.

CVPR 2025 • arXiv:2503.15931 • 8 citations

Do as I do (Safely): Mitigating Task-Specific Fine-tuning Risks in Large Language Models

Francisco Eiras, Aleksandar Petrov, Philip Torr et al.

ICLR 2025 • arXiv:2406.10288 • 11 citations

Do as We Do, Not as You Think: the Conformity of Large Language Models

Zhiyuan Weng, Guikun Chen, Wenguan Wang

ICLR 2025 • arXiv:2501.13381 • 20 citations

Do Automatic Factuality Metrics Measure Factuality? A Critical Evaluation

Sanjana Ramprasad, Byron Wallace

NEURIPS 2025 • arXiv:2411.16638 • 8 citations

Do Bayesian Neural Networks Actually Behave Like Bayesian Models?

Gábor Pituk, Vik Shirvaikar, Tom Rainforth

ICML 2025

Do Biased Models Have Biased Thoughts?

Swati Rajwal, Shivank Garg, Reem Abdel-Salam et al.

COLM 2025 • arXiv:2508.06671

Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives

Qinsi Wang, Jinghan Ke, Masayoshi Tomizuka et al.

ICLR 2025 • arXiv:2502.02723 • 25 citations

DocKS-RAG: Optimizing Document-Level Relation Extraction through LLM-Enhanced Hybrid Prompt Tuning

Xiaolong Xu, Yibo Zhou, Haolong Xiang et al.

ICML 2025

DocKylin: A Large Multimodal Model for Visual Document Understanding with Efficient Visual Slimming

Jiaxin Zhang, Wentao Yang, Songxuan Lai et al.

AAAI 2025 • arXiv:2406.19101 • 32 citations

DocLayLLM: An Efficient Multi-modal Extension of Large Language Models for Text-rich Document Understanding

Wenhui Liao, Jiapeng Wang, Hongliang Li et al.

CVPR 2025 • arXiv:2408.15045 • 10 citations

DocMamba: Efficient Document Pre-training with State Space Model

Pengfei Hu, Zhenrong Zhang, Jiefeng Ma et al.

AAAI 2025 • arXiv:2409.11887 • 2 citations

DocMIA: Document-Level Membership Inference Attacks against DocVQA Models

Khanh Nguyen, Raouf Kerkouche, Mario Fritz et al.

ICLR 2025 • arXiv:2502.03692 • 1 citation

Do Computer Vision Foundation Models Learn the Low-level Characteristics of the Human Visual System?

Yancheng Cai, Fei Yin, Dounia Hammou et al.

CVPR 2025 (highlight) • arXiv:2502.20256 • 7 citations

Do Contemporary Causal Inference Models Capture Real-World Heterogeneity? Findings from a Large-Scale Benchmark

Haining Yu, Yizhou Sun

ICLR 2025 • arXiv:2410.07021 • 1 citation

Docopilot: Improving Multimodal Models for Document-Level Understanding

Yuchen Duan, Zhe Chen, Yusong Hu et al.

CVPR 2025 • arXiv:2507.14675 • 15 citations

DocSAM: Unified Document Image Segmentation via Query Decomposition and Heterogeneous Mixed Learning

Xiao-Hui Li, Fei Yin, Cheng-Lin Liu

CVPR 2025 • arXiv:2504.04085 • 3 citations

DOCS: Quantifying Weight Similarity for Deeper Insights into Large Language Models

Zeping Min, Xinshang Wang

ICLR 2025 • arXiv:2501.16650 • 1 citation

DocThinker: Explainable Multimodal Large Language Models with Rule-based Reinforcement Learning for Document Understanding

Wenwen Yu, Zhibo Yang, Yuliang Liu et al.

ICCV 2025 • arXiv:2508.08589 • 4 citations

Doctor Approved: Generating Medically Accurate Skin Disease Images through AI-Expert Feedback

Janet Wang, Yunbei Zhang, Zhengming Ding et al.

NEURIPS 2025 • arXiv:2506.12323 • 2 citations

Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents

Jun Chen, Dannong Xu, Junjie Fei et al.

CVPR 2025 • arXiv:2411.16740 • 5 citations

Document Summarization with Conformal Importance Guarantees

Bruce Kuwahara, Chen-Yuan Lin, Xiao Shi Huang et al.

NEURIPS 2025 • arXiv:2509.20461

DocVision: a Seamless, Cross-Device Immersive Active Reading Framework for Digital Academic Literature

Yapeng Liu, Kai Chen, Dongliang Guo et al.

ISMAR 2025

DocVLM: Make Your VLM an Efficient Reader

Mor Shpigel Nacson, Aviad Aberdam, Roy Ganz et al.

CVPR 2025 • arXiv:2412.08746 • 12 citations

DocVXQA: Context-Aware Visual Explanations for Document Question Answering

Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky et al.

ICML 2025 • arXiv:2505.07496 • 3 citations

Do Deep Neural Network Solutions Form a Star Domain?

Ankit Sonthalia, Alexander Rubinstein, Ehsan Abbasnejad et al.

ICLR 2025 • arXiv:2403.07968 • 4 citations

Do different prompting methods yield a common task representation in language models?

Guy Davidson, Todd Gureckis, Brenden Lake et al.

NEURIPS 2025 • arXiv:2505.12075 • 5 citations

DoDo-Code: an Efficient Levenshtein Distance Embedding-based Code for 4-ary IDS Channel

Alan J.X. Guo, Sihan Sun, Xiang Wei et al.

NEURIPS 2025 • arXiv:2312.12717 • 1 citation

Do Egocentric Video-Language Models Truly Understand Hand-Object Interactions?

Boshen Xu, Ziheng Wang, Yang Du et al.

ICLR 2025 (oral) • arXiv:2405.17719 • 10 citations

Does Data Scaling Lead to Visual Compositional Generalization?

Arnas Uselis, Andrea Dittadi, Seong Joon Oh

ICML 2025 • arXiv:2507.07102 • 5 citations

Does Editing Provide Evidence for Localization?

Zihao Wang, Victor Veitch

ICLR 2025 • arXiv:2502.11447 • 9 citations

Does GCL Need a Large Number of Negative Samples? Enhancing Graph Contrastive Learning with Effective and Efficient Negative Sampling

Yongqi Huang, Jitao Zhao, Dongxiao He et al.

AAAI 2025 • arXiv:2503.17908 • 8 citations

Does Generation Require Memorization? Creative Diffusion Models using Ambient Diffusion

Kulin Shah, Alkis Kalavasis, Adam Klivans et al.

ICML 2025 • arXiv:2502.21278 • 11 citations

Does GPT Really Get It? A Hierarchical Scale to Quantify Human and AI’s Understanding of Algorithms

Mirabel Reid, Santosh S. Vempala

AAAI 2025 • arXiv:2406.14722 • 1 citation

Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis

Qunzhong Wang, Xiangguo Sun, Hong Cheng

ICML 2025 • arXiv:2410.01635 • 15 citations

Does learning the right latent variables necessarily improve in-context learning?

Sarthak Mittal, Eric Elmoznino, Léo Gagnon et al.

ICML 2025 • arXiv:2405.19162 • 8 citations

Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?

Zi Liang, Haibo Hu, Qingqing Ye et al.

ICML 2025 • arXiv:2505.12871 • 4 citations

Does Object Binding Naturally Emerge in Large Pretrained Vision Transformers?

Yihao Li, Saeed Salehi, Lyle Ungar et al.

NEURIPS 2025 (spotlight) • arXiv:2510.24709 • 3 citations

Does One-shot Give the Best Shot? Mitigating Model Inconsistency in One-shot Federated Learning

Hui Zeng, Wenke Huang, Tongqing Zhou et al.

ICML 2025 • 1 citation