All Papers
34,598 papers found • Page 106 of 692
DMesh++: An Efficient Differentiable Mesh for Complex Shapes
Sanghyun Son, Matheus Gadelha, Yang Zhou et al.
DMF-Net: Image-Guided Point Cloud Completion with Dual-Channel Modality Fusion and Shape-Aware Upsampling Transformer
Aihua Mao, Yuxuan Tang, Jiangtao Huang et al.
DMM: Distributed Matrix Mechanism for Differentially-Private Federated Learning Based on Constant-Overhead Linear Secret Resharing
Alexander Bienstock, Ujjwal Kumar, Antigoni Polychroniadou
DMol: A Highly Efficient and Chemical Motif-Preserving Molecule Generation Platform
Peizhi Niu, Yu-Hsiang Wang, Vishal Rana et al.
DMOSpeech: Direct Metric Optimization via Distilled Diffusion Model in Zero-Shot Speech Synthesis
Yinghao Li, Rithesh Kumar, Zeyu Jin
DMQ: Dissecting Outliers of Diffusion Models for Post-Training Quantization
Dongyeun Lee, Jiwan Hur, Hyounguk Shon et al.
DMT-RoleBench: A Dynamic Multi-Turn Dialogue Based Benchmark for Role-Playing Evaluation of Large Language Model and Agent
Dingbo Yuan, Yipeng Chen, Guodong Liu et al.
DMWM: Dual-Mind World Model with Long-Term Imagination
Lingyi Wang, Rashed Shelim, Walid Saad et al.
DNA-DetectLLM: Unveiling AI-Generated Text via a DNA-Inspired Mutation-Repair Paradigm
Xiaowei Zhu, Yubing Ren, Fang Fang et al.
DNAEdit: Direct Noise Alignment for Text-Guided Rectified Flow Editing
Chenxi Xie, Minghan Li, Shuai Li et al.
DNF-Intrinsic: Deterministic Noise-Free Diffusion for Indoor Inverse Rendering
Rongjia Zheng, Qing Zhang, Chengjiang Long et al.
DNF: Unconditional 4D Generation with Dictionary-based Neural Fields
Xinyi Zhang, Naiqi Li, Angela Dai
DnLUT: Ultra-Efficient Color Image Denoising via Channel-Aware Lookup Tables
Sidi Yang, Binxiao Huang, Yulun Zhang et al.
Do as I do (Safely): Mitigating Task-Specific Fine-tuning Risks in Large Language Models
Francisco Eiras, Aleksandar Petrov, Philip Torr et al.
Do as We Do, Not as You Think: the Conformity of Large Language Models
Zhiyuan Weng, Guikun Chen, Wenguan Wang
Do Automatic Factuality Metrics Measure Factuality? A Critical Evaluation
Sanjana Ramprasad, Byron Wallace
Do Bayesian Neural Networks Actually Behave Like Bayesian Models?
Gábor Pituk, Vik Shirvaikar, Tom Rainforth
Do Biased Models Have Biased Thoughts?
Swati Rajwal, Shivank Garg, Reem Abdel-Salam et al.
Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives
Qinsi Wang, Jinghan Ke, Masayoshi Tomizuka et al.
DocKS-RAG: Optimizing Document-Level Relation Extraction through LLM-Enhanced Hybrid Prompt Tuning
Xiaolong Xu, Yibo Zhou, Haolong Xiang et al.
DocKylin: A Large Multimodal Model for Visual Document Understanding with Efficient Visual Slimming
Jiaxin Zhang, Wentao Yang, Songxuan Lai et al.
DocLayLLM: An Efficient Multi-modal Extension of Large Language Models for Text-rich Document Understanding
Wenhui Liao, Jiapeng Wang, Hongliang Li et al.
DocMamba: Efficient Document Pre-training with State Space Model
Pengfei Hu, Zhenrong Zhang, Jiefeng Ma et al.
DocMIA: Document-Level Membership Inference Attacks against DocVQA Models
Khanh Nguyen, Raouf Kerkouche, Mario Fritz et al.
Do Computer Vision Foundation Models Learn the Low-level Characteristics of the Human Visual System?
Yancheng Cai, Fei Yin, Dounia Hammou et al.
Do Contemporary Causal Inference Models Capture Real-World Heterogeneity? Findings from a Large-Scale Benchmark
Haining Yu, Yizhou Sun
Docopilot: Improving Multimodal Models for Document-Level Understanding
Yuchen Duan, Zhe Chen, Yusong Hu et al.
DocSAM: Unified Document Image Segmentation via Query Decomposition and Heterogeneous Mixed Learning
Xiao-Hui Li, Fei Yin, Cheng-Lin Liu
DOCS: Quantifying Weight Similarity for Deeper Insights into Large Language Models
Zeping Min, Xinshang Wang
DocThinker: Explainable Multimodal Large Language Models with Rule-based Reinforcement Learning for Document Understanding
Wenwen Yu, Zhibo Yang, Yuliang Liu et al.
Doctor Approved: Generating Medically Accurate Skin Disease Images through AI-Expert Feedback
Janet Wang, Yunbei Zhang, Zhengming Ding et al.
Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents
Jun Chen, Dannong Xu, Junjie Fei et al.
Document Summarization with Conformal Importance Guarantees
Bruce Kuwahara, Chen-Yuan Lin, Xiao Shi Huang et al.
DocVision: a Seamless, Cross-Device Immersive Active Reading Framework for Digital Academic Literature
Yapeng Liu, Kai Chen, Dongliang Guo et al.
DocVLM: Make Your VLM an Efficient Reader
Mor Shpigel Nacson, Aviad Aberdam, Roy Ganz et al.
DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky et al.
Do Deep Neural Network Solutions Form a Star Domain?
Ankit Sonthalia, Alexander Rubinstein, Ehsan Abbasnejad et al.
Do different prompting methods yield a common task representation in language models?
Guy Davidson, Todd Gureckis, Brenden Lake et al.
DoDo-Code: an Efficient Levenshtein Distance Embedding-based Code for 4-ary IDS Channel
Alan J.X. Guo, Sihan Sun, Xiang Wei et al.
Do Egocentric Video-Language Models Truly Understand Hand-Object Interactions?
Boshen Xu, Ziheng Wang, Yang Du et al.
Does Data Scaling Lead to Visual Compositional Generalization?
Arnas Uselis, Andrea Dittadi, Seong Joon Oh
Does Editing Provide Evidence for Localization?
Zihao Wang, Victor Veitch
Does GCL Need a Large Number of Negative Samples? Enhancing Graph Contrastive Learning with Effective and Efficient Negative Sampling
Yongqi Huang, Jitao Zhao, Dongxiao He et al.
Does Generation Require Memorization? Creative Diffusion Models using Ambient Diffusion
Kulin Shah, Alkis Kalavasis, Adam Klivans et al.
Does GPT Really Get It? A Hierarchical Scale to Quantify Human and AI’s Understanding of Algorithms
Mirabel Reid, Santosh S. Vempala
Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis
Qunzhong Wang, Xiangguo Sun, Hong Cheng
Does learning the right latent variables necessarily improve in-context learning?
Sarthak Mittal, Eric Elmoznino, Léo Gagnon et al.
Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?
Zi Liang, Haibo Hu, Qingqing Ye et al.
Does Object Binding Naturally Emerge in Large Pretrained Vision Transformers?
Yihao Li, Saeed Salehi, Lyle Ungar et al.
Does One-shot Give the Best Shot? Mitigating Model Inconsistency in One-shot Federated Learning
Hui Zeng, Wenke Huang, Tongqing Zhou et al.