All Papers

34,598 papers found • Page 103 of 692

DisPose: Disentangling Pose Guidance for Controllable Human Image Animation

Hongxiang Li, Yaowei Li, Yuhang Yang et al.

ICLR 2025 · arXiv:2412.09349
26 citations

DiSRT-In-Bed: Diffusion-Based Sim-to-Real Transfer Framework for In-Bed Human Mesh Recovery

Jing Gao, Ce Zheng, Laszlo Jeni et al.

CVPR 2025 · arXiv:2504.03006
1 citation

Disrupting Model Merging: A Parameter-Level Defense Without Sacrificing Accuracy

Junhao Wei, Yu Zhe, Jun Sakuma

ICCV 2025 · arXiv:2503.07661
5 citations

Dissecting Adversarial Robustness of Multimodal LM Agents

Chen Wu, Rishi Shah, Jing Yu Koh et al.

ICLR 2025 · arXiv:2406.12814
81 citations

Dissecting and Mitigating Diffusion Bias via Mechanistic Interpretability

Yingdong Shi, Changming Li, Yifan Wang et al.

CVPR 2025 · arXiv:2503.20483
15 citations

Dissecting Generalized Category Discovery: Multiplex Consensus under Self-Deconstruction

Luyao Tang, Kunze Huang, Yuxuan Yuan et al.

ICCV 2025 (highlight) · arXiv:2508.10731
6 citations

Dissecting Submission Limit in Desk-Rejections: A Mathematical Analysis of Fairness in AI Conference Policies

Yuefan Cao, Xiaoyu Li, Yingyu Liang et al.

ICML 2025 · arXiv:2502.00690
12 citations

Diss-l-ECT: Dissecting Graph Data with Local Euler Characteristic Transforms

Julius von Rohrscheidt, Bastian Rieck

ICML 2025 · arXiv:2410.02622
3 citations

DiST-4D: Disentangled Spatiotemporal Diffusion with Metric Depth for 4D Driving Scene Generation

Jiazhe Guo, Yikang Ding, Xiwu Chen et al.

ICCV 2025 · arXiv:2503.15208
22 citations

Distance Adaptive Beam Search for Provably Accurate Graph-Based Nearest Neighbor Search

Yousef Al-Jazzazi, Haya Diwan, Jinrui Gou et al.

NeurIPS 2025 · arXiv:2505.15636
2 citations

Distance-Based Tree-Sliced Wasserstein Distance

Viet-Hoang Tran, Minh-Khoi Nguyen-Nhat, Trang Pham et al.

ICLR 2025 · arXiv:2503.11050
7 citations

Distance-informed Neural Processes

Aishwarya Venkataramanan, Joachim Denzler

NeurIPS 2025 · arXiv:2508.18903
1 citation

Distances Between Top-Truncated Elections of Different Sizes

Piotr Faliszewski, Jitka Mertlová, Pierre Nunn et al.

AAAI 2025 · arXiv:2601.17931
2 citations

Distances for Markov chains from sample streams

Sergio Calo, Anders Jonsson, Gergely Neu et al.

NeurIPS 2025 · arXiv:2505.18005
1 citation

DISTA-Net: Dynamic Closely-Spaced Infrared Small Target Unmixing

Shengdong Han, Shangdong Yang, Yuxuan Li et al.

ICCV 2025 · arXiv:2505.19148
1 citation

DISTIL: Data-Free Inversion of Suspicious Trojan Inputs via Latent Diffusion

Hossein Mirzaei, Zeinab Taghavi, Sepehr Rezaee et al.

ICCV 2025 · arXiv:2507.22813

Distil-E2D: Distilling Image-to-Depth Priors for Event-Based Monocular Depth Estimation

Jie Long Lee, Gim Hee Lee

NeurIPS 2025 (oral)

Distillation of Discrete Diffusion through Dimensional Correlations

Satoshi Hayakawa, Yuhta Takida, Masaaki Imaizumi et al.

ICML 2025 · arXiv:2410.08709
18 citations

Distillation Robustifies Unlearning

Bruce W. Lee, Addie Foote, Alex Infanger et al.

NeurIPS 2025 (spotlight) · arXiv:2506.06278
6 citations

Distillation Scaling Laws

Dan Busbridge, Amitis Shidani, Floris Weers et al.

ICML 2025 · arXiv:2502.08606
30 citations

DistillDrive: End-to-End Multi-Mode Autonomous Driving Distillation by Isomorphic Hetero-Source Planning Model

Rui Yu, Xianghang Zhang, Runkai Zhao et al.

ICCV 2025 · arXiv:2508.05402
4 citations

Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching

Enshu Liu, Xuefei Ning, Yu Wang et al.

ICLR 2025 · arXiv:2412.17153
14 citations

Distilled Decoding 2: One-step Sampling of Image Auto-regressive Models with Conditional Score Distillation

Enshu Liu, Qian Chen, Xuefei Ning et al.

NeurIPS 2025 · arXiv:2510.21003
3 citations

Distilled Prompt Learning for Incomplete Multimodal Survival Prediction

Yingxue Xu, Fengtao Zhou, Chenyu Zhao et al.

CVPR 2025 · arXiv:2503.01653
6 citations

DistillHGNN: A Knowledge Distillation Approach for High-Speed Hypergraph Neural Networks

Saman Forouzandeh, Parham Moradi Dowlatabadi, Mahdi Jalili

ICLR 2025
1 citation

Distilling Dataset into Neural Field

Donghyeok Shin, HeeSun Bae, Gyuwon Sim et al.

ICLR 2025 · arXiv:2503.04835
4 citations

Distilling Diffusion Models to Efficient 3D LiDAR Scene Completion

Shengyuan Zhang, An Zhao, Ling Yang et al.

ICCV 2025 · arXiv:2412.03515
5 citations

Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation

Yanglin Huang, Kai Hu, Yuan Zhang et al.

AAAI 2025 · arXiv:2504.07691
1 citation

Distilling LLM Agent into Small Models with Retrieval and Code Tools

Minki Kang, Jongwon Jeong, Seanie Lee et al.

NeurIPS 2025 (spotlight) · arXiv:2505.17612
13 citations

Distilling LLM Prior to Flow Model for Generalizable Agent’s Imagination in Object Goal Navigation

Badi Li, Ren-Jie Lu, Yu Zhou et al.

NeurIPS 2025 · arXiv:2508.09423

Distilling Long-tailed Datasets

Zhenghao Zhao, Haoxuan Wang, Yuzhang Shang et al.

CVPR 2025 · arXiv:2408.14506
5 citations

Distilling Monocular Foundation Model for Fine-grained Depth Completion

Yingping Liang, Yutao Hu, Wenqi Shao et al.

CVPR 2025 · arXiv:2503.16970
9 citations

Distilling Multi-modal Large Language Models for Autonomous Driving

Deepti Hegde, Rajeev Yasarla, Hong Cai et al.

CVPR 2025 · arXiv:2501.09757
29 citations

Distilling Parallel Gradients for Fast ODE Solvers of Diffusion Models

Beier Zhu, Ruoyu Wang, Tong Zhao et al.

ICCV 2025 · arXiv:2507.14797
6 citations

Distilling Reinforcement Learning Algorithms for In-Context Model-Based Planning

Jaehyeon Son, Soochan Lee, Gunhee Kim

ICLR 2025 · arXiv:2502.19009
7 citations

Distilling Spatially-Heterogeneous Distortion Perception for Blind Image Quality Assessment

Xudong Li, Wenjie Nie, Yan Zhang et al.

CVPR 2025
3 citations

Distilling Spectral Graph for Object-Context Aware Open-Vocabulary Semantic Segmentation

Chanyoung Kim, Dayun Ju, Woojung Han et al.

CVPR 2025 · arXiv:2411.17150
10 citations

Distilling Structural Representations into Protein Sequence Models

Jeffrey Ouyang-Zhang, Chengyue Gong, Yue Zhao et al.

ICLR 2025
9 citations

Distilling Structured Rationale from Large Language Models to Small Language Models for Abstractive Summarization

Linyong Wang, Lianwei Wu, Shaoqi Song et al.

AAAI 2025
7 citations

Distilling the Knowledge in Data Pruning

Emanuel Ben Baruch, Adam Botach, Igor Kviatkovsky et al.

ICML 2025 · arXiv:2403.07854
2 citations

DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs

Jongwoo Ko, Tianyi Chen, Sungnyun Kim et al.

ICML 2025 (oral) · arXiv:2503.07067
20 citations

DisTime: Distribution-based Time Representation for Video Large Language Models

Yingsen Zeng, Zepeng Huang, Yujie Zhong et al.

ICCV 2025 · arXiv:2505.24329
5 citations

DistinctAD: Distinctive Audio Description Generation in Contexts

Bo Fang, Wenhao Wu, Qiangqiang Wu et al.

CVPR 2025 (highlight) · arXiv:2411.18180
4 citations

Distinguishing Cause from Effect with Causal Velocity Models

Johnny Xi, Hugh Dance, Peter Orbanz et al.

ICML 2025 · arXiv:2502.05122
2 citations

Distinguish Then Exploit: Source-free Open Set Domain Adaptation via Weight Barcode Estimation and Sparse Label Assignment

Weiming Liu, Jun Dan, Fan Wang et al.

CVPR 2025
2 citations

Dist Loss: Enhancing Regression in Few-Shot Region through Distribution Distance Constraint

Guangkun Nie, Gongzheng Tang, Shenda Hong

ICLR 2025 · arXiv:2411.15216
3 citations

Distortion of AI Alignment: Does Preference Optimization Optimize for Preferences?

Paul Gölz, Nika Haghtalab, Kunhe Yang

NeurIPS 2025 · arXiv:2505.23749
10 citations

Distraction is All You Need for Multimodal Large Language Model Jailbreaking

Zuopeng Yang, Jiluan Fan, Anli Yan et al.

CVPR 2025 (highlight) · arXiv:2502.10794
22 citations

Distributed Conformal Prediction via Message Passing

Haifeng Wen, Hong Xing, Osvaldo Simeone

ICML 2025 · arXiv:2501.14544
2 citations

Distributed Differentially Private Data Analytics via Secure Sketching

Jakob Burkhardt, Hannah Keller, Claudio Orlandi et al.

ICML 2025 · arXiv:2412.00497
1 citation