"diffusion models" Papers

1,011 papers found • Page 18 of 21

MagicEraser: Erasing Any Objects via Semantics-Aware Control

Fan Li, Zixiao Zhang, Yi Huang et al.

ECCV 2024 · arXiv:2410.10207
13 citations

MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion

Di Chang, Yichun Shi, Quankai Gao et al.

ICML 2024 · arXiv:2311.12052
113 citations

Make a Cheap Scaling: A Self-Cascade Diffusion Model for Higher-Resolution Adaptation

Lanqing Guo, Yingqing He, Haoxin Chen et al.

ECCV 2024 · arXiv:2402.10491
51 citations

Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework

Ziyao Huang, Fan Tang, Yong Zhang et al.

CVPR 2024 · arXiv:2403.16510
30 citations

MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers

Haoyu Ma, Shahin Mahdizadehaghdam, Bichen Wu et al.

CVPR 2024 · arXiv:2312.12468
11 citations

Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs

Ling Yang, Zhaochen Yu, Chenlin Meng et al.

ICML 2024 · arXiv:2401.11708
200 citations

MeDM: Mediating Image Diffusion Models for Video-to-Video Translation with Temporal Correspondence Guidance

Ernie Chu, Tzuhsuan Huang, Shuo-Yen Lin et al.

AAAI 2024 · arXiv:2308.10079
23 citations

Membership Inference Attacks on Diffusion Models via Quantile Regression

Shuai Tang, Steven Wu, Sergul Aydore et al.

ICML 2024 · arXiv:2312.05140
21 citations

Merging and Splitting Diffusion Paths for Semantically Coherent Panoramas

Fabio Quattrini, Vittorio Pippi, Silvia Cascianelli et al.

ECCV 2024 · arXiv:2408.15660
6 citations

MetaDiff: Meta-Learning with Conditional Diffusion for Few-Shot Learning

Baoquan Zhang, Chuyao Luo, Demin Yu et al.

AAAI 2024 · arXiv:2307.16424
79 citations

MILP-FBGen: LP/MILP Instance Generation with Feasibility/Boundedness

Yahong Zhang, Chenchen Fan, Donghui Chen et al.

ICML 2024

MobileDiffusion: Instant Text-to-Image Generation on Mobile Devices

Yang Zhao, Zhisheng Xiao, Yanwu Xu et al.

ECCV 2024 · arXiv:2311.16567
36 citations

MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space

Yanru Qu, Keyue Qiu, Yuxuan Song et al.

ICML 2024 · arXiv:2404.12141
52 citations

MoMo: Momentum Models for Adaptive Learning Rates

Fabian Schaipp, Ruben Ohana, Michael Eickenberg et al.

ICML 2024 · arXiv:2305.07583
20 citations

MonoWAD: Weather-Adaptive Diffusion Model for Robust Monocular 3D Object Detection

Youngmin Oh, Hyung-Il Kim, Seong Tae Kim et al.

ECCV 2024 · arXiv:2407.16448
7 citations

Morphable Diffusion: 3D-Consistent Diffusion for Single-image Avatar Creation

Xiyi Chen, Marko Mihajlovic, Shaofei Wang et al.

CVPR 2024 · arXiv:2401.04728
18 citations

Move Anything with Layered Scene Diffusion

Jiawei Ren, Mengmeng Xu, Jui-Chieh Wu et al.

CVPR 2024 · arXiv:2404.07178
13 citations

MoVideo: Motion-Aware Video Generation with Diffusion Models

Jingyun Liang, Yuchen Fan, Kai Zhang et al.

ECCV 2024 · arXiv:2311.11325
14 citations

Multi-Architecture Multi-Expert Diffusion Models

Yunsung Lee, Jin-Young Kim, Hyojun Go et al.

AAAI 2024 · arXiv:2306.04990
38 citations

Music Style Transfer with Time-Varying Inversion of Diffusion Models

Sifei Li, Yuxin Zhang, Fan Tang et al.

AAAI 2024 · arXiv:2402.13763
17 citations

Mutual Learning for Acoustic Matching and Dereverberation via Visual Scene-driven Diffusion

Jian Ma, Wenguan Wang, Yi Yang et al.

ECCV 2024 · arXiv:2407.10373
1 citation

NaturalSpeech 3: Zero-Shot Speech Synthesis with Factorized Codec and Diffusion Models

Zeqian Ju, Yuancheng Wang, Kai Shen et al.

ICML 2024 · arXiv:2403.03100
306 citations

Navigating Text-to-Image Generative Bias across Indic Languages

Surbhi Mittal, Arnav Sudan, Mayank Vatsa et al.

ECCV 2024 · arXiv:2408.00283
4 citations

NeRFiller: Completing Scenes via Generative 3D Inpainting

Ethan Weber, Aleksander Holynski, Varun Jampani et al.

CVPR 2024 · arXiv:2312.04560
59 citations

Neural Diffusion Models

Grigory Bartosh, Dmitry Vetrov, Christian Andersson Naesseth

ICML 2024 · arXiv:2310.08337
16 citations

Neural Point Cloud Diffusion for Disentangled 3D Shape and Appearance Generation

Philipp Schröppel, Christopher Wewer, Jan Lenssen et al.

CVPR 2024 · arXiv:2312.14124
10 citations

Neural Sign Actors: A Diffusion Model for 3D Sign Language Production from Text

Vasileios Baltatzis, Rolandos Alexandros Potamias, Evangelos Ververas et al.

CVPR 2024 · arXiv:2312.02702
45 citations

Neuroexplicit Diffusion Models for Inpainting of Optical Flow Fields

Tom Fischer, Pascal Peter, Joachim Weickert et al.

ICML 2024 · arXiv:2405.14599

NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation

Jingyang Huo, Yikai Wang, Yanwei Fu et al.

ECCV 2024 · arXiv:2403.18211
34 citations

NL2Contact: Natural Language Guided 3D Hand-Object Contact Modeling with Diffusion Model

Zhongqun Zhang, Hengfei Wang, Ziwei Yu et al.

ECCV 2024 · arXiv:2407.12727
11 citations

Non-confusing Generation of Customized Concepts in Diffusion Models

Wang Lin, Jingyuan Chen, Jiaxin Shi et al.

ICML 2024 · arXiv:2405.06914
18 citations

ObjectDrop: Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion

Daniel Winter, Matan Cohen, Shlomi Fruchter et al.

ECCV 2024 · arXiv:2403.18818
59 citations

OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers

Han Liang, Jiacheng Bao, Ruichi Zhang et al.

CVPR 2024 · arXiv:2312.08985
48 citations

On Discrete Prompt Optimization for Diffusion Models

Ruochen Wang, Ting Liu, Cho-Jui Hsieh et al.

ICML 2024 · arXiv:2407.01606
24 citations

One at a Time: Progressive Multi-Step Volumetric Probability Learning for Reliable 3D Scene Perception

Bohan Li, Yasheng Sun, Jingxin Dong et al.

AAAI 2024 · arXiv:2306.12681
9 citations

One-Shot Diffusion Mimicker for Handwritten Text Generation

Gang Dai, Yifan Zhang, Quhui Ke et al.

ECCV 2024 · arXiv:2409.04004
21 citations

One-step Diffusion with Distribution Matching Distillation

Tianwei Yin, Michaël Gharbi, Richard Zhang et al.

CVPR 2024 · arXiv:2311.18828
579 citations

On Inference Stability for Diffusion Models

Viet Nguyen, Giang Vu, Tung Nguyen Thanh et al.

AAAI 2024 · arXiv:2312.12431
3 citations

On the Trajectory Regularity of ODE-based Diffusion Sampling

Defang Chen, Zhenyu Zhou, Can Wang et al.

ICML 2024 · arXiv:2405.11326
37 citations

Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation

Yixiao Wang, Chen Tang, Lingfeng Sun et al.

ECCV 2024 · arXiv:2408.00766
17 citations

Parallel Vertex Diffusion for Unified Visual Grounding

Zesen Cheng, Kehan Li, Peng Jin et al.

AAAI 2024 · arXiv:2303.07216
37 citations

PEA-Diffusion: Parameter-Efficient Adapter with Knowledge Distillation in non-English Text-to-Image Generation

Jian Ma, Chen Chen, Qingsong Xie et al.

ECCV 2024 · arXiv:2311.17086
8 citations

Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering

Ruofan Liang, Zan Gojcic, Merlin Nimier-David et al.

ECCV 2024 · arXiv:2408.09702
24 citations

Photorealistic Video Generation with Diffusion Models

Agrim Gupta, Lijun Yu, Kihyuk Sohn et al.

ECCV 2024 · arXiv:2312.06662
278 citations

Pix2Gif: Motion-Guided Diffusion for GIF Generation

Hitesh Kandala, Jianfeng Gao, Jianwei Yang

ECCV 2024 · arXiv:2403.04634
5 citations

Pixel-Aware Stable Diffusion for Realistic Image Super-Resolution and Personalized Stylization

Tao Yang, Rongyuan Wu, Peiran Ren et al.

ECCV 2024 · arXiv:2308.14469
249 citations

Placing Objects in Context via Inpainting for Out-of-distribution Segmentation

Pau de Jorge Aranda, Riccardo Volpi, Puneet Dokania et al.

ECCV 2024 · arXiv:2402.16392
11 citations

Plan, Posture and Go: Towards Open-vocabulary Text-to-Motion Generation

Jinpeng Liu, Wenxun Dai, Chunyu Wang et al.

ECCV 2024
8 citations

Plug-and-Play image restoration with Stochastic deNOising REgularization

Marien Renaud, Jean Prost, Arthur Leclaire et al.

ICML 2024 · arXiv:2402.01779
17 citations

Plug-In Diffusion Model for Sequential Recommendation

Haokai Ma, Ruobing Xie, Lei Meng et al.

AAAI 2024 · arXiv:2401.02913
71 citations