Papers matching "diffusion models"
92 papers found • Page 1 of 2
AdaDiff: Adaptive Step Selection for Fast Diffusion Models
Hui Zhang, Zuxuan Wu, Zhen Xing et al.
Attentive Eraser: Unleashing Diffusion Model’s Object Removal Potential via Self-Attention Redirection Guidance
Wenhao Sun, Xue-Mei Dong, Benlei Cui et al.
Auto-Regressive Moving Diffusion Models for Time Series Forecasting
Jiaxin Gao, Qinglong Cao, Yuntian Chen
CaRDiff: Video Salient Object Ranking Chain of Thought Reasoning for Saliency Prediction with Diffusion
Yunlong Tang, Gen Zhan, Li Yang et al.
CasFT: Future Trend Modeling for Information Popularity Prediction with Dynamic Cues-Driven Diffusion Models
Xin Jing, Yichen Jing, Yuhuan Lu et al.
ChangeDiff: A Multi-Temporal Change Detection Data Generator with Flexible Text Prompts via Diffusion Model
Qi Zang, Jiayi Yang, Shuang Wang et al.
CODE: Confident Ordinary Differential Editing
Bastien Van Delft, Tommaso Martorella, Alexandre Alahi
Constrained Generative Modeling with Manually Bridged Diffusion Models
Saeid Naderiparizi, Xiaoxuan Liang, Berend Zwartsenberg et al.
Denoising Diffusion Variational Inference: Diffusion Models as Expressive Variational Posteriors
Wasu Top Piriyakulkij, Yingheng Wang, Volodymyr Kuleshov
DIDiffGes: Decoupled Semi-Implicit Diffusion Models for Real-time Gesture Generation from Speech
Yongkang Cheng, Shaoli Huang, Xuelin Chen et al.
DiffCalib: Reformulating Monocular Camera Calibration as Diffusion-Based Dense Incident Map Generation
Xiankang He, Guangkai Xu, Bo Zhang et al.
DiffRetouch: Using Diffusion to Retouch on the Shoulder of Experts
Zheng-Peng Duan, Jiawei Zhang, Zheng Lin et al.
Diff-Shadow: Global-guided Diffusion Model for Shadow Removal
Jinting Luo, Ru Li, Chengzhi Jiang et al.
Diffusion Model Patching via Mixture-of-Prompts
Seokil Ham, Sangmin Woo, Jin-Young Kim et al.
Digging into Intrinsic Contextual Information for High-fidelity 3D Point Cloud Completion
Jisheng Chu, Wenrui Li, Xingtao Wang et al.
DriveEditor: A Unified 3D Information-Guided Framework for Controllable Object Editing in Driving Scenes
Yiyuan Liang, Zhiying Yan, Liqun Chen et al.
EditBoard: Towards a Comprehensive Evaluation Benchmark for Text-Based Video Editing Models
Yupeng Chen, Penglin Chen, Xiaoyu Zhang et al.
Efficient Image-to-Image Diffusion Classifier for Adversarial Robustness
Hefei Mei, Minjing Dong, Chang Xu
Expensive Multi-Objective Bayesian Optimization Based on Diffusion Models
Bingdong Li, Zixiang Di, Yongfan Lu et al.
Feature Denoising Diffusion Model for Blind Image Quality Assessment
Xudong Li, Yan Zhang, Yunhang Shen et al.
GenesisTex2: Stable, Consistent and High-Quality Text-to-Texture Generation
Jiawei Lu, YingPeng Zhang, Zengjun Zhao et al.
GlyphDraw2: Automatic Generation of Complex Glyph Posters with Diffusion Models and Large Language Models
Jian Ma, Yonglin Deng, Chen Chen et al.
GRPose: Learning Graph Relations for Human Image Generation with Pose Priors
Xiangchen Yin, Donglin Di, Lei Fan et al.
GSDiff: Synthesizing Vector Floorplans via Geometry-enhanced Structural Graph Generation
Sizhe Hu, Wenming Wu, Yuntao Wang et al.
HaHeAE: Learning Generalisable Joint Representations of Human Hand and Head Movements in Extended Reality
Zhiming Hu, Guanhua Zhang, Zheming Yin et al.
HandDiffuse: Generative Controllers for Two-Hand Interactions via Diffusion Models
Pei Lin
HYGENE: A Diffusion-Based Hypergraph Generation Method
Dorian Gailhard, Enzo Tartaglione, Lirida Naviner et al.
ISPDiffuser: Learning RAW-to-sRGB Mappings with Texture-Aware Diffusion Models and Histogram-Guided Color Consistency
Yang Ren, Hai Jiang, Menglong Yang et al.
LLM4GEN: Leveraging Semantic Representation of LLMs for Text-to-Image Generation
Mushui Liu, Yuhang Ma, Zhen Yang et al.
Modular-Cam: Modular Dynamic Camera-view Video Generation with LLM
Zirui Pan, Xin Wang, Yipeng Zhang et al.
MV-VTON: Multi-View Virtual Try-On with Diffusion Models
Haoyu Wang, Zhilu Zhang, Donglin Di et al.
Pixel Is Not a Barrier: An Effective Evasion Attack for Pixel-Domain Diffusion Models
Chun-Yen Shih, Li-Xuan Peng, Jia-Wei Liao et al.
PixelMan: Consistent Object Editing with Diffusion Models via Pixel Manipulation and Generation
Liyao Jiang, Negar Hassanpour, Mohammad Salameh et al.
Population Aware Diffusion for Time Series Generation
Yang Li, Han Meng, Zhenyu Bi et al.
RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images
Benzhi Wang, Jingkai Zhou, Jingqi Bai et al.
ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models
Jiaxiang Cheng, Pan Xie, Xin Xia et al.
RHanDS: Refining Malformed Hands for Generated Images with Decoupled Structure and Style Guidance
Chengrui Wang, Pengfei Liu, Min Zhou et al.
Self-attention-based Diffusion Model for Time-series Imputation in Partial Blackout Scenarios
Mohammad Rafid Ul Islam, Prasad Tadepalli, Alan Fern
Sign-IDD: Iconicity Disentangled Diffusion for Sign Language Production
Shengeng Tang, Jiayi He, Dan Guo et al.
Spectral Motion Alignment for Video Motion Transfer Using Diffusion Models
Geon Yeong Park, Hyeonho Jeong, Sang Wan Lee et al.
SwiftTry: Fast and Consistent Video Virtual Try-On with Diffusion Models
Hung Nguyen, Quang Qui-Vinh Nguyen, Khoi Nguyen et al.
TCAQ-DM: Timestep-Channel Adaptive Quantization for Diffusion Models
Haocheng Huang, Jiaxin Chen, Jinyang Guo et al.
TEncDM: Understanding the Properties of the Diffusion Model in the Space of Language Model Encodings
Alexander Shabalin, Viacheslav Meshchaninov, Egor Chimbulatov et al.
Text2Data: Low-Resource Data Generation with Textual Control
Shiyu Wang, Yihao Feng, Tian Lan et al.
Text2Relight: Creative Portrait Relighting with Text Guidance
Junuk Cha, Mengwei Ren, Krishna Kumar Singh et al.
TimeDP: Learning to Generate Multi-Domain Time Series with Domain Prompts
Yu-Hao Huang, Chang Xu, Yueying Wu et al.
TIV-Diffusion: Towards Object-Centric Movement for Text-driven Image to Video Generation
Xingrui Wang, Xin Li, Yaosi Hu et al.
Transfer Learning of Real Image Features with Soft Contrastive Loss for Fake Image Detection
Ziyou Liang, Weifeng Liu, Run Wang et al.
Tri-Ergon: Fine-Grained Video-to-Audio Generation with Multi-Modal Conditions and LUFS Control
Bingliang Li, Fengyu Yang, Yuxin Mao et al.
Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient
Yongliang Wu, Shiji Zhou, Mingzhuo Yang et al.