Poster papers matching "video diffusion models"

45 papers found

AKiRa: Augmentation Kit on Rays for Optical Video Generation

Xi Wang, Robin Courant, Marc Christie et al.

CVPR 2025 · arXiv:2412.14158
12 citations

AnimeGamer: Infinite Anime Life Simulation with Next Game State Prediction

Junhao Cheng, Yuying Ge, Yixiao Ge et al.

ICCV 2025 · arXiv:2504.01014
5 citations

Articulated Kinematics Distillation from Video Diffusion Models

Xuan Li, Qianli Ma, Tsung-Yi Lin et al.

CVPR 2025 · arXiv:2504.01204
3 citations

BlobGEN-Vid: Compositional Text-to-Video Generation with Blob Video Representations

Weixi Feng, Chao Liu, Sifei Liu et al.

CVPR 2025 · arXiv:2501.07647
11 citations

DAViD: Modeling Dynamic Affordance of 3D Objects Using Pre-trained Video Diffusion Models

Hyeonwoo Kim, Sangwon Baik, Hanbyul Joo

ICCV 2025 · arXiv:2501.08333
8 citations

DynamicScaler: Seamless and Scalable Video Generation for Panoramic Scenes

Jinxiu Liu, Shaoheng Lin, Yinxiao Li et al.

CVPR 2025 · arXiv:2412.11100
15 citations

Dynamic Typography: Bringing Text to Life via Video Diffusion Prior

Zichen Liu, Yihao Meng, Hao Ouyang et al.

ICCV 2025 · arXiv:2404.11614
7 citations

Dynamic View Synthesis as an Inverse Problem

Hidir Yesiltepe, Pinar Yanardag

NEURIPS 2025 · arXiv:2506.08004
3 citations

Event-Guided Consistent Video Enhancement with Modality-Adaptive Diffusion Pipeline

Kanghao Chen, Zixin Zhang, Guoqiang Liang et al.

NEURIPS 2025

FADE: Frequency-Aware Diffusion Model Factorization for Video Editing

Yixuan Zhu, Haolin Wang, Shilin Ma et al.

CVPR 2025 · arXiv:2506.05934
3 citations

FluidNexus: 3D Fluid Reconstruction and Prediction from a Single Video

Yue Gao, Hong-Xing Yu, Bo Zhu et al.

CVPR 2025 · arXiv:2503.04720
12 citations

From Prompt to Progression: Taming Video Diffusion Models for Seamless Attribute Transition

Ling Lo, Kelvin Chan, Wen-Huang Cheng et al.

ICCV 2025 · arXiv:2509.19690
1 citation

GenFusion: Closing the Loop between Reconstruction and Generation via Videos

Sibo Wu, Congrong Xu, Binbin Huang et al.

CVPR 2025 · arXiv:2503.21219
19 citations

Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped Noise

Ryan Burgert, Yuancheng Xu, Wenqi Xian et al.

CVPR 2025 · arXiv:2501.08331
62 citations

InterDyn: Controllable Interactive Dynamics with Video Diffusion Models

Rick Akkerman, Haiwen Feng, Michael J. Black et al.

CVPR 2025 · arXiv:2412.11785
5 citations

LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion

Fangfu Liu, Hao Li, Jiawei Chi et al.

ICCV 2025 · arXiv:2507.02813
3 citations

Learning 3D Persistent Embodied World Models

Siyuan Zhou, Yilun Du, Yuncong Yang et al.

NEURIPS 2025 · arXiv:2505.05495
17 citations

LongDiff: Training-Free Long Video Generation in One Go

Zhuoling Li, Hossein Rahmani, Qiuhong Ke et al.

CVPR 2025 · arXiv:2503.18150
5 citations

Mimir: Improving Video Diffusion Models for Precise Text Understanding

Shuai Tan, Biao Gong, Yutong Feng et al.

CVPR 2025 · arXiv:2412.03085
16 citations

Mobile Video Diffusion

Haitam Ben Yahia, Denis Korzhenkov, Ioannis Lelekas et al.

ICCV 2025 · arXiv:2412.07583
12 citations

Multi-identity Human Image Animation with Structural Video Diffusion

Zhenzhi Wang, Yixuan Li, Yanhong Zeng et al.

ICCV 2025 · arXiv:2504.04126
5 citations

NormalCrafter: Learning Temporally Consistent Normals from Video Diffusion Priors

Yanrui Bin, Wenbo Hu, Haoyuan Wang et al.

ICCV 2025 · arXiv:2504.11427
9 citations

Novel View Synthesis from A Few Glimpses via Test-Time Natural Video Completion

Yan Xu, Yixing Wang, Stella X. Yu

NEURIPS 2025 · arXiv:2511.17932

OSV: One Step is Enough for High-Quality Image to Video Generation

Xiaofeng Mao, Zhengkai Jiang, Fu-Yun Wang et al.

CVPR 2025 · arXiv:2409.11367
23 citations

Reanimating Images using Neural Representations of Dynamic Stimuli

Jacob Yeung, Andrew Luo, Gabriel Sarch et al.

CVPR 2025 · arXiv:2406.02659
3 citations

Scene Splatter: Momentum 3D Scene Generation from Single Image with Video Diffusion Model

Shengjun Zhang, Jinzhao Li, Xin Fei et al.

CVPR 2025 · arXiv:2504.02764
7 citations

ShowHowTo: Generating Scene-Conditioned Step-by-Step Visual Instructions

Tomas Soucek, Prajwal Gatti, Michael Wray et al.

CVPR 2025 · arXiv:2412.01987
6 citations

Spatiotemporal Skip Guidance for Enhanced Video Diffusion Sampling

Junha Hyung, Kinam Kim, Susung Hong et al.

CVPR 2025 · arXiv:2411.18664
18 citations

StreetCrafter: Street View Synthesis with Controllable Video Diffusion Models

Yunzhi Yan, Zhen Xu, Haotong Lin et al.

CVPR 2025 · arXiv:2412.13188
36 citations

SViM3D: Stable Video Material Diffusion for Single Image 3D Generation

Andreas Engelhardt, Mark Boss, Vikram Voleti et al.

ICCV 2025 · arXiv:2510.08271
4 citations

TASTE-Rob: Advancing Video Generation of Task-Oriented Hand-Object Interaction for Generalizable Robotic Manipulation

Hongxiang Zhao, Xingchen Liu, Mutian Xu et al.

CVPR 2025 · arXiv:2503.11423
22 citations

Track4Gen: Teaching Video Diffusion Models to Track Points Improves Video Generation

Hyeonho Jeong, Chun-Hao P. Huang, Jong Chul Ye et al.

CVPR 2025 · arXiv:2412.06016
33 citations

Training-free Camera Control for Video Generation

Chen Hou, Zhibo Chen

ICLR 2025 · arXiv:2406.10126
84 citations

Video Diffusion Models Excel at Tracking Similar-Looking Objects Without Supervision

Chenshuang Zhang, Kang Zhang, Joon Son Chung et al.

NEURIPS 2025 · arXiv:2512.02339

VideoDPO: Omni-Preference Alignment for Video Diffusion Generation

Runtao Liu, Haoyu Wu, Zheng Ziqiang et al.

CVPR 2025 · arXiv:2412.14167
75 citations

VideoGuide: Improving Video Diffusion Models without Training Through a Teacher's Guide

Dohun Lee, Bryan Sangwoo Kim, Geon Yeong Park et al.

CVPR 2025 · arXiv:2410.04364
2 citations

VLIPP: Towards Physically Plausible Video Generation with Vision and Language Informed Physical Prior

Xindi Yang, Baolu Li, Yiming Zhang et al.

ICCV 2025 · arXiv:2503.23368
17 citations

Zero-1-to-A: Zero-Shot One Image to Animatable Head Avatars Using Video Diffusion

Zhenglin Zhou, Fan Ma, Hehe Fan et al.

CVPR 2025 · arXiv:2503.15851
4 citations

DragVideo: Interactive Drag-style Video Editing

Yufan Deng, Ruida Wang, Yuhao Zhang et al.

ECCV 2024 · arXiv:2312.02216
36 citations

FreeInit: Bridging Initialization Gap in Video Diffusion Models

Tianxing Wu, Chenyang Si, Yuming Jiang et al.

ECCV 2024 · arXiv:2312.07537
81 citations

Hybrid Video Diffusion Models with 2D Triplane and 3D Wavelet Representation

Kihong Kim, Haneol Lee, Jihye Park et al.

ECCV 2024 · arXiv:2402.13729
11 citations

IM-3D: Iterative Multiview Diffusion and Reconstruction for High-Quality 3D Generation

Luke Melas-Kyriazi, Iro Laina, Christian Rupprecht et al.

ICML 2024 · arXiv:2402.08682
87 citations

Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models

Qinyu Yang, Haoxin Chen, Yong Zhang et al.

ECCV 2024 · arXiv:2407.10285
3 citations

PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation

Shaowei Liu, Zhongzheng Ren, Saurabh Gupta et al.

ECCV 2024 · arXiv:2409.18964
107 citations

ZoLA: Zero-Shot Creative Long Animation Generation with Short Video Model

Fu-Yun Wang, Zhaoyang Huang, Qiang Ma et al.

ECCV 2024
9 citations