Oral "video diffusion models" Papers
7 papers found
DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models
Ziyi Wu, Anil Kag, Ivan Skorokhodov et al.
NeurIPS 2025 (Oral) · arXiv:2506.03517
14 citations
Diffusion$^2$: Dynamic 3D Content Generation via Score Composition of Video and Multi-view Diffusion Models
Zeyu Yang, Zijie Pan, Chun Gu et al.
ICLR 2025 (Oral) · arXiv:2404.02148
20 citations
EG4D: Explicit Generation of 4D Object without Score Distillation
Qi Sun, Zhiyang Guo, Ziyu Wan et al.
ICLR 2025 (Oral) · arXiv:2405.18132
40 citations
Emergent Temporal Correspondences from Video Diffusion Transformers
Jisu Nam, Soowon Son, Dahyun Chung et al.
NeurIPS 2025 (Oral) · arXiv:2506.17220
11 citations
FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality
Zhengyao Lyu, Chenyang Si, Junhao Song et al.
ICLR 2025 (Oral) · arXiv:2410.19355
58 citations
Genesis: Multimodal Driving Scene Generation with Spatio-Temporal and Cross-Modal Consistency
Xiangyu Guo, Zhanqian Wu, Kaixin Xiong et al.
NeurIPS 2025 (Oral) · arXiv:2506.07497
9 citations
Trajectory Attention for Fine-Grained Video Motion Control
Zeqi Xiao, Wenqi Ouyang, Yifan Zhou et al.
ICLR 2025 (Oral) · arXiv:2411.19324
40 citations