"motion generation" Papers

22 papers found

Autoregressive Motion Generation with Gaussian Mixture-Guided Latent Sampling

Linnan Tu, Lingwei Meng, Zongyi Li et al.

NEURIPS 2025

Deep Compositional Phase Diffusion for Long Motion Sequence Generation

Ho Yin Au, Jie Chen, Junkun Jiang et al.

NEURIPS 2025 · oral · arXiv:2510.14427
1 citation

DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models

Ziyi Wu, Anil Kag, Ivan Skorokhodov et al.

NEURIPS 2025 · oral · arXiv:2506.03517
14 citations

Direct Post-Training Preference Alignment for Multi-Agent Motion Generation Model Using Implicit Feedback from Pre-training Demonstrations

Thomas Tian, Kratarth Goel

ICLR 2025 · arXiv:2503.20105
4 citations

EgoLM: Multi-Modal Language Model of Egocentric Motions

Fangzhou Hong, Vladimir Guzov, Hyo Jin Kim et al.

CVPR 2025 · arXiv:2409.18127
12 citations

Guiding Human-Object Interactions with Rich Geometry and Relations

Mengqing Xue, Yifei Liu, Ling Guo et al.

CVPR 2025 · arXiv:2503.20172
7 citations

HandDiffuse: Generative Controllers for Two-Hand Interactions via Diffusion Models

Pei Lin

AAAI 2025 · arXiv:2312.04867
13 citations

HUMOTO: A 4D Dataset of Mocap Human Object Interactions

Jiaxin Lu, Chun-Hao Huang, Uttaran Bhattacharya et al.

ICCV 2025 · arXiv:2504.10414
9 citations

MEgoHand: Multimodal Egocentric Hand-Object Interaction Motion Generation

Bohan Zhou, Yi Zhan, Zhongbin Zhang et al.

NEURIPS 2025 · oral · arXiv:2505.16602
3 citations

MoMaps: Semantics-Aware Scene Motion Generation with Motion Maps

Jiahui Lei, Kyle Genova, George Kopanas et al.

ICCV 2025 · arXiv:2510.11107
1 citation

PINO: Person-Interaction Noise Optimization for Long-Duration and Customizable Motion Generation of Arbitrary-Sized Groups

Sakuya Ota, Qing Yu, Kent Fujiwara et al.

ICCV 2025 · arXiv:2507.19292
1 citation

SOLAMI: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Autonomous Characters

Jianping Jiang, Weiye Xiao, Zhengyu Lin et al.

CVPR 2025 · arXiv:2412.00174
11 citations

SViMo: Synchronized Diffusion for Video and Motion Generation in Hand-object Interaction Scenarios

Lingwei Dang, Ruizhi Shao, Hongwen Zhang et al.

NEURIPS 2025 · spotlight · arXiv:2506.02444
3 citations

Think Then React: Towards Unconstrained Action-to-Reaction Motion Generation

Wenhui Tan, Boyuan Li, Chuhao Jin et al.

ICLR 2025
10 citations

UniEgoMotion: A Unified Model for Egocentric Motion Reconstruction, Forecasting, and Generation

Chaitanya Patel, Hiroki Nakamura, Yuta Kyuragi et al.

ICCV 2025 · arXiv:2508.01126
4 citations

UniMuMo: Unified Text, Music, and Motion Generation

Han Yang, Kun Su, Yutong Zhang et al.

AAAI 2025 · arXiv:2410.04534
12 citations

From Audio to Photoreal Embodiment: Synthesizing Humans in Conversations

Evonne Ng, Javier Romero, Timur Bagautdinov et al.

CVPR 2024 · arXiv:2401.01885
72 citations

Generating Physically Realistic and Directable Human Motions from Multi-Modal Inputs

Aayam Shrestha, Pan Liu, German Ros et al.

ECCV 2024 · arXiv:2502.05641
10 citations

Large Motion Model for Unified Multi-Modal Motion Generation

Mingyuan Zhang, Daisheng Jin, Chenyang Gu et al.

ECCV 2024 · arXiv:2404.01284
63 citations

MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model

Wenxun Dai, Ling-Hao Chen, Jingbo Wang et al.

ECCV 2024 · arXiv:2404.19759
121 citations

Motion Mamba: Efficient and Long Sequence Motion Generation

Zeyu Zhang, Akide Liu, Ian Reid et al.

ECCV 2024 · arXiv:2403.07487
114 citations

Neural Sign Actors: A Diffusion Model for 3D Sign Language Production from Text

Vasileios Baltatzis, Rolandos Alexandros Potamias, Evangelos Ververas et al.

CVPR 2024 · arXiv:2312.02702
45 citations