Poster "text-to-motion generation" Papers

12 papers found

AToM: Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward

Haonan Han, Xiangzuo Wu, Huan Liao et al.

CVPR 2025 · arXiv:2411.18654 · 5 citations

AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation

Yukang Cao, Liang Pan, Kai Han et al.

ICLR 2025 · arXiv:2410.07164 · 19 citations

HGM³: Hierarchical Generative Masked Motion Modeling with Hard Token Mining

Minjae Jeong, Yechan Hwang, Jaejin Lee et al.

ICLR 2025

LaMP: Language-Motion Pretraining for Motion Generation, Retrieval, and Captioning

Zhe Li, Weihao Yuan, Yisheng He et al.

ICLR 2025 · arXiv:2410.07093 · 37 citations

Learning to Generate Human-Human-Object Interactions from Textual Descriptions

Jeonghyeon Na, Sangwon Baik, Inhee Lee et al.

NeurIPS 2025 · arXiv:2511.20446 · 1 citation

Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions

Ting-Hsuan Liao, Yi Zhou, Yu Shen et al.

CVPR 2025 · arXiv:2504.03639 · 5 citations

BAMM: Bidirectional Autoregressive Motion Model

Ekkasit Pinyoanuntapong, Muhammad Usama Saleem, Pu Wang et al.

ECCV 2024 · arXiv:2403.19435 · 44 citations

CoMo: Controllable Motion Generation through Language Guided Pose Code Editing

Yiming Huang, Weilin Wan, Yue Yang et al.

ECCV 2024 · arXiv:2403.13900 · 50 citations

Local Action-Guided Motion Diffusion Model for Text-to-Motion Generation

Peng Jin, Hao Li, Zesen Cheng et al.

ECCV 2024 · arXiv:2407.10528 · 13 citations

OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers

Han Liang, Jiacheng Bao, Ruichi Zhang et al.

CVPR 2024 · arXiv:2312.08985 · 48 citations

Plan, Posture and Go: Towards Open-vocabulary Text-to-Motion Generation

Jinpeng Liu, Wenxun Dai, Chunyu Wang et al.

ECCV 2024 · 8 citations

SMooDi: Stylized Motion Diffusion Model

Lei Zhong, Yiming Xie, Varun Jampani et al.

ECCV 2024 · arXiv:2407.12783 · 38 citations