Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions

5 citations · ranked #1373 of 2873 papers in CVPR 2025

Abstract

We explore how body shapes influence human motion synthesis, an aspect often overlooked in existing text-to-motion generation methods due to the ease of learning a homogenized, canonical body shape. However, this homogenization can distort the natural correlations between different body shapes and their motion dynamics. Our method addresses this gap by generating body-shape-aware human motions from natural language prompts. We utilize a finite scalar quantization-based variational autoencoder (FSQ-VAE) to quantize motion into discrete tokens and then leverage continuous body shape information to de-quantize these tokens back into continuous, detailed motion. Additionally, we harness the capabilities of a pretrained language model to predict both continuous shape parameters and motion tokens, facilitating the synthesis of text-aligned motions and decoding them into shape-aware motions. We evaluate our method quantitatively and qualitatively, and also conduct a comprehensive perceptual study to demonstrate its efficacy in generating shape-aware motions.
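The quantization step mentioned in the abstract uses finite scalar quantization (FSQ), which replaces a learned codebook with per-dimension bounding and rounding. As a rough illustration only, the sketch below shows generic FSQ rounding with a straight-through estimator in PyTorch; the function name, the per-dimension level counts, and the normalization are illustrative assumptions, not the authors' implementation.

```python
import torch

def fsq_quantize(z, levels):
    """Minimal finite scalar quantization (FSQ) sketch (illustrative, not the paper's code).

    z:      tensor of shape (..., d), the continuous latent
    levels: list of d ints (> 1), number of quantization levels per dimension

    Each dimension is squashed to a bounded range with tanh, then rounded to the
    nearest of `levels[i]` evenly spaced values; a straight-through estimator
    lets gradients pass through the non-differentiable rounding.
    """
    L = torch.tensor(levels, dtype=z.dtype, device=z.device)
    half = (L - 1) / 2
    bounded = torch.tanh(z) * half                       # each dim in [-half, half]
    rounded = torch.round(bounded)                       # snap to integer levels
    quantized = bounded + (rounded - bounded).detach()   # straight-through estimator
    return quantized / half                              # normalize back to [-1, 1]
```

In such a setup the rounded values index a finite set of motion tokens, while a separate decoder conditioned on continuous body-shape parameters maps those tokens back to detailed, shape-aware motion, as the abstract describes.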

Citation History

Jan 24, 2026: 4
Jan 27, 2026: 4
Feb 4, 2026: 5
Feb 13, 2026: 5