RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data

arXiv:2411.18822 · 13 citations · #1258 of 3827 papers in ICLR 2025

Abstract

We present RelCon, a novel self-supervised Relative Contrastive learning approach for training a motion foundation model from wearable accelerometry sensors. First, a learnable distance measure is trained to capture motif similarity and domain-specific semantic information such as rotation invariance. The learned distance then provides a measure of semantic similarity between a pair of accelerometry time series, which we use to train our foundation model to capture relative relationships across time and across subjects. The foundation model is trained on 1 billion segments from 87,376 participants and achieves state-of-the-art performance across multiple downstream tasks, including human activity recognition and gait metric regression. To our knowledge, we are the first to show that a foundation model trained on wearable motion data generalizes across distinct evaluation tasks.
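The abstract describes the objective only at a high level, so the following is a minimal PyTorch sketch of one plausible reading of a relative contrastive loss: candidates are ranked by the learned distance, and each candidate in turn serves as a positive contrasted against the candidates the learned distance judges to be farther from the anchor. The function name relative_contrastive_loss, the temperature tau, and the encoder / distance_net stand-ins are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a relative contrastive loss; names and
# hyperparameters are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def relative_contrastive_loss(anchor_emb, cand_emb, learned_dist, tau=0.1):
    """Relative contrastive loss over a set of candidate segments.

    anchor_emb:   (D,) embedding of the anchor segment.
    cand_emb:     (N, D) embeddings of N candidate segments.
    learned_dist: (N,) anchor-to-candidate distances from the
                  pretrained learnable distance measure.
    tau:          softmax temperature (illustrative value).
    """
    order = torch.argsort(learned_dist)                 # nearest -> farthest
    sims = F.cosine_similarity(anchor_emb.unsqueeze(0), cand_emb) / tau
    sims = sims[order]                                  # similarities in rank order

    loss = 0.0
    n = sims.shape[0]
    for j in range(n - 1):
        # Candidate j is the positive; candidates ranked farther (> j)
        # act as negatives in a softmax over {j} and all farther candidates.
        logits = sims[j:]
        loss = loss - (logits[0] - torch.logsumexp(logits, dim=0))
    return loss / (n - 1)

# Usage (shapes only; encoder and distance_net are hypothetical modules):
# anchor = encoder(anchor_segment)                         # (D,)
# cands  = encoder(candidate_segments)                     # (N, D)
# d      = distance_net(anchor_segment, candidate_segments)  # (N,)
# loss   = relative_contrastive_loss(anchor, cands, d)
```

Summing an InfoNCE-style term at every rank is what makes the supervision "relative": instead of a single hard positive, every candidate is pulled closer to the anchor than all candidates ranked farther away by the learned distance.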

Citation History

Jan 26, 2026: 12
Jan 27, 2026: 12
Feb 3, 2026: 12
Feb 13, 2026: 13