Promptable Behaviors: Personalizing Multi-Objective Rewards from Human Preferences

25 citations · ranked #1074 of 2716 papers in CVPR 2024

Abstract

Customizing robotic behaviors to be aligned with diverse human preferences is an underexplored challenge in the field of embodied AI. In this paper, we present Promptable Behaviors, a novel framework that facilitates efficient personalization of robotic agents to diverse human preferences in complex environments. We use multi-objective reinforcement learning to train a single policy adaptable to a broad spectrum of preferences. We introduce three distinct methods to infer human preferences by leveraging different types of interactions: (1) human demonstrations, (2) preference feedback on trajectory comparisons, and (3) language instructions. We evaluate the proposed method in personalized object-goal navigation and flee navigation tasks in ProcTHOR and RoboTHOR, demonstrating the ability to prompt agent behaviors to satisfy human preferences in various scenarios. Project page: https://promptable-behaviors.github.io
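The abstract's central idea is that a single policy can be "prompted" at test time with a preference weight vector over multiple reward objectives, rather than retraining a policy per preference. Below is a minimal, hypothetical sketch of that scalarization step; the objective names, `sample_preference`, and `scalarize` are illustrative assumptions, not code from the paper.

```python
import numpy as np

# Assumed number of reward objectives (e.g., success, path efficiency,
# exploration, safety); the actual objectives are defined in the paper.
K_OBJECTIVES = 4

def sample_preference() -> np.ndarray:
    """Sample a random preference vector on the simplex (used during training)."""
    return np.random.dirichlet(np.ones(K_OBJECTIVES))

def scalarize(objective_rewards: np.ndarray, w: np.ndarray) -> float:
    """Combine per-objective rewards into one scalar reward via a weighted sum."""
    assert objective_rewards.shape == w.shape
    return float(np.dot(w, objective_rewards))

# During training, each episode draws a preference vector that is also fed to
# the policy as part of its input (the "prompt"); at deployment, w would
# instead be inferred from demonstrations, trajectory comparisons, or language.
w = sample_preference()
r = scalarize(np.array([1.0, -0.2, 0.3, 0.0]), w)
```

This is only a sketch of multi-objective reward scalarization under the assumption of a linear preference weighting; the paper's training procedure and preference-inference methods are described in the full text.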

Citation History

Jan 28, 2026: 0 citations
Feb 13, 2026: 25 citations