InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction

5 citations · ranked #1136 of 5,858 papers in NeurIPS 2025 · 11 top authors · 5 data points

Abstract

This paper introduces InfantAgent-Next, a generalist agent capable of interacting with computers in a multimodal manner, encompassing text, images, audio, and video. Unlike existing approaches that either build intricate workflows around a single large model or provide only workflow modularity, our agent integrates tool-based and pure-vision agents within a highly modular architecture, enabling different models to collaboratively solve decoupled tasks step by step. We demonstrate this generality by evaluating not only on pure-vision real-world benchmarks (i.e., OSWorld) but also on more general or tool-intensive benchmarks (e.g., GAIA and SWE-Bench). Specifically, we achieve 7.27% accuracy on OSWorld, higher than Claude-Computer-Use. Code and evaluation scripts are open-sourced at https://github.com/bin123apple/InfantAgent.
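The abstract describes a modular architecture in which different agents handle decoupled sub-tasks, with tool-based and pure-vision agents working step by step. The sketch below is a minimal, hypothetical illustration of that kind of dispatch; every class, method, and field name here is an assumption for illustration and is not taken from the paper or the linked repository.

```python
# Hypothetical sketch: an orchestrator routes decoupled sub-tasks to either a
# tool-based agent or a pure-vision agent, collecting results step by step.
# All names are illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class SubTask:
    description: str
    needs_vision: bool  # GUI screenshot grounding vs. shell/file tooling


class Agent(Protocol):
    def run(self, task: SubTask) -> str: ...


class ToolAgent:
    """Handles tool-intensive steps (shell commands, file edits, web search)."""

    def run(self, task: SubTask) -> str:
        return f"[tool-agent] completed: {task.description}"


class VisionAgent:
    """Handles pure-vision steps (locating and acting on GUI elements)."""

    def run(self, task: SubTask) -> str:
        return f"[vision-agent] completed: {task.description}"


class Orchestrator:
    """Dispatches each sub-task to the agent best suited for it."""

    def __init__(self) -> None:
        self.tool_agent = ToolAgent()
        self.vision_agent = VisionAgent()

    def solve(self, subtasks: list[SubTask]) -> list[str]:
        results = []
        for task in subtasks:
            agent: Agent = self.vision_agent if task.needs_vision else self.tool_agent
            results.append(agent.run(task))
        return results


if __name__ == "__main__":
    plan = [
        SubTask("open the settings window", needs_vision=True),
        SubTask("run the failing unit test and capture output", needs_vision=False),
    ]
    for line in Orchestrator().solve(plan):
        print(line)
```

Because each sub-task is routed independently, different backbone models could back the tool-based and vision agents, which is the modularity the abstract emphasizes.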

Citation History

Jan 26, 2026: 5
Feb 1, 2026: 5
Feb 6, 2026: 5
Feb 13, 2026: 5