RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness

arXiv:2405.17220
60 citations
#85 of 2,873 papers in CVPR 2025

Abstract

Traditional feedback learning for hallucination reduction relies on labor-intensive manual labeling or expensive proprietary models, leaving the community without foundational knowledge of how to build high-quality feedback with open-source multimodal large language models (MLLMs). In this work, we introduce RLAIF-V, a novel framework that aligns MLLMs in a fully open-source paradigm. RLAIF-V maximally exploits open-source MLLMs from two perspectives: high-quality feedback data generation for preference learning, and self-feedback guidance for inference-time scaling. Extensive experiments on six benchmarks, in both automatic and human evaluation, show that RLAIF-V substantially enhances model trustworthiness at both preference learning and inference time. RLAIF-V 7B reduces object hallucination by 80.7% and overall hallucination by 33.7%. Remarkably, RLAIF-V 12B further reveals the self-alignment potential of open-source MLLMs: the model can learn from its own feedback to achieve trustworthiness surpassing GPT-4V.
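The abstract names two mechanisms, preference learning on AI-feedback pairs and self-feedback guidance at inference time, without specifying the objectives here. The sketch below is a minimal illustration under assumptions: a standard DPO-style preference loss stands in for the preference-learning step, and best-of-N reranking stands in for self-feedback guidance. The names `dpo_loss`, `best_of_n`, and `self_score` are hypothetical, not the paper's API.

```python
# Hedged sketch, not RLAIF-V's actual implementation: DPO is a common
# choice for learning from pairwise feedback, and best-of-N reranking is
# one simple form of inference-time scaling. All names are hypothetical.
import torch
import torch.nn.functional as F


def dpo_loss(pi_chosen_logp: torch.Tensor, pi_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor, ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Preference learning on (chosen, rejected) feedback pairs.

    Each tensor holds the summed log-probability of a whole response under
    the policy (pi_*) or a frozen reference model (ref_*).
    """
    chosen_margin = pi_chosen_logp - ref_chosen_logp
    rejected_margin = pi_rejected_logp - ref_rejected_logp
    # Push the policy to prefer the feedback-preferred response.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()


def best_of_n(candidates: list[str], self_score) -> str:
    """Inference-time scaling: sample N responses and return the one the
    model's own feedback scores highest (self-feedback guidance)."""
    return max(candidates, key=self_score)


if __name__ == "__main__":
    # Toy tensors standing in for per-response log-probs from real models.
    loss = dpo_loss(torch.tensor([-5.0]), torch.tensor([-7.0]),
                    torch.tensor([-6.0]), torch.tensor([-6.5]))
    print(f"DPO loss: {loss.item():.4f}")
    # Toy scorer; in practice this would query the MLLM for self-feedback.
    print(best_of_n(["short reply", "a longer candidate reply"],
                    self_score=len))
```

In practice the log-probabilities would come from the policy and a frozen reference copy of the MLLM, and `self_score` from the model's own feedback on each candidate rather than a toy heuristic.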

Citation History

Jan 26, 2026: 54
Feb 1, 2026: 54
Feb 6, 2026: 58 (+4)
Feb 13, 2026: 60 (+2)