LoRAverse: A Submodular Framework to Retrieve Diverse Adapters for Diffusion Models

ICCV 2025

Abstract

Low-rank Adaptation (LoRA) models have revolutionized the personalization of pre-trained diffusion models by enabling fine-tuning through low-rank, factorized weight matrices specifically optimized for attention layers. These models facilitate the generation of highly customized content across a variety of objects, individuals, and artistic styles without the need for extensive retraining. Despite the availability of over 100K LoRA adapters on platforms like Civit.ai, users often face challenges in navigating, selecting, and effectively utilizing the most suitable adapters due to their sheer volume, diversity, and lack of structured organization. This paper addresses the problem of selecting the most relevant and diverse LoRA models from this vast database by framing the task as a combinatorial optimization problem and proposing a novel submodular framework. Our quantitative and qualitative experiments demonstrate that our method generates diverse outputs across a wide range of domains.
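The abstract frames adapter selection as maximizing a submodular objective, a class of set functions for which a simple greedy algorithm carries a (1 - 1/e) approximation guarantee. As a minimal sketch of the idea (not the paper's actual objective), the snippet below greedily maximizes a facility-location function over a hypothetical adapter-similarity matrix `sim`, so that each pick covers a region of the adapter space not already covered by earlier picks:

```python
def greedy_select(sim, k):
    """Greedy maximization of a facility-location objective.

    sim: n x n similarity matrix between candidate adapters
         (a hypothetical stand-in for real adapter embeddings).
    k:   number of adapters to select.

    Facility location f(S) = sum_i max_{j in S} sim[i][j] is monotone
    submodular, so greedy selection achieves a (1 - 1/e) guarantee.
    """
    n = len(sim)
    selected = []
    best = [0.0] * n  # best[i]: max similarity of item i to the selected set
    for _ in range(k):
        top_gain, top_j = float("-inf"), -1
        for j in range(n):
            if j in selected:
                continue
            # Marginal gain of adding j: total coverage improvement over `best`.
            gain = sum(max(best[i], sim[i][j]) for i in range(n)) - sum(best)
            if gain > top_gain:
                top_gain, top_j = gain, j
        selected.append(top_j)
        best = [max(best[i], sim[i][top_j]) for i in range(n)]
    return selected


# Two tight clusters of adapters: greedy picks one representative per cluster.
sim = [
    [1.0, 0.9, 0.1, 0.1],
    [0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.9],
    [0.1, 0.1, 0.9, 1.0],
]
print(greedy_select(sim, 2))  # picks one adapter from each cluster
```

Because the marginal gain of any adapter only shrinks as the selected set grows (diminishing returns), this greedy loop is the standard baseline for submodular selection; the paper's framework builds on this property to balance relevance and diversity.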
