IMMA: Immunizing text-to-image Models against Malicious Adaptation

14 citations · #822 of 2387 papers in ECCV 2024

Abstract

Advancements in open-source text-to-image models and fine-tuning methods have increased the risk of malicious adaptation, i.e., fine-tuning to generate harmful or unauthorized content. Recent works such as Glaze and MIST have developed data-poisoning techniques that protect data against these adaptation methods. In this work, we consider an alternative protection paradigm: we propose to "immunize" the model by learning model parameters that are difficult for adaptation methods to fine-tune on malicious content, an approach we call IMMA. IMMA is intended to be applied before the model weights are released, so as to mitigate these risks. Empirical results show IMMA's effectiveness against malicious adaptations, including mimicking artistic styles and learning inappropriate or unauthorized concepts, across three adaptation methods: LoRA, Textual Inversion, and DreamBooth.
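The idea of learning weights that are hard to adapt can be framed as a bi-level problem: an inner loop simulates the adapter fine-tuning on the protected content, and an outer loop updates the released weights so that the simulated adapter still fails while ordinary behavior is preserved. The sketch below illustrates this structure with a one-step unrolled inner update on a toy regression model; the network, data, loss, hyperparameters, and single-step inner loop are illustrative assumptions for exposition, not the paper's implementation for diffusion models with LoRA, Textual Inversion, or DreamBooth.

```python
# Toy one-step unrolled "immunization" sketch (illustrative only).
import torch
import torch.nn as nn
from torch.func import functional_call

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
params = {k: v.detach().clone().requires_grad_(True)
          for k, v in model.named_parameters()}

# Placeholder data: (x_keep, y_keep) stands in for benign behavior to preserve,
# (x_protect, y_protect) for the concept an adapter should fail to learn.
x_keep, y_keep = torch.randn(64, 8), torch.randn(64, 1)
x_protect, y_protect = torch.randn(64, 8), torch.randn(64, 1)

def loss_fn(p, x, y):
    # Evaluate the model with an explicit parameter set (no in-place mutation).
    return nn.functional.mse_loss(functional_call(model, p, (x,)), y)

opt = torch.optim.Adam(list(params.values()), lr=1e-3)
inner_lr = 1e-2
for step in range(200):
    opt.zero_grad()
    # Inner loop (simulated adapter): one gradient step toward fitting the
    # protected data, keeping the graph so the outer update can see its effect.
    inner_loss = loss_fn(params, x_protect, y_protect)
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True)
    adapted = {k: v - inner_lr * g for (k, v), g in zip(params.items(), grads)}
    # Outer loop: the adapted copy should still do poorly on the protected data
    # (maximize its loss), while the released weights keep their benign behavior.
    resistance = -loss_fn(adapted, x_protect, y_protect)
    utility = loss_fn(params, x_keep, y_keep)
    (utility + 0.1 * resistance).backward()
    opt.step()
```

In a real text-to-image setting the inner loop would instead run the chosen adaptation method (e.g., a few LoRA or DreamBooth steps) and the utility term would anchor the immunized weights to the original model's generations; the one-step unroll above is only the smallest self-contained version of that structure.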

Citation History

Jan 25, 2026: 13 citations
Feb 13, 2026: 14 citations (+1)