Watermark Stealing in Large Language Models

75 citations · #194 of 2635 papers in ICML 2024

Abstract

LLM watermarking has attracted attention as a promising way to detect AI-generated content, with some works suggesting that current schemes may already be fit for deployment. In this work we dispute this claim, identifying watermark stealing (WS) as a fundamental vulnerability of these schemes. We show that querying the API of the watermarked LLM to approximately reverse-engineer a watermark enables practical spoofing attacks, as hypothesized in prior work, but also greatly boosts scrubbing attacks, which was previously unnoticed. We are the first to propose an automated WS algorithm and use it in the first comprehensive study of spoofing and scrubbing in realistic settings. We show that for under $50 an attacker can both spoof and scrub state-of-the-art schemes previously considered safe, with an average success rate of over 80%. Our findings challenge common beliefs about LLM watermarking, stressing the need for more robust schemes. We make all our code and additional examples available at https://watermark-stealing.org.

Citation History

Jan 28, 2026: 0 citations
Feb 13, 2026: 75 citations