Highly Compressed Tokenizer Can Generate Without Training

9 citations · ranked #613 of 3340 papers in ICML 2025

Abstract

Commonly used image tokenizers produce a 2D grid of spatially arranged tokens. In contrast, so-called 1D image tokenizers represent images as highly compressed one-dimensional sequences of as few as 32 discrete tokens. We find that the high degree of compression achieved by a 1D tokenizer with vector quantization enables image editing and generative capabilities through heuristic manipulation of tokens, demonstrating that even very crude manipulations -- such as copying and replacing tokens between latent representations of images -- enable fine-grained image editing by transferring appearance and semantic attributes. Motivated by the expressivity of the 1D tokenizer's latent space, we construct an image generation pipeline leveraging gradient-based test-time optimization of tokens with plug-and-play loss functions such as reconstruction or CLIP similarity. Our approach is demonstrated for inpainting and text-guided image editing use cases, and can generate diverse and realistic samples without requiring training of any generative model.
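The abstract describes two mechanisms: directly copying or replacing tokens between two images' 1D latent sequences, and gradient-based test-time optimization of the latents under a plug-and-play loss. The sketch below illustrates both under assumed interfaces; the names (decoder, init_latents, loss_fn, swap_tokens) are hypothetical placeholders and not the authors' actual API.

import torch

def swap_tokens(tokens_a, tokens_b, positions):
    # Crude editing: copy the tokens at the given positions from image B's
    # 1D latent sequence into image A's sequence (shapes assumed (1, K, ...)).
    edited = tokens_a.clone()
    edited[:, positions] = tokens_b[:, positions]
    return edited

def optimize_tokens(decoder, init_latents, loss_fn, steps=200, lr=0.1):
    # Test-time optimization: treat the 1D latent token sequence as a free
    # variable and descend on any differentiable, plug-and-play objective.
    latents = init_latents.clone().requires_grad_(True)
    opt = torch.optim.Adam([latents], lr=lr)
    for _ in range(steps):
        image = decoder(latents)      # decode current latents to pixels
        loss = loss_fn(image)         # e.g. masked reconstruction or CLIP loss
        opt.zero_grad()
        loss.backward()               # gradients flow back into the tokens
        opt.step()
    return decoder(latents).detach()

def masked_reconstruction_loss(target, mask):
    # Example plug-and-play objective for inpainting: match the target image
    # only on unmasked pixels, leaving the masked region free to change.
    return lambda img: ((img - target) ** 2 * mask).mean()

For text-guided editing the same loop applies with the loss swapped for a negative CLIP similarity between the decoded image and a text prompt; no generative model is trained at any point.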

Citation History

Jan 28, 2026: 0 citations
Feb 13, 2026: 9 citations