Language-Guided Image Tokenization for Generation

Google DeepMind · MIT CSAIL
CVPR 2025, Oral Presentation

By using text during tokenization, TexTok encodes finer visual details into image tokens, achieving better reconstruction quality across various token budgets, with visibly sharper details such as text in images, car wheels, and bird beaks.

Abstract

Image tokenization, the process of transforming raw image pixels into a compact low-dimensional latent representation, has proven crucial for scalable and efficient image generation. However, mainstream image tokenization methods generally have limited compression rates, making high-resolution image generation computationally expensive. To address this challenge, we propose to leverage language for efficient image tokenization, and we call our method Text-Conditioned Image Tokenization (TexTok). TexTok is a simple yet effective tokenization framework that leverages language to provide a compact, high-level semantic representation. By conditioning the tokenization process on descriptive text captions, TexTok simplifies semantic learning, allowing more learning capacity and token space to be allocated to capture fine-grained visual details, leading to enhanced reconstruction quality and higher compression rates. Compared to the conventional tokenizer without text conditioning, TexTok achieves average reconstruction FID improvements of 29.2% and 48.1% on ImageNet-256 and -512 benchmarks respectively, across varying numbers of tokens. These tokenization improvements consistently translate to 16.3% and 34.3% average improvements in generation FID. By simply replacing the tokenizer in Diffusion Transformer (DiT) with TexTok, our system can achieve a 93.5× inference speedup while still outperforming the original DiT using only 32 tokens on ImageNet-512. TexTok with a vanilla DiT generator achieves state-of-the-art FID scores of 1.46 and 1.62 on ImageNet-256 and -512 respectively. Furthermore, we demonstrate TexTok's superiority on the text-to-image generation task, effectively utilizing the off-the-shelf text captions in tokenization.

Motivation


Tokenization is Key to Generation

Tokenization: Compress raw image data into a compact low-dimensional latent representation, enabling generative models to operate in this latent space instead of the high-dimensional pixel space, thereby improving both computational efficiency and generation quality.
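To make the compression concrete, here is a back-of-the-envelope comparison of raw pixel values against a continuous token latent. The token dimension of 8 is an illustrative assumption, not a number taken from the paper; only the image resolution and token counts mirror the ImageNet-256 setting discussed here.

```python
# Illustrative compression arithmetic: latent tokens vs. raw pixels.
# Token dimension (8) is a hypothetical choice for this sketch.
H, W, C = 256, 256, 3
pixel_values = H * W * C          # 196,608 raw values per image

for n_tokens, token_dim in [(256, 8), (64, 8), (32, 8)]:
    latent_values = n_tokens * token_dim
    ratio = pixel_values / latent_values
    print(f"{n_tokens} tokens -> {ratio:.0f}x fewer values than pixels")
```

Halving the token count doubles this ratio, which is exactly the axis along which the compression/quality tradeoff below plays out.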

Tradeoff between Compression and Quality

High compression rate: low computational cost, but poor reconstruction quality.

Low compression rate: good reconstruction quality, but high computational cost.

Can we achieve the best of both worlds, i.e., low cost and high quality?

Method


Our Idea: Use Text during Tokenization

The most compact and comprehensive representation of an image is its caption.

Providing the text caption to the tokenizer allows it to:

  • Simplify semantic learning
  • Allow more learning capacity and token space to be allocated to capture fine-grained visual details
  • Achieve better quality without compromising cost

Our Model: Text-Conditioned Image Tokenization (TexTok)

TexTok architecture. During training, a frozen text encoder (e.g., T5) extracts text embeddings (text tokens) from the image caption. The embedded image patches, learnable image tokens, and text tokens are fed into the tokenizer (a ViT), which outputs the image tokens. During detokenization, the image tokens are concatenated with the same text tokens used during tokenization, along with learnable patch tokens, to reconstruct the image. For generation, only the image tokens need to be generated.
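The token flow above can be sketched at the shape level. This is a minimal illustration, not the actual implementation: all sizes are hypothetical, and `mix` is a stand-in for the ViT encoder/decoder blocks (any token mixer with the same input/output shape).

```python
import numpy as np

# Hypothetical sizes: P image patches, T text tokens, K image tokens, width D.
P, T, K, D = 256, 32, 64, 768
rng = np.random.default_rng(0)

def mix(tokens):
    """Stand-in for a ViT block stack: maps (N, D) tokens to (N, D) tokens."""
    w = rng.standard_normal((D, D)) / np.sqrt(D)
    return np.tanh(tokens @ w)

# --- Tokenization ---
patches     = rng.standard_normal((P, D))  # embedded image patches
text_tokens = rng.standard_normal((T, D))  # frozen T5 embeddings of the caption
learn_img   = np.zeros((K, D))             # learnable image-token queries

enc_out = mix(np.concatenate([patches, learn_img, text_tokens], axis=0))
image_tokens = enc_out[P:P + K]            # keep only the K image tokens

# --- Detokenization (same text tokens, plus learnable patch queries) ---
learn_patch = np.zeros((P, D))
dec_out = mix(np.concatenate([learn_patch, image_tokens, text_tokens], axis=0))
recon_patches = dec_out[:P]                # reconstructed patch latents

print(image_tokens.shape, recon_patches.shape)  # (64, 768) (256, 768)
```

The point of the sketch is the interface: the text tokens enter both the tokenizer and detokenizer but are never part of the latent, so the generator only ever has to model the K image tokens.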

Results


Reconstruction Results

ImageNet 256x256

ImageNet 512x512

Reconstruction FID of TexTok vs. the baseline (w/o text) on ImageNet 256x256 and 512x512 for different numbers of image tokens. With text conditioning, TexTok can use half (2× compression) or a quarter (4× compression) as many tokens to achieve rFID similar to the baseline (w/o text) on ImageNet-256 and -512, respectively.

Tokenization Improvements Translate to Generation

Image reconstruction and generation performance comparison of TexTok with Baseline (w/o text) on ImageNet-256 and -512. TexTok consistently delivers significant improvements in reconstruction and generation performance, with more pronounced gains as the number of tokens decreases.

System-level Image Generation Benchmarking

System-level comparison of class-conditional image generation on ImageNet-256 and -512. TexTok-256 + DiT-XL achieves state-of-the-art performance on both image resolutions. Further, TexTok with 64/32 tokens outperforms vanilla DiT with 1024/4096 tokens on ImageNet-256/-512.

Generation Speed/Quality Frontier

ImageNet 256x256

ImageNet 512x512

Speed/Quality frontier of TexTok + DiT-XL compared to the original DiT-XL/2 on ImageNet-256 and -512. TexTok achieves the same generation quality 14.3×/93.5× faster, or improves FID by 34.0%/46.7% at similar inference time.

Conditional Image Generation Samples

Text-to-Image Generation Samples

BibTeX

@inproceedings{zha2025language,
  title={Language-guided image tokenization for generation},
  author={Zha, Kaiwen and Yu, Lijun and Fathi, Alireza and Ross, David A and Schmid, Cordelia and Katabi, Dina and Gu, Xiuye},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={15713--15722},
  year={2025}
}