Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing

🌈 CVPR 2024

KAIST (Korea Advanced Institute of Science and Technology)

Abstract

With the remarkable advent of text-to-image diffusion models, image editing methods have become more diverse and continue to evolve. A promising recent approach in this realm is Delta Denoising Score (DDS) - an image editing technique based on the Score Distillation Sampling (SDS) framework that leverages the rich generative prior of text-to-image diffusion models. However, relying solely on the difference between scoring functions is insufficient for preserving specific structural elements from the original image, a crucial aspect of image editing. Inspired by the similarities and differences between DDS and contrastive learning for unpaired image-to-image translation (CUT), here we present an embarrassingly simple yet very powerful modification of DDS, called Contrastive Denoising Score (CDS), for latent diffusion models (LDM). Specifically, to enforce structural correspondence between the input and output while maintaining the controllability of contents, we introduce a straightforward approach to regulate structural consistency using the CUT loss within the DDS framework. To calculate this loss, instead of employing auxiliary networks, we utilize the intermediate features of the LDM, in particular those from the self-attention layers, which possess rich spatial information. Our approach enables zero-shot image-to-image translation and neural radiance field (NeRF) editing, achieving a well-balanced interplay between maintaining structural details and transforming content. Qualitative results and comparisons demonstrate the effectiveness of our proposed method.

Main Idea

Key observation


Aside from the actual image generation processes, both the DDS and CUT approaches attempt to align features from the reference and reconstruction branches. With the aim of regulating excessive structural changes, we draw on the key idea from CUT, which has been shown to be effective in maintaining the input structure.
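
To make the connection concrete, the DDS update can be sketched as follows (a hedged paraphrase of the DDS formulation; the notation and the weight $\lambda$ below are illustrative):

$$
\nabla_\theta \mathcal{L}_{\mathrm{DDS}} = \epsilon_\phi(\hat{z}_t, \hat{y}, t) - \epsilon_\phi(z_t, y, t),
$$

where $z_t$ and $\hat{z}_t$ are the noised latents of the reference and the edited image, and $y$, $\hat{y}$ are the source and target prompts. CDS augments this gradient with a contrastive structural regularizer computed on self-attention features:

$$
\mathcal{L}_{\mathrm{CDS}} = \mathcal{L}_{\mathrm{DDS}} + \lambda\, \mathcal{L}_{\mathrm{CUT}}.
$$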

Overview of method

The original CUT algorithm requires training an auxiliary encoder to extract spatial information from the input image, which is inefficient. Instead, we calculate the CUT loss using the latent representations of the self-attention layers. These features have been shown to contain rich spatial information, disentangled from semantic information, which is exactly what the CUT loss requires.
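
As a minimal sketch, the patch-wise contrastive (PatchNCE) term can be computed on self-attention features along the following lines. Here `feat_src` / `feat_tgt` stand for token features grabbed (e.g., via forward hooks) from the same self-attention layer of the LDM U-Net for the reference and edited latents; the function name and the hyperparameters `n_patches` and `tau` are illustrative assumptions, not the authors' released implementation:

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_tgt, n_patches=256, tau=0.07):
    """Patch-wise contrastive (CUT/PatchNCE) loss on self-attention features.

    feat_src, feat_tgt: (B, N, C) token features from the same self-attention
    layer for the reference and edited latents (shapes are assumptions).
    """
    B, N, C = feat_src.shape
    # Sample a shared set of patch (token) locations for both branches.
    ids = torch.randperm(N, device=feat_src.device)[: min(n_patches, N)]
    q = F.normalize(feat_tgt[:, ids], dim=-1)  # queries: edited branch
    k = F.normalize(feat_src[:, ids], dim=-1)  # keys: reference branch
    # Similarity of every query patch to every sampled reference patch.
    logits = torch.bmm(q, k.transpose(1, 2)) / tau  # (B, P, P)
    # Positive pair = same spatial location; other locations are negatives.
    labels = torch.arange(q.shape[1], device=q.device).expand(B, -1)
    return F.cross_entropy(logits.flatten(0, 1), labels.reshape(-1))
```

The gradient of this term is then added to the DDS gradient at each optimization step, so the weight on it trades content change against structure preservation.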


Additional Results