Faster customization of text-to-image models with DVAR
Diffusion models are currently the most powerful approach to generating images from text descriptions. One popular application is adapting such models to new concepts: it is often difficult, or even impossible, to write a prompt that accurately describes the target object.
At the same time, existing methods are too slow for practical purposes. In “Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics”, our paper accepted at NeurIPS 2023, we propose DVAR, a lightweight early stopping criterion for gradient-based adaptation methods. Our experiments with Stable Diffusion show up to 8 times faster adaptation with only minor losses in identity preservation quality and less overfitting. We release the source code of DVAR so that anyone can speed up their adaptation pipeline.
Task description
A typical adaptation task for text-to-image models is to teach the model to generate images of a new concept given only 3–4 example pictures (see the figure above, taken from the DreamBooth paper). For example, you may want to draw your dog on the beach, but the model has never seen it, and it is extremely difficult to describe exactly how it looks with a textual prompt. It is much easier to take a few photos and show them to the model.
There are many methods for solving this problem, but most of them are computationally expensive (taking up to two hours on a single GPU) due to the large number of optimization steps. We consider the three most common approaches: Textual Inversion [1], Custom Diffusion [2], and DreamBooth [3]. Each method creates a new pseudo-token in the text encoder vocabulary and uses gradient descent to learn its vector representation.
Returning to the example with the dog, we can create a new word "[V]", write the prompt "[V] dog on the beach", and pass it to the model.
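To make this concrete, below is a rough sketch of how such a pseudo-token can be registered in the Textual Inversion style, using the Hugging Face CLIP tokenizer and text encoder that Stable Diffusion builds on. The checkpoint name, the token "[V]", and the initializer word are illustrative assumptions, not the exact setup of any of the three methods.

```python
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# Register the new pseudo-word and grow the embedding matrix by one row
tokenizer.add_tokens(["[V]"])
text_encoder.resize_token_embeddings(len(tokenizer))

# Initialize the new embedding from a coarse class word, e.g. "dog"
new_id = tokenizer.convert_tokens_to_ids("[V]")
init_id = tokenizer.encode("dog", add_special_tokens=False)[0]
embeddings = text_encoder.get_input_embeddings()
embeddings.weight.data[new_id] = embeddings.weight.data[init_id].clone()

# This single embedding row is what Textual Inversion optimizes;
# prompts like "[V] dog on the beach" are then tokenized as usual.
```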
Diffusion models
To explain our method, we need to recall how diffusion models work. Diffusion models generate samples by iterative denoising. The forward process $x_0 \rightarrow x_T$ is defined as a stepwise addition of noise $\epsilon_t$ to an image $x_t$. The model $\epsilon_{\theta}$ learns to reverse this process ($x_T \rightarrow x_0$) by predicting the noise added at each step, minimizing the objective:
$|| \epsilon_t - \epsilon_{\theta}(x_t, c, t)||^2_2$,
where $c$ represents a condition (caption embedding in case of text-to-image generation). These processes are visualized in the picture below.
Sampling starts from $x_T$ drawn from a standard normal distribution and runs the reverse process over a fixed sequence of timesteps $t = T, \dots, 1$.
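For concreteness, here is a minimal PyTorch sketch of one training step under this objective. The names are assumptions made for illustration: `eps_model` stands for the noise-prediction U-Net, and `alphas_cumprod` for the DDPM noise schedule [4]; the actual Stable Diffusion training code differs in details.

```python
import torch
import torch.nn.functional as F

def diffusion_training_loss(eps_model, x0, cond, alphas_cumprod):
    """One step of the epsilon-prediction objective above."""
    b = x0.shape[0]
    # Randomly pick a diffusion timestep and Gaussian noise for each sample
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    eps = torch.randn_like(x0)
    # Forward process: corrupt the clean input x0 up to step t
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    # The model predicts the added noise; the objective is the squared error
    return F.mse_loss(eps_model(x_t, cond, t), eps)
```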
Loss correction
Intuitively, early stopping seems like the easiest way to speed up training, but we need an indicator that the quality of the model is already good enough. Typically, such an indicator is the value of the loss function or the norm of the gradients. In our task, they turned out not to be informative: their behavior is shown in the picture above. On the other hand, pairwise CLIP [7] image similarity with the training set grows sharply only during the early stages, indicating that the predefined number of training steps might be excessive for these adaptation methods. However, using this metric for early stopping is too costly, as it requires sampling intermediate images and evaluating CLIP on them.
After further investigation, we observed that the training loss is extremely noisy, which was not addressed in previous works. The reason is that the training and generation processes include multiple sources of randomness, namely:
- Sampling of input images
- Sampling of corresponding captions
- Stochastic autoencoder of image representations (VAE)
- Selection of the diffusion timestep
- Sampling of starting Gaussian noise
We identified how each factor affects the training loss by training the model in its original setup and additionally evaluating semi-deterministic losses in which only a single factor is resampled (a sketch of this procedure is given below). We found that captions and VAE encoder noise do not affect the loss value; on the other hand, resampled diffusion timesteps, noise, and images make the loss uninformative.
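The sketch below illustrates this ablation under the same epsilon-prediction setup as above. `fixed` is assumed to be a dictionary holding latents, caption embeddings, timesteps, and noise sampled once, and `resample` selects the single factor to redraw; the names are illustrative rather than the exact code of our experiments.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def semi_deterministic_loss(eps_model, fixed, alphas_cumprod, resample=None):
    """Training objective on a frozen batch, with at most one factor redrawn."""
    latents, cond, t, eps = fixed["latents"], fixed["cond"], fixed["t"], fixed["eps"]
    if resample == "timestep":
        t = torch.randint(0, len(alphas_cumprod), t.shape, device=t.device)
    elif resample == "noise":
        eps = torch.randn_like(latents)
    # ... analogous branches would redraw the input images, captions, or VAE noise
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * latents + (1.0 - a_bar).sqrt() * eps
    return F.mse_loss(eps_model(x_t, cond, t), eps).item()
```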
DVAR
Our method consists of two parts: a deterministic loss function $\mathcal{L}_{det}$ and an early stopping criterion based on it. To obtain $\mathcal{L}_{det}$, it is enough to fix a single batch before training starts and to run an additional forward pass of the model on this batch every few steps. Since we do not perform a backward pass, this procedure does not noticeably slow down training or affect it in any way. As you can see in the figure below, this loss becomes informative.
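As a rough sketch, such a frozen batch could be prepared once before training along the following lines, assuming the usual Stable Diffusion components (`vae`, `text_encoder`, `tokenizer`) and the helper from the previous section; with it, $\mathcal{L}_{det}$ is simply `semi_deterministic_loss(eps_model, fixed, alphas_cumprod)` with nothing resampled. The function name and details are illustrative, not our exact implementation.

```python
import torch

@torch.no_grad()
def make_fixed_batch(vae, text_encoder, tokenizer, images, prompt, num_timesteps=1000):
    """Sample every source of randomness once and keep it frozen afterwards."""
    # One fixed draw from the VAE posterior for the (preprocessed) reference images
    latents = vae.encode(images).latent_dist.sample() * 0.18215
    # One fixed caption containing the pseudo-token, e.g. "a photo of [V]"
    ids = tokenizer([prompt] * len(images), padding="max_length",
                    truncation=True, return_tensors="pt").input_ids
    cond = text_encoder(ids)[0]
    # Fixed diffusion timesteps and Gaussian noise
    t = torch.randint(0, num_timesteps, (len(images),))
    eps = torch.randn_like(latents)
    return {"latents": latents, "cond": cond, "t": t, "eps": eps}
```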
Based on this loss, we developed Deterministic VARiance Evaluation (DVAR), a simple variance-based early stopping criterion. It maintains a rolling estimate of the variance of $\mathcal{L}_{det}$ over the last N steps and stops training once this rolling variance drops below a fraction $\alpha$ of the global variance estimate. The criterion is easy to implement and has just two hyperparameters that are easy to tune. A NumPy/PyTorch implementation is shown below; for a full example of using DVAR, see our paper or the GitHub repository.
def DVAR(losses, window_size, threshold):
    # Variance of the deterministic loss over the last `window_size` evaluations
    running_var = losses[-window_size:].var()
    # Variance over the entire history of evaluations
    total_var = losses.var()
    # Stop once recent fluctuations are a small fraction of the overall variance
    ratio = running_var / total_var
    return ratio < threshold
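A hypothetical end-to-end usage sketch, reusing the helper names introduced above (`semi_deterministic_loss`, `fixed`, `eps_model`, `alphas_cumprod`): `training_step()` stands for one optimizer update of the chosen customization method, and the hyperparameter values are illustrative rather than the ones tuned in the paper.

```python
import numpy as np

window_size, threshold, eval_every = 50, 0.15, 5  # illustrative values
max_steps = 5000

det_losses = []
for step in range(max_steps):
    training_step()  # one regular optimization step of the customization method
    if step % eval_every == 0:
        # Extra forward pass on the frozen batch; no backward pass is needed
        det_losses.append(semi_deterministic_loss(eps_model, fixed, alphas_cumprod))
        losses = np.array(det_losses)
        if len(losses) > window_size and DVAR(losses, window_size, threshold):
            break  # the deterministic loss has converged, stop training
```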
Results
To evaluate the quality of customized models, we consider three characteristics: identity preservation, reconstruction ability, and alignment with unseen prompts. To assess the first two, we compute CLIP image-to-image similarity between the reference images and images generated for training/validation prompts; we call these metrics Train CLIP img and Val CLIP img, respectively. To evaluate the third characteristic, we use CLIP image-to-text similarity between validation images and their prompts; we call this metric Val CLIP txt.
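Below is a rough sketch of how such CLIP-based metrics can be computed with the Hugging Face CLIP model; the checkpoint and the averaging over pairs are illustrative assumptions and may differ from the exact evaluation protocol used in the paper.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_img_similarity(generated, references):
    """Mean pairwise cosine similarity between two lists of PIL images."""
    gen = model.get_image_features(**processor(images=generated, return_tensors="pt"))
    ref = model.get_image_features(**processor(images=references, return_tensors="pt"))
    gen = gen / gen.norm(dim=-1, keepdim=True)
    ref = ref / ref.norm(dim=-1, keepdim=True)
    return (gen @ ref.T).mean().item()

@torch.no_grad()
def clip_txt_similarity(generated, prompt):
    """Mean cosine similarity between generated images and their text prompt."""
    img = model.get_image_features(**processor(images=generated, return_tensors="pt"))
    txt = model.get_text_features(**processor(text=[prompt], padding=True, return_tensors="pt"))
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img @ txt.T).mean().item()
```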
Our experiments were conducted on the datasets provided by the authors of all three customization methods (48 concepts in total). Aggregated results for each approach are shown in the tables below. We compare our approach with several baselines: the original training setup (Baseline), CLIP-based early stopping (CLIP-s), and a baseline with a fixed number of steps equal to the average number of CLIP-s iterations (Few Iters). We observed that DVAR is more efficient than Baseline and CLIP-s in terms of overall runtime. An additional advantage of DVAR over Few Iters is its adaptivity, since not all concepts are equally easy to learn.
Textual Inversion
Method | Train CLIP img | Val CLIP img | Val CLIP txt | Iterations | Time (min) |
---|---|---|---|---|---|
Baseline | 0.840 | 0.786 | 0.209 | 6100 | 27.0 |
CLIP-s | 0.824 | 0.757 | 0.233 | 666.7 | 9.6 |
Few Iters | 0.796 | 0.744 | 0.232 | 475 | 1.6 |
DVAR | 0.795 | 0.748 | 0.227 | 566 | 3.1 |
DreamBooth
Method | Train CLIP img | Val CLIP img | Val CLIP txt | Iterations | Time (min) |
---|---|---|---|---|---|
Baseline | 0.857 | 0.824 | 0.205 | 1000 | 8.1 |
CLIP-s | 0.862 | 0.788 | 0.225 | 353.2 | 6.1 |
Few Iters | 0.855 | 0.806 | 0.219 | 367 | 1.9 |
DVAR | 0.784 | 0.687 | 0.238 | 665.3 | 4.9 |
Custom Diffusion
Method | Train CLIP img | Val CLIP img | Val CLIP txt | Iterations | Time (min) |
---|---|---|---|---|---|
Baseline | 0.857 | 0.824 | 0.205 | 1000 | 8.1 |
CLIP-s | 0.862 | 0.788 | 0.225 | 353.2 | 6.1 |
Few Iters | 0.855 | 0.806 | 0.219 | 367 | 1.9 |
DVAR | 0.784 | 0.687 | 0.238 | 665.3 | 4.9 |
In addition to the automatic evaluation, we also conducted a human evaluation: two surveys on the Toloka crowdsourcing platform. The first survey compared the ability of a model trained with DVAR to recreate the training images against the baseline; annotators were asked to choose the image most similar to the reference object. The second survey compared samples generated for new prompts to see how well the model was customized; participants had to decide which image matched the prompt better. Our findings are shown in the table below. As you can see, DVAR enables early stopping without compromising reconstruction quality for two out of three customization methods. Although applying DVAR to Textual Inversion slightly decreases reconstruction quality, this is likely due to overfitting of the original method; the other customization methods use fewer iterations and avoid this problem.
Method | Reconstruction | Customization |
---|---|---|
Textual Inversion | 41.6 | 79.9 |
DreamBooth | 67.9 | 91.4 |
Custom Diffusion | 69.9 | 93.8 |
Another important advantage of DVAR is reduced overfitting. In the adaptation task, overfitting leads to a worse alignment with an unseen validation prompt. In other words, the model tends to generate outputs that resemble the training images, regardless of the input prompt, as shown in the figure below.
Conclusion
Fast adaptation of diffusion models is an emerging area of research. In our work, we present DVAR, a lightweight and easy-to-implement early stopping criterion. Our approach simplifies the practical use of personalized models by reducing the computational cost of training them. We also believe that our analysis of the diffusion objective's behavior will lead to a deeper understanding of the entire training process. Since the proposed deterministic loss corresponds to model performance better than the original one, it could be used not only for the adaptation task but also for training diffusion models in general.
References
- [1] R. Gal, Y. Alaluf, Y. Atzmon, O. Patashnik, A. H. Bermano, G. Chechik, and D. Cohen-Or. An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion. ICLR 2023.
- [2] N. Kumari, B. Zhang, R. Zhang, E. Shechtman, and J.-Y. Zhu. Multi-Concept Customization of Text-to-Image Diffusion. CVPR 2023.
- [3] N. Ruiz, Y. Li, V. Jampani, Y. Pritch, M. Rubinstein, and K. Aberman. DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation. CVPR 2023.
- [4] J. Ho, A. Jain, and P. Abbeel. Denoising Diffusion Probabilistic Models. NeurIPS 2020.
- [5] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-Resolution Image Synthesis with Latent Diffusion Models. CVPR 2022.
- [6] D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. ICLR 2014.
- [7] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning Transferable Visual Models From Natural Language Supervision. ICML 2021.