Paper accepted to CVPR 2024

Our paper has been accepted for publication at the Conference on Computer Vision and Pattern Recognition (CVPR 2024).
Knowledge distillation methods have recently been shown to be a promising direction for speeding up the synthesis of large-scale diffusion models. Unfortunately, the overall quality of student samples is typically lower than that of teacher samples, which hinders their practical use. Surprisingly, we discover that a noticeable portion of student samples exhibit superior fidelity to the teacher ones, despite the “approximate” nature of the student. Based on this finding, we propose an adaptive collaboration between student and teacher diffusion models for effective text-to-image synthesis. Specifically, the distilled model produces an initial sample, and then an oracle decides whether it needs further improvement by the slow teacher model.
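
The pipeline the abstract describes can be summarized in a few lines. Below is a minimal sketch, assuming hypothetical stand-ins `student`, `teacher`, and `oracle` for the distilled model, the full diffusion model, and the quality estimator; the names, signatures, and the fixed acceptance threshold are illustrative assumptions, not the paper's actual API.

```python
from typing import Any, Callable

def adaptive_generate(
    prompt: str,
    student: Callable[[str], Any],        # fast distilled model (assumed interface)
    teacher: Callable[[str, Any], Any],   # slow teacher model used for refinement
    oracle: Callable[[str, Any], float],  # estimates fidelity of a sample
    threshold: float = 0.5,               # illustrative acceptance threshold
) -> Any:
    """Generate with the student first; invoke the slow teacher
    only when the oracle judges the student sample insufficient."""
    sample = student(prompt)                 # cheap initial sample
    if oracle(prompt, sample) >= threshold:  # good enough: return as-is
        return sample
    return teacher(prompt, sample)           # otherwise refine with the teacher
```

The speedup comes from skipping the expensive teacher pass whenever the oracle accepts the student sample, so the average cost sits between the student's and the teacher's, governed by the acceptance rate.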