Lesson 5 • 2 min
Why Z-Image is Fast
The distillation shortcut
The master chef analogy
A master chef (slow, meticulous) teaches a line cook (fast, efficient) to make the same dish. The line cook can't replicate every nuance, but delivers about 95% of the quality in a sixth of the time.
Traditional diffusion models need 50 denoising steps. That means running the neural network 50 times per image. Z-Image-Turbo uses only 8 steps—and produces nearly the same quality.
Try it: compare 50-step vs. 8-step generation with the same prompt and the same starting noise.
How? Through "distillation"—training a fast student model to match a slow teacher model. The student learns to take bigger steps, jumping from noisy to clean in fewer iterations.
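A toy sketch of the idea, using a deliberately simplified "denoiser" that just nudges a noise vector toward a known clean target. This is an illustration, not Z-Image's actual training recipe: the real teacher and student are neural networks, and real distillation fits the student with a learned matching loss rather than a closed-form stride.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(size=16)    # toy "clean image": a 16-dim vector
noise = rng.normal(size=16)    # shared starting noise

def denoise(x, steps, alpha):
    # One "network call" per step: move a fraction alpha toward the clean image.
    for _ in range(steps):
        x = x + alpha * (clean - x)
    return x

# Teacher: 50 cautious steps.
teacher_out = denoise(noise, steps=50, alpha=0.10)

# Student: reach the same endpoint in 8 bigger steps. In this linear toy
# the matching stride has a closed form: (1 - alpha_s)**8 == (1 - 0.10)**50,
# so alpha_s ~ 0.48 -- nearly five times the teacher's stride.
alpha_s = 1 - (1 - 0.10) ** (50 / 8)

student_out = denoise(noise, steps=8, alpha=alpha_s)
mismatch = np.max(np.abs(student_out - teacher_out))
print(f"student stride: {alpha_s:.3f}")   # ~0.482
print(f"max output gap: {mismatch:.2e}")  # effectively identical endpoints
```

The point of the sketch: the student doesn't skip work randomly; it learns a larger per-step stride whose 8 applications land where the teacher's 50 do.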
Standard model: 50 steps × 200ms = 10 seconds
Z-Image-Turbo: 8 steps × 100ms = 0.8 seconds
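The arithmetic behind those two lines can be checked directly (the per-step times are this lesson's illustrative figures, not measured benchmarks):

```python
# Back-of-envelope check of the timing figures above.
standard = 50 * 0.200   # 50 steps at 200 ms each -> seconds
turbo = 8 * 0.100       # 8 steps at 100 ms each -> seconds
print(f"standard: {standard:.1f} s, turbo: {turbo:.1f} s")  # 10.0 s vs 0.8 s
print(f"speedup: {standard / turbo:.1f}x")                  # 12.5x
```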
12.5x faster, ~95% quality retention

Quick Win
You've completed Module 1! You now understand: the core challenge, how models learn, the denoising insight, the pipeline, and why Z-Image is fast.