Lesson 4 • 2 min
Bilingual Magic
How it handles multiple languages
Z-Image-Turbo can understand both English and Chinese prompts. How? The encoder was trained on text in both languages, learning that "cat" and "猫" should produce similar embeddings.
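"Similar embeddings" is usually measured with cosine similarity: the dot product of two vectors divided by the product of their lengths. Here is a minimal sketch of that comparison in Python. The vectors are made-up toy values for illustration, not real Z-Image-Turbo encoder outputs.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (illustrative values only).
emb_cat_en = [0.82, -0.34, 0.10, 0.55]     # stands in for encode("cat")
emb_cat_zh = [0.79, -0.31, 0.12, 0.50]     # stands in for encode("猫")
emb_other  = [-0.60, 0.75, -0.20, 0.05]    # stands in for an unrelated prompt

print(cosine_similarity(emb_cat_en, emb_cat_zh))  # close to 1.0: same meaning
print(cosine_similarity(emb_cat_en, emb_other))   # much lower: different meaning
```

A well-trained bilingual encoder pushes translations of the same phrase toward high cosine similarity, which is what lets one model serve prompts in either language.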
A bilingual librarian
Imagine a librarian who can find the same book whether you ask in English or Chinese. They've learned that different words in different languages can mean the same thing.
Compare embeddings for "cat" vs "猫"
Same meaning, different tokens
// Different tokens
tokenize("sunset over ocean") // [18294, 962, 8241]
tokenize("海上日落") // [45982, 23847]
// But similar embeddings after encoding!
encode("sunset over ocean") // [0.82, -0.34, ...]
encode("海上日落") // [0.79, -0.31, ...]
// Cosine similarity: 0.94

Quick Win
Module 2 complete! You understand: tokenization, embeddings, context-aware encoding, and multilingual support.