Feature: Add Z-Image-Turbo model support #8671
Conversation
Add comprehensive support for Z-Image-Turbo (S3-DiT) models including:

Backend:
- New BaseModelType.ZImage in taxonomy
- Z-Image model config classes (ZImageTransformerConfig, Qwen3TextEncoderConfig)
- Model loader for Z-Image transformer and Qwen3 text encoder
- Z-Image conditioning data structures
- Step callback support for Z-Image with FLUX latent RGB factors

Invocations:
- z_image_model_loader: Load Z-Image transformer and Qwen3 encoder
- z_image_text_encoder: Encode prompts using Qwen3 with chat template
- z_image_denoise: Flow matching denoising with time-shifted sigmas (see the sketch below)
- z_image_image_to_latents: Encode images to 16-channel latents
- z_image_latents_to_image: Decode latents using FLUX VAE

Frontend:
- Z-Image graph builder for text-to-image generation
- Model picker and validation updates for z-image base type
- CFG scale now allows 0 (required for Z-Image-Turbo)
- CLIP skip disabled for Z-Image (uses Qwen3, not CLIP)
- Optimal dimension settings for Z-Image (1024x1024)

Technical details:
- Uses Qwen3 text encoder (not CLIP/T5)
- 16 latent channels with FLUX-compatible VAE
- Flow matching scheduler with dynamic time shift
- 8 inference steps recommended for Turbo variant
- bfloat16 inference dtype
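For orientation, here is a minimal Python sketch of a flow matching loop over time-shifted sigmas. It is illustrative only: the transformer call signature, the shift value, and the schedule helper are assumptions, not the PR's actual code; the commit only states that z_image_denoise does flow matching over time-shifted sigmas, with about 8 steps for the Turbo variant.

```
import torch

def shifted_sigmas(num_steps: int, shift: float = 3.0) -> torch.Tensor:
    # Linear sigma schedule from pure noise (1.0) to clean (0.0), then a
    # FLUX-style time shift that concentrates steps in the high-noise region.
    sigmas = torch.linspace(1.0, 0.0, num_steps + 1)
    return shift * sigmas / (1 + (shift - 1) * sigmas)

@torch.no_grad()
def denoise(transformer, latents, conditioning, num_steps=8, shift=3.0):
    # transformer(...) signature is hypothetical; it stands in for the
    # S3-DiT model predicting the flow (velocity) at each timestep.
    sigmas = shifted_sigmas(num_steps, shift).to(latents.device, latents.dtype)
    for i in range(num_steps):
        t = sigmas[i].expand(latents.shape[0])
        velocity = transformer(latents, timestep=t, encoder_hidden_states=conditioning)
        # Plain Euler step along the predicted flow.
        latents = latents + (sigmas[i + 1] - sigmas[i]) * velocity
    return latents
```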
Add comprehensive LoRA support for Z-Image models including:

Backend:
- New Z-Image LoRA config classes (LoRA_LyCORIS_ZImage_Config, LoRA_Diffusers_ZImage_Config)
- Z-Image LoRA conversion utilities with key mapping for transformer and Qwen3 encoder
- LoRA prefix constants (Z_IMAGE_LORA_TRANSFORMER_PREFIX, Z_IMAGE_LORA_QWEN3_PREFIX) (see the sketch below)
- LoRA detection logic to distinguish Z-Image from Flux models
- Layer patcher improvements for proper dtype conversion and parameter
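The prefix constants route LoRA keys to either the transformer or the Qwen3 encoder. A minimal Python sketch of that split, with assumed prefix values (a later commit notes that Z-Image LoRAs address the transformer as `diffusion_model.layers.X`):

```
# Assumed values; the real constants are defined in the PR's LoRA utilities.
Z_IMAGE_LORA_TRANSFORMER_PREFIX = "diffusion_model."
Z_IMAGE_LORA_QWEN3_PREFIX = "text_encoder."

def split_lora_state_dict(state_dict: dict) -> tuple[dict, dict]:
    # Route each LoRA key to the submodel its prefix names.
    transformer_sd, qwen3_sd = {}, {}
    for key, value in state_dict.items():
        if key.startswith(Z_IMAGE_LORA_TRANSFORMER_PREFIX):
            transformer_sd[key.removeprefix(Z_IMAGE_LORA_TRANSFORMER_PREFIX)] = value
        elif key.startswith(Z_IMAGE_LORA_QWEN3_PREFIX):
            qwen3_sd[key.removeprefix(Z_IMAGE_LORA_QWEN3_PREFIX)] = value
    return transformer_sd, qwen3_sd
```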
Very impressive. The model is working with acceptable performance even on my 12 GB card. I notice the following message in the error log:

Would it be possible to add support for the quantized models, e.g. T5B/Z-Image-Turbo-FP8 or jayn7/Z-Image-Turbo-GGUF?
I'll take a look at it and report back.
I tried two Hugging Face LoRAs that claim to be based on Z-Image, but they were detected as Flux LyCORIS models: reverentelusarca/elusarca-anime-style-lora-z-image-turbo
…ntification

Move the Flux layer structure check before the metadata check to prevent misidentifying Z-Image LoRAs (which use `diffusion_model.layers.X`) as Flux AI Toolkit format. Flux models use `double_blocks` and `single_blocks` patterns, which are now checked first regardless of metadata presence.
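A minimal Python sketch of the ordering this commit describes, with hypothetical function and return names; the point is only that the structural check runs before any metadata-based classification:

```
def detect_lora_base(keys: set[str]) -> str:
    # Flux transformers expose double_blocks / single_blocks layer names;
    # check these first, regardless of any metadata in the file.
    if any("double_blocks" in k or "single_blocks" in k for k in keys):
        return "flux"
    # Z-Image (S3-DiT) LoRAs address layers as diffusion_model.layers.N...
    if any(k.startswith("diffusion_model.layers.") for k in keys):
        return "z-image"
    return "unknown"
```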
…ibility

Add comprehensive support for GGUF quantized Z-Image models and improve component flexibility:

Backend:
- New Main_GGUF_ZImage_Config for GGUF quantized Z-Image transformers
- Z-Image key detection (_has_z_image_keys) to identify S3-DiT models
- GGUF quantization detection and sidecar LoRA patching for quantized models (see the sketch below)
- Qwen3Encoder_Qwen3Encoder_Config for standalone Qwen3 encoder models

Model Loader:
- Split Z-Image model
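Sidecar LoRA patching is the usual workaround when weights are GGUF-quantized and cannot be patched in place: the low-rank delta is computed at forward time alongside the frozen base layer. A Python sketch of the idea (not InvokeAI's actual class):

```
import torch

class LoRASidecarLinear(torch.nn.Module):
    # Wraps a (possibly GGUF-quantized) linear layer and adds the LoRA
    # delta at forward time instead of modifying the quantized weights.
    def __init__(self, base, down: torch.Tensor, up: torch.Tensor, scale: float = 1.0):
        super().__init__()
        self.base = base
        self.register_buffer("down", down)  # (rank, in_features)
        self.register_buffer("up", up)      # (out_features, rank)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base weights stay untouched; the delta runs in the input's dtype.
        return self.base(x) + self.scale * (x @ self.down.T @ self.up.T)
```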
…kuchensack/InvokeAI into feat/z-image-turbo-support
When running upscaling, diffusers 0.36.0.dev0 dies because the …
I think this needs support for loading the repackaged safetensors versions of the models that people use with Comfy: the default fp16 version and the fp8 model. People will likely try to load those model files as the transformer and also as the text encoder, and share them between the two programs.
I've tested multiple LoRAs and they import and work correctly.
…inModelConfig
The FLUX Dev license warning in model pickers used isCheckpointMainModelConfig
incorrectly:
```
isCheckpointMainModelConfig(config) && config.variant === 'dev'
```
This caused a TypeScript error because the CheckpointModelConfig type doesn't include the 'variant' property (it is extracted as `{ type: 'main'; format: 'checkpoint' }`, which doesn't narrow to include variant).
Changes:
- Add isFluxDevMainModelConfig type guard that properly checks
base='flux' AND variant='dev', returning MainModelConfig
- Update MainModelPicker and InitialStateMainModelPicker to use new guard
- Remove isCheckpointMainModelConfig as it had no other usages
The function was removed because:
1. It was only used for detecting FLUX Dev models (incorrect use case)
2. No other code needs a generic "is checkpoint format" check
3. The pattern in this codebase is specific type guards per model variant
(isFluxFillMainModelModelConfig, isRefinerMainModelModelConfig, etc.)
The same issue occurs with a model that I manually converted on my end.
Same issue with CFG set to 0 too. Another issue I found: now that 0 CFG is possible, we cannot set it as the model default in the model manager. It bugs out and needs fixing.
…ters

- Add Qwen3EncoderGGUFLoader for llama.cpp GGUF quantized text encoders
- Convert llama.cpp key format (blk.X., token_embd) to PyTorch format (see the sketch below)
- Handle tied embeddings (lm_head.weight ↔ embed_tokens.weight)
- Dequantize embed_tokens for embedding lookups (GGMLTensor limitation)
- Add QK normalization key mappings (q_norm, k_norm) for Qwen3
- Set Z-Image defaults: steps=9, cfg_scale=0.0, width/height=1024
- Allow cfg_scale >= 0 (was >= 1) for Z-Image Turbo compatibility
- Add GGUF format detection for Qwen3 model probing
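A Python sketch of the llama.cpp-to-PyTorch key conversion such a loader performs. The llama.cpp names (blk.N.attn_q, token_embd, ...) follow the GGUF convention; the Hugging Face-style targets on the right are assumptions about the loader's output naming:

```
import re

# Illustrative mapping; the real loader's tables may differ.
DIRECT_MAP = {
    "token_embd.weight": "model.embed_tokens.weight",
    "output_norm.weight": "model.norm.weight",
    "output.weight": "lm_head.weight",
}
BLOCK_MAP = {
    "attn_q": "self_attn.q_proj",
    "attn_k": "self_attn.k_proj",
    "attn_v": "self_attn.v_proj",
    "attn_output": "self_attn.o_proj",
    "attn_q_norm": "self_attn.q_norm",  # Qwen3 QK normalization
    "attn_k_norm": "self_attn.k_norm",
    "ffn_gate": "mlp.gate_proj",
    "ffn_up": "mlp.up_proj",
    "ffn_down": "mlp.down_proj",
    "attn_norm": "input_layernorm",
    "ffn_norm": "post_attention_layernorm",
}

def convert_gguf_key(key: str) -> str:
    # Map "blk.N.<name>.weight" (llama.cpp) to PyTorch/HF format.
    if key in DIRECT_MAP:
        return DIRECT_MAP[key]
    m = re.match(r"blk\.(\d+)\.(.+)\.weight$", key)
    if m and m.group(2) in BLOCK_MAP:
        return f"model.layers.{m.group(1)}.{BLOCK_MAP[m.group(2)]}.weight"
    return key
```

For tied embeddings, if output.weight is absent, the lm_head weight would be filled from the embedding table, matching the lm_head.weight ↔ embed_tokens.weight note above.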
…rNorm

- Add CustomDiffusersRMSNorm for diffusers.models.normalization.RMSNorm
- Add CustomLayerNorm for torch.nn.LayerNorm
- Register both in AUTOCAST_MODULE_TYPE_MAPPING

Enables partial loading (enable_partial_loading: true) for Z-Image models by wrapping their normalization layers with device autocast support (see the sketch below).
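A minimal Python sketch of what such an autocast wrapper does, assuming the partial-loading scheme can leave some weights on another device until needed (illustrative, not the PR's actual class):

```
import torch
import torch.nn.functional as F

class AutocastLayerNorm(torch.nn.LayerNorm):
    # If partial loading left the weights on another device (e.g. CPU),
    # stream them to the input's device for this forward pass.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.weight.to(x.device) if self.weight is not None else None
        bias = self.bias.to(x.device) if self.bias is not None else None
        return F.layer_norm(x, self.normalized_shape, weight, bias, self.eps)
```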
Fixed DEFAULT_TOKENIZER_SOURCE to point to Qwen/Qwen3-4B.
I got the FP8 version (the variant without scaled weights) running.
Added.
Fixed for the one you linked.
Fixed. Tested with Qwen3-4B-Q5_0.gguf and Qwen3-4B-Q6_K.gguf.
Found the problem here. I used the wrong Qwen config.
Gave it a quick test. All the above stuff has been fixed. Tested text-to-image, image-to-image, inpainting, and outpainting. All of them work fine except outpainting -- model-related, maybe? But beyond that, I think this PR is good to go once we clear up the tests and formatting. @lstein could probably give it a quick look over too. There's a new ControlNet model for Z-Image, but I guess that can be another PR. Great job overall.
Yeah, the outpainting is a limitation of the model. I found LanPaint (https://github.com/scraed/LanPaint?tab=readme-ov-file#example-z-image-inpaintlanpaint-k-sampler-5-steps-of-thinking, https://arxiv.org/abs/2502.03491), but that is a lot for now.
The control branch: https://github.com/Pfannkuchensack/InvokeAI/tree/feature/z-image-control. Not ready yet.
I've fixed up the ruff checks. I didn't update the uv lockfile yet; I'll let you do the pinning of the diffusers version on that. Currently it is locked to the dev version from their git. Version 0.36, released last week, suffices for this update, so we can pin that and update the lock file directly. I'll also go through the code later today. There's a lot, but I'll skim through. Once the uv lockfile is resolved, we are a go, I think. Pinging @lstein once more for a second set of eyes before we merge.
lstein left a comment
I've functionally tested this extensively over the past week and did not encounter any hiccups. I'm happy to approve.
…noise node

The Z-Image denoise node outputs latents, not images, so these mixins were unnecessary. Metadata and board handling are correctly done in the L2I (latents-to-image) node. This aligns with how FLUX denoise works.
The previous mixed-precision optimization for FP32 mode only converted some VAE decoder layers (post_quant_conv, conv_in, mid_block) to the latents dtype while leaving others (up_blocks, conv_norm_out) in float32. This caused "expected scalar type Half but found Float" errors after recent diffusers updates.

Simplify FP32 mode to consistently use float32 for both the VAE and the latents, removing the incomplete mixed-precision logic. This trades some VRAM usage for stability and correctness. Also removes the now-unused attention processor imports.
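In effect, the simplified FP32 path is just: everything in float32. A Python sketch under that assumption (the function name is hypothetical; vae.decode(...).sample matches the diffusers AutoencoderKL API):

```
import torch

def decode_latents_fp32(vae, latents: torch.Tensor) -> torch.Tensor:
    # Run both the VAE and the latents in float32 rather than mixing a
    # half-precision latent with partially converted decoder layers.
    vae = vae.to(torch.float32)
    with torch.no_grad():
        return vae.decode(latents.to(torch.float32)).sample
```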
I fixed a problem with SDXL FP32 that may come from the diffusers update.
Sounds good (if anything else breaks, we'll fix it up in another PR). Should I update the uv lockfile to 0.36 and merge the PR? If anyone has an issue, speak now or forever hold your peace until the next PR.
I've upgraded diffusers to 0.36.0 and updated the lock file. If the checks pass, I think this PR is good to merge. Once this is merged, we'll move on to the ControlNet PR.



Summary

Add comprehensive support for Z-Image-Turbo (S3-DiT) models: backend model configs and loaders, Z-Image invocations, frontend graph building and validation, plus LoRA and GGUF quantization support.
Related Issues / Discussions
QA Instructions
Merge Plan
Standard merge, no special considerations needed.
Checklist
What's New copy (if doing a release after this PR)