license: apache-2.0
library_name: videox_fun

Z-Image-Turbo-Fun-Controlnet-Union-2.1

GitHub

Update

  • During testing, we found that applying ControlNet to Z-Image-Turbo caused the model to lose its acceleration capability and produce blurry images. We therefore performed 8-step distillation on the version 2.1 model; the distilled model performs better with 8-step prediction. We have also uploaded a Tile model that can be used for super-resolution generation. [2025.12.22]
  • Due to a typo in version 2.0, control_layers was used instead of control_noise_refiner to process the refiner latents during training. Although the model converged normally, inference was slow because the control_layers forward pass ran twice. Version 2.1 contains an urgent fix, and the speed has returned to normal. [2025.12.17]

Model Card

  • Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps.safetensors: Distilled from version 2.1 with an 8-step distillation algorithm; 8-step prediction is recommended. Compared to version 2.1 at 8 steps, images are clearer and the composition is more reasonable.
  • Z-Image-Turbo-Fun-Controlnet-Tile-2.1-8steps.safetensors: A Tile model trained on high-definition datasets for super-resolution, with a maximum training resolution of 2048x2048. Distilled with an 8-step distillation algorithm; 8-step prediction is recommended.
  • Z-Image-Turbo-Fun-Controlnet-Union-2.1.safetensors: Retrained after fixing the typo in version 2.0, with faster per-step speed. Like version 2.0, the model loses some of its acceleration capability after training and therefore requires more steps.
  • Z-Image-Turbo-Fun-Controlnet-Union-2.0.safetensors: ControlNet weights for Z-Image-Turbo. Compared to version 1.0, it modifies more layers and was trained for longer, but due to a typo in the code the layer blocks were forwarded twice, making inference slower. Supports multiple control conditions such as Canny, Depth, Pose, and MLSD. The model also loses some acceleration capability after training and therefore requires more steps.

Model Features

  • This ControlNet is added on 15 layer blocks and 2 refiner layer blocks. It supports multiple control conditions, including Canny, HED, Depth, Pose, and MLSD, and can be used like a standard ControlNet.
  • Inpainting mode is also supported.
  • Training Process:
    • 2.0: The model was trained from scratch for 70,000 steps on a dataset of 1 million high-quality images covering both general and human-centric content. Training was performed at 1328 resolution using BFloat16 precision, with a batch size of 64, a learning rate of 2e-5, and a text dropout ratio of 0.10.
    • 2.1: Version 2.1 is based on the version 2.0 weights and continued training for an additional 11,000 steps after the typo fix, using the same parameters and dataset.
    • 2.1-8-steps: Version 2.1-8-steps was obtained by distilling version 2.1 for 5,500 steps with an 8-step distillation algorithm.
  • Note on Steps:
    • 2.0 and 2.1: As you increase the control strength (higher control_context_scale values), it's recommended to appropriately increase the number of inference steps to achieve better results and maintain generation quality. This is likely because the control model has not been distilled.
    • 2.1-8-steps: Simply use 8 steps at inference.
  • You can increase control_context_scale for stronger control and better detail preservation; the optimal range is 0.65 to 0.90 (see the settings sketch after this list). For better stability, we highly recommend using a detailed prompt.
  • During testing of versions 2.0 and 2.1, we found that applying ControlNet to Z-Image-Turbo caused the model to lose its acceleration capability and produce blurry images. For detailed strength and step-count tests, see the Scale Test Results below, which were generated with version 2.0.
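
To keep these recommendations in one place, the snippet below summarizes them as a plain Python lookup. It is written for this card only and is not part of the VideoX-Fun API; the step range for the non-distilled checkpoints reflects the Scale Test Results at the end of this page.

# Recommended sampling settings per checkpoint, as described above.
# Plain Python written for this card; not a VideoX-Fun API.
RECOMMENDED_SETTINGS = {
    "Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps.safetensors": {
        "steps": 8,                # distilled: use 8-step prediction
        "control_context_scale": (0.65, 0.90),
    },
    "Z-Image-Turbo-Fun-Controlnet-Tile-2.1-8steps.safetensors": {
        "steps": 8,                # distilled: use 8-step prediction
        "control_context_scale": (0.65, 0.90),
    },
    "Z-Image-Turbo-Fun-Controlnet-Union-2.1.safetensors": {
        "steps": (9, 40),          # not distilled: raise steps as scale grows
        "control_context_scale": (0.65, 0.90),
    },
    "Z-Image-Turbo-Fun-Controlnet-Union-2.0.safetensors": {
        "steps": (9, 40),          # not distilled: raise steps as scale grows
        "control_context_scale": (0.65, 0.90),
    },
}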

TODO

  • Train on better data.

Results

Differences between 2.1 and 2.1-8steps, using 8-step prediction:

Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps vs. Z-Image-Turbo-Fun-Controlnet-Union-2.1

Generation Results

Pose + Inpaint, Pose, Canny, HED, and Depth outputs, plus a Tile example (low resolution vs. high resolution).

Inference

Go to the VideoX-Fun repository for more details.

Please clone the VideoX-Fun repository and create the required directories:

# Clone the code
git clone https://github.com/aigc-apps/VideoX-Fun.git

# Enter VideoX-Fun's directory
cd VideoX-Fun

# Create model directories
mkdir -p models/Diffusion_Transformer
mkdir -p models/Personalized_Model

Then download the weights into models/Diffusion_Transformer and models/Personalized_Model.

📦 models/
├── 📂 Diffusion_Transformer/
│   └── 📂 Z-Image-Turbo/
└── 📂 Personalized_Model/
    ├── 📦 Z-Image-Turbo-Fun-Controlnet-Union-2.1.safetensors
    ├── 📦 Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps.safetensors
    └── 📦 Z-Image-Turbo-Fun-Controlnet-Tile-2.1-8steps.safetensors
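
If you prefer a scripted download, the sketch below uses the huggingface_hub helpers hf_hub_download and snapshot_download; the repo IDs, however, are placeholders that you must replace with the actual Hugging Face repositories hosting Z-Image-Turbo and these ControlNet weights.

# Scripted download sketch (pip install huggingface_hub).
# The repo IDs below are placeholders; substitute the actual repositories.
from huggingface_hub import hf_hub_download, snapshot_download

# Base model into models/Diffusion_Transformer/Z-Image-Turbo
snapshot_download(
    repo_id="<base-model-repo>",  # placeholder: repo hosting Z-Image-Turbo
    local_dir="models/Diffusion_Transformer/Z-Image-Turbo",
)

# ControlNet weights into models/Personalized_Model
for filename in [
    "Z-Image-Turbo-Fun-Controlnet-Union-2.1.safetensors",
    "Z-Image-Turbo-Fun-Controlnet-Union-2.1-8steps.safetensors",
    "Z-Image-Turbo-Fun-Controlnet-Tile-2.1-8steps.safetensors",
]:
    hf_hub_download(
        repo_id="<this-model-repo>",  # placeholder: repo hosting these weights
        filename=filename,
        local_dir="models/Personalized_Model",
    )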

Then run the example scripts examples/z_image_fun/predict_t2i_control_2.1.py and examples/z_image_fun/predict_i2i_inpaint_2.1.py.
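
For orientation only, the sketch below shows where the two knobs discussed in Model Features would plug into a generation call. The pipeline object and keyword names (control_image, num_inference_steps, and control_context_scale as call arguments) are illustrative assumptions, not the actual VideoX-Fun call signature; the example scripts above are the authoritative entry points.

# Illustrative sketch: `pipeline` stands in for whatever object the example
# scripts construct, and the keyword names are assumptions rather than the
# real VideoX-Fun API.
def generate(pipeline, prompt, control_image, distilled=True):
    return pipeline(
        prompt=prompt,                               # detailed prompts recommended
        control_image=control_image,                 # Canny/HED/Depth/Pose/MLSD map
        num_inference_steps=8 if distilled else 30,  # 8 for the -8steps models
        control_context_scale=0.75,                  # recommended range: 0.65-0.90
    )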

(Obsolete) Scale Test Results

The table below shows the generation results under different combinations of Diffusion steps and Control Scale strength:

Rows: Diffusion Steps 9, 10, 20, 30, 40. Columns: Control Scale 0.65, 0.70, 0.75, 0.8, 0.9, 1.0.

Parameter Description:

  • Diffusion Steps: Number of iteration steps for the diffusion model (9, 10, 20, 30, 40)
  • Control Scale: Control strength coefficient (0.65 - 1.0)