FLUX.2 Text Encoder (FP8)

A combined repository for the FLUX.2 text encoder with FP8 quantization, reducing VRAM requirements from ~48GB (BF16) to ~24GB.
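The VRAM figures follow directly from the parameter count and the bytes each format uses per parameter; a quick back-of-the-envelope check (ignoring activation and overhead memory):

```python
# Rough VRAM estimate for model weights alone, from parameter count
# and bytes per parameter (activations/overhead not included).
params = 24e9  # ~24B parameters, per the model card

bf16_gb = params * 2 / 1e9  # BF16: 2 bytes per parameter
fp8_gb = params * 1 / 1e9   # FP8 (E4M3): 1 byte per parameter

print(f"BF16: ~{bf16_gb:.0f} GB, FP8: ~{fp8_gb:.0f} GB")
```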

Components

Usage

from transformers import AutoProcessor, Mistral3ForConditionalGeneration

# Load the FP8-quantized text encoder.
model = Mistral3ForConditionalGeneration.from_pretrained(
    "TensorTemplar/flux2-text-encoder-fp8",
    local_files_only=True,  # set to False to download from the Hub
)
processor = AutoProcessor.from_pretrained(
    "TensorTemplar/flux2-text-encoder-fp8",
    use_fast=False,
)

Purpose

This repo simplifies FLUX.2 deployment by bundling all necessary text encoder components into a single download. It is used to extract intermediate hidden states (from layers 10, 20, and 30) that condition image generation.
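As a minimal sketch of that extraction, assuming the standard `transformers` convention where `output_hidden_states=True` returns a tuple with the embedding output at index 0 (so layer k sits at index k). The layer count, toy dimensions, and the choice to concatenate the three layer outputs are illustrative assumptions, simulated here with random arrays rather than a real forward pass:

```python
import numpy as np

# Simulate the hidden_states tuple a transformers model returns with
# output_hidden_states=True: entry 0 is the embedding output, entries
# 1..N are the outputs of transformer layers 1..N.
num_layers = 40          # hypothetical layer count, for illustration only
seq_len, hidden = 8, 16  # toy dimensions
hidden_states = tuple(
    np.random.randn(1, seq_len, hidden) for _ in range(num_layers + 1)
)

# FLUX.2 conditioning reads layers 10, 20, and 30; with the convention
# above, layer k's output is hidden_states[k]. Concatenating them is one
# illustrative way to combine the three states.
conditioning = np.concatenate([hidden_states[k] for k in (10, 20, 30)], axis=-1)
print(conditioning.shape)  # (1, 8, 48)
```

With a real model, the same indexing applies to `model(**inputs, output_hidden_states=True).hidden_states`.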

Attribution

Model size: 24B parameters (Safetensors)
Tensor types: BF16, F8_E4M3