# FLUX.2 Text Encoder (FP8)
A combined repository for FLUX.2 text encoding with FP8 quantization, requiring ~24 GB of VRAM instead of ~48 GB.
## Components
| Component | Source |
|---|---|
| FP8 Model Weights | RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-FP8-dynamic |
| Tokenizer/Processor | mistralai/Mistral-Small-3.1-24B-Instruct-2503 |
## Usage
```python
from transformers import AutoProcessor, Mistral3ForConditionalGeneration

# FP8 weights; the quantization config ships inside the checkpoint
model = Mistral3ForConditionalGeneration.from_pretrained(
    "TensorTemplar/flux2-text-encoder-fp8",
    local_files_only=True,  # set to False to download from the Hub
)

# Tokenizer/processor from the same repo
processor = AutoProcessor.from_pretrained(
    "TensorTemplar/flux2-text-encoder-fp8",
    use_fast=False,
)
```
## Purpose
This repo exists to simplify FLUX.2 deployment by combining all necessary text-encoder components into a single download. It is used to extract intermediate hidden states (layers 10, 20, and 30) for image-generation conditioning, as sketched below.
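A sketch of that extraction, assuming the standard transformers hidden-states API (the prompt and the downstream wiring into the FLUX.2 pipeline are illustrative):

```python
import torch

inputs = processor(text="a tiny robot tending a bonsai tree", return_tensors="pt")

with torch.no_grad():
    out = model(**inputs.to(model.device), output_hidden_states=True)

# hidden_states[0] is the embedding output; index i is the output of decoder layer i
conditioning = {i: out.hidden_states[i] for i in (10, 20, 30)}
for i, h in conditioning.items():
    print(f"layer {i}: {tuple(h.shape)}")  # (batch, seq_len, hidden_dim)
```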
## Attribution
- FP8 quantization by RedHatAI using llm-compressor
- Original model by Mistral AI