mlx-community/MiMo-V2-Flash-mlx-8bit-gs32

This model mlx-community/MiMo-V2-Flash-mlx-8bit-gs32 was converted to MLX format from XiaomiMiMo/MiMo-V2-Flash using mlx-lm version 0.30.

Recipe:

  • 8-bit
  • group-size 32
  • 9 bits per weight (bpw); a conversion sketch and the bpw arithmetic follow below
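
A quant with this recipe can be reproduced with mlx-lm's Python convert API. This is a minimal sketch, assuming the quantize/q_bits/q_group_size parameters of mlx_lm.convert; the output directory name is a placeholder:

from mlx_lm import convert

# Quantize the original checkpoint to 8-bit with group size 32,
# matching the recipe above. mlx_path is a local output directory.
convert(
    hf_path="XiaomiMiMo/MiMo-V2-Flash",
    mlx_path="MiMo-V2-Flash-mlx-8bit-gs32",
    quantize=True,
    q_bits=8,
    q_group_size=32,
)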

More MLX quants sized to run on a single Apple Mac Studio M3 Ultra with 512 GB of unified memory can be found at https://huggingface.co/bibproj
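
As a rough sanity check of the recipe against that hardware target: assuming MLX's affine quantization stores a float16 scale and a float16 bias per group of weights (a storage-layout assumption, not something stated in this card), the 9 bpw figure and the approximate weight footprint work out as follows:

bits, group_size = 8, 32
bpw = bits + (16 + 16) / group_size   # 8 + 32/32 = 9.0 bits per weight

params = 309e9                        # parameter count reported for this model
weight_bytes = params * bpw / 8       # ~348 GB for the quantized weights
print(f"{bpw} bpw -> ~{weight_bytes / 1e9:.0f} GB of weights")

At roughly 348 GB of weights, the quant fits a 512 GB machine with headroom for the KV cache. The estimate is rough, since some tensors (note the BF16/F32 tensor types below) are typically left unquantized.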


Use with mlx

Install the mlx-lm package:

pip install mlx-lm

Then load the model and generate in Python:

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/MiMo-V2-Flash-mlx-8bit-gs32")

prompt = "hello"

# Apply the model's chat template, if it defines one, so the prompt
# matches the format the model expects.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
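
If you want tokens as they are produced rather than a single returned string, mlx-lm also ships stream_generate. A minimal sketch, assuming the GenerationResponse.text field of recent mlx-lm releases:

from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/MiMo-V2-Flash-mlx-8bit-gs32")

# stream_generate yields incremental responses; .text holds the newly
# decoded fragment.
for chunk in stream_generate(model, tokenizer, prompt="hello", max_tokens=256):
    print(chunk.text, end="", flush=True)
print()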
Model size: 309B params (Safetensors; tensor types BF16, U32, F32)