---
model-index:
  - name: LFM2-8B-A1B MLX (Apple Silicon), 6-bit (with MoE + RAM planning)
    results: []
language:
  - en
tags:
  - mlx
  - apple-silicon
  - liquidai
  - lfm2
  - mixture-of-experts
  - transformer
  - long-context
  - instruct
  - quantized
  - 6bit
  - coding
pipeline_tag: text-generation
license: other
license_name: lfm1.0
license_link: LICENSE
library_name: mlx
base_model:
  - LiquidAI/LFM2-8B-A1B
---

# LFM2-8B-A1B — MLX 6-bit (Apple Silicon)

Maintainer / Publisher: Susant Achary
Upstream model: LiquidAI/LFM2-8B-A1B
This repo (MLX 6-bit): mlx-community/LFM2-8B-A1B-6bit-MLX

This repository provides an Apple-Silicon-optimized MLX build of LFM2-8B-A1B at 6-bit quantization.
Among the quantized tiers, 6-bit is a strong fidelity sweet spot for many Macs: noticeably smaller than FP16 or 8-bit builds while preserving answer quality for instruction following, summarization, and structured extraction.
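
A minimal quickstart sketch, assuming `mlx-lm` is installed (`pip install mlx-lm`); the prompt is illustrative, and exact keyword arguments can vary slightly across `mlx-lm` versions:

```python
# Minimal sketch: load this 6-bit MLX build and run one generation with mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/LFM2-8B-A1B-6bit-MLX")

prompt = "Summarize the benefits of on-device inference in three bullet points."

# This is an instruct model, so apply the chat template when one is provided.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```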


## 🔎 What is LFM2-8B-A1B?

- Architecture: Mixture-of-Experts (MoE) Transformer.
- Size: 8B total parameters with ~1B active per token (A1B ≈ “1B active”).
- Why MoE? At each token, a subset of experts is activated, reducing compute per token while keeping a larger parameter pool for expressivity.

Single-device memory reality: Even though only ~1B are active per token, all experts typically reside in memory during inference on one device. That means RAM planning should track total parameters, not just the active slice.
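
As a back-of-the-envelope check on that point, here is a small sketch (assuming roughly 8B total parameters and ignoring quantization scales and KV-cache overhead):

```python
# Rough weight-memory budget: all MoE experts stay resident, so memory
# scales with TOTAL parameters, not the ~1B active per token.
TOTAL_PARAMS = 8e9  # approximate total parameter count for LFM2-8B-A1B

for bits in (16, 8, 6, 4):
    weight_gb = TOTAL_PARAMS * bits / 8 / 1e9
    print(f"{bits:>2}-bit weights: ~{weight_gb:.1f} GB")  # 6-bit lands near ~6.0 GB
```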


## 📦 What’s in this MLX build

- config.json (MLX), mlx_model*.safetensors (6-bit shards)
- Tokenizer files: tokenizer.json, tokenizer_config.json
- Model metadata (e.g., model_index.json)

Target: macOS on Apple Silicon (M-series) with Metal/MPS.
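
A quick environment sanity check, assuming `mlx` is installed; on Apple Silicon the default device should be the GPU (Metal backend):

```python
import mlx.core as mx

# The default device on Apple Silicon should report the GPU (Metal backend).
print(mx.default_device())

# A tiny matmul to confirm the backend actually executes work.
x = mx.ones((4, 4))
print((x @ x).sum())  # array(64, dtype=float32)
```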


## ✅ Intended use

- General instruction following, chat, and summarization
- RAG and long-context assistants on device
- Schema-guided structured outputs (JSON)
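
For the structured-output use case, a hedged sketch of schema-guided JSON extraction; the keys and the input text below are illustrative, not a fixed schema, and production use should add JSON validation:

```python
# Sketch: ask the instruct model for JSON restricted to illustrative keys.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/LFM2-8B-A1B-6bit-MLX")

schema_hint = (
    "Return only valid JSON with the keys "
    '"title" (string), "sentiment" ("positive" | "neutral" | "negative"), '
    'and "topics" (list of strings). No prose outside the JSON.'
)
document = "Apple's M-series chips keep improving, and on-device LLMs benefit directly."

messages = [{"role": "user", "content": f"{schema_hint}\n\nText:\n{document}"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

print(generate(model, tokenizer, prompt=prompt, max_tokens=200))
```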

## ⚠️ Limitations

- Quantization can cause small regressions vs. FP16 on tricky math/code or tight formatting.
- For very long contexts and/or batching, the KV cache can dominate memory, so tune max_tokens and batch size.
- Add your own safety guardrails for sensitive deployments.

## 🔢 RAM planning (6-bit, MoE, MLX)

The figures below are practical starting points for a single-device MLX run; validate them on your hardware.

### Rule-of-thumb components

- Weights (6-bit): total_params × 0.75 bytes/param → for 8B params ≈ 6.0 GB