Fanar-1-9B-Instruct — GGUF quantized
This repo contains multiple GGUF builds of Fanar-1-9B-Instruct, the instruction-tuned variant of the Arabic-English LLM Fanar-1-9B created by QCRI / HBKU. The base model is a 9B-parameter continuation of gemma-2-9b, trained on ≈1T Arabic and English tokens and aligned through SFT → DPO (4.5M SFT / 250K DPO pairs). The license remains Apache-2.0 and the context window is 4,096 tokens.
Available files
| Quant | Bits | Size (≈) |
|---|---|---|
| Q2_K | 2-bit | 3.4 GB |
| Q3_K_M | 3-bit | 4.4 GB |
| Q4_0 / Q4_K_M | 4-bit | 5.1 GB / 5.4 GB |
| Q5_0 / Q5_K_M | 5-bit | 6.1 GB / 6.3 GB |
| Q6_K | 6-bit | 8 GB* |
| Q8_0 | 8-bit | 9.3 GB |
| F16 / F32 | 16 / 32-bit | 17.6 GB |
*value shown on the HF page is a placeholder.
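If you only need one quant, `huggingface_hub` can fetch a single file rather than the whole repo. A minimal sketch; the `repo_id` below is a placeholder, so substitute this repository's actual ID:
```python
from huggingface_hub import hf_hub_download

# NOTE: repo_id is a placeholder -- replace it with this repository's actual ID.
model_path = hf_hub_download(
    repo_id="<user>/Fanar-1-9B-Instruct-GGUF",
    filename="Fanar-1-9B-Instruct.Q4_K_M.gguf",
)
print(model_path)  # local path to the cached GGUF file
```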
Quick start (llama.cpp)
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make -j
./llama-cli -m Fanar-1-9B-Instruct.Q4_K_M.gguf -p "ما هي عاصمة قطر؟"  # "What is the capital of Qatar?"
```
(Recent llama.cpp builds name the CLI binary `llama-cli`; older builds used `./main`.)
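llama.cpp also ships `llama-server`, which exposes an OpenAI-compatible endpoint (start it with `./llama-server -m Fanar-1-9B-Instruct.Q4_K_M.gguf`). A minimal client sketch, assuming the server's default port 8080 and the `openai` Python package:
```python
from openai import OpenAI

# Point the OpenAI client at the local llama-server endpoint.
# The API key is unused by llama-server but required by the client.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Fanar-1-9B-Instruct",  # informational only; llama-server serves the loaded model
    messages=[{"role": "user", "content": "ما هي عاصمة قطر؟"}],  # "What is the capital of Qatar?"
)
print(response.choices[0].message.content)
```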
Python (llama-cpp-python)
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Fanar-1-9B-Instruct.Q4_K_M.gguf",
    n_ctx=4096,           # match the model's 4,096-token context window
    chat_format="gemma",  # Fanar follows the Gemma chat template
)

# create_chat_completion returns a plain dict, so index it with keys,
# not attribute access.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate 'peace' to Arabic"}]
)
print(result["choices"][0]["message"]["content"])
```
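For token-by-token output, `create_chat_completion` also accepts `stream=True` and yields incremental chunks. A short sketch reusing the `llm` instance above:
```python
# Streamed chunks are dicts whose "delta" carries the newly generated text.
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "اكتب بيت شعر عن البحر"}],  # "Write a line of poetry about the sea"
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="", flush=True)
print()
```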
Credits & notes
- Original model: `QCRI/Fanar-1-9B-Instruct` on Hugging Face; please consult its model card for training data, evaluation results, and limitations.
- This repository only supplies GGUF conversions for efficient local inference on CPU/GPU; no weights were changed.
- Use responsibly: outputs may be inaccurate, biased, or culturally insensitive.