mradermacher committed on
Commit
bdca9fe
·
verified ·
1 Parent(s): f9210a3

auto-patch README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -77,7 +77,11 @@ more details, including on how to concatenate multi-part files.
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-i1-GGUF/resolve/main/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV.imatrix.gguf) | imatrix | 0.3 | imatrix file (for creating your own quants) |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-i1-GGUF/resolve/main/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV.i1-Q2_K.gguf) | i1-Q2_K | 15.7 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-i1-GGUF/resolve/main/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 16.5 | lower quality |
 | [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-i1-GGUF/resolve/main/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV.i1-IQ3_M.gguf) | i1-IQ3_M | 18.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-i1-GGUF/resolve/main/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV.i1-Q3_K_M.gguf) | i1-Q3_K_M | 20.5 | IQ3_S probably better |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-i1-GGUF/resolve/main/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV.i1-Q4_K_S.gguf) | i1-Q4_K_S | 24.3 | optimal size/speed/quality |
+| [GGUF](https://huggingface.co/mradermacher/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-i1-GGUF/resolve/main/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV.i1-Q4_K_M.gguf) | i1-Q4_K_M | 25.8 | fast, recommended |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
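The download links added in this patch all follow one naming pattern: `<repo>/resolve/main/<base>.<quant-tag>.gguf`. As a convenience, a direct URL for any quant in the table can be assembled from its tag alone; a minimal shell sketch (the `gguf_url` helper is ours for illustration, not part of this repo):

```shell
#!/bin/sh
# Repo and base model name taken from the table above.
REPO="mradermacher/Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV-i1-GGUF"
BASE="Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-ST-TNG-IV"

# gguf_url <quant-tag>: print the Hugging Face direct-download URL
# for that quant, e.g. `gguf_url i1-Q4_K_M`.
gguf_url() {
    echo "https://huggingface.co/${REPO}/resolve/main/${BASE}.${1}.gguf"
}

gguf_url i1-Q4_K_M
```

The printed URL can then be fed to `wget` or `curl -LO`; for the larger quants, check whether the file is split into multi-part pieces first (see the note on concatenating multi-part files referenced in the hunk header).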