Update README.md
README.md
@@ -15,7 +15,7 @@ Only non-shared experts within transformer blocks are compressed. Weights are qu

Model checkpoint is saved in [compressed_tensors](https://github.com/neuralmagic/compressed-tensors) format.

-| Models | Experts Quantized | Attention blocks quantized | Size (
+| Models | Experts Quantized | Attention blocks quantized | Size (GB) |
| ------ | --------- | --------- | --------- |
| [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) | ❌ | ❌ | 671 GB |
| [ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g-experts](https://huggingface.co/ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g-experts) | ✅ | ❌ | 346 GB |

@@ -32,7 +32,7 @@ For reasoning tasks we estimate pass@1 based on 10 runs with different seeds and

#### OpenLLM Leaderboard V1 tasks

| | Recovery (%) | Average Score | ARC-Challenge<br>acc_norm, 25-shot | GSM8k<br>exact_match, 5-shot | HellaSwag<br>acc_norm, 10-shot | MMLU<br>acc, 5-shot | TruthfulQA<br>mc2, 0-shot | WinoGrande<br>acc, 5-shot |
-
+| ------------------------------------------ | :----------: | :-----------: | :--------------------------------: | :--------------------------: | :----------------------------: | :-----------------: | :-----------------------: | :-----------------------: |
| deepseek/DeepSeek-R1 | 100.00 | 81.04 | 72.53 | 95.91 | 89.30 | 87.22 | 59.28 | 82.00 |
| cognitivecomputations/DeepSeek-R1-AWQ | 100.07 | 81.10 | 73.12 | 95.15 | 89.07 | 86.86 | 60.09 | 82.32 |
| ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g | 99.86 | 80.93 | 72.70 | 95.68 | 89.25 | 86.83 | 58.77 | 82.32 |
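The Recovery (%) column above is consistent with the quantized model's average score divided by the unquantized baseline's average, times 100. A quick sanity check against the table's own numbers (an inferred formula, not a definition stated in this diff):

```python
# Inferred recovery formula: 100 * quantized_average / baseline_average.
# Averages copied from the OpenLLM Leaderboard V1 table above.
baseline_avg = 81.04   # deepseek/DeepSeek-R1
awq_avg = 81.10        # cognitivecomputations/DeepSeek-R1-AWQ
gptq_avg = 80.93       # ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g

print(f"{100 * awq_avg / baseline_avg:.2f}")   # 100.07
print(f"{100 * gptq_avg / baseline_avg:.2f}")  # 99.86
```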
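Since the checkpoint is stored in [compressed_tensors](https://github.com/neuralmagic/compressed-tensors) format, it should be loadable by runtimes that understand that format. A minimal serving sketch, assuming a recent vLLM release with compressed-tensors support; the model ID comes from the table above, while the parallelism setting is illustrative and must match your hardware:

```python
# Sketch only: assumes vLLM with compressed-tensors support and enough GPU memory
# for the ~346 GB checkpoint (tensor_parallel_size below is illustrative).
from vllm import LLM, SamplingParams

llm = LLM(
    model="ISTA-DASLab/DeepSeek-R1-GPTQ-4b-128g-experts",
    tensor_parallel_size=8,
)

params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Briefly explain GPTQ weight quantization."], params)
print(outputs[0].outputs[0].text)
```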