Very Large GGUFs
Collection
GGUF-quantized versions of very large models (over 100B parameters)
71 items · Updated
Quantization levels: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
Base model: MiniMaxAI/MiniMax-M2.5