nm-testing/Meta-Llama-3-8B-Instruct-W4A16-compressed-tensors-test
Text Generation
• 2B • Updated • 28
nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test-bos
Text Generation
• 8B • Updated • 2
nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test
Text Generation
• 8B • Updated • 7.57k
nm-testing/Meta-Llama-3-8B-Instruct-W4-Group128-A16-Test
Text Generation
• 2B • Updated • 1
nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Per-Token-Test
Text Generation
• 8B • Updated • 68
nm-testing/tinyllama-oneshot-w8a16-per-channel
Text Generation
• 0.4B • Updated • 1.45k
nm-testing/Meta-Llama-3-8B-Instruct-W8A8-Dyn-Per-Token-2048-Samples
Text Generation
• 8B • Updated • 56
nm-testing/Meta-Llama-3-8B-Instruct-W8A8-Dyn-Per-Token
Text Generation
• 8B • Updated • 3
nm-testing/llama-3-instruct-w8a8-dyn-per-token-test
Text Generation
• 8B • Updated
nm-testing/tinyllama-oneshot-w8-channel-a8-tensor
Text Generation
• 1B • Updated • 1.66k
nm-testing/tinyllama-oneshot-w8a8-channel-dynamic-token-v2
Text Generation
• 1B • Updated • 10.6k
nm-testing/tinyllama-oneshot-w8w8-test-static-shape-change
Text Generation
• 1B • Updated • 35.2k
nm-testing/tinyllama-oneshot-w4a16-channel-v2
Text Generation
• 0.3B • Updated • 4.67k • 1
nm-testing/tinyllama-oneshot-w4a16-group128-v2
Text Generation
• 0.3B • Updated • 1.55k
nm-testing/tinyllama-oneshot-w8a8-static-v2
Text Generation
• 1B • Updated • 26
nm-testing/tinyllama-oneshot-w8a8-dynamic-token-v2
Text Generation
• 1B • Updated • 7.63k
nm-testing/tinyllama-marlin24-w4a16-group128
Text Generation
• 0.3B • Updated • 1
nm-testing/llama7b-one-shot-2_4-w4a16-marlin24-t-alt
Text Generation
• 0.9B • Updated • 3
nm-testing/llama7b-one-shot-2_4-w4a16-marlin24-t
Text Generation
• 1B • Updated • 563 • 1
nm-testing/llama3-8b-w8_channel-a8_tensor-compressed
Text Generation
• 8B • Updated • 5
nm-testing/tinyllama-one-shot-w4a16-group-compressed
Text Generation
• 1B • Updated • 2
nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Asym-Per-Token-Test
8B • Updated • 1
nm-testing/Meta-Llama-3.1-8B-Instruct-FP8-hf
Text Generation
• 8B • Updated • 2
nm-testing/deepseekv2-lite-awq
16B • Updated • 2 • 1
nm-testing/Meta-Llama-3-8B-Instruct-fp8-hf_compat
8B • Updated • 1.78k
nm-testing/Meta-Llama-3-70B-Instruct-FBGEMM-nonuniform
Text Generation
• 71B • Updated • 2.01k • 1
nm-testing/Meta-Llama-3-8B-Instruct-FBGEMM-nonuniform
Text Generation
• 8B • Updated
nm-testing/llama-3-8b-instruct-fbgemm-test-model
Text Generation
• 8B • Updated • 2