Active filters: modelopt

| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| nvidia/Qwen3.5-397B-A17B-NVFP4 | Text Generation | — | 57.1k | 38 |
| lukealonso/MiniMax-M2.5-NVFP4 | — | 130B | 39.9k | 34 |
| lukealonso/MiniMax-M2.5-REAP-139B-A10B-NVFP4 | — | 80B | 9.37k | 16 |
| — | Text Generation | — | 57.6k | 53 |
| nvidia/Qwen3-Next-80B-A3B-Thinking-NVFP4 | Text Generation | — | 76.8k | 49 |
| nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4 | Text Generation | — | 35.4k | 29 |
| — | — | 425B | 17k | 9 |
| — | Text Generation | 17B | 20.6k | 8 |
| vincentzed-hf/Qwen3.5-397B-A17B-NVFP4 | Image-Text-to-Text | — | 20.2k | 10 |
| NVFP4/Qwen3-Coder-30B-A3B-Instruct-FP4 | Text Generation | 16B | 22.9k | 9 |
| nvidia/Kimi-K2-Thinking-NVFP4 | Text Generation | — | 124k | 28 |
| nvidia/Qwen3-235B-A22B-Thinking-2507-NVFP4 | Text Generation | — | 753 | 5 |
| — | Text Generation | 8B | 157 | 5 |
| vincentzed-hf/Qwen3-Coder-Next-NVFP4 | Text Generation | — | 6.33k | 7 |
| nvidia/Llama-4-Scout-17B-16E-Instruct-NVFP4 | — | 56B | 13.3k | 21 |
| NVFP4/Qwen3-30B-A3B-Instruct-2507-FP4 | Text Generation | 16B | 1.22k | 12 |
| nvidia/Llama-3.1-8B-Instruct-NVFP4 | — | 5B | 108k | 7 |
| — | Text Generation | 15B | 3.4k | 4 |
| shanjiaz/gpt-oss-120b-nvfp4-modelopt | — | 59B | 9.15k | 3 |
| nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1-FP4-QAD | Image-Text-to-Text | — | 412 | 13 |
| nvidia/Qwen3-235B-A22B-Instruct-2507-NVFP4 | Text Generation | 120B | 2.73k | 3 |
| nvidia/Qwen3-Coder-480B-A35B-Instruct-NVFP4 | Text Generation | 241B | 569 | 2 |
| Cirrascale/Qwen3-Coder-Next-NVFP4 | Text Generation | — | 693 | 2 |
| txn545/Qwen3.5-35B-A3B-NVFP4 | Text Generation | — | 95 | 1 |
| txn545/Qwen3.5-122B-A10B-NVFP4 | Text Generation | 64B | 1.35k | 1 |
| nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8 | — | 402B | 598 | 12 |
| nvidia/Llama-4-Scout-17B-16E-Instruct-FP8 | — | 109B | 39k | 11 |
| ishan24/test_modelopt_quant | — | — | — | — |
| nvidia/Llama-4-Maverick-17B-128E-Eagle3 | — | — | 6 | 9 |
| nvidia/Qwen3-30B-A3B-NVFP4 | Text Generation | 16B | 57.4k | 23 |

Rows marked "—" in the Model column are entries whose name was lost in extraction; their task, size, and count fields are kept as captured. "Updated" timestamps were not captured and are omitted.
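The download counts above use the Hub's abbreviated notation (`57.1k`, `9.37k`, `753`). A minimal sketch of a helper to normalize these strings to integers, assuming only `k`/`M`/`B` suffixes (the helper name is illustrative, not a Hub API):

```python
def parse_count(s: str) -> int:
    """Convert an abbreviated count such as '57.1k' or '753' to an integer."""
    s = s.strip().lower()
    multipliers = {"k": 1_000, "m": 1_000_000, "b": 1_000_000_000}
    if s and s[-1] in multipliers:
        # round() guards against float artifacts, e.g. 9.37 * 1000 == 9369.999...
        return int(round(float(s[:-1]) * multipliers[s[-1]]))
    return int(s)

print(parse_count("57.1k"))  # 57100
print(parse_count("9.37k"))  # 9370
print(parse_count("753"))    # 753
```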