will
willfalco
·
AI & ML interests
None yet
Recent Activity
- liked a model about 24 hours ago
- new activity 1 day ago: syntheticlab/Kimi-K2.5-NVFP4A16
- new activity 1 day ago: syntheticlab/Kimi-K2.5-NVFP4A16 · "6 x RTX 6000?"
- new activity 1 day ago: lukealonso/GLM-5-NVFP4 · "Possible to run on six RTX Pro 6000 Blackwell with vLLM or SGLang?"

Organizations
None yet
6 x RTX 6000?
#1 opened 1 day ago by willfalco
Possible to run on six RTX Pro 6000 Blackwell with vLLM or SGLang?
👍 2
5
#2 opened 8 days ago by FabianHeller
SGLang startup errors
4
#1 opened 12 days ago by fpjnijweide
Great Model! - sglang mtp support for triton backend
👍 3
4
#19 opened 2 months ago by chriswritescode
[request] DeepSeek-V3.1-Terminus
4
#3 opened 3 months ago by willfalco
You know which nightly it worked with? Because it does not with the current one
31
#1 opened 3 months ago by willfalco
Random artifacts on larger outputs
2
#4 opened 2 months ago by willfalco
is NVFP4 supported on sm120 (blackwell rtx pro 6000, rtx 5090 etc)?
10
#4 opened 3 months ago by Fernanda24
4 x RTX PRO 6000
👍 1
2
#1 opened 3 months ago by willfalco
Is it possible to make smaller NVFP4 quant at 340-360GB to fit in 4x96gb?
👍 1
68
#1 opened 3 months ago by Fernanda24
Question will it work in vllm or sglang with rtx 6000 blackwells? cuda arch sm120
6
#1 opened 4 months ago by Fernanda24
Ooof, this fits in 4x96gb! Can we get this for the new 3.2 Speciale as well please :)
16
#2 opened 3 months ago by Fernanda24
Aww Man!
20
#1 opened 3 months ago by mtcl
anyone ran this on blackwell?
🔥 1
#2 opened 3 months ago by willfalco