zletpm996
zletpm
1 follower · 13 following
AI & ML interests
None yet
Recent Activity
reacted to John1604's post with 🔥 · 3 days ago
I'm about to reach my public storage limit. I've noticed that my repository John1604/Kimi-K2-Thinking-q6K-gguf isn't getting enough downloads, yet it takes up nearly 1TB of storage. As much as I love Kimi K2's way of thinking, I may have to delete this model, even though it is a truly open-source 1T-parameter LLM comparable to any frontier model. In the AI race, four US companies have 1T+ parameter models: xAI, OpenAI, Google, and Anthropic. China also has four companies with 1T+ parameter models: Alibaba, Kimi, DeepSeek, and GLM. The two sides are currently evenly matched; only the American and Chinese teams have LLMs with 1T+ parameters. Let's cheer for them to reach AGI in the next 5 to 10 years. Maybe a 64T-parameter Chinese model will get there: the human brain has roughly 64 times as many neurons as a cat's, the same 64:1 ratio in model size.
New activity 6 months ago in mistralai/Mistral-Small-3.2-24B-Instruct-2506: "This model performs worse than the Mistral-Small-3.1-24B model with 4-bit quantization."
New activity 6 months ago in mlx-community/DeepSeek-R1-0528-4bit: "Is there a chance to get an AWQ or DWQ model? The Mac Studio (512GB) has a 3-bit sweet spot."
zletpm's activity
New activity in mistralai/Mistral-Small-3.2-24B-Instruct-2506 (6 months ago)
This model performs worse than the Mistral-Small-3.1-24B model with 4-bit quantization.
➕ 3 · 2 · #6 opened 6 months ago by zletpm
New activity in mlx-community/DeepSeek-R1-0528-4bit (6 months ago)
Is there a chance to get an AWQ or DWQ model? The Mac Studio (512GB) has a 3-bit sweet spot.
👍 1 · 5 · #1 opened 7 months ago by zletpm
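A minimal sketch of how a lower-bit MLX quantization of the same weights could be produced locally, assuming mlx_lm's Python convert() helper; the source repo and output path are illustrative, and this uses mlx_lm's plain round-to-nearest quantization rather than AWQ or DWQ, which require calibration data and separate tooling.

from mlx_lm import convert  # pip install mlx-lm (Apple Silicon)

# Illustrative paths; point hf_path at the actual source checkpoint.
convert(
    hf_path="deepseek-ai/DeepSeek-R1-0528",   # assumption: original HF weights
    mlx_path="DeepSeek-R1-0528-3bit-mlx",     # local output directory
    quantize=True,
    q_bits=3,         # 3-bit weights, aimed at a 512GB Mac Studio memory budget
    q_group_size=64,  # mlx_lm's default quantization group size
)

The resulting directory can then be loaded with mlx_lm.load() for a quick generation test before uploading it as a new repository.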
New activity in MiniMaxAI/MiniMax-M1-80k (6 months ago)
I hope you guys can provide a 32B dense model
👍 2 · #9 opened 6 months ago by zletpm
MLX Convert Error
4 · #8 opened 6 months ago by baggaindia
New activity in Qwen/Qwen3-32B (7 months ago)
Performance problem: 32B 4x slower than 30B
1 · #27 opened 7 months ago by jagusztinl
New activity in mlx-community/Qwen3-235B-A22B-3bit-DWQ (7 months ago)
Is this DWQ model converted using the default dataset “allenai/tulu-3-sft-mixture”?
#1 opened 7 months ago by zletpm
New activity in mlx-community/Qwen3-30B-A3B-4bit-DWQ-0508 (8 months ago)
What distinguishes this model from mlx-community/Qwen3-30B-A3B-4bit-DWQ?
5 · #1 opened 8 months ago by zletpm
New activity in GreenBitAI/Qwen-3-32B-layer-mix-bpw-4.0 (8 months ago)
I hope this project can provide more information about this model.
#1 opened 8 months ago by zletpm
New activity in TencentBAC/Conan-embedding-v2 (8 months ago)
Why can't I see any model files?
1 · #4 opened 8 months ago by zletpm
New activity in meta-llama/Llama-4-Scout-17B-16E-Instruct (9 months ago)
Access Rejected
5 · #62 opened 9 months ago by ansenang
Access denied
👀 👍 8 · 1 · #44 opened 9 months ago by qulong
New activity in janboe91/Mistral-Small-3.1-24B-Instruct-2503-HF-mlx-8Bit (9 months ago)
Could you please provide a 16-bit version, or teach me the steps to convert it myself?
1 · #1 opened 9 months ago by zletpm
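For the 16-bit request above, a minimal sketch of the usual unquantized MLX export with mlx_lm's convert() helper; the repo IDs and output path are illustrative assumptions, and argument names may differ slightly between mlx_lm versions.

from mlx_lm import convert  # pip install mlx-lm

# Illustrative source and output; replace with the checkpoints you actually use.
convert(
    hf_path="mistralai/Mistral-Small-3.1-24B-Instruct-2503",   # assumption: original HF weights
    mlx_path="Mistral-Small-3.1-24B-Instruct-2503-mlx-16bit",  # local output directory
    quantize=False,   # keep full 16-bit weights instead of quantizing
    dtype="float16",  # or "bfloat16" to match the original checkpoint
)

Leaving quantize=False is what makes this a 16-bit export; the same call with quantize=True and q_bits=8 would produce an 8-bit conversion like the repository above.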