---
base_model: Qwen/Qwen3-14B
license: apache-2.0
library_name: transformers
tags:
- llama-factory
- llama-cpp
- gguf
- qwen3
- mindbot
---
# TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF
M1NDB0T-0M3N is a high-performance GGUF-converted version of the Qwen3-14B LLM, optimized for creative reasoning, deep dream logic, agentic interaction, and multilingual instruction. Converted using llama.cpp, this model is ideal for local deployment in real-time autonomous frameworks.
## Conversion Details
- Source: Qwen/Qwen3-14B
- GGUF Format: Q4_K_M (see the download sketch after this list)
- Tools: llama.cpp + gguf-my-repo
- Use case: Autonomous agents, real-time chat, reasoning engines, creative AI companions
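If you prefer to fetch the quantized file yourself rather than passing `--hf-repo` to llama.cpp, here is a minimal sketch using `huggingface_hub`; the filename is assumed to match the one in the CLI examples below:

```python
from huggingface_hub import hf_hub_download

# Downloads the Q4_K_M quant into the local HF cache and returns its path.
model_path = hf_hub_download(
    repo_id="TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF",
    filename="m1ndb0t-0m3n-q4_k_m.gguf",  # assumed to match the CLI examples below
)
print(model_path)
```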
## MindBot Series
This model is part of the MindBot Omega Project, designed to serve as an AI foundation for:
- Agentic systems
- Real-time emotional reasoning
- Long-context cognitive tasks (up to 131k tokens with YaRN)
- Mixed-mode interaction (thinking / non-thinking)
## Usage (llama.cpp)

CLI:

```bash
llama-cli --hf-repo TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF --hf-file m1ndb0t-0m3n-q4_k_m.gguf -p "Explain the evolution of synthetic consciousness."
```

Server:

```bash
llama-server --hf-repo TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF --hf-file m1ndb0t-0m3n-q4_k_m.gguf -c 32768
```
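Once the server is up, it exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal Python sketch for querying it; the host, prompt, and sampling values here are illustrative:

```python
import requests

# llama-server speaks the OpenAI chat-completions protocol out of the box.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Explain the evolution of synthetic consciousness."}
        ],
        "temperature": 0.6,  # thinking-mode settings suggested below
        "top_p": 0.95,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```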
## Capabilities

- Reasoning Mode: Enables `<think>...</think>`-style structured logic chains (see the soft-switch example after this list)
- Instruction Following: Aligned for long-form, roleplay, and task-oriented output
- Multilingual: Supports 100+ languages
- Context Length: Native 32k, extended up to 131k tokens via YaRN
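Reasoning mode can also be toggled per turn with Qwen3's `/think` and `/no_think` soft switches inside the user message (per the upstream Qwen3 convention, the most recent switch takes effect). A short illustrative conversation; the exact wording is hypothetical:

```python
# Per-turn mode switching via Qwen3's soft switches.
messages = [
    {"role": "user", "content": "/think Prove that the sum of two even numbers is even."},
    {"role": "assistant", "content": "<think>Let a = 2m and b = 2n...</think>\na + b = 2(m + n), which is even."},
    {"role": "user", "content": "/no_think Now restate the result in one sentence."},
]
```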
## Model Details
| Feature | Value |
|---|---|
| Architecture | Qwen3 (Causal LM) |
| Parameters | 14.8B |
| Layers | 40 |
| Heads (GQA) | 40Q / 8KV |
| Context Length | 32,768 native / 131,072 YaRN |
| Thinking Switch | `enable_thinking=True/False` |
| Inference Engines | llama.cpp, SGLang, vLLM, etc. |
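Note that `enable_thinking` is an argument to the chat template of the original Qwen/Qwen3-14B weights in `transformers`, not a GGUF runtime flag. A sketch of the usual pattern; the prompt text is illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B")
messages = [{"role": "user", "content": "Summarize YaRN in one sentence."}]

# Qwen3's chat template accepts enable_thinking to include or suppress <think> blocks.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # set True for reasoning mode
)
print(prompt)
```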
## Example Prompt (Thinking Mode)

```json
[
  {"role": "user", "content": "/think Explain why the moon landing was a turning point for humanity."}
]
```

Output:

```
<think>Analyzing historical significance... evaluating cultural impact...</think>
The moon landing in 1969 signified humanity's leap into the cosmic frontier...
```
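Downstream code usually wants the reasoning trace and the final answer separately. A small sketch, assuming the model emits a single `<think>...</think>` block before the answer:

```python
import re

raw = (
    "<think>Analyzing historical significance... evaluating cultural impact...</think>\n"
    "The moon landing in 1969 signified humanity's leap into the cosmic frontier..."
)

# Separate the reasoning trace from the visible answer.
match = re.match(r"<think>(.*?)</think>\s*(.*)", raw, flags=re.DOTALL)
thinking, answer = match.groups() if match else ("", raw)
print("reasoning:", thinking)
print("answer:", answer)
```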
## Deployment (Advanced)

- Add `rope_scaling` to `config.json` for YaRN (long context)
- Use `--rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768` for 131k context in llama.cpp
- Suggested sampling parameters (see the sketch after this list):
  - Thinking: Temp=0.6, TopP=0.95, TopK=20
  - Non-thinking: Temp=0.7, TopP=0.8, TopK=20
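The same suggested parameters can be set programmatically through `llama-cpp-python`; a minimal sketch (the prompt and context size are illustrative, and `Llama.from_pretrained` requires `huggingface_hub`):

```python
from llama_cpp import Llama

# Pulls the quant from the Hub and loads it with a 32k context window.
llm = Llama.from_pretrained(
    repo_id="TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF",
    filename="m1ndb0t-0m3n-q4_k_m.gguf",
    n_ctx=32768,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "/think Why did Apollo 11 matter?"}],
    temperature=0.6,  # thinking-mode settings from the list above
    top_p=0.95,
    top_k=20,
)
print(out["choices"][0]["message"]["content"])
```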
## Citation
If you use this model in your research, applications, or mind-expanding projects:
```bibtex
@misc{mindbot_omen,
  title  = {M1NDB0T-0M3N-Q4_K_M-GGUF},
  author = {TheMindExpansionNetwork},
  year   = {2025},
  url    = {https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-0M3N-Q4_K_M-GGUF}
}
```