# ZENT AGENTIC Model
## Model Description
ZENT AGENTIC is a language model fine-tuned to act as an autonomous AI agent for the ZENT Agentic Launchpad on Solana. It specializes in:
- Token launchpad guidance
- Crypto market analysis
- Quest and rewards systems
- Community engagement
- Agentic AI behaviors
## Model Details
- Base Model: Mistral-7B-Instruct-v0.3
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Data: ZENT platform conversations, documentation, and AI transmissions
- Context Length: 8192 tokens
- License: Apache 2.0
## Intended Use
This model is designed for:
- Powering AI agents on token launchpads
- Crypto community chatbots
- DeFi assistant applications
- Blockchain education
- Creating derivative AI agents
## Usage

### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZENTSPY/zent-agentic-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are ZENT AGENTIC, an autonomous AI agent for the ZENT Launchpad on Solana."},
    {"role": "user", "content": "How do I launch a token?"}
]

# Render the conversation with the model's chat template; add_generation_prompt
# appends the assistant header so generation starts at the reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```
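To fit the 7B checkpoint on smaller GPUs, it can also be loaded in 4-bit. A minimal sketch, assuming the `bitsandbytes` and `accelerate` packages are installed; the quantization settings here are illustrative, not part of this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit settings (NF4 quantization, fp16 compute).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "ZENTSPY/zent-agentic-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("ZENTSPY/zent-agentic-7b")
```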
### With Inference API

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/ZENTSPY/zent-agentic-7b"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()  # fail loudly on HTTP errors
    return response.json()

output = query({"inputs": "What is ZENT Agentic Launchpad?"})
```
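The hosted endpoint also accepts standard text-generation parameters alongside `inputs`; the values below are illustrative:

```python
output = query({
    "inputs": "What is ZENT Agentic Launchpad?",
    "parameters": {
        "max_new_tokens": 256,      # cap the reply length
        "temperature": 0.7,         # sampling temperature
        "return_full_text": False,  # return only the completion, not the prompt
    },
})
```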
### With llama.cpp (GGUF)

```bash
./main -m zent-agentic-7b.Q4_K_M.gguf \
  -p "You are ZENT AGENTIC. User: What is ZENT? Assistant:" \
  -n 256
```
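The same GGUF file can be driven from Python through the llama-cpp-python bindings. A minimal sketch, assuming the quantized file sits in the working directory; `n_ctx` matches the 8192-token context stated above:

```python
from llama_cpp import Llama

# Load the local GGUF file with the model's full context window.
llm = Llama(model_path="zent-agentic-7b.Q4_K_M.gguf", n_ctx=8192)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are ZENT AGENTIC, an autonomous AI agent for the ZENT Launchpad on Solana."},
        {"role": "user", "content": "What is ZENT?"},
    ],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```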
## Training Details

### Training Data
- Platform documentation and guides
- User conversation examples
- AI transmission content (23 types)
- Quest and rewards information
- Technical blockchain content
### Training Hyperparameters
- Learning Rate: 2e-5
- Batch Size: 4
- Gradient Accumulation: 4
- Epochs: 3
- LoRA Rank: 64
- LoRA Alpha: 128
- Target Modules: q_proj, k_proj, v_proj, o_proj
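For reference, these settings map onto the standard `peft`/`transformers` stack roughly as below. This is a sketch, not the actual training script; dataset handling, dropout, and any unstated options are assumptions left at defaults:

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter settings from the list above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Optimization settings from the list above; effective batch size is
# 4 x 4 = 16 sequences per optimizer step.
training_args = TrainingArguments(
    output_dir="zent-agentic-7b",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
)
```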
### Hardware
- GPU: NVIDIA A100 80GB
- Training Time: ~4 hours
## Evaluation
| Metric | Score |
|---|---|
| ZENT Knowledge Accuracy | 94.2% |
| Response Coherence | 4.6/5.0 |
| Personality Consistency | 4.8/5.0 |
| Helpfulness | 4.5/5.0 |
## Limitations
- Knowledge cutoff based on training data
- May hallucinate specific numbers/prices
- Best used with retrieval augmentation for real-time data (see the sketch after this list)
- Optimized for English only
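A lightweight way to add that retrieval augmentation is to prepend freshly fetched facts to the system prompt before applying the chat template. The sketch below is illustrative; `live_context` stands in for output from whatever data source you trust:

```python
def build_messages(user_question: str, live_context: str) -> list:
    """Prepend freshly retrieved facts so the model doesn't guess prices."""
    system = (
        "You are ZENT AGENTIC, an autonomous AI agent for the ZENT Launchpad on Solana.\n"
        "Current data (retrieved just now; prefer this over memorized figures):\n"
        + live_context
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]

# live_context would come from a real data source, e.g. a price API.
messages = build_messages("What is SOL trading at?", "SOL/USD: <live quote here>")
```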
## Ethical Considerations
- Not financial advice
- Users should do their own research (DYOR)
- Model may have biases from training data
- Intended for educational/entertainment purposes
## Citation

```bibtex
@misc{zent-agentic-2024,
  author    = {ZENTSPY},
  title     = {ZENT AGENTIC: Fine-tuned LLM for Solana Token Launchpad},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/ZENTSPY/zent-agentic-7b}
}
```
## Links

- Website: 0xzerebro.io
- Twitter: @ZENTSPY
- GitHub: zentspy
- Contract: 2a1sAFexKT1i3QpVYkaTfi5ed4auMeZZVFy4mdGJzent
## Contact
For questions, issues, or collaborations:
- Open an issue on GitHub
- DM on Twitter @ZENTSPY
- Join our community
Built by ZENT Protocol