Model reply in <answer> Tags
In llama.cpp I am using the --jinja and --reasoning-format none flags. Thinking comes through fine in the thinking tag in the client, but each reply is wrapped in an <answer> tag. For example, I asked it about the Docusaurus framework. The reply was:
Docusaurus is an open-source static site generator designed to help teams create and host professional, modern documentation websites.
..............
In summary, Docusaurus simplifies the process of building and maintaining high-quality documentation sites, making it a go-to tool for both technical and non-technical teams.
Can we update the Jinja template to remove the tags?
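Until the template is fixed upstream, one client-side workaround (a minimal sketch; the `<answer>` wrapper is the tag shown in the reply above) is to strip the tags from the returned text before displaying it:

```python
import re

def strip_answer_tags(text: str) -> str:
    """Remove a wrapping <answer>...</answer> pair from a model reply, if present."""
    match = re.search(r"<answer>\s*(.*?)\s*</answer>", text, flags=re.DOTALL)
    # Fall back to the original text when the model didn't emit the tags
    return match.group(1) if match else text

reply = "<answer>\nHello there!\n</answer>"
print(strip_answer_tags(reply))  # -> Hello there!
```

This only post-processes the output; it does not stop the model from generating the tags in the first place.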
@shimmyshimmer @danielhanchen Can you kindly fix the chat template? I am unable to stop thinking, and replies also come wrapped in <answer> tags.
🧑‍💻 User
Hello how are you ?
🤖 Assistant
Thinking Process:
1. Identify the user's input: The user said "Hello how are you ?"
2. Recognize the intent: This is a standard greeting and a question about my well-being.
3. Recall my nature: I am an AI, a large language model. I don't have feelings, a physical body, or personal experiences like humans do.
4. Formulate a polite and informative response:
   - Acknowledge the greeting: Start with a friendly "Hello there!"
   - Address the question directly but honestly: State that I don't have feelings in the human sense.
   - Explain my status: Mention that I'm functioning properly, ready to help, etc. This conveys my operational state effectively.
   - Turn the focus back to the user: Ask how I can assist them. This is the primary function and shows helpfulness.
5. Combine the elements into a coherent response: "Hello there! As an AI, I don't have feelings in the way humans do, but I'm functioning properly and ready to help. How can I assist you today?"
6. Review and refine: The response is polite, clear, accurate about my nature, and focuses on being helpful. It directly answers the user's question while maintaining an appropriate AI persona.
<answer>
Hello there!
As an AI, I don't have feelings in the way humans do, but I'm functioning properly and ready to help. How can I assist you today?
</answer>
My llama.cpp settings:
-m /home/tipu/AI/models/unsloth/Hunyuan-A13B-Instruct/Hunyuan-A13B-Instruct-IQ4_XS.gguf --jinja --reasoning-format auto --reasoning-budget 0 -n -1 -ngl 99 --n-cpu-moe 24 --temp 0.5 --top-k 20 --top-p 0.7 --repeat-penalty 1.05 -c 32768 --seed -1 -t 4 -fa --mlock --no-warmup -a Hunyuan-A13B-Non-Thinking --chat-template-kwargs "{\"enable_thinking\": \"False\"}" --no-webui --ubatch-size 128 --batch-size 2048 --cache-type-k q8_0 --cache-type-v q8_0
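One thing worth double-checking in the command above: `--chat-template-kwargs` passes the JSON *string* `"False"` rather than the JSON boolean `false`. If the template simply tests `enable_thinking` for truthiness (an assumption about how the Hunyuan template is written), any non-empty string evaluates as true, so thinking would stay enabled even with this flag set. A quick illustration of the difference:

```python
import json

# The setting as written in the command: the value is the *string* "False"
kwargs_str = json.loads('{"enable_thinking": "False"}')
# The likely intended setting: an actual JSON boolean
kwargs_bool = json.loads('{"enable_thinking": false}')

print(bool(kwargs_str["enable_thinking"]))   # non-empty string -> True
print(bool(kwargs_bool["enable_thinking"]))  # real boolean     -> False
```

So it may be worth retrying with `--chat-template-kwargs "{\"enable_thinking\": false}"` (no quotes around `false`) before concluding the template itself is broken.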