When enable_thinking=True, why doesn't the chat_template output end with "<think>"?
#16 by sxcasf - opened
Code:
```python
from transformers import AutoTokenizer

tokenizer1 = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]

text1 = tokenizer1.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
print(text1)
```
Output:
```
<|im_start|>user
Give me a short introduction to large language model.<|im_end|>
<|im_start|>assistant
```
I also used EvalScope to test on AIME25, and although I obtained results similar to the think-mode results in the report, the output didn't contain <think> either. So I'm curious: shouldn't Qwen3-1.7B output something like "<think>{thinking content}</think>{answer}"?
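For context, the prompt shape can be reproduced without downloading the model. The sketch below is an unofficial, simplified re-implementation of the relevant template logic (the real template is Jinja embedded in the tokenizer config, and `apply_chat_template_sketch` is a hypothetical helper, not a transformers API): as I understand it, the Qwen3 template appends nothing after the assistant header when thinking is enabled, and only pre-fills an empty think block when thinking is disabled.

```python
def apply_chat_template_sketch(messages, add_generation_prompt=True,
                               enable_thinking=True):
    """Simplified, unofficial sketch of the Qwen3 chat-template behavior."""
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        text += "<|im_start|>assistant\n"
        # With thinking enabled, nothing more is appended: the model itself
        # is expected to generate "<think>...</think>" at the start of its
        # reply. With thinking disabled, the template pre-fills an empty
        # think block so the model skips reasoning.
        if not enable_thinking:
            text += "<think>\n\n</think>\n\n"
    return text

messages = [{"role": "user", "content": "Hi"}]
print(apply_chat_template_sketch(messages, enable_thinking=True))
print(apply_chat_template_sketch(messages, enable_thinking=False))
```

Under that reading, the prompt ending at `<|im_start|>assistant` with no trailing `<think>` would be expected, since the opening tag is left for the model to produce.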
sxcasf changed discussion status to closed