---
model-index:
- name: Amber-Fable-1.0
  results:
  - task:
      type: text-generation
      name: Mathematical Reasoning
    dataset:
      name: MATH
      type: math
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 55.0
---

Amber Fable 1.0

Model Description

Amber Fable 1.0 is a 1.7B parameter specialized language model, fine-tuned with LoRA (Low-Rank Adaptation) on the Qwen3-1.7B base model.

This model is engineered for mathematical reasoning and algorithmic logic. It scores strongly for its size class on math benchmarks (75% on GSM8K), making it an efficient choice for educational tools and logic-based tasks, though it trades away some general world knowledge (22% on MMLU) to reach that reasoning capability.
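Because the fine-tuning used LoRA, the adapter can also be applied to the base model explicitly with PEFT, which is useful if the repository ships adapter-only weights. A minimal sketch, assuming an adapter-style repo layout (if the weights are already merged, the Quick Start below is all you need):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the Qwen3-1.7B base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", device_map="auto")
model = PeftModel.from_pretrained(base, "Arioron/Amber-Fable-1.0")  # assumes adapter-only weights

# Optionally fold the low-rank updates back into the base weights for faster inference.
model = model.merge_and_unload()
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")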

  • Developed by: Arioron
  • Model type: Decoder-only Transformer (LoRA Adapter)
  • Language(s): English
  • License: Apache 2.0
  • Finetuned from model: Qwen/Qwen3-1.7B

Performance

Amber Fable 1.0 delivers strong mathematical performance for its 1.7B parameter class.

| Benchmark | Metric | Score | Description |
|-----------|--------|-------|-------------|
| GSM8K | Accuracy | 75.0% | Grade School Math |
| MATH | Accuracy | 55.0% | Advanced Math Problems |
| HumanEval | Pass@1 | 42.0% | Python Coding Capability |
| MMLU | Accuracy | 22.0% | General World Knowledge |
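GSM8K and MATH accuracy are conventionally computed as exact match on the final numeric answer in the model's completion. A minimal sketch of that scoring rule (the regex and helpers are illustrative, not the official evaluation harness):

import re

def final_number(text):
    """Return the last number in a completion, with thousands separators stripped."""
    matches = re.findall(r"-?\d[\d,]*\.?\d*", text)
    return matches[-1].replace(",", "") if matches else None

def exact_match(prediction, reference):
    # GSM8K references end in '#### <answer>'; compare the final numbers as floats.
    pred, ref = final_number(prediction), final_number(reference)
    return pred is not None and ref is not None and float(pred) == float(ref)

print(exact_match("...so Natalia sold 48 + 24 = 72 clips.", "#### 72"))  # True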

Quick Start

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "Arioron/Amber-Fable-1.0"

# Load the tokenizer and the model in fp16, sharding layers across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Math reasoning example
messages = [
    {"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"},
]

# Qwen3 chat templates also accept an enable_thinking flag in apply_chat_template to
# toggle the base model's built-in reasoning mode; whether this adapter preserves that
# behavior is an assumption worth testing.
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# Sample with moderate temperature; set do_sample=False for deterministic answers.
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.6,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id
)

# Decode only the newly generated tokens, skipping the echoed prompt.
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)
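The summary below cautions that all calculations should be verified. When the ground truth is computable, as in the prompt above, the final answer can be checked programmatically; a minimal sketch that reuses the response string from the Quick Start (the parsing is illustrative):

import re

# Ground truth for the prompt above: 48 clips in April plus half as many (24) in May.
expected = 48 + 48 // 2  # 72

numbers = re.findall(r"-?\d+", response)
assert numbers and int(numbers[-1]) == expected, "final number disagrees with ground truth"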

Model Summary

  • Model: Amber Fable 1.0 (1.7B)
  • Specialty: Advanced Math Reasoning
  • Logic: Chain-of-Thought (CoT)
  • Coding: Python & Algorithms (42%)
  • Tuning: LoRA on Synthetic/Textbooks
  • Base: Qwen3-1.7B (PyTorch/PEFT)
  • Usage: Tutoring, Puzzles & Scripts
  • Caution: Verify all calculations
• Author: Arioron (2025)

Citation

If you use this model in your research, please cite:

@misc{amberfable1.0,
  title        = {Amber Fable 1.0: A Specialized 1.7B Math Model},
  author       = {Arioron},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Arioron/Amber-Fable-1.0}}
}