🤖 **SVE-Math-DeepSeek+-7B** is a fine-tuned Multi-modal Large Language Model (MLLM) built upon [SVE-Math-DeepSeek-7B](https://github.com/AI4Math-ShanZhang/SVE-Math) and further enhanced with GeoPeP, a perception-oriented dataset of 200K high-quality geometry image-text pairs.
|
|
|
|
|
This model is released as part of our project: |
|
|
📘 **"Open Eyes, Then Reason: Fine-grained Visual Mathematical Understanding in MLLMs (SVE-Math)"** |
|
|
🔗 [Paper & Code: github.com/AI4Math-ShanZhang/SVE-Math](https://github.com/AI4Math-ShanZhang/SVE-Math) |
|
|
|
|
|
--- |
|
|
- 💡 Designed to improve **visual perception** in mathematical diagrams. |
|
|
- 📊 Fine-tuned on a high-quality, perception-oriented dataset (100K diagram-caption pairs + 100K conversation samples).
|
|
- 🧠 **GeoPeP** explicitly structures diagrams into shapes, attributes, locations, and relationships.
|
|
- ⚙️ **Systematic investigation** of how visual perception impacts mathematical reasoning in MLLMs.
|
|
|
|
|
--- |
|
For the official inference code and setup instructions, please refer to our GitHub repo:
|
|
👉 [https://github.com/AI4Math-ShanZhang/SVE-Math](https://github.com/AI4Math-ShanZhang/SVE-Math) |
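The repo defines the actual inference entry points and message format. Purely as an illustration of the general chat-style pattern used by multimodal models, here is a minimal sketch of packing a geometry diagram and a question into a message list; the function name and the `role`/`content` field layout below are illustrative assumptions, not the SVE-Math API.

```python
# Hypothetical prompt-construction sketch. The real inference code lives in
# the SVE-Math GitHub repo; the message schema here is an assumed,
# chat-style layout for illustration only.

def build_geometry_prompt(image_path: str, question: str) -> list[dict]:
    """Pack an image reference and a text question into a chat-style message list."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "path": image_path},   # path to the geometry diagram
                {"type": "text", "text": question},      # the math question about it
            ],
        }
    ]

# Example: ask a perception-heavy question about a diagram.
messages = build_geometry_prompt("triangle.png", "What is the measure of angle ABC?")
```

In practice such a message list would be handed to the repo's tokenizer/processor pipeline; consult the GitHub README for the exact loading and generation commands.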
|