# STXBP1 PubMed Central Multimodal Dataset v2 (December 13, 2025)
A comprehensive multimodal dataset for training vision-language models on biomedical scientific literature, with a focus on STXBP1-related neurological research.
## Version 2 Updates (December 2025)
- 497,360 training examples (up from ~31K)
- 170,591 matched figure-image pairs (99.7% match rate)
- Full captions preserved (no truncation)
- Multiple training formats for different use cases
- Validated response lengths for proper model training
## Dataset Overview
| Metric | Value |
|---|---|
| Total Training Examples | 497,360 |
| Source Articles | 31,786 |
| Date Range | January 1, 2000 to June 1, 2025 |
| Total Figures | 171,084 |
| Matched Figure-Image Pairs | 170,591 (99.7%) |
| Total Images | 175,392 |
| STXBP1-Specific Articles | 1,335 |
| Total Text Content | 1.44 billion characters |
## Files

### Training Data (LLaVA Format)
| File | Entries | Size | Description |
|---|---|---|---|
| `combined_training.json` | 497,360 | 1.04 GB | Main training file: all formats combined and shuffled |
| `figure_caption.json` | 159,987 | 185 MB | Simple figure → caption pairs |
| `figure_detailed.json` | 159,987 | 440 MB | Figures with article context (title + abstract) |
| `figure_qa.json` | 149,776 | 258 MB | Multi-turn Q&A conversations |
| `article_multiimage.json` | 27,610 | 159 MB | Multi-figure articles (2-5 figures per entry) |
### Images

| File | Size | Description |
|---|---|---|
| `images.zip` | 60.2 GB | All figure images (175,392 files) |
### Metadata

| File | Size | Description |
|---|---|---|
| `training_metadata.json` | 1 KB | Generation config and statistics |
## Data Format

All training files use a LLaVA-compatible JSON format:
```json
{
  "id": "PMC10196665_f1",
  "image": "images/PMC10196665-f1.png",
  "conversations": [
    {
      "from": "human",
      "value": "<image>\nDescribe this scientific figure in detail."
    },
    {
      "from": "gpt",
      "value": "Figure 1. Schematic of DNAJC5 sequence alignment and dnj-14 C. elegans mutants CRISPR-Cas9..."
    }
  ]
}
```
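For reference, a minimal sketch of iterating over entries in this layout (the file name and a local `images/` folder next to it are assumptions based on the tables above):

```python
import json
from pathlib import Path

# Assumed local paths; adjust to wherever the dataset was downloaded and unzipped.
DATA_FILE = Path("combined_training.json")
IMAGE_ROOT = Path(".")  # entries reference images as "images/PMC...-f1.png"

with DATA_FILE.open() as f:
    entries = json.load(f)

for entry in entries[:5]:
    image_path = IMAGE_ROOT / entry["image"]
    # Conversations alternate "human" and "gpt" turns.
    prompt = entry["conversations"][0]["value"]
    response = entry["conversations"][1]["value"]
    print(entry["id"], image_path.exists(), len(response))
```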
## Training File Descriptions

### `figure_caption.json` - Basic figure captioning
- Single image → single detailed caption
- Best for: Training basic figure understanding
### `figure_detailed.json` - Contextual descriptions
- Includes paper title and abstract for richer context
- Best for: Training models to understand figures in research context
### `figure_qa.json` - Multi-turn conversations
- 3-turn Q&A: figure type → detailed description → source info
- Best for: Training conversational/interactive models
### `article_multiimage.json` - Multi-figure reasoning
- 2-5 figures from same paper with combined analysis
- Best for: Training models to relate multiple figures
### `combined_training.json` - Everything shuffled together
- All formats mixed for diverse training
- Recommended for most training scenarios
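If you prefer a custom blend over the pre-shuffled file, a minimal sketch that rebuilds one from the per-format files listed above (the sampling ratios are arbitrary illustrations, not recommendations):

```python
import json
import random

random.seed(0)

# Per-format files from this dataset; the fractions below are examples only.
sources = {
    "figure_caption.json": 0.5,
    "figure_detailed.json": 0.3,
    "figure_qa.json": 0.2,
}

mixed = []
for path, fraction in sources.items():
    with open(path) as f:
        entries = json.load(f)
    mixed.extend(random.sample(entries, int(len(entries) * fraction)))

random.shuffle(mixed)
with open("custom_mix.json", "w") as f:
    json.dump(mixed, f)
```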
## Response Length Statistics

These statistics matter when choosing `max_new_tokens` for training and inference:
| Percentile | Characters | Est. Tokens |
|---|---|---|
| Median | 506 | ~127 |
| 95th | 3,383 | ~845 |
| 99th | 6,741 | ~1,685 |
| Max | 22,518 | ~5,630 |
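These percentiles can be recomputed from a local copy; a minimal sketch assuming `combined_training.json` has been downloaded and that responses are the `gpt` turns, with tokens estimated at roughly 4 characters each (the same ratio the table uses):

```python
import json

import numpy as np

with open("combined_training.json") as f:
    entries = json.load(f)

# Character lengths of every model ("gpt") turn across all conversations.
lengths = np.array([
    len(turn["value"])
    for entry in entries
    for turn in entry["conversations"]
    if turn["from"] == "gpt"
])

for pct in (50, 95, 99, 100):
    chars = np.percentile(lengths, pct)
    print(f"p{pct}: {chars:,.0f} chars (~{chars / 4:,.0f} tokens)")
```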
## Recommended Training Configuration

```python
# Training
model_max_length = 4096  # or 8192 for extra headroom

# Inference - IMPORTANT: don't set max_new_tokens too low, or responses may be truncated
generation_config = {
    "max_new_tokens": 2048,  # covers the 99th percentile
    "min_new_tokens": 100,   # prevents premature cutoffs
    "do_sample": True,
    "temperature": 0.7,
}
```
## Image Statistics
| Metric | Value |
|---|---|
| Median dimensions | 738 × 639 px |
| Size distribution | 82% medium (500-1000px) |
| Tiny images (<200px) | 0.5% |
| Format | PNG/JPG |
Preprocessing recommendations:
- LLaVA: 448×448, 512×512, or 672×672
- Qwen3-VL: Dynamic resolution (native support)
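For the fixed-resolution LLaVA targets, a minimal letterboxing sketch using Pillow (448 is just one of the sizes listed above, and the example path is the one from the Data Format section):

```python
from PIL import Image

def letterbox(path: str, size: int = 448) -> Image.Image:
    """Scale the longer side down to `size` and pad to a square canvas."""
    img = Image.open(path).convert("RGB")
    img.thumbnail((size, size), Image.LANCZOS)  # preserves aspect ratio
    canvas = Image.new("RGB", (size, size), (255, 255, 255))
    canvas.paste(img, ((size - img.width) // 2, (size - img.height) // 2))
    return canvas

square = letterbox("images/PMC10196665-f1.png")
square.save("PMC10196665-f1_448.png")
```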
## Caption Statistics
| Category | Count | Percentage |
|---|---|---|
| Very short (<100 chars) | 14,339 | 8.7% |
| Short (100-500 chars) | 43,031 | 26.2% |
| Medium (500-1500 chars) | 80,614 | 49.1% |
| Long (1500-3000 chars) | 24,645 | 15.0% |
| Very long (>3000 chars) | 1,493 | 0.9% |
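If very short captions are undesirable for a particular run, a minimal sketch that drops entries whose response falls in the <100-character bucket (file name taken from the Files table; the output file name is arbitrary):

```python
import json

with open("figure_caption.json") as f:
    entries = json.load(f)

# Keep entries whose final ("gpt") turn is at least 100 characters long.
filtered = [e for e in entries if len(e["conversations"][-1]["value"]) >= 100]

with open("figure_caption_filtered.json", "w") as f:
    json.dump(filtered, f)

print(f"Kept {len(filtered):,} of {len(entries):,} entries")
```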
## Usage

### Loading with Hugging Face
```python
from datasets import load_dataset

# Load the main training file
dataset = load_dataset(
    "SkyWhal3/STXBP1_PubMed_Central_Multimodal_Dataset",
    data_files="combined_training.json",
)

# Or load a specific format
captions = load_dataset(
    "SkyWhal3/STXBP1_PubMed_Central_Multimodal_Dataset",
    data_files="figure_caption.json",
)
```
### Training with LLaVA

```python
from transformers import TrainingArguments

# Point to the training file and image folder
data_path = "combined_training.json"
image_folder = "images/"

# Ensure proper max_length settings
training_args = TrainingArguments(
    # ... your config
)

# Model config
model.config.max_length = 4096
```
### Training with Qwen3-VL

```python
# Qwen3-VL handles dynamic resolution natively;
# just ensure max_new_tokens is set properly for inference
generation_config = {
    "max_new_tokens": 2048,
}
```
## About STXBP1
STXBP1 (Syntaxin-Binding Protein 1), also known as Munc18-1, is essential for synaptic vesicle fusion and neurotransmitter release. Mutations cause STXBP1 Encephalopathy, a rare neurological disorder (~1 in 30,000 births) characterized by:
- Early-onset epilepsy
- Developmental delays
- Movement disorders
- Intellectual disability
This dataset supports research into understanding and treating STXBP1-related conditions.
## Dataset Construction
- Source: 31,786 articles from PubMed Central related to STXBP1, synaptic function, and neurological research
- Extraction: Custom HTML parser extracting figures, captions, abstracts, and full text
- Matching: 99.7% of extracted figures matched to downloaded images
- Validation: Comprehensive quality checks on text lengths and image dimensions
- Formatting: Multiple LLaVA-compatible training formats generated
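The match rate can be spot-checked against a local copy; a minimal sketch (the `match_rate` helper is illustrative, not part of the dataset tooling) that counts entries whose referenced image exists under the extracted `images/` folder:

```python
import json
from pathlib import Path

def match_rate(data_file: str, image_root: str = ".") -> float:
    """Fraction of entries whose referenced image path exists on disk."""
    with open(data_file) as f:
        entries = json.load(f)
    root = Path(image_root)
    found = sum((root / e["image"]).exists() for e in entries)
    return found / len(entries)

print(f"Figure-image match rate: {match_rate('figure_caption.json'):.1%}")
```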
## Citation

If you use this dataset, please cite:

```bibtex
@dataset{stxbp1_multimodal_2025,
  author    = {SkyWhal3},
  title     = {STXBP1 PubMed Central Multimodal Dataset},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/SkyWhal3/STXBP1_PubMed_Central_Multimodal_Dataset}
}
```
## License
This dataset is released under CC-BY-4.0. The source articles are from PubMed Central's Open Access subset.
## Changelog

### v2.0 (December 13, 2025)
- Complete rebuild with improved extraction pipeline
- 497,360 training examples (16x increase)
- 99.7% figure-image match rate
- Full captions without truncation
- Multiple training formats (caption, detailed, Q&A, multi-image)
- Comprehensive validation and statistics
### v1.0 (December 7, 2025)
- Initial release
- 31,585 articles
- Basic LLaVA/conversational/simple formats