---
license: mit
task_categories:
- robotics
- video-classification
tags:
- minecraft
- vla
- vision-language-action
- gaming
- behavioral-cloning
size_categories:
- 1M<n<10M
---

# Minecraft VLA Stage 1: Action Pretraining Data

Vision-Language-Action training data for Minecraft, processed from OpenAI's VPT contractor dataset.

## Dataset Description

This dataset contains frame-action pairs from Minecraft gameplay, designed for training VLA models following the [Lumine](https://www.lumine-ai.org/) methodology.

### Source
- **Original**: [OpenAI VPT Contractor Data](https://github.com/openai/Video-Pre-Training) (7.x subset)
- **Videos**: ~17,886 videos (~330 hours of early-game gameplay)
- **Task**: "Play Minecraft", with a focus on the first 30 minutes of new worlds

### Format

Each sample contains:

| Field | Type | Description |
|-------|------|-------------|
| `image` | bytes | 640x360 JPEG frame |
| `video_id` | string | Source video identifier |
| `frame_idx` | int | Frame index at 5 Hz |
| `action` | string | Lumine-format action string |
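
For reference, a sketch of this schema written out with `datasets.Features` (the column types here are inferred from the table, not read from the repo; `image` may instead use the `Image` feature type):

```python
from datasets import Features, Value

# Assumed column types, inferred from the table above.
features = Features({
    "image": Value("binary"),     # JPEG-encoded 640x360 frame
    "video_id": Value("string"),  # source video identifier
    "frame_idx": Value("int64"),  # frame index in the 5 Hz stream
    "action": Value("string"),    # Lumine-format action string
})
```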

### Action Format

```
<|action_start|> mouse_x mouse_y scroll ; K1 ; K2 ; K3 ; K4 <|action_end|>
```

- `mouse_x`, `mouse_y`: Mouse delta (-1000 to 1000)
- `scroll`: Hotbar scroll (always 0; VPT uses number keys)
- `K1` to `K4`: Key combinations, one per 50 ms chunk (see the parsing sketch below)

**Example:**
```
<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>
```
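
As a concrete illustration, a minimal parsing sketch for these action strings (the `parse_action` helper and its return shape are our own, not part of the dataset):

```python
def parse_action(action: str) -> dict:
    """Split a Lumine-format action string into mouse deltas, scroll, and key chunks."""
    # Drop the special tokens, then separate the mouse/scroll head from the key chunks.
    body = action.replace("<|action_start|>", "").replace("<|action_end|>", "").strip()
    head, *chunks = [part.strip() for part in body.split(";")]
    mouse_x, mouse_y, scroll = (int(tok) for tok in head.split())
    return {
        "mouse_x": mouse_x,
        "mouse_y": mouse_y,
        "scroll": scroll,
        "chunks": [chunk.split() for chunk in chunks],  # keys held per 50 ms chunk
    }

# parse_action("<|action_start|> 45 -12 0 ; W ; W Space ; W LMB ; W LMB <|action_end|>")
# returns {'mouse_x': 45, 'mouse_y': -12, 'scroll': 0,
#          'chunks': [['W'], ['W', 'Space'], ['W', 'LMB'], ['W', 'LMB']]}
```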

### Processing Details

- **Frame rate**: 5 FPS (downsampled from VPT's 20 FPS)
- **Action chunks**: 4 per frame (50 ms each, 200 ms total; see the grouping sketch below)
- **Filtering**: Idle frames removed, loading screens filtered
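
For intuition about the chunking: each 20 Hz tick lasts 50 ms, so four consecutive ticks become the K1-K4 chunks of one 5 Hz frame. A minimal sketch (the `group_ticks` helper and its per-tick key lists are hypothetical, not the actual processing code):

```python
def group_ticks(ticks: list[list[str]]) -> list[list[list[str]]]:
    """Group per-tick key lists (20 Hz) into per-frame windows of 4 chunks (5 Hz)."""
    # Four 50 ms ticks cover the 200 ms between consecutive 5 Hz frames.
    return [ticks[i:i + 4] for i in range(0, len(ticks) - 3, 4)]

# 8 ticks at 20 Hz -> 2 frames at 5 Hz, each with K1..K4 chunks
assert len(group_ticks([["W"]] * 8)) == 2
```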

## Usage

```python
from datasets import load_dataset

# Streaming (recommended - no download required)
ds = load_dataset("TESS-Computer/minecraft-vla-stage1", split="train", streaming=True)

for sample in ds:
    image = sample["image"]  # PIL Image or bytes
    action = sample["action"]
    # Process...
```
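
Depending on how the `image` column is stored, it may arrive as raw JPEG bytes rather than a decoded image. A small sketch that handles both cases (the `decode_frame` helper is our own):

```python
import io

from PIL import Image

def decode_frame(image_field) -> Image.Image:
    """Return a PIL image whether the field is raw JPEG bytes or already decoded."""
    if isinstance(image_field, bytes):
        return Image.open(io.BytesIO(image_field))
    return image_field  # already a PIL.Image
```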

## Training Pipeline

This is Stage 1 of a 3-stage training pipeline:
1. **Stage 1** (this dataset): Action pretraining - learn observation→action mapping
2. **Stage 2**: Instruction following - add task instructions from JARVIS-VLA
3. **Stage 3**: Reasoning - add chain-of-thought before complex actions

## Citation

If you use this dataset, please cite:
- [OpenAI VPT](https://arxiv.org/abs/2206.11795) - Original contractor data
- [JARVIS-VLA](https://craftjarvis.github.io/JarvisVLA/) - Instruction annotations
- [Lumine](https://www.lumine-ai.org/) - Training methodology

## License

MIT License. Original VPT data is released under MIT by OpenAI.