---
license: mit
task_categories:
- question-answering
- visual-question-answering
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: images
    sequence: image
  - name: question
    dtype: string
  - name: choices
    sequence: string
  - name: answer_idx
    dtype: int32
  - name: datatype
    dtype: string
  - name: house_ind
    dtype: int32
  - name: cam_position
    sequence:
      sequence: float32
  - name: cam_rotation
    sequence: float32
  - name: image_reason
    sequence: image
  splits:
  - name: val
    num_bytes: 11647657977.101
    num_examples: 6527
  download_size: 343936818
  dataset_size: 11647657977.101
configs:
- config_name: default
  data_files:
  - split: val
    path: data/val-*
---
# SAT_perspective Dataset

## Paper

**SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models**

This dataset is part of the SAT (Spatial Aptitude Training) project, which introduces a dynamic benchmark for evaluating and improving spatial reasoning capabilities in multimodal language models.

- **Project Page**: [https://arijitray.com/SAT/](https://arijitray.com/SAT/)
- **Paper**: [arXiv:2412.07755](https://arxiv.org/abs/2412.07755)

## Dataset Description

The SAT_perspective dataset contains 6,527 spatial reasoning questions that test perspective-taking abilities. Each question presents a scene and asks about spatial relationships from a new viewpoint, requiring models to reason about how objects would appear from different camera positions.

## Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("array/SAT_perspective", split="val")

# Access a sample
sample = dataset[0]
print(sample["question"])
print(sample["choices"])
```
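
The decoded split is fairly large (~11.6 GB per the metadata above, versus a ~344 MB download), so if you only need to iterate over examples, the standard `datasets` streaming mode avoids materializing the whole split; this is generic library usage, not anything specific to SAT_perspective:

```python
from datasets import load_dataset

# Stream examples lazily instead of decoding the full split up front
stream = load_dataset("array/SAT_perspective", split="val", streaming=True)

for sample in stream:
    print(sample["question"])
    break  # inspect just the first example
```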

## Dataset Structure

Each example in the dataset contains the following fields:

- **`images`**: List of input images showing the original scene (PIL Image objects)
- **`question`**: Text question asking about spatial relationships from a new perspective
- **`choices`**: List of possible answers (typically 2 options)
- **`answer_idx`**: Index of the correct answer in the choices list (integer)
- **`datatype`**: Type of spatial reasoning task (value: "perspective")
- **`house_ind`**: House/scene identifier (integer)
- **`cam_position`**: List of camera positions, each an `[x, y, z]` triple of floats
- **`cam_rotation`**: List of camera rotation values (floats), one per camera position
- **`image_reason`**: Rendered image from the new perspective that the question is asking about. This provides the ground truth visualization showing what the scene looks like from the target viewpoint.

### Example

```python
{
    "images": [<PIL.Image.Image>],  # Original view
    "question": "If I go to the 'X' marked point in the image and turned left by 90 degrees, will the Chair get closer or further away?",
    "choices": ["Closer", "Further"],
    "answer_idx": 0,
    "datatype": "perspective",
    "house_ind": 0,
    "cam_position": [[2.75, 0.9009997844696045, 6.25], [3.75, 0.9009997844696045, 6.75]],
    "cam_rotation": [96.0, 6.0],
    "image_reason": [<PIL.Image.Image>]  # View from new perspective
}
```
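
Since every example is a multiple-choice question with a ground-truth `answer_idx`, accuracy is the natural evaluation metric. The snippet below is a minimal scoring sketch; `predict_answer` is a hypothetical placeholder for your model's inference call, not part of this dataset or any particular library:

```python
from datasets import load_dataset

def predict_answer(images, question, choices):
    # Hypothetical stand-in for your model's inference call.
    # It should return the index of the predicted choice.
    raise NotImplementedError

dataset = load_dataset("array/SAT_perspective", split="val")

correct = 0
for sample in dataset:
    # Present the question together with its answer options
    prompt = sample["question"] + "\nChoices: " + ", ".join(sample["choices"])
    pred_idx = predict_answer(sample["images"], prompt, sample["choices"])
    if pred_idx == sample["answer_idx"]:
        correct += 1

print(f"Accuracy: {correct / len(dataset):.3f}")
```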

## Citation

If you use this dataset, please cite:

```bibtex
@misc{ray2025satdynamicspatialaptitude,
      title={SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models},
      author={Arijit Ray and Jiafei Duan and Ellis Brown and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
      year={2025},
      eprint={2412.07755},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.07755},
}
```