---
license: mit
viewer: false
---

# SceneDiff: A Benchmark and Method for Multiview Object Change Detection


This repository contains the data for the paper [SceneDiff: A Benchmark and Method for Multiview Object Change Detection](http://yuqunw.github.io/SceneDiff). We investigate the problem of identifying objects that have changed between a pair of captures of the same scene taken at different times, and introduce the first object-level multiview change detection benchmark together with a new training-free method.

### Overview

The SceneDiff Benchmark contains **350 video sequence pairs** and **1,009 annotated objects** across two subsets:

- **Varied subset (SD-V)**: 200 sequence pairs collected in a wide variety of daily indoor and outdoor scenes
- **Kitchen subset (SD-K)**: 150 sequence pairs from the [HD-Epic dataset](https://hd-epic.github.io/) with changes that naturally occur during cooking activities

For each video pair, we record the attributes of every changed object, including its name and deformability, and annotate its full segmentation mask in all frames where it is visible. Each object is categorized with a change status: *Added*, *Removed*, or *Moved*.

Statistics for each subset:

![Dataset Statistics](media/dataset_stat.jpg)

### Dataset Download

```bash
wget https://huggingface.co/datasets/yuqun/SceneDiff/resolve/main/scenediff_bechmark.zip
unzip scenediff_bechmark.zip
```

### Dataset Structure

```
scenediff_benchmark/
├── data/                         # 350 sequence pairs
│   ├── sequence_pair_1/
│   │   ├── original_video1.mp4   # Raw video before change
│   │   ├── original_video2.mp4   # Raw video after change
│   │   ├── video1.mp4            # Video with annotation mask (before)
│   │   ├── video2.mp4            # Video with annotation mask (after)
│   │   ├── segments.pkl          # Dense segmentation masks for evaluation
│   │   └── metadata.json         # Sequence metadata
│   ├── sequence_pair_2/
│   │   └── ...
│   └── ...
├── splits/                       # Val/Test splits
│   ├── val_split.json
│   └── test_split.json
└── vis/                          # Visualization tools
    ├── visualizer.py             # Flask-based web viewer
    ├── requirements.txt
    └── templates/
```

### Segments.pkl Structure

```python
segments = {
    'scenetype': str,              # Type of scene change
    'video1_objects': {
        'object_id': {
            'frame_id': RLE_Mask   # Run-length encoded mask
        }
    },
    'video2_objects': {
        'object_id': {
            'frame_id': RLE_Mask   # Run-length encoded mask
        }
    },
    'objects': {
        'object_1': {
            'label': str,          # Object label/name
            'in_video1': bool,     # Present in video 1
            'in_video2': bool,     # Present in video 2
            'deformability': str   # 'rigid' or 'deformable'
        }
    }
}
```

### Loading Masks

To convert an RLE mask back to a tensor:

```python
import torch
from pycocotools import mask as mask_utils

# Decode an RLE mask from segments.pkl into a torch tensor
tensor_mask = torch.tensor(mask_utils.decode(rle_mask))
```

### Visualization

Install the dependencies and start the viewer:

```bash
pip install -r vis/requirements.txt
python vis/visualizer.py
```

Then open `http://localhost:5002` in a browser to view the annotated videos.

### Evaluation

Please refer to the [code repository](https://github.com/yuqunw/scene_diff?tab=readme-ov-file#evaluation) for evaluation instructions.
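
### Example: Loading a Sequence Pair

The sketch below ties the pieces above together: it loads one pair's `segments.pkl`, lists the annotated objects with their attributes, and decodes the per-frame RLE masks with `pycocotools`. The path `scenediff_benchmark/data/sequence_pair_1` is a placeholder, and deriving the change status from the `in_video1`/`in_video2` flags is an assumption made for illustration, not necessarily how the benchmark defines it.

```python
import pickle
from pathlib import Path

import torch
from pycocotools import mask as mask_utils

# Placeholder path to one downloaded sequence pair; adjust as needed
pair_dir = Path("scenediff_benchmark/data/sequence_pair_1")

# Load the dense annotations for this pair
with open(pair_dir / "segments.pkl", "rb") as f:
    segments = pickle.load(f)

print("Scene type:", segments["scenetype"])

# List the annotated objects and their attributes
for obj_id, attrs in segments["objects"].items():
    # Assumption: infer the change status from the presence flags
    if attrs["in_video1"] and attrs["in_video2"]:
        status = "Moved"
    elif attrs["in_video1"]:
        status = "Removed"
    else:
        status = "Added"
    print(f"{obj_id}: {attrs['label']} ({attrs['deformability']}, {status})")

# Decode the per-frame masks of one object that is visible in video 1
obj_id, frames = next(iter(segments["video1_objects"].items()))
for frame_id, rle_mask in frames.items():
    # mask_utils.decode returns an (H, W) uint8 numpy array
    mask = torch.tensor(mask_utils.decode(rle_mask))
    print(obj_id, frame_id, mask.shape, int(mask.sum()))
```

The same pattern applies to `video2_objects` for the post-change capture.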