Dataset preview: image (width 85–4k px) and label (class label, 18 classes; class 0 is advertisement).

πŸ§ͺ Multimodal Benchmark

This repository provides a benchmark suite for evaluating Multimodal Large Language Models (MLLMs) across a variety of vision-language tasks.


πŸ“ Directory Structure

/data

This folder contains all benchmark images and task-specific JSON files. Each JSON file defines the input and expected output format for a given task.
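
As a rough sketch (the file name and field names below are assumptions; check the actual JSON files in /data for the real schema), a task file can be inspected like this:

```python
import json

# Load one task file from /data (path and field names are hypothetical).
with open("data/example_task.json") as f:
    tasks = json.load(f)

# An entry might pair an image with a question and a reference answer, e.g.
# {"image": "images/0001.jpg", "question": "...", "answer": "..."}
print(tasks[0])
```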

/run

This folder includes example scripts that demonstrate how to run different MLLMs on the benchmark tasks.
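
A minimal driver loop might look like the following; run_model is a placeholder for your model's inference API, and the paths and field names are assumptions rather than the repository's actual interface:

```python
import json

def run_model(image_path: str, question: str) -> str:
    # Placeholder: swap in your MLLM's actual inference call here.
    return "placeholder response"

with open("data/example_task.json") as f:
    tasks = json.load(f)

for task in tasks:
    # Attach the prediction to each entry under "response",
    # matching the result-collection format described below.
    task["response"] = run_model(task["image"], task["question"])

with open("outputs/example_task.json", "w") as f:
    json.dump(tasks, f, indent=2, ensure_ascii=False)
```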


πŸ“„ Result Collection

After inference, all task JSON outputs should be merged into a single file named result.json.
Each entry in result.json includes a response field that stores the model's prediction.
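
A minimal merge sketch, assuming each per-task output file is a JSON list stored under a hypothetical outputs/ directory:

```python
import json
from pathlib import Path

merged = []
# Collect every per-task output file (directory and glob pattern are assumptions).
for path in sorted(Path("outputs").glob("*.json")):
    with open(path) as f:
        merged.extend(json.load(f))

# Every entry is expected to carry a "response" field with the prediction.
assert all("response" in entry for entry in merged)

with open("result.json", "w") as f:
    json.dump(merged, f, indent=2, ensure_ascii=False)
```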


πŸ“Š Evaluation

The predictions stored in result.json can be evaluated using metric.py.
This script computes performance metrics by comparing the predicted responses with the reference answers.
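
metric.py implements the actual task-specific metrics; as a sketch of the general idea, a simple exact-match accuracy over result.json might look like this (the "answer" reference field name is an assumption):

```python
import json

with open("result.json") as f:
    entries = json.load(f)

# Compare predictions against references ("answer" field name is assumed).
correct = sum(
    entry["response"].strip().lower() == entry["answer"].strip().lower()
    for entry in entries
)
print(f"Accuracy: {correct / len(entries):.4f}")
```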


πŸ’‘ Ad Understanding Task

The Ad Understanding task requires an additional LLM-based preprocessing step before evaluation.
An example of using a language model for this purpose is provided in gpt_judge.py.
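
For illustration only (not the contents of gpt_judge.py; the prompt, model name, and field names here are assumptions), an LLM judge built on the OpenAI Python SDK might look like:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(prediction: str, reference: str) -> str:
    """Ask an LLM whether the prediction matches the reference answer."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # model choice is an assumption
        messages=[
            {
                "role": "user",
                "content": (
                    "Does the prediction convey the same meaning as the "
                    "reference? Answer 'yes' or 'no'.\n"
                    f"Prediction: {prediction}\nReference: {reference}"
                ),
            }
        ],
    )
    return completion.choices[0].message.content.strip()

with open("result.json") as f:
    entries = json.load(f)

for entry in entries:
    # Store the judge's verdict alongside each prediction.
    entry["judgement"] = judge(entry["response"], entry["answer"])
```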

