VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents

VisGym is a gymnasium of 17 visually interactive, long-horizon environments for evaluating, diagnosing, and training vision–language models (VLMs) in multi-step visual decision-making across symbolic puzzles, real-image understanding, navigation, and manipulation.

This repository contains model checkpoints described in the paper VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents.
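Below is a minimal, hedged sketch of loading the checkpoint with the Hugging Face Transformers library. The exact model class is an assumption (the card lists the Image-Text-to-Text pipeline with Safetensors weights); consult the config files shipped with the repository for the correct architecture.

```python
# Minimal loading sketch (assumption: the checkpoint is compatible with the
# generic image-text-to-text auto classes in Transformers).
from transformers import AutoProcessor, AutoModelForImageTextToText

repo_id = "VisGym/visgym_model"

# Processor handles both image preprocessing and text tokenization.
processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForImageTextToText.from_pretrained(repo_id)
```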

Description

Modern Vision-Language Models (VLMs) remain poorly characterized in multi-step visual interactions, particularly in how they integrate perception, memory, and action over long horizons. VisGym provides 17 environments for evaluating and training VLMs, offering flexible controls over difficulty, input representation, planning horizon, and feedback. The suite spans symbolic puzzles, real-image understanding, navigation, and manipulation.
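To make the intended interaction pattern concrete, here is a hypothetical sketch of a multi-step episode using the standard Gymnasium API. The environment id, difficulty keyword, and the agent's action-selection step are illustrative assumptions, not the paper's actual interface.

```python
# Hypothetical multi-step interaction loop in the Gymnasium style.
# Assumptions: the environment id and "difficulty" kwarg are placeholders;
# a real agent would feed the visual observation to a VLM to pick an action.
import gymnasium as gym

env = gym.make("VisGym/SymbolicPuzzle-v0", difficulty="easy")  # hypothetical id

obs, info = env.reset(seed=0)
done = False
while not done:
    # Placeholder policy: random action. Replace with a VLM agent that
    # conditions on the image observation and any textual feedback in `info`.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated

env.close()
```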

Citation

If you use this model, please cite:

@article{wang2026visgym,
  title        = {VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents},
  author       = {Wang, Zirui and Zhang, Junyi and Ge, Jiaxin and Lian, Long and Fu, Letian and Dunlap, Lisa and Goldberg, Ken and Wang, Xudong and Stoica, Ion and Chan, David M. and Min, Sewon and Gonzalez, Joseph E.},
  journal      = {arXiv preprint arXiv:2601.16973},
  year         = {2026},
  url          = {https://arxiv.org/abs/2601.16973}
}