Causal World Modeling for Robot Control

LingBot-VA, introduced in the paper Causal World Modeling for Robot Control, is an autoregressive diffusion framework that jointly learns frame prediction and policy execution.

It focuses on:

  • Autoregressive Video-Action World Modeling: Architecturally unifies visual dynamics prediction and action inference within a single interleaved sequence while keeping the two conceptually distinct (see the sketch after this list).
  • High-Efficiency Execution: A dual-stream mixture-of-transformers (MoT) architecture with asynchronous execution and KV caching.
  • Long-Horizon Performance and Generalization: Substantial gains in sample efficiency, long-horizon success rates, and generalization to novel scenes.
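As a rough illustration of the interleaved video-action loop, the Python sketch below alternates next-frame prediction with action inference over a shared KV cache. It is a minimal sketch under our own assumptions: the model object and its methods (predict_next_frame, infer_action) are hypothetical and do not reflect the released LingBot-VA API.

# Minimal sketch of an interleaved video-action rollout (hypothetical API,
# not the released LingBot-VA implementation).
import torch

def rollout(model, first_frame: torch.Tensor, instruction: str, horizon: int = 16):
    """Alternate next-frame prediction and action inference, reusing a KV cache
    so each autoregressive step attends only over newly appended tokens."""
    kv_cache = None          # cache shared across the interleaved sequence
    frame = first_frame
    actions = []
    for _ in range(horizon):
        # Video stream: denoise the next frame conditioned on the history.
        frame, kv_cache = model.predict_next_frame(frame, instruction, kv_cache=kv_cache)
        # Action stream: infer the next action from the same interleaved context.
        action, kv_cache = model.infer_action(kv_cache=kv_cache)
        actions.append(action)
    return actions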

Model Sources

  • Paper: Causal World Modeling for Robot Control (arXiv:2601.21998)
📦 Model Download

  • Pretrained Checkpoints for Post-Training
| Model Name | Huggingface Repository | Description |
|---|---|---|
| lingbot-va-base | 🤗 robbyant/lingbot-va-base | LingBot-VA w/ shared backbone |
| lingbot-va-posttrain-robotwin | 🤗 robbyant/lingbot-va-posttrain-robotwin | LingBot-VA-Posttrain-Robotwin w/ shared backbone |
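One way to fetch a checkpoint, assuming the huggingface_hub package is installed (the local directory below is an arbitrary choice):

# Download the post-trained RoboTwin checkpoint to a local folder.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="robbyant/lingbot-va-posttrain-robotwin",
    local_dir="checkpoints/lingbot-va-posttrain-robotwin",
)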

πŸ› οΈ Quick Start

Installation

Requirements

  • Python == 3.10.16
  • PyTorch == 2.9.0
  • CUDA 12.6

pip install torch==2.9.0 torchvision==0.24.0 torchaudio==2.9.0 --index-url https://download.pytorch.org/whl/cu126
pip install websockets einops diffusers==0.36.0 transformers==5.0.0 accelerate msgpack opencv-python matplotlib ftfy easydict
pip install flash-attn --no-build-isolation
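As an optional sanity check (our suggestion, not part of the official setup), the following snippet verifies the pinned versions and that the flash-attn build is importable:

# sanity_check.py -- optional environment check
import torch

print("torch:", torch.__version__)             # expect 2.9.0
print("cuda:", torch.version.cuda)             # expect 12.6
print("cuda available:", torch.cuda.is_available())

import flash_attn                              # raises ImportError if the wheel failed to build
print("flash-attn:", flash_attn.__version__)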

Run Image to Video-Action Generation

We provide a script for image-to-video-action generation (a hypothetical client sketch follows the launch command):

NGPU=1 CONFIG_NAME='robotwin_i2av' bash script/run_launch_va_server_sync.sh 
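The command above launches a server, and the dependencies include websockets and msgpack, so a client presumably connects over a WebSocket. The sketch below is purely illustrative: the URI, port, and message schema are placeholders, and the actual protocol is defined by the server script.

# Hypothetical client sketch -- the URI and message schema below are
# placeholders, NOT the protocol defined by run_launch_va_server_sync.sh.
import asyncio
import msgpack
import websockets

async def main():
    async with websockets.connect("ws://localhost:8000") as ws:
        # Illustrative payload only; the real schema is set by the server.
        await ws.send(msgpack.packb({"instruction": "pick up the block"}))
        reply = msgpack.unpackb(await ws.recv(), raw=False)
        print(reply)

asyncio.run(main())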

📊 Performance

We evaluate our model on both simulation benchmarks and real-world scenarios, achieving state-of-the-art performance.

Simulation Evaluation (Success Rate %)

| Method (Average over 50 Tasks) | Easy SR (%) | Hard SR (%) |
|---|---|---|
| X-VLA | 72.9 | 72.8 |
| π₀ | 65.9 | 58.4 |
| π₀.₅ | 82.7 | 76.8 |
| Motus | 88.7 | 87.0 |
| LingBot-VA (Ours) | 92.9 | 91.6 |

📚 Citation

@article{lingbot-va2026,
  title={Causal World Modeling for Robot Control},
  author={Li, Lin and Zhang, Qihang and Luo, Yiming and Yang, Shuai and Wang, Ruilin and Han, Fei and Yu, Mingrui and Gao, Zelin and Xue, Nan and Zhu, Xing and Shen, Yujun and Xu, Yinghao},
  journal={arXiv preprint arXiv:2601.21998},
  year={2026}
}

🪪 License

This project is released under the Apache License 2.0. See the LICENSE file for details.

🧩 Acknowledgments

This work builds upon several excellent open-source projects:

  • Wan-Video - Vision transformer backbone
  • MoT - Mixture-of-Transformers architecture