RoboVerse Learn#

RoboVerse Learn provides a comprehensive suite of learning algorithms for robot policy training. It integrates seamlessly with MetaSim environments, enabling end-to-end training pipelines for both imitation learning and reinforcement learning.


Overview#

Imitation Learning#

Learn from demonstrations using state-of-the-art IL algorithms, including Diffusion Policy, ACT, and Vision-Language-Action (VLA) models.

Reinforcement Learning#

Train policies through trial and error with PPO, TD3, SAC, and specialized algorithms for humanoid control.


Quick Start#

Training with Imitation Learning#

# Collect demonstrations
python scripts/collect_demo.py --task pick_cube --episodes 100

# Train Diffusion Policy
python roboverse_learn/il/train_dp.py \
    --task pick_cube \
    --data_path ./demos/pick_cube \
    --epochs 100

Training with Reinforcement Learning#

# Train PPO on a manipulation task
python roboverse_learn/rl/train_ppo.py \
    --task pick_cube \
    --robot franka \
    --num_envs 1024 \
    --steps 10000000

# Train FastTD3 with the MJX backend
python roboverse_learn/rl/train_fast_td3.py \
    --task pick_cube \
    --simulator mjx \
    --num_envs 4096

Features#

Unified Interface#

All algorithms share a common training interface built on MetaSim environments:

from roboverse_learn.il import DiffusionPolicy
from roboverse_learn.rl import PPO

# `config` and `env` come from your task setup;
# `demonstrations` is a dataset of collected trajectories

# IL training: fit a policy to recorded demonstrations
policy = DiffusionPolicy(config)
policy.train(env, demonstrations)

# RL training: optimize the policy by interacting with the environment
agent = PPO(config)
agent.train(env, total_steps=1_000_000)
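
Because both training families target the same environment interface, a trained policy can be evaluated with the same rollout loop regardless of how it was produced. A minimal sketch, assuming a gym-style step signature; the `predict` method is a hypothetical placeholder, not a confirmed RoboVerse Learn API:

# Evaluation sketch — `predict` is a hypothetical inference method
obs = env.reset()
for _ in range(200):
    action = policy.predict(obs)
    obs, reward, done, info = env.step(action)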

GPU-Accelerated Training#

  • Vectorized environments for parallel data collection

  • Batch policy inference on GPU

  • Mixed-precision training support (see the rollout sketch below)
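
As a rough sketch of how these pieces combine, the loop below collects a batched rollout from a vectorized environment with GPU policy inference under mixed precision. Here `env` and `policy` are hypothetical stand-ins (a gym-style vectorized environment and a PyTorch module), not RoboVerse Learn objects:

import torch

def collect_rollout(env, policy, horizon=128, device="cuda"):
    # `env` is assumed to be vectorized: reset()/step() operate on
    # batched arrays of shape (num_envs, ...)
    obs = env.reset()
    trajectory = []
    for _ in range(horizon):
        obs_t = torch.as_tensor(obs, dtype=torch.float32, device=device)
        # Batched GPU inference; autocast enables mixed precision
        with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
            actions = policy(obs_t)
        # Step all environments in lockstep
        obs, rewards, dones, infos = env.step(actions.float().cpu().numpy())
        trajectory.append((obs_t, actions))
    return trajectory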

Experiment Management#

  • Weights & Biases integration

  • TensorBoard logging

  • Checkpoint management

  • Hyperparameter sweeps
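
A minimal logging sketch using the standard `wandb` and `torch.utils.tensorboard` APIs directly, rather than any RoboVerse Learn wrapper; `train_step` here is a placeholder for one optimization step:

import wandb
from torch.utils.tensorboard import SummaryWriter

def train_step(step):
    # Placeholder for one optimization step; returns a scalar loss
    return 1.0 / (step + 1)

wandb.init(project="roboverse-learn", config={"task": "pick_cube", "lr": 3e-4})
writer = SummaryWriter(log_dir="runs/pick_cube")

for step in range(1000):
    loss = train_step(step)
    # Log the same scalar to both backends
    wandb.log({"train/loss": loss}, step=step)
    writer.add_scalar("train/loss", loss, step)

writer.close()
wandb.finish()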


Installation#

Most algorithms are included in the base installation. To install a specific suite, use the corresponding extra:

# Full IL suite
pip install -e ".[il]"

# Full RL suite
pip install -e ".[rl]"

# Vision-Language models
pip install -e ".[vla]"

Contributing#

Want to add a new algorithm? See our Contributing Guide for instructions on integrating new methods.