Tutorial 8: Replay Demonstrations#
Objective: Learn how to replay recorded trajectories for visualization and debugging.
What you’ll learn:
- Two replay modes: action-based and state-based
- Loading trajectories from the dataset
- Replaying across different simulators
- Debugging trajectory playback issues
Prerequisites: Completed Tutorial 7: Collect Demonstrations
Estimated time: 15 minutes
Replay Modes#
There are two control modes for replay:
| Mode | Method | Reliability | Use Case |
|---|---|---|---|
| Physics | Replays recorded actions | May fail across simulators | Testing controllers |
| States | Directly sets states | Always succeeds | Visualization, debugging |
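In practice the mode is selected with a command-line flag rather than a separate script. A minimal sketch of how a replay script might dispatch on it — the `--object-states` flag matches the commands shown below, but the parser itself is an illustrative assumption, not the actual `replay_demo.py` implementation:

```python
import argparse

def parse_replay_args(argv):
    """Parse replay options; --object-states switches from physics to state replay."""
    parser = argparse.ArgumentParser(description="Replay recorded demonstrations")
    parser.add_argument("--sim", default="isaacsim", help="simulator backend")
    parser.add_argument("--task", required=True, help="task name, e.g. close_box")
    parser.add_argument("--num_envs", type=int, default=1)
    parser.add_argument("--object-states", action="store_true",
                        help="replay by setting states instead of stepping physics")
    args = parser.parse_args(argv)
    # Derive the replay mode from the flag (hyphen becomes underscore in the attr name)
    args.mode = "states" if args.object_states else "physics"
    return args

args = parse_replay_args(["--task=close_box", "--num_envs", "4", "--object-states"])
print(args.mode)  # states
```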
Physics replay#
```bash
python scripts/advanced/replay_demo.py --sim=isaacsim --task=close_box --num_envs 4
```
task could also be:
- pick_cube
- stack_cube
- close_box
States replay#
```bash
python scripts/advanced/replay_demo.py --sim=isaacsim --task=close_box --num_envs 4 --object-states
```
task could also be:
- close_box
Verified commands#
Libero#
e.g.
```bash
python scripts/advanced/replay_demo.py --sim=isaacsim --task=libero.pick_butter
```
Simulator:
- isaacsim
- mujoco

Task:
- libero.kitchen_scene1_open_bottom_drawer
- libero.kitchen_scene1_open_top_drawer
- libero.kitchen_scene1_put_the_black_bowl_on_the_plate
Humanoid#
e.g.
```bash
python scripts/advanced/replay_demo.py --sim=isaacsim --num_envs=1 --robot=h1 --task=Stand --object-states
python scripts/advanced/replay_demo.py --sim=mujoco --num_envs=1 --robot=h1 --task=Stand --object-states
```
Simulator:
- isaacsim
- mujoco

Task:
- Stand
- Walk
- Run
Note: MuJoCo replay supports only one environment at a time, i.e. `num_envs` should be 1 (training, however, supports multiple environments).
Add scene:#
Note: only a single environment is supported when adding a scene.
```bash
python scripts/advanced/replay_demo.py --sim=isaacsim --task=CloseBox --num_envs 1 --scene=tapwater_scene_131
```
Code Highlights#
Trajectory Loading: Use `get_traj()` to automatically download and load trajectories from `roboverse_data`:
```python
from metasim.utils.demo_util import get_traj

# Load trajectory data (auto-downloads from HuggingFace if needed)
init_states, all_actions, _ = get_traj(traj_filepath, scenario.robots[0], env.handler)

# Action replay: execute recorded actions step by step
for step in range(len(all_actions)):
    actions = get_actions(all_actions, step, num_envs, robot)
    obs, reward, success, time_out, extras = env.step(actions)

# State replay: directly set states (more reliable)
for i, state in enumerate(captured_states):
    env.handler.set_states(state)
    env.handler.refresh_render()
    obs = env.handler.get_states()
```
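`get_actions` above is a helper from the replay script whose body isn't shown here; conceptually it slices the recorded action sequence at the current step and repeats it for each parallel environment. A hypothetical, simplified version (the real implementation may differ, e.g. in how it handles robot-specific action formats):

```python
def get_actions(all_actions, step, num_envs, robot=None):
    """Return the recorded action for `step`, replicated once per environment.

    Hypothetical sketch: assumes `all_actions` is a per-step list and holds
    the last recorded action once the trajectory runs out.
    """
    idx = min(step, len(all_actions) - 1)  # clamp past the end of the recording
    return [all_actions[idx] for _ in range(num_envs)]

# Recorded single-env trajectory of 3 actions, replayed in 2 parallel envs
demo = [{"gripper": 0.0}, {"gripper": 0.5}, {"gripper": 1.0}]
print(get_actions(demo, 1, 2))  # [{'gripper': 0.5}, {'gripper': 0.5}]
print(get_actions(demo, 5, 2))  # clamped to the last recorded action
```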
Two Replay Modes:
- Action replay: executes recorded actions through physics simulation (may fail across simulators)
- State replay: directly sets system states (guaranteed to succeed across simulators)

Automatic download: trajectories are automatically downloaded from HuggingFace when needed.
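The reliability gap between the two modes comes down to error accumulation: replayed actions are re-integrated through the target simulator's own physics, so any mismatch in dynamics compounds over time, while setting states bypasses integration entirely. A toy illustration with two "simulators" that differ only in their timestep (all numbers here are made up for illustration):

```python
# Toy 1-D "robot": position integrated from recorded velocity actions.
actions = [0.1] * 100  # recorded constant-velocity commands

def rollout(actions, dt):
    """Integrate actions with a given timestep and record the state trajectory."""
    x, states = 0.0, []
    for a in actions:
        x += a * dt
        states.append(x)
    return states

source_states = rollout(actions, dt=0.01)   # simulator the demo was recorded in
action_replay = rollout(actions, dt=0.011)  # target simulator: slightly different dt

# Action replay drifts: final positions no longer match.
drift = abs(action_replay[-1] - source_states[-1])

# State replay: copy the recorded states directly -- zero drift by construction.
state_replay = list(source_states)
print(f"action-replay drift: {drift:.3f}")
print(state_replay[-1] == source_states[-1])  # True
```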