
mjlab


mjlab combines Isaac Lab's proven API with best-in-class MuJoCo physics to provide lightweight, modular abstractions for RL robotics research and sim-to-real deployment.


Quick Start

mjlab requires an NVIDIA GPU for training (via MuJoCo Warp). macOS is supported only for evaluation, which is significantly slower.
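Before training, you can check that an NVIDIA GPU is visible to the system (this assumes the NVIDIA driver is already installed; it is a generic check, not an mjlab command):

nvidia-smi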

# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

Run the demo (no installation needed):

uvx --from mjlab --with "mujoco-warp @ git+https://github.com/google-deepmind/mujoco_warp@f2f795796fc433adf8e235f01fae3747585ae5db" demo

This launches an interactive viewer with a pre-trained Unitree G1 agent tracking a reference dance motion in MuJoCo Warp.

❓ Having issues? See the FAQ.

Or try it in Google Colab (no local setup required): launch the demo directly in your browser with an interactive Viser viewer.


Installation

From source:

git clone https://github.com/mujocolab/mjlab.git
cd mjlab
uv run demo

From PyPI:

uv add mjlab "mujoco-warp @ git+https://github.com/google-deepmind/mujoco_warp@f2f795796fc433adf8e235f01fae3747585ae5db"

A Dockerfile is also provided.
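A typical build-and-run flow might look like the following; the image tag and run flags are illustrative, so check the Dockerfile and the Installation Guide for the exact setup:

docker build -t mjlab .
docker run --gpus all -it --rm mjlab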

For full setup instructions, see the Installation Guide.


Training Examples

1. Velocity Tracking

Train a Unitree G1 humanoid to follow velocity commands on flat terrain:

uv run train Mjlab-Velocity-Flat-Unitree-G1 --env.scene.num-envs 4096

Multi-GPU Training: Scale to multiple GPUs using --gpu-ids:

uv run train Mjlab-Velocity-Flat-Unitree-G1 \
  --gpu-ids 0 1 \
  --env.scene.num-envs 4096

See the Distributed Training guide for details.

Evaluate a policy while training (fetches latest checkpoint from Weights & Biases):

uv run play Mjlab-Velocity-Flat-Unitree-G1 --wandb-run-path your-org/mjlab/run-id

2. Motion Imitation

Train a Unitree G1 to mimic reference motions. mjlab uses WandB to manage reference motion datasets:

  1. Create a registry collection in your WandB workspace named Motions

  2. Set your WandB entity:

    export WANDB_ENTITY=your-organization-name

  3. Process and upload motion files:

    MUJOCO_GL=egl uv run src/mjlab/scripts/csv_to_npz.py \
      --input-file /path/to/motion.csv \
      --output-name motion_name \
      --input-fps 30 \
      --output-fps 50 \
      --render  # Optional: generates preview video

Note

For detailed motion preprocessing instructions, see the BeyondMimic documentation.
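To confirm the upload worked, you can fetch the motion artifact with the WandB CLI (the path below is a placeholder and should match the --registry-name you pass to training):

wandb artifact get your-org/motions/motion-name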

Train and Play

uv run train Mjlab-Tracking-Flat-Unitree-G1 --registry-name your-org/motions/motion-name --env.scene.num-envs 4096

uv run play Mjlab-Tracking-Flat-Unitree-G1 --wandb-run-path your-org/mjlab/run-id

3. Sanity-check with Dummy Agents

Use the built-in dummy agents to sanity-check your MDP before training:

uv run play Mjlab-Your-Task-Id --agent zero  # Sends zero actions.
uv run play Mjlab-Your-Task-Id --agent random  # Sends uniform random actions.

Note

When running motion-tracking tasks, add --registry-name your-org/motions/motion-name to the command.
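For example, a sanity-check run for a tracking task might look like this (the task id and registry path are placeholders):

uv run play Mjlab-Tracking-Flat-Unitree-G1 --agent zero --registry-name your-org/motions/motion-name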


Documentation

Full documentation is available at mujocolab.github.io/mjlab.


Development

Run tests:

make test          # Run all tests
make test-fast     # Skip slow integration tests

Format code:

uvx pre-commit install
make format

Compile documentation locally:

uv pip install -r docs/requirements.txt
make docs

License

mjlab is licensed under the Apache License, Version 2.0.

Third-Party Code

Some portions of mjlab are forked from external projects:

  • src/mjlab/utils/lab_api/ — Utilities forked from NVIDIA Isaac Lab (BSD-3-Clause license, see file headers)

Forked components retain their original licenses. See file headers for details.


Acknowledgments

mjlab wouldn't exist without the excellent work of the Isaac Lab team, whose API design and abstractions mjlab builds upon.

Thanks to the MuJoCo Warp team — especially Erik Frey and Taylor Howell — for answering our questions, giving helpful feedback, and repeatedly implementing features we requested.