| Motion | AMP Animation | Sensors | RL + AMP | Sim2Sim |
|---|---|---|---|---|
| Walk | | | | |
| Run | | | | |
This framework is an RL-based locomotion control system designed for the full-sized humanoid robot TienKung. It integrates AMP-style rewards with periodic gait rewards to produce natural, stable, and efficient walking and running behaviors.
The codebase is built on IsaacLab, supports Sim2Sim transfer to MuJoCo, and features a modular architecture for seamless customization and extension. Additionally, it incorporates ray-casting-based sensors for enhanced perception, enabling precise environmental interaction and obstacle avoidance.
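As a rough illustration of this reward design (not TienKung-Lab's actual implementation), the sketch below blends an AMP-style discriminator reward with a periodic gait-clock reward. All names (`Discriminator`, `amp_reward`, the weights, and the dimensions) are placeholders:

```python
import torch

# Illustrative sketch only: blending an AMP-style discriminator reward with a
# periodic gait reward. Names and dimensions are hypothetical placeholders and
# do not mirror TienKung-Lab's actual modules.

class Discriminator(torch.nn.Module):
    """Tiny stand-in for an AMP discriminator over (state, next_state) pairs."""
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * obs_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, 1),
        )

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

def amp_reward(disc, s, s_next):
    # A common AMP-style reward: r = max(0, 1 - 0.25 * (d - 1)^2)
    d = disc(s, s_next).squeeze(-1)
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

def periodic_gait_reward(phase, foot_contact):
    # Reward stance/swing timing that matches a sinusoidal gait clock.
    expected_stance = (torch.sin(2 * torch.pi * phase) > 0).float()
    return 1.0 - torch.abs(foot_contact - expected_stance).mean(dim=-1)

obs_dim = 45  # placeholder observation size
disc = Discriminator(obs_dim)
s, s_next = torch.randn(8, obs_dim), torch.randn(8, obs_dim)
phase = torch.rand(8, 1)
foot_contact = torch.randint(0, 2, (8, 2)).float()  # two feet, binary contact

w_amp, w_gait = 0.7, 0.3  # illustrative weights
total = w_amp * amp_reward(disc, s, s_next) + w_gait * periodic_gait_reward(phase, foot_contact)
print(total.shape)  # torch.Size([8])
```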
- Motion retargeting support (2025-09-27)
- Add more sensors
- Add Perceptive Control
TienKung-Lab is built with IsaacSim 4.5.0 and IsaacLab 2.1.0.
- Install Isaac Lab by following the installation guide. We recommend the conda installation, as it simplifies calling Python scripts from the terminal.
- Clone this repository separately from the Isaac Lab installation (i.e., outside the `IsaacLab` directory).
- Using a Python interpreter that has Isaac Lab installed, install the library:

  ```bash
  cd TienKung-Lab
  pip install -e .
  ```

- Install the rsl-rl library:

  ```bash
  cd TienKung-Lab/rsl_rl
  pip install -e .
  ```

- Verify that the extension is correctly installed by running the following command:
  ```bash
  python legged_lab/scripts/train.py --task=walk --logger=tensorboard --headless --num_envs=64
  ```

Motion retargeting pipeline: AMASS → GMR → TienKung-Lab
This section uses GMR for motion retargeting. TienKung currently supports motion retargeting only for SMPL-X data (AMASS, OMOMO).
1. Prepare the dataset and perform motion retargeting with GMR.

   ```bash
   python scripts/smplx_to_robot.py --smplx_file <path_to_smplx_data> --robot tienkung --save_path <path_to_save_robot_data.pkl>
   ```

2. Data Processing and Data Saving.
The dataset consists of two parts with distinct functions and formats, requiring conversion in two steps.
- `motion_visualization/`
  Used for motion playback with `play_amp_animation.py` to check motion correctness and quality.
  Data fields: `[root_pos, root_rot, dof_pos, root_lin_vel, root_ang_vel, dof_vel]`
- `motion_amp_expert/`
  Used during training as expert reference data for AMP.
  Data fields: `[dof_pos, dof_vel, end-effector pos]` (illustrated in the sketch after this list)
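To make the two frame layouts concrete, here is a minimal sketch of how a frame in each file could be assembled. The DoF count (20) and end-effector count (4) are placeholders, not TienKung's actual dimensions:

```python
import numpy as np

# Illustrative sketch of the two frame layouts described above.
# NUM_DOF and NUM_EE are placeholders, not TienKung's actual values.
NUM_DOF, NUM_EE = 20, 4

def visualization_frame(root_pos, root_rot, dof_pos, root_lin_vel, root_ang_vel, dof_vel):
    """Playback frame: [root_pos(3), root_rot(4), dof_pos, root_lin_vel(3), root_ang_vel(3), dof_vel]."""
    return np.concatenate([root_pos, root_rot, dof_pos, root_lin_vel, root_ang_vel, dof_vel])

def amp_expert_frame(dof_pos, dof_vel, ee_pos):
    """AMP expert frame: [dof_pos, dof_vel, end-effector positions] -- no root state."""
    return np.concatenate([dof_pos, dof_vel, ee_pos.reshape(-1)])

vis = visualization_frame(
    root_pos=np.zeros(3), root_rot=np.array([1.0, 0.0, 0.0, 0.0]),
    dof_pos=np.zeros(NUM_DOF), root_lin_vel=np.zeros(3),
    root_ang_vel=np.zeros(3), dof_vel=np.zeros(NUM_DOF),
)
expert = amp_expert_frame(np.zeros(NUM_DOF), np.zeros(NUM_DOF), np.zeros((NUM_EE, 3)))
print(vis.shape, expert.shape)  # (53,) (52,) with the placeholder sizes above
```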
- Step 1: Data Processing and Visualization Data Saving.

  ```bash
  python legged_lab/scripts/gmr_data_conversion.py --input_pkl <path_to_save_robot_data.pkl> --output_txt legged_lab/envs/tienkung/datasets/motion_visualization/motion.txt
  ```

  Note: Before starting step 2, set the `amp_motion_files_display` path in the config to the file generated in step 1.

- Step 2: Motion Visualization and Expert Data Saving.

  ```bash
  python legged_lab/scripts/play_amp_animation.py --task=walk --num_envs=1 --save_path legged_lab/envs/tienkung/datasets/motion_amp_expert/motion.txt --fps 30.0
  ```

  Note: After step 2, set the `amp_motion_files` path in the config to the file generated in step 2; a hedged config sketch follows this list.
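As a reminder of where those two paths go, here is a hypothetical sketch of the config wiring. The attribute names come from the notes above, but the class name, file location, and whether the fields are lists or single paths are assumptions about the codebase:

```python
# Hypothetical sketch of wiring the generated files into a task config.
# The attribute names come from the steps above; the class name and the
# list-of-paths layout are placeholders, not TienKung-Lab's actual config.

class WalkTaskCfg:  # placeholder for the real task config class
    # Step 1 output: playback data consumed by play_amp_animation.py
    amp_motion_files_display = [
        "legged_lab/envs/tienkung/datasets/motion_visualization/motion.txt",
    ]
    # Step 2 output: expert reference data consumed by AMP during training
    amp_motion_files = [
        "legged_lab/envs/tienkung/datasets/motion_amp_expert/motion.txt",
    ]
```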
Visualize the motion by updating the simulation with data from `tienkung/datasets/motion_visualization`.

```bash
python legged_lab/scripts/play_amp_animation.py --task=walk --num_envs=1
python legged_lab/scripts/play_amp_animation.py --task=run --num_envs=1
```

Visualize the motion with sensors by updating the simulation with data from `tienkung/datasets/motion_visualization`.

```bash
python legged_lab/scripts/play_amp_animation.py --task=walk_with_sensor --num_envs=1
python legged_lab/scripts/play_amp_animation.py --task=run_with_sensor --num_envs=1
```

Train the policy using AMP expert data from `tienkung/datasets/motion_amp_expert`.

```bash
python legged_lab/scripts/train.py --task=walk --headless --logger=tensorboard --num_envs=4096
python legged_lab/scripts/train.py --task=run --headless --logger=tensorboard --num_envs=4096
```

Run the trained policy.
```bash
python legged_lab/scripts/play.py --task=walk --num_envs=1
python legged_lab/scripts/play.py --task=run --num_envs=1
```

Evaluate the trained policy in MuJoCo to perform cross-simulation validation.
`Exported_policy/` contains pretrained policies provided by the project. When you run the play script, the trained policy is exported automatically and saved to a path like `logs/run/[timestamp]/exported/policy.pt`.
```bash
python legged_lab/scripts/sim2sim.py --task walk --policy Exported_policy/walk.pt --duration 10
python legged_lab/scripts/sim2sim.py --task run --policy Exported_policy/run.pt --duration 10
```
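If you want to drive an exported policy from your own script instead, a minimal sketch is below, assuming `policy.pt` is a TorchScript module that maps a flat observation vector to actions (the observation size here is a placeholder):

```python
import torch

# Minimal sketch of running an exported policy outside IsaacLab, assuming
# policy.pt is a TorchScript module mapping a flat observation vector to
# actions. The observation size (45) is a placeholder.
policy = torch.jit.load("Exported_policy/walk.pt", map_location="cpu")
policy.eval()

obs = torch.zeros(1, 45)  # build this from your simulator's state instead
with torch.no_grad():
    actions = policy(obs)
print(actions.shape)
```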
```bash
tensorboard --logdir=logs/walk
tensorboard --logdir=logs/run
```

We have a pre-commit template to automatically format your code. To install pre-commit:
```bash
pip install pre-commit
```

Then you can run pre-commit with:

```bash
pre-commit run --all-files
```

In some VS Code versions, the indexing of part of the extensions is missing. In this case, add the path to your extension in `.vscode/settings.json` under the key `"python.analysis.extraPaths"`.
```json
{
    "python.analysis.extraPaths": [
        "${workspaceFolder}/legged_lab",
        "<path-to-IsaacLab>/source/isaaclab_tasks",
        "<path-to-IsaacLab>/source/isaaclab_mimic",
        "<path-to-IsaacLab>/source/extensions",
        "<path-to-IsaacLab>/source/isaaclab_assets",
        "<path-to-IsaacLab>/source/isaaclab_rl",
        "<path-to-IsaacLab>/source/isaaclab"
    ]
}
```

- GMR: General Motion Retargeting.
- Legged Lab: a direct IsaacLab Workflow for Legged Robots.
- Humanoid-Gym: a reinforcement learning (RL) framework based on NVIDIA Isaac Gym, with Sim2Sim support.
- RSL RL: a fast and simple implementation of RL algorithms.
- AMP_for_hardware: codebase for learning skills from short reference motions using Adversarial Motion Priors.
- Omni-Perception: a perception library for legged robots, which provides a set of sensors and perception algorithms.
- Warp: a Python framework for writing high-performance simulation and graphics code.
If you're interested in TienKung-Lab, you're welcome to join our WeChat group for discussions.