Code for the paper "Discrete Latent Plans via Semantic Skill Abstractions", accepted at ICLR 2025 [PDF].
For the LOReL dataset, first download the dataset from the LOReL repository. Then run utils/h5py2pkl.py to convert the dataset from HDF5 to pickle. Finally, place the generated .pkl file in data/lorel.
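The conversion follows the usual HDF5-to-pickle pattern sketched below. This is a minimal sketch only: the input/output file names and the flat dict layout are assumptions, so defer to utils/h5py2pkl.py for the actual keys and structure.

```python
import pickle
import h5py
import numpy as np

def h5_to_dict(group):
    """Recursively copy an h5py group into a plain dict of numpy arrays."""
    out = {}
    for key, item in group.items():
        if isinstance(item, h5py.Group):
            out[key] = h5_to_dict(item)
        else:  # h5py.Dataset
            out[key] = np.asarray(item)
    return out

# Placeholder paths; use the ones expected by utils/h5py2pkl.py.
with h5py.File("lorel.h5", "r") as f:
    data = h5_to_dict(f)

with open("data/lorel/lorel.pkl", "wb") as f:
    pickle.dump(data, f)
```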
For the Kitchen dataset, the processed dataset can be downloaded from here. Unzip the file and place the two .pkl files in data/kitchen and data/kitchen_image, respectively. The state in the Kitchen image dataset consists of a 512-dimensional image embedding from ResNet18 and the 9-dimensional joint state of the robotic arm.
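As a quick sanity check after unzipping, you can confirm the state dimensionality (512 + 9 = 521). The file name and the "observations" key below are assumptions; inspect the pickle's top-level keys if yours differ.

```python
import pickle
import numpy as np

# Load the image-based Kitchen data (placeholder file name).
with open("data/kitchen_image/kitchen_image.pkl", "rb") as f:
    data = pickle.load(f)

obs = np.asarray(data["observations"][0])
# Each state should be 512 (ResNet18 embedding) + 9 (joint state) dims.
assert obs.shape[-1] == 512 + 9, obs.shape
```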
The LADS checkpoints can be downloaded from here. There are four checkpoints: LOReL with state observations, LOReL with image observations, Kitchen with state observations, and Kitchen with image observations. To evaluate these checkpoints, run ./scripts/eval.sh.
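If you want to inspect a checkpoint before running evaluation, the sketch below assumes the files are standard PyTorch checkpoints; the file name is a placeholder, and the saved layout (a raw state_dict vs. a wrapper dict) depends on how LADS saves models, so check the keys first.

```python
import torch

# Load on CPU to inspect without a GPU. On newer PyTorch versions, pass
# weights_only=False if the checkpoint stores non-tensor objects.
ckpt = torch.load("checkpoints/lorel_state.pt", map_location="cpu")

# Print a few top-level keys to see whether this is a state_dict or a
# wrapper containing one (e.g. under a "model" or "state_dict" entry).
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
```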
For training, please see ./scripts/train.sh.
If you find our work useful in your research, please cite our paper:
@inproceedings{jiang2025discrete,
  title={Discrete Latent Plans via Semantic Skill Abstractions},
  author={Jiang, Haobin and Wang, Jiangxing and Lu, Zongqing},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}