REGRACE is a novel approach that addresses the challenges of scalability and perspective difference in re-localization by using LiDAR-based submaps. Accepted to IROS 2025.
- Create a virtual environment with Python 3.11. We tested REGRACE using CUDA 11.7.

  ```bash
  python3.11 -m venv .venv
  source .venv/bin/activate
  ```
- Install the dependencies using pip or pdm:

  ```bash
  pip install -r requirements.txt
  # or (choose one)
  pdm install
  ```
- Compile and install the `pointnet2` package. Please follow the instructions in the `pointnet2-wheel` folder to compile a wheel and install it.
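  In case it helps, a typical wheel build and install resembles the sketch below. This is only an assumed workflow; the exact steps documented in the `pointnet2-wheel` folder take precedence.

  ```bash
  cd pointnet2-wheel
  pip wheel . --no-deps -w dist/   # compile a wheel from the folder's sources
  pip install dist/*.whl           # install the freshly built wheel
  ```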
- For good practice, export the following CUDA environment variables in your `~/.bashrc`:

  ```bash
  export 'PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512'
  export 'CUBLAS_WORKSPACE_CONFIG=:4096:8'
  ```
- Download the SemanticKITTI dataset.
- Download the Cylinder3D weights from here and save them to `./config/cyl3d_weights.pt`.
- If you want to use the pretrained model, download the weights trained on KITTI sequences 00 to 10 from our latest release. Further instructions on how to use the weights are provided in the Testing section.
First, adjust the parameters in the configuration YAML `data-generation.yaml` (a sketch follows this list):

- `sequence` to the desired KITTI sequence
- `kitti_dir` to the root path of the SemanticKITTI dataset
- `output_folder` to the desired output folder
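For orientation, the relevant entries could look as follows. The paths are illustrative and the key layout is an assumption; defer to the shipped `data-generation.yaml`:

```yaml
sequence: "00"                                 # KITTI sequence to preprocess
kitti_dir: /datasets/SemanticKITTI             # root path of the SemanticKITTI dataset
output_folder: /datasets/regrace/preprocessed  # where the submaps will be written
```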
Then, run the following command:

```bash
python run.py --config_file config/data-generation.yaml --generate_submaps
```

This will create a folder in the following structure:
```
preprocessed_data_folder
├── seq00
│   ├── single-scan
│   │   ├── label-prediction
│   │   └── probability-labels
│   └── submap
│       ├── all-points
│       └── cluster
├── seq01
...
```
The `single-scan` folder contains the predictions of the Cylinder3D model for each scan in the sequence. The `submap` folder contains the submaps generated by accumulating scans; those submaps are already voxelized (`all-points`) and clustered (`cluster`). The total size for KITTI sequences 00 to 10 is around 1.5TB. If you don't have enough disk space, you can follow the instructions at the end of this section.
We then compact the submaps into a parquet file. First, adjust the parameters in the configuration YAML `default.yaml` (see the sketch after this list):

- `dataset/train_folders` and `dataset/test_folders` to the folders of the preprocessed data for each KITTI sequence (`preprocessed_data_folder/seqXX/submap/cluster`). You can add multiple folders as a list.
- `dataset/preprocessing_folder` to the folder where the compressed preprocessed data should be stored
- `flag/generate_triplets` to `True`
- `flag/train` and `flag/test` to `False`
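A minimal sketch of those entries, assuming the `dataset`/`flag` nesting implied by the parameter names above (paths are placeholders):

```yaml
dataset:
  train_folders:
    - /datasets/regrace/preprocessed/seq00/submap/cluster
    - /datasets/regrace/preprocessed/seq01/submap/cluster
  test_folders:
    - /datasets/regrace/preprocessed/seq08/submap/cluster
  preprocessing_folder: /datasets/regrace/preprocessing_folder
flag:
  generate_triplets: True
  train: False
  test: False
```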
Then, run the following command:

```bash
python run.py --config_file <YOUR_CONFIG>.yaml
```

This will create a folder in the following structure:
```
preprocessing_folder
├── 00
│   ├── pickle
│   └── parquet
├── 01
│   ├── pickle
│   └── parquet
...
```
and a folder `<repo path>/data/pickle_list/eval_seqXX` containing the compacted dataset for faster loading during training and testing.
- Uncomment L28-29 in `generate_cluster.py`. This will delete each item in the folder `all-points` when clustering the submap into the folder `cluster`.
- Generate the submaps following the instructions in Generating submaps and clusters.
- Generate the triplets following Generating the compacted pickle and parquet files.

This will reduce the total submap folder size to 250GB. You may delete it after generating the triplets. Note that if you change the `test_folders` or `train_folders` parameters in `default.yaml`, you have to generate the triplets again, and for that you need the submap folder.
To test the model, you need a trained model; weights are available in the latest release. Adjust the parameters in the configuration YAML `default.yaml` as follows (a sketch follows this list):

- Set `flag/train` to `False` and `flag/test` to `True`.
- Add the path to the weights in `training/checkpoint_path`.
- Set `flags/initialize_from_checkpoint` to `True`.
- Set `dataset/preprocessing_folder` to the compressed preprocessed data folder.
- Set `flag/generate_triplets` to `False`.
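Put together, the testing configuration might read as below. The nesting and paths are assumptions inferred from the parameter names; the shipped `default.yaml` is authoritative:

```yaml
flag:
  train: False
  test: True
  generate_triplets: False
  initialize_from_checkpoint: True
training:
  checkpoint_path: /path/to/regrace_weights.pt
dataset:
  preprocessing_folder: /datasets/regrace/preprocessing_folder
```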
Then, run the following command:

```bash
python run.py --config_file <YOUR_CONFIG>.yaml
```

Your output will be a table with the metrics for the test set.
To train the model, adjust the parameters in the configuration YAML `default.yaml` as follows (a sketch follows this list):

- Set `dataset/preprocessing_folder` to the compressed preprocessed data folder.
- Set `flag/generate_triplets` to `False`.
- Set `flag/train` to `True` and `flag/test` to `False`.
- Set `flag/initialize_from_checkpoint` to `False`.
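The training counterpart of the sketch above, with the same caveats about assumed nesting and placeholder paths:

```yaml
dataset:
  preprocessing_folder: /datasets/regrace/preprocessing_folder
flag:
  generate_triplets: False
  train: True
  test: False
  initialize_from_checkpoint: False
```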
Then, run the following command:

```bash
python run.py --config_file <YOUR_CONFIG>.yaml
```

If you want to use wandb to log the training, you can set the `wandb_logging` flag in the configuration YAML to `True` and set the project and entity in `utils.py` to your desired project and entity (usually your username). Don't forget to log in first with `wandb login`.
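The hookup in `utils.py` presumably reduces to something like the sketch below; the names `WANDB_PROJECT`, `WANDB_ENTITY`, and `init_wandb` are hypothetical stand-ins, not the repo's actual identifiers:

```python
import wandb

# Hypothetical placeholders: point these at your own wandb project and entity.
WANDB_PROJECT = "regrace"
WANDB_ENTITY = "<your-username>"

def init_wandb(config: dict):
    # Start a wandb run that records the training configuration alongside the metrics.
    return wandb.init(project=WANDB_PROJECT, entity=WANDB_ENTITY, config=config)
```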
For the final refinement step, set the configuration YAML `default.yaml` as:
```yaml
training:
  batch_size: 90
  checkpoint_path: <path_to_checkpoint>
  epochs: 50
  loss:
    margin: 1.0
    p: 2
    type: both
  num_workers: 12
  optimizer:
    lr: 1.0e-05
  scheduler:
    decay_rate: 0.1
    milestones:
      - 25
```

Also set `flags/initialize_from_checkpoint` to `True`. Then, run the following command:
```bash
python run.py --config_file <YOUR_CONFIG>.yaml
```