This repository is the official PyTorch implementation of the CVPR paper "Diff-Palm: Realistic Palmprint Generation with Polynomial Creases and Intra-Class Variation Controllable Diffusion Models".
Palmprint recognition is significantly limited by the lack of large-scale publicly available datasets.
Previous methods have adopted Bézier curves to simulate palm creases, which then serve as input for conditional GANs to generate realistic palmprints.
However, without fine-tuning on real data, the performance of recognition models trained on these synthetic datasets declines drastically, indicating a large gap between generated and real palmprints.
This stems primarily from an inaccurate palm crease representation and the difficulty of balancing intra-class variation against identity consistency.
To address this, we introduce a polynomial-based palm crease representation that provides a new palm crease generation mechanism more closely aligned with the real distribution.
We also propose a palm-crease-conditioned diffusion model with a novel intra-class variation control method.
By applying our proposed methods, we generate large-scale palmprint datasets with controllable intra-class variations for training recognition models.
Clone our repo to your local machine using the following command:
```
git clone https://github.com/Ukuer/Diff-Palm.git
cd Diff-Palm
```
- The dependent packages are listed in the `requirements.txt` file.
- Note that this diffusion model is based on guided-diffusion. If you run into environment problems, you may find solutions in guided-diffusion and its parent repo improved-diffusion.
The palm-crease-conditioned diffusion model is placed in `DiffModels`. To train this model, please refer to the following steps:
- Prepare the palmprint ROI images.
- Extract the palm crease images using PCEM.
- Place the paired palmprint and palm crease images in the `palm` and `label` sub-directories respectively, as follows:

```
DATASETS/
├── palm/
│   ├── 1.png
│   ├── 2.png
│   └── ...
└── label/
    ├── 1.png
    ├── 2.png
    └── ...
```

- Modify `run.sh` to meet your requirements, then run `bash run.sh` to start training. You need to kill the process manually to finish training.
- Additionally, we have found that training on plain palmprint images easily results in synthesized images with a color shift. A short discussion of this problem is in the issues. To avoid it, we apply `scale.py` to scale each palmprint image.
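Before launching `run.sh`, it can help to confirm that the `palm`/`label` pairing is complete. A minimal sketch, assuming the `DATASETS` layout above (the `check_pairs` helper is ours, not part of the repo):

```python
# Sanity-check the paired dataset layout: every palmprint in palm/
# should have a same-named crease image in label/.
from pathlib import Path

def check_pairs(root: str) -> list:
    """Return the sorted stems present in palm/ but missing from label/."""
    palm = {p.stem for p in Path(root, "palm").glob("*.png")}
    label = {p.stem for p in Path(root, "label").glob("*.png")}
    return sorted(palm - label)

if __name__ == "__main__":
    missing = check_pairs("DATASETS")
    if missing:
        print("missing crease labels for:", missing)
    else:
        print("all palm/label pairs matched")
```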
Inference is two-stage: first synthesize polynomial crease images, then generate palmprints conditioned on these creases.
This code is in `PolyCreases`.
- Run `syn_polypalm_mp.py` to synthesize polynomial crease images.
- Note that each synthesized image is regarded as a distinct identity.
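The exact crease parameterization lives in `syn_polypalm_mp.py`; purely as an illustration of the idea, here is a toy sketch that rasterizes random low-order polynomials into a binary crease map (the function name, parameter ranges, and polynomial order are our assumptions, not the paper's):

```python
# Toy polynomial-crease sketch: each curve is y(x) = polyval(coeffs, x)
# with random coefficients, drawn 1 pixel wide into a binary image.
import numpy as np

def synth_poly_creases(size=128, n_creases=3, order=3, seed=0):
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size), dtype=np.uint8)
    x = np.linspace(-1.0, 1.0, size)
    for _ in range(n_creases):
        coeffs = rng.uniform(-0.5, 0.5, order + 1)  # random polynomial coefficients
        y = np.polyval(coeffs, x)                   # evaluate the curve over x
        rows = np.clip(((y + 1.0) * 0.5 * (size - 1)).astype(int), 0, size - 1)
        img[rows, np.arange(size)] = 255            # rasterize the crease
    return img

crease = synth_poly_creases()
print(crease.shape, int(crease.max()))
```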
This code is in `DiffModels`.
- Download the pretrained weights from Google drive or Baidu drive.
- Place the pretrained file in `./checkpoint/diffusion-netpalm-scale-128`.
- Modify `sample.sh` to meet your requirements, then run `bash sample.sh` to synthesize realistic palmprint datasets.
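Since each crease image defines one identity, the generated palmprints can be grouped into per-identity folders for recognition training. A hypothetical post-processing sketch, assuming generated files are named `<crease_stem>_<k>.png` (this naming scheme and the `id_` prefix are our assumptions, not the repo's):

```python
# Group generated palmprints by the crease image (identity) they came from,
# producing out_dir/id_<stem>/<k>.png for each sample.
from pathlib import Path
import shutil

def group_by_identity(gen_dir: str, out_dir: str) -> int:
    """Assumes generated files are named '<crease_stem>_<k>.png'; returns the count moved."""
    moved = 0
    for f in Path(gen_dir).glob("*.png"):
        stem, _, k = f.stem.rpartition("_")   # split identity stem from sample index
        dest = Path(out_dir, f"id_{stem}")
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(f, dest / f"{k}.png")
        moved += 1
    return moved
```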
TODO
TODO
Our implementation is based on the following works: PCE-Palm, guided-diffusion, ElasticFace.
```bibtex
@misc{jin2025diffpalm,
  title={Diff-Palm: Realistic Palmprint Generation with Polynomial Creases and Intra-Class Variation Controllable Diffusion Models},
  author={Jianlong Jin and Chenglong Zhao and Ruixin Zhang and Sheng Shang and Jianqing Xu and Jingyun Zhang and ShaoMing Wang and Yang Zhao and Shouhong Ding and Wei Jia and Yunsheng Wu},
  year={2025},
  eprint={2503.18312},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2503.18312},
}
```