```bash
conda create -n WaveDH python=3.10   # create a virtual env
conda activate WaveDH                # activate the env
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt     # install other needed packages
```
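As a quick sanity check (this snippet is a suggestion of ours, not part of the repository), you can confirm that the expected PyTorch build is installed and that it sees your GPU:

```python
# Sanity check (not part of the repo): verify the PyTorch build and CUDA.
import torch

print(torch.__version__)          # expect 1.13.1
print(torch.cuda.is_available())  # expect True with the pytorch-cuda=11.7 build
```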
The RESIDE dataset is available from the RESIDE official website. We use the same data structure as DehazeFormer; please refer to their repository to prepare the datasets.
Finally, you should get the following dataset structure:
```
data
├─ RESIDE-IN
│  ├─ train
│  │  ├─ GT
│  │  │  └─ ... (image filename)
│  │  └─ hazy
│  │     └─ ... (corresponds to the former)
│  └─ test
│     └─ ...
└─ RESIDE-OUT
   ├─ train
   │  ├─ GT
   │  │  └─ ... (image filename)
   │  └─ hazy
   │     └─ ... (corresponds to the former)
   └─ test
      └─ ...
```
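Once the data is in place, a small script can catch layout mistakes early. The helper below is hypothetical (not part of the repo) and assumes hazy images share filenames with their GT counterparts, as the tree above implies:

```python
# Hypothetical helper (not in the repo): verify that every hazy image has a
# same-named ground-truth counterpart under data/<dataset>/<split>/.
import os

def check_pairs(root="./data", dataset="RESIDE-IN", split="train"):
    gt = set(os.listdir(os.path.join(root, dataset, split, "GT")))
    hazy = set(os.listdir(os.path.join(root, dataset, split, "hazy")))
    missing = sorted(hazy - gt)
    if missing:
        print(f"{len(missing)} hazy images lack a GT counterpart, e.g. {missing[:3]}")
    else:
        print(f"All {len(hazy)} hazy/GT pairs match in {dataset}/{split}")

check_pairs()
```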
Run the following script to test the trained model:
```bash
python test.py --data_dir (path to dataset) --dataset (dataset name) --exp (exp name)
```
For example, to test WaveDH on the SOTS indoor set:

```bash
python test.py --data_dir ./data --dataset RESIDE-IN --exp indoor
```
- The benchmark results of our models can be downloaded from WaveDH and WaveDH-Tiny.
- Performance in PSNR/SSIM on SOTS-indoor and SOTS-outdoor (a metric-computation sketch follows the table):
| Model | SOTS-indoor (PSNR/SSIM) | SOTS-outdoor (PSNR/SSIM) |
|---|---|---|
| WaveDH | 39.35/0.995 | 34.89/0.984 |
| WaveDH-Tiny | 36.93/0.992 | 34.52/0.983 |
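For reference, PSNR/SSIM figures like those above are commonly computed as in the sketch below. This is an illustrative example using scikit-image, not the repository's actual evaluation code; `test.py` may use its own routines:

```python
# Illustrative metric computation with scikit-image; not the repo's test code.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(gt, restored):
    """gt, restored: HxWxC float arrays scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, restored, data_range=1.0)
    ssim = structural_similarity(gt, restored, channel_axis=-1, data_range=1.0)
    return psnr, ssim
```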
- Add instructions
- Add test code
- Add checkpoint files
- Add training code
If you find this work useful in your research, please consider citing:
```bibtex
@article{hwang2024wavedh,
  title={WaveDH: Wavelet Sub-bands Guided ConvNet for Efficient Image Dehazing},
  author={Seongmin Hwang and Daeyoung Han and Cheolkon Jung and Moongu Jeon},
  journal={arXiv preprint arXiv:2404.01604},
  year={2024}
}
```
Thanks to Yuda Song et al. for releasing the official implementation of the DehazeFormer paper. Our code is heavily borrowed from their implementation.