- This shows the performance of HRNet on the full dataset. The images on the left are the ground truth, and the images on the right are the predicted results. The IoU values of the upper and lower images are 94.76% and 97.45%, respectively.
Please download the datasets below and put them in the corresponding paths.
- Training dataset [[Link]](https://drive.google.com/file/d/1Ugk6c_iadvlD-ycxNQlw9SHLDDqNAn1f/view?usp=sharing)
```
/SAR-water-segmentation/data/train_full/
```
- Test dataset [[Link]](https://drive.google.com/file/d/1MbyK4ljGmin5JeRroO80qTicbYfxGVAu/view?usp=sharing)
```
/SAR-water-segmentation/data/val_full/
```
## Pretrained model
- When training HRNet, download this model and put it in the path below.
This model is pre-trained on the ImageNet dataset, NOT on our KOMPSAT-5 dataset. [[Link]](https://drive.google.com/file/d/1euYbOpJbs9di7W8IO4_hDizN_EoRWfAA/view?usp=sharing)
- When testing HRNet, download this model and put it in the path below.
This model is pre-trained on both the ImageNet dataset AND our KOMPSAT-5 dataset. [[Link]](https://drive.google.com/file/d/1gfLbsv9_6ZNtG7K3bmUf2r1Ig0CfQHIo/view?usp=sharing)
- HRNet is one of the latest models for learning-based image segmentation. Its typical features are that, during training, (1) high-resolution features are retained while low-resolution features are simultaneously extracted in parallel; (2) the model repeatedly exchanges feature information between the different resolutions; (3) because it has many layers and a large number of weights, it consumes a lot of memory and processes each image relatively slowly, but its performance is superior to previous models.
- FCN is one of the most classic and representative models for image semantic segmentation. It adapts an existing classification model such as VGG16 by replacing the fully connected layers at the end with convolutional layers. Its representative characteristics are: (1) by adding upsampling layers on top of the coarse feature map predicted by the convolutional layers, it can produce dense predictions restored to the original image size; (2) by adding the skip architecture, local information from shallow layers can be combined with semantic information from deep layers.
- U-Net is an end-to-end FCN-based model proposed for image semantic segmentation. It consists of a contracting path, which captures the overall context of the input image, and a symmetric expanding path, which recovers a dense prediction from the coarse map. Because of this symmetry, the network is shaped like a 'U', hence the name U-Net.
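The multi-resolution exchange in HRNet's points (1) and (2) can be sketched with NumPy. This is a toy illustration only: nearest-neighbour resampling stands in for the learned fusion convolutions, and `exchange`, `upsample`, and `downsample` are hypothetical names, not functions from this repository.

```python
import numpy as np

def downsample(x):
    # Nearest-neighbour 2x downsampling (stride-2 subsampling).
    return x[::2, ::2]

def upsample(x):
    # Nearest-neighbour 2x upsampling.
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def exchange(high, low):
    # Each branch receives the other branch's features, resampled
    # to its own resolution, and fuses them by addition.
    return high + upsample(low), low + downsample(high)

high = np.ones((8, 8))   # high-resolution feature map (kept throughout)
low = np.ones((4, 4))    # low-resolution branch, extracted in parallel
for _ in range(3):       # repeated exchanges between resolutions
    high, low = exchange(high, low)
print(high.shape, low.shape)  # both resolutions are preserved: (8, 8) (4, 4)
```

Note that neither branch is ever discarded: the high-resolution map survives every exchange, which is the property the paragraph describes.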
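FCN's "convolutionalization" of the last fully connected layer, plus the upsampling in (1), can also be sketched. This is a minimal NumPy sketch under assumed toy shapes (2 classes, a 4x4 coarse map), with nearest-neighbour upsampling standing in for the learned deconvolution; none of these names come from this repository.

```python
import numpy as np

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 4, 4))  # C=16 coarse feature map, 4x4 spatial
W = rng.standard_normal((2, 16))        # former FC weights: 2 output classes

# A fully connected layer applied to a C-dim vector, slid over every
# spatial position, is exactly a 1x1 convolution over the feature map.
scores = np.einsum('oc,chw->ohw', W, feat)   # coarse per-pixel class scores

# Upsampling layer (nearest-neighbour here) restores the spatial size,
# turning the coarse scores into a dense prediction.
dense = np.repeat(np.repeat(scores, 2, axis=1), 2, axis=2)
print(dense.shape)  # (2, 8, 8)
```

The skip architecture in (2) would then add same-resolution score maps from shallower layers onto `dense` before the final upsampling step.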
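The symmetric 'U' shape of U-Net can likewise be sketched as shape flow: the decoder mirrors the encoder, and skip connections concatenate same-resolution encoder features onto the decoder. A toy NumPy sketch with assumed shapes; `down` and `up` are stand-ins for the pooling and up-convolution blocks, not repository code.

```python
import numpy as np

def down(x):
    # Contracting path: halve spatial resolution (pooling stand-in).
    return x[:, ::2, ::2]

def up(x):
    # Expanding path: double spatial resolution (up-conv stand-in).
    return np.repeat(np.repeat(x, 2, axis=1), 2, axis=2)

x0 = np.ones((4, 32, 32))           # encoder level 0 (channels, H, W)
x1 = down(x0)                       # encoder level 1: (4, 16, 16)
x2 = down(x1)                       # bottleneck:      (4, 8, 8)

# The decoder mirrors the encoder; each skip connection concatenates
# the encoder feature map of the same resolution from across the 'U'.
d1 = np.concatenate([up(x2), x1])   # (8, 16, 16)
d0 = np.concatenate([up(d1), x0])   # (12, 32, 32): dense, input-sized
print(d0.shape)
```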
## Test code
```
export TF_XLA_FLAGS=--tf_xla_cpu_global_jit
python main_UNet.py
```
# Deep U-Net [[Paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8370071)
- Compared with U-Net, DeepUNet is a deeper model with more layers. Unlike U-Net, it is characterized by an added 'plus layer'. The plus layer connects two adjacent layers, whereas the skip architecture commonly used in FCN, U-Net, and DeepUNet connects a shallow layer to a deep layer. The plus layer prevents the loss of the deep network from growing without bound and keeps the model from getting trapped in local optima.
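The distinction between the plus layer (short, adjacent-layer shortcut) and the long skip connection (shallow-to-deep) can be sketched as follows. This is a hypothetical toy, with elementwise scaling standing in for convolutional blocks, purely to show where each kind of connection attaches.

```python
import numpy as np

def block(x, w):
    # Stand-in for a convolutional block; elementwise scaling
    # keeps the arithmetic easy to follow (hypothetical, not repo code).
    return w * x

x = np.ones((4, 4))       # shallow feature map
a = block(x, 0.5)
b = block(a, 0.5) + a     # plus layer: adds the ADJACENT layer's output
c = block(b, 0.5)
skip = c + x              # skip architecture: connects shallow and deep layers
print(float(b[0, 0]), float(skip[0, 0]))
```

The plus layer's short shortcut is what gives gradients a direct path through every pair of adjacent layers, which is the stabilizing effect the paragraph describes.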