This repository contains the code for the work "U-MLLA: A Cognitive-Inspired Enhancement of Linear Attention for Medical Image Segmentation".
| model | Resolution | #Params | FLOPs | acc@1 | config | pretrained weights |
|---|---|---|---|---|---|---|
| MLLA-T | 224 | 25M | 4.2G | 83.5 | config | TsinghuaCloud |
| MLLA-S | 224 | 43M | 7.3G | 84.4 | config | TsinghuaCloud |
| MLLA-B | 224 | 96M | 16.2G | 85.3 | config | TsinghuaCloud |
Ref: [MLLA Official Implementation]
Ref: nnUNet Detailed procedure: link
Please follow the above procedure from scratch. We do not recommend directly using preprocessed data from other works, as this can lead to worse results.
- Please prepare an environment with `python=3.9`, then install the dependencies with the command `pip install -r requirements.txt`.
- Run the train script on the Synapse dataset. The batch size we used is 48. If you do not have enough GPU memory, the batch size can be reduced to 12 or 6 to save memory.
- Train: `sh train.sh`
- Test: `sh test.sh`
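If you reduce the batch size to 12 or 6 to fit GPU memory, gradient accumulation is one way to keep the effective batch size at the 48 we used. A minimal sketch of the arithmetic (the `accumulation_steps` helper below is ours for illustration, not part of the released training script):

```python
def accumulation_steps(target_batch: int, micro_batch: int) -> int:
    """Number of micro-batches to accumulate per optimizer step so that
    micro_batch * steps == target_batch.

    Hypothetical helper for illustration; not part of train.sh.
    """
    if target_batch % micro_batch != 0:
        raise ValueError("micro_batch must evenly divide target_batch")
    return target_batch // micro_batch


# The reduced batch sizes mentioned above keep an effective batch of 48:
print(accumulation_steps(48, 12))  # -> 4 (accumulate 4 micro-batches of 12)
print(accumulation_steps(48, 6))   # -> 8 (accumulate 8 micro-batches of 6)
```

In a training loop this corresponds to calling `loss.backward()` on each micro-batch and stepping the optimizer only every `accumulation_steps` iterations.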