
CE-Net's Introduction

Context Encoder Network for 2D Medical Image Segmentation

CE-Net: Context Encoder Network for 2D Medical Image Segmentation,
Zaiwang Gu, Jun Cheng, Huazhu Fu, Kang Zhou, Huaying Hao, Yitian Zhao, Tianyang Zhang, Shenghua Gao, Jiang Liu
arXiv technical report (arXiv:1903.02740)

Contact: [email protected] or [email protected]. Any questions or discussions are welcome!

Abstract

Medical image segmentation is an important step in medical image analysis. With the rapid development of convolutional neural networks in image processing, deep learning has been used for medical image segmentation, such as optic disc segmentation, blood vessel detection, lung segmentation, cell segmentation, etc. Previously, U-Net based approaches have been proposed. However, the consecutive pooling and strided convolutional operations lead to the loss of some spatial information. In this paper, we propose a context encoder network (referred to as CE-Net) to capture more high-level information and preserve spatial information for 2D medical image segmentation. CE-Net mainly contains three major components: a feature encoder module, a context extractor and a feature decoder module. We use pretrained ResNet blocks as the fixed feature extractor. The context extractor module is formed by a newly proposed dense atrous convolution (DAC) block and a residual multi-kernel pooling (RMP) block. We applied the proposed CE-Net to different 2D medical image segmentation tasks. Comprehensive results show that the proposed method outperforms the original U-Net method and other state-of-the-art methods for optic disc segmentation, vessel detection, lung segmentation, cell contour segmentation and retinal optical coherence tomography layer segmentation.

Use CE-Net

Please start the visdom server first, then run main.py.
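For reference, and as echoed in the issues below, a typical startup sequence looks like this:

```
python -m visdom.server
python main.py
```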

We have uploaded the DRIVE dataset for the retinal vessel detection task. The other medical datasets will be uploaded in a later submission.

The submission mainly contains:

  1. architecture (called CE-Net) in networks/cenet.py
  2. multi-class dice loss in loss.py
  3. data augmentation in data.py

Update: We have modified the loss function; the CUDA error (or warning) no longer occurs.

Update: The test code has been uploaded. We have also released a pretrained model, which achieves an AUC score of 0.9819 on the DRIVE dataset.

Citation

If you find this project useful for your research, please use the following BibTeX entry.

@article{gu2019net,
  title={CE-Net: Context encoder network for 2D medical image segmentation},
  author={Gu, Zaiwang and Cheng, Jun and Fu, Huazhu and Zhou, Kang and Hao, Huaying and Zhao, Yitian and Zhang, Tianyang and Gao, Shenghua and Liu, Jiang},
  journal={IEEE Transactions on Medical Imaging},
  volume={38},
  number={10},
  pages={2281--2292},
  year={2019},
  publisher={IEEE}
}

The manuscript has been accepted by IEEE Transactions on Medical Imaging (TMI).


CE-Net's Issues

Different channel input

Hello, thanks for sharing the code. I am trying to test the network on medical images with different numbers of input channels. After reading the code, I think the network does not accept images that do not have 3 channels. I may be wrong, but I would like to hear from you whether that is really the case.
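A minimal sketch of one way to lift that restriction (an assumption, not part of the released code, and assuming `CE_Net_` constructs with default arguments): the first convolution (`firstconv` in the state_dict keys below) comes from the ImageNet-pretrained ResNet-34 and therefore expects exactly 3 channels. Replacing it accepts an arbitrary channel count, at the cost of losing that layer's pretrained weights.

```python
import torch.nn as nn
from networks.cenet import CE_Net_  # class name taken from the error logs below

def adapt_input_channels(model, in_channels):
    # Standard ResNet stem: Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
    old = model.firstconv
    model.firstconv = nn.Conv2d(in_channels, old.out_channels,
                                kernel_size=old.kernel_size,
                                stride=old.stride,
                                padding=old.padding,
                                bias=old.bias is not None)
    return model

model = adapt_input_channels(CE_Net_(), in_channels=1)  # e.g. grayscale input
```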

Reproducing the results in the paper

Hi, I am wondering how to reproduce the retinal vessel segmentation results in the paper on the DRIVE dataset, such as the details of training and inference. Thank you!

TTA

Could you release the TTA (test-time augmentation) code?
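A minimal flip-based TTA sketch (an assumption about what the requested TTA does, not the authors' released code):

```python
import torch

@torch.no_grad()
def tta_predict(model, image):
    """Average model outputs over horizontal/vertical flips of an NCHW batch."""
    model.eval()
    preds = [model(image)]
    for dims in ([-1], [-2], [-2, -1]):      # h-flip, v-flip, both
        out = model(torch.flip(image, dims))
        preds.append(torch.flip(out, dims))  # undo the flip on the output
    return torch.stack(preds).mean(dim=0)
```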

PermissionError: [Errno 13] Permission denied: '/data'

Hello, I am trying to run the script, but I get the same error on two different machines:

```
(torch-kernel) mb01761@heron158:/research/CE-Net/src$ python main.py
/conda/.conda/envs/torch-kernel/lib/python3.10/site-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.4)
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
GPUS device is [0]
training chunk_sizes: [24]
The output will be saved to /data/UBT_Seg/binSeg/ORIGA_OD_cenet_dice_bce_loss
Traceback (most recent call last):
  File "/vol/research/Neurocomp/mb01761/research/CE-Net/src/main.py", line 126, in <module>
    main(opt)
  File "/vol/research/Neurocomp/mb01761/research/CE-Net/src/main.py", line 40, in main
    logger = Logger(opt)
  File "/vol/research/Neurocomp/mb01761/research/CE-Net/src/lib/logger.py", line 25, in __init__
    os.makedirs(opt.save_dir)
  File "/vol/research/TopDownVideo/mb01761/conda/.conda/envs/torch-kernel/lib/python3.10/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/vol/research/TopDownVideo/mb01761/conda/.conda/envs/torch-kernel/lib/python3.10/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/vol/research/TopDownVideo/mb01761/conda/.conda/envs/torch-kernel/lib/python3.10/os.py", line 215, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/vol/research/TopDownVideo/mb01761/conda/.conda/envs/torch-kernel/lib/python3.10/os.py", line 225, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/data'
```
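A likely cause, judging from the traceback: `opt.save_dir` defaults to an absolute path under `/data`, which is not writable on these machines. A hedged workaround (the `opt` attribute name is taken from the traceback; the path below is hypothetical, so adjust it to wherever the output directory is actually configured):

```python
import os

# Point the output directory at a writable location before Logger(opt) runs.
opt.save_dir = os.path.expanduser("~/ce-net-output")  # hypothetical writable path
os.makedirs(opt.save_dir, exist_ok=True)              # also tolerates an existing dir
```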

Error during prediction

```
RuntimeError: Error(s) in loading state_dict for CE_Net_:
    Missing key(s) in state_dict: "firstconv.weight", "firstbn.weight", "firstbn.bias", "firstbn.running_mean", "firstbn.running_var", "encoder1.0.conv1.weight", ..., "finaldeconv1.weight", "finaldeconv1.bias", "finalconv2.weight", "finalconv2.bias", "finalconv3.weight", "finalconv3.bias".
    Unexpected key(s) in state_dict: "module.firstconv.weight", "module.firstbn.weight", "module.firstbn.bias", "module.firstbn.running_mean", "module.firstbn.running_var", "module.firstbn.num_batches_tracked", "module.encoder1.0.conv1.weight", ..., "module.finalconv2.weight", "module.finalconv2.bias", "module.finalconv3.weight", "module.finalconv3.bias".
```

(The full lists enumerate every parameter of the encoder, DAC/RMP blocks, and decoder; each unexpected key is the corresponding missing key with a "module." prefix, plus the BatchNorm num_batches_tracked buffers.)

How to get the desired prediction?

I ran main.py with the default parameters and then ran test_cenet.py, but the prediction is not good. Could you give me some suggestions? Thank you very much!
[attached image: 01_test-mask]

Wrong test output

I ran test_cenet.py, but I got an all-black or all-white mask. How can I solve this?
Also, where is your pretrained model? Thanks @Guzaiwang

Where is data.py?

I can see data augmentation mentioned in the README, but I can't find data.py in your folder.

How to start "visdom"

I ran the `python -m visdom.server` command in my anaconda3 environment and connected to the web page, but then I can't continue to operate. What is going on?

Error in test_cenet: dimension mismatch

RuntimeError: The size of tensor a (38) must match the size of tensor b (37) at non-singleton dimension 2

When running test_cenet, the error is raised in PyCharm at `d4 = self.decoder4(e4) + e3`. Does anyone know the cause? Many thanks.
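One plausible cause (an assumption, not confirmed by the authors): the encoder downsamples the input several times and the decoder upsamples it back, so an input whose height or width is not a multiple of 32 yields skip-connection feature maps whose sizes differ by one pixel, exactly the 38-vs-37 mismatch above. A hedged workaround is to pad the input first and crop the prediction back afterwards:

```python
import torch.nn.functional as F

def pad_to_multiple(x, multiple=32):
    """Zero-pad an NCHW tensor so its height and width are divisible by `multiple`."""
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # F.pad takes (left, right, top, bottom) for the last two dimensions
    return F.pad(x, (0, pad_w, 0, pad_h))
```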

Request

Could you document the environment required to run the code, e.g., the torch version? Thank you.

About learning rate

Has anyone had the same situation as me? During training, the learning-rate update drives the learning rate straight to 0.

Set_A.txt and Set_B.txt in ORIGA dataset

Hi! Could you please provide Set_A.txt and Set_B.txt for the experiments on the ORIGA dataset? I searched online for how the ORIGA dataset was divided into training and test sets, but found nothing. According to your data.py, Set_A.txt and Set_B.txt are used to divide the ORIGA dataset, so I would appreciate it if you could provide them. Thanks!

Question about MulticlassDiceLoss

First of all, I would like to thank you for releasing your code, which gives me a chance to learn from it. After working through it, I have one question about:
```python
class MulticlassDiceLoss(nn.Module):
    """
    Requires a one-hot encoded target. Applies DiceLoss to each class iteratively.
    Requires the first two dimensions of input and target to be (N, C), where N
    is the batch size and C is the number of classes.
    """
    def __init__(self):
        super(MulticlassDiceLoss, self).__init__()

    def forward(self, input, target, weights=None):
        C = target.shape[1]
        totalLoss = 0
        for i in range(C):
            # `dice` is the binary Dice loss defined elsewhere in loss.py
            diceLoss = dice(input[:, i, :, :], target[:, i, :, :])
            if weights is not None:
                diceLoss *= weights[i]
            totalLoss += diceLoss
        return totalLoss
```
My ground truth has only one channel. How do I transform it into the number of channels that matches the predicted segmentation map?
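A minimal sketch of the usual answer (assumption: the single-channel ground truth holds integer class indices in an (N, 1, H, W) tensor). One-hot encoding expands it to the (N, C, H, W) layout MulticlassDiceLoss expects:

```python
import torch.nn.functional as F

def one_hot_target(target, num_classes):
    onehot = F.one_hot(target.squeeze(1).long(), num_classes)  # (N, H, W, C)
    return onehot.permute(0, 3, 1, 2).float()                  # (N, C, H, W)
```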

Pretrained Model

Thanks for your effort.

Could you please share your pretrained model?

Question about multi-class segmentation

I have changed

ROOT = './dataset/CamVid'
BINARY_CLASS = 5

to the values my test needs and written the function that reads this dataset. Using MulticlassDiceLoss from loss.py as the test loss, dimension problems produce several errors:

N, H, W = target.size(0), target.size(2), target.size(3)
IndexError: Dimension out of range (expected to be in range of [-3, 2], but got 3)

After fixing that, another one appears:

diceLoss = dice(input[:, i, :, :], target[:, i, :, :])
IndexError: index 1 is out of bounds for dimension 1 with size 1

Using nn.CrossEntropyLoss() as the loss instead gives:

if size_average and reduce: RuntimeError: bool value of Tensor with more than one value is ambiguous

Could you upload example code for multi-class segmentation testing?
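On the nn.CrossEntropyLoss error: the "bool value of Tensor" message at `if size_average and reduce:` typically appears when tensors are passed to the constructor instead of to the instance (the constructor then interprets them as its `weight`/`size_average` arguments). A hedged sketch of the intended call pattern, with illustrative shapes:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()          # construct once, with no tensors
logits = torch.randn(2, 5, 64, 64)         # (N, C, H, W) raw network outputs
target = torch.randint(0, 5, (2, 64, 64))  # (N, H, W) class indices, not one-hot
loss = criterion(logits, target)
```

Note that, unlike MulticlassDiceLoss, nn.CrossEntropyLoss wants integer class indices rather than a one-hot target, which may also explain the dimension errors above.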

Retinal OCT layer Dataset

Hi, dear author, thank you for the amazing segmentation network CE-Net and for your open-source code. I am wondering whether you could release the retinal OCT layer dataset so that we can follow your work.

Are the citations complete?

The author's citations don't seem complete, and the code should ideally state which open-source project it is based on.

It seems the U-Net in your code can outperform CE-Net in my training

It seems the U-Net in your code can outperform CE-Net in my training. I directly used your code and retrained the U-Net, getting [acc: 0.956 | sen: 0.844 | auc: 0.98]. That is better than the CE-Net results. Could you please share your weights?
