
3D Unet Equipped with Advanced Deep Learning Methods

Created by Zhengyang Wang and Shuiwang Ji at Texas A&M University.

This project was presented as a poster (included in this repository) at the BioImage Informatics Conference 2017.

Introduction

This repository includes a 3D version of Unet equipped with 2 advanced deep learning methods: VoxelDCL (derived from PixelDCL) and Dense Transformer Networks.

The preprocessing code and data input interface are written for our dataset, introduced below. To apply this model to other 3D segmentation datasets, you only need to change the preprocessing code and data_reader.py.
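To illustrate the shape conventions such a reader would follow, here is a minimal, hypothetical sketch (the class and parameter names are my own, not the repository's): given paired 3D image/label volumes, it yields random sub-volume batches of shape (batch, depth, height, width, channels).

```python
import numpy as np

class SimpleVolumeReader:
    """Hypothetical sketch of the interface a data_reader.py replacement
    could expose: sample random fixed-size patches from paired volumes."""

    def __init__(self, images, labels, patch_size=32):
        # images: (D, H, W, C) float array, e.g. C=2 for T1WI/T2WI
        # labels: (D, H, W) integer array of class indices
        self.images = images
        self.labels = labels
        self.patch = patch_size

    def next_batch(self, batch_size):
        d, h, w, _ = self.images.shape
        p = self.patch
        xs, ys = [], []
        for _ in range(batch_size):
            # sample a random corner so the patch fits inside the volume
            i = np.random.randint(0, d - p + 1)
            j = np.random.randint(0, h - p + 1)
            k = np.random.randint(0, w - p + 1)
            xs.append(self.images[i:i+p, j:j+p, k:k+p, :])
            ys.append(self.labels[i:i+p, j:j+p, k:k+p])
        return np.stack(xs), np.stack(ys)
```

Any reader with a next_batch of this shape contract should plug into the training loop; the sampling strategy itself is an assumption.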

Citation

If using this code, please cite our papers.

@article{gao2017pixel,
  title={Pixel Deconvolutional Networks},
  author={Gao, Hongyang and Yuan, Hao and Wang, Zhengyang and Ji, Shuiwang},
  journal={arXiv preprint arXiv:1705.06820},
  year={2017}
}
@article{li2017dtn,
  title={Dense Transformer Networks},
  author={Li, Jun and Chen, Yongjun and Cai, Lei and Davidson, Ian and Ji, Shuiwang},
  journal={arXiv preprint arXiv:1705.08881},
  year={2017}
}
@article{wang2018global,
  title={Global Deep Learning Methods for Multimodality Isointense Infant Brain Image Segmentation},
  author={Wang, Zhengyang and Zou, Na and Shen, Dinggang and Ji, Shuiwang},
  journal={arXiv preprint arXiv:1812.04103},
  year={2018}
}

Dataset

The dataset is from UNC and is currently not available to the public. It is composed of multi-modality isointense infant brain MR images (3D) of 10 subjects. Each subject has two 3D images (T1WI and T2WI) with a manual 3D segmentation label.

Automatic segmentation of infant brain magnetic resonance (MR) images into white matter (WM), grey matter (GM) and cerebrospinal fluid (CSF) regions is an important step in the study of brain development. This task is especially challenging in the isointense stage (approximately 6-8 months of age), when WM and GM exhibit similar intensity levels in MR images.

System requirement

Programming language

Python 3.5+

Python Packages

tensorflow-gpu (GPU), numpy, h5py, nibabel


unet_3d's Issues

Batch Normalization

It seems that the BN layer has some problems for 3D cases. It does not work as I expected, which may affect the accuracy. I'll try to fix it.

Some errors and how I fixed them

Thanks for updating your work. I tried to run it on my machine, hit some errors, and here is how I fixed them.

  1. In the README, python main.py --action='predict'/'test' should be python main.py --option='predict'/'test'.

  2. With the from utils.* imports in network.py, you may get ImportError: No module named utils.*. To fix it, just add a file __init__.py in the utils folder; it can be empty.

  3. When you run python main.py --option='predict', you may get an error about an inconsistent shape: (5x32x32x32x2) vs. the target shape (1x32x32x32x2). I did not figure out how to fix this one. :(

  4. When generating the HDF5 files, I think we should use 9 subjects instead of 10, because one subject is used for validation; the validation subject should not be included in the HDF5 data used for training.

Thanks!

How do I get the dataset?

@zhengyang-wang
I am interested in this unet_3d repo and would like to run it, but there is no input file. Is there a way to build a simple dataset, e.g. by generating a random input tensor instead of reading from a file, so that I can run the code?

The moving mean and moving variance of batch_norm do not seem to be saved

As the title says, the output of batch_norm in test/predict mode is very strange. I believe this is because the moving mean and variance are not saved.

So, in network.py, use the following code:

    def configure_networks(self):
        self.build_network()
        update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(update_ops):
            optimizer = tf.train.AdamOptimizer(self.conf.learning_rate)
            self.train_op = optimizer.minimize(self.loss_op, name='train_op')
        tf.set_random_seed(self.conf.random_seed)
        self.sess.run(tf.global_variables_initializer())
        self.saver = tf.train.Saver()
        self.writer = tf.summary.FileWriter(self.conf.logdir, self.sess.graph)

That is, instead of

    trainable_vars = tf.trainable_variables()
    self.saver = tf.train.Saver(var_list=trainable_vars, max_to_keep=0)

use

    self.saver = tf.train.Saver()

so that the moving mean and variance are saved.

Also, give the batch_norm function a train_flag argument fed from a bool placeholder (self.phase), and remember to modify all calls to the conv, deconv, pixel_dcl, ipixel_cl and ipixel_dcl functions accordingly.

In network.py:

    def build_network(self):
        ......
        self.phase = tf.placeholder(tf.bool)  # define a bool placeholder
        ...
        conv1 = ops.conv(inputs, out_num, self.conv_size,
                         name+'/conv1', self.conf.data_type, train_flag=self.phase)
        ...

In ops.py:

    def conv(inputs, out_num, kernel_size, scope, data_type='2D', norm=True, train_flag=False):
        ......
        if norm:
            return tf.contrib.layers.batch_norm(
                outs, decay=0.9, epsilon=1e-5, activation_fn=tf.nn.relu,
                updates_collections=tf.GraphKeys.UPDATE_OPS,
                scope=scope+'/batch_norm', is_training=train_flag)
        ......

When training:

    feed_dict = {self.inputs: inputs,
                 self.annotations: annotations,
                 self.phase: True}

When testing/predicting:

    feed_dict = {self.inputs: inputs,
                 self.annotations: annotations,
                 self.phase: False}
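To see why feeding the phase flag matters, here is a minimal pure-numpy sketch (my own illustration, not the repository's code) of the behaviour is_training controls: during training, batch statistics are used and the moving averages are updated with decay 0.9; at test time, the stored moving statistics are used instead. These moving statistics are exactly the non-trainable variables that saving with a plain tf.train.Saver() preserves.

```python
import numpy as np

class BatchNormSketch:
    """Toy 1D batch norm illustrating the is_training switch
    (decay and epsilon mirror the values used in ops.py above)."""

    def __init__(self, decay=0.9, epsilon=1e-5):
        self.decay = decay
        self.epsilon = epsilon
        self.moving_mean = 0.0
        self.moving_var = 1.0

    def __call__(self, x, is_training):
        if is_training:
            mean, var = x.mean(), x.var()
            # update the moving statistics -- these are the values that
            # must be saved for test/predict mode to behave correctly
            self.moving_mean = self.decay * self.moving_mean + (1 - self.decay) * mean
            self.moving_var = self.decay * self.moving_var + (1 - self.decay) * var
        else:
            # test/predict: use the stored moving statistics, not the batch's
            mean, var = self.moving_mean, self.moving_var
        return (x - mean) / np.sqrt(var + self.epsilon)
```

If the moving statistics are lost at restore time (as with a Saver restricted to trainable variables), the test-time branch normalizes with stale defaults, which matches the strange outputs described above.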

How about the performance evaluation?

Thanks for sharing your work!

I would like to ask you two questions:

  1. In preprocessing/generate_h5.py, you have

        if SUBSTRACT_MEAN:
            inputs -= MEAN
        if NORMALIZE:
            inputs /= 1000.

     What is the value 1000? Is it the maximum intensity? To my knowledge, normalizing to zero mean and unit variance should be (inputs - MEAN) / std, so the value 1000 should be the standard deviation. Is that right?

  2. I am asking about your performance with and without data augmentation. In my testing, the performance is the same either way. How about you?
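For comparison, here is a small sketch (function names are my own) of the fixed-divisor scaling that generate_h5.py appears to use, next to standard z-score normalization; dividing by the constant 1000. only yields unit variance if 1000 happens to equal the intensities' standard deviation.

```python
import numpy as np

def normalize_fixed(inputs, mean, scale=1000.0):
    """Fixed-divisor scaling, as in the snippet above: subtract a
    precomputed mean, then divide by a constant (1000. presumably
    a rough intensity scale, not necessarily the true std)."""
    return (inputs - mean) / scale

def normalize_zscore(inputs):
    """Standard zero-mean, unit-variance normalization the question
    refers to: divide by the actual standard deviation."""
    return (inputs - inputs.mean()) / inputs.std()
```

The two agree only when the fixed scale matches the data's standard deviation; otherwise the fixed version leaves the variance at (std/scale)^2.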
