zizhaozhang / unet-tensorflow-keras
A concise code for training and evaluating UNet using TensorFlow + Keras
License: MIT License
In train directory
train/
img/
0/
gt/
0/
In 0/, do I need to put a single image or a batch?
In my case, I have 6 classes. How should the ground-truth mask images look: values 0,1,2,3,4,5 in the last channel, in all channels, or specific colors as in palette_refer.png?
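A minimal sketch of the usual convention (an assumption, not something this repo confirms): ground-truth masks are stored as single-channel images whose pixel values are the class indices 0..5, and training code one-hot encodes them to as many channels as there are classes. The helper name `mask_to_onehot` is made up for illustration:

```python
import numpy as np

def mask_to_onehot(mask, num_classes=6):
    """Convert an (H, W) class-index mask to an (H, W, C) one-hot mask."""
    onehot = np.zeros(mask.shape + (num_classes,), dtype=np.float32)
    for c in range(num_classes):
        onehot[..., c] = (mask == c)
    return onehot

# A 2x2 mask with class indices becomes a 2x2x6 one-hot volume.
mask = np.array([[0, 1], [5, 2]], dtype=np.uint8)
onehot = mask_to_onehot(mask)
assert onehot.shape == (2, 2, 6)
```

If the repo instead expects palette-colored masks (as palette_refer.png suggests), you would first map each color back to its class index before the one-hot step.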
Hello,
Thanks for sharing your code !
I was able to use the Cityscapes dataset and train (just 2 classes to start).
However, during evaluation on any JPEG (either one from the training data or a new image), running
python eval.py --data_path ./eval --load_from_checkpoint ./checkpoints/unet_example/model-146 --batch_size 1
I always get the exact same error:
ValueError: Input 0 of layer conv1_1 is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 256, 256]
(full trace here - https://pastebin.com/Rjzt5N7V )
It seems nothing I do avoids that error.
I'll keep at it, but if anything comes to mind, I'd appreciate any tips or ideas in the meantime.
Thanks again !
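A hedged guess at the cause, based only on the error message: the JPEG was loaded as grayscale, so the batch has shape (N, 256, 256) with no channel axis, while conv1_1 expects a 4-D (batch, height, width, channels) input. Stacking the gray image to 3 channels restores the expected shape. `img` below is a stand-in array, not the repo's actual loading code:

```python
import numpy as np

# Stand-in for an image loaded as grayscale: shape (256, 256), no channel axis.
img = np.zeros((256, 256), dtype=np.float32)

# Replicate the single channel to RGB, then add the batch axis.
rgb = np.stack([img] * 3, axis=-1)    # (256, 256, 3)
batch = rgb[np.newaxis, ...]          # (1, 256, 256, 3), ndim=4
```

If the loader in eval.py uses PIL or OpenCV, forcing an RGB read at load time (e.g. converting the image mode before `np.asarray`) achieves the same thing.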
Hello,
I have recently been learning deep learning, and your code is quite advanced.
When I follow the steps in your README, the system reports an error: "The system cannot find the path specified." I have already replaced your-data-path in your instructions with my own folder (using an absolute path), and this problem still appeared.
So I would like to ask: can the relevant paths be declared in the .py files? Also, how many paths do I need to set in total?
Any guidance would be much appreciated.
Thank you!!
python train.py
Found 0 images belonging to 0 classes.
Found 0 images belonging to 0 classes.
Found 0 images belonging to 0 classes.
Found 0 images belonging to 0 classes.
./datasets/
build UNet ...
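A sketch of the likely cause, assuming the repo uses Keras' `flow_from_directory` (which the "Found 0 images belonging to 0 classes" message comes from): that API only picks up images inside a class subfolder, so files must live at e.g. `./datasets/train/img/0/*.png`, not directly under `img/`. A quick stdlib check for the layout described in the README (the exact folder names are an assumption):

```python
import os

def missing_dirs(root):
    """Return the expected class subfolders that are missing under `root`."""
    expected = ["train/img/0", "train/gt/0", "val/img/0", "val/gt/0"]
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]
```

If `missing_dirs("./datasets")` returns a non-empty list, either the extra `0/` level is missing or `--data_path` points one level too high or too low.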
Please, how are the images and labels organized?
Thank you very much.
Hi, thank you for sharing code.
Could you give me the dataset link?
Hi, could you provide some license to your code? I suggest MIT ;)
When creating the UNet with
with tf.name_scope('unet'):
pred = UNet().create_model(img_shape, backend='tf', tf_input=img)
the following error occurred:
Traceback (most recent call last):
File "G:/Tensorflow/unet-tensorflow-keras-master/train.py", line 52, in <module>
pred = UNet().create_model(img_shape, backend='tf', tf_input=img)
File "G:\Tensorflow\unet-tensorflow-keras-master\model.py", line 84, in create_model
conv9 = ZeroPadding2D(padding=(ch[0], ch[1], cw[0], cw[1]), dim_ordering=dim_ordering)(conv9)
File "E:\Program Files\python35\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "E:\Program Files\python35\lib\site-packages\keras\layers\convolutional.py", line 1307, in __init__
'Found: ' + str(padding))
ValueError: `padding` should have two elements. Found: (6, 6, 6, 6)
Besides, there are many warnings:
build UNet ...
G:\Tensorflow\unet-tensorflow-keras-master\model.py:37: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(32, (3, 3), name="conv1_1", activation="relu", data_format="channels_last", padding="same")`
conv1 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', dim_ordering=dim_ordering, name='conv1_1')(inputs)
G:\Tensorflow\unet-tensorflow-keras-master\model.py:38: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(32, (3, 3), activation="relu", padding="same", data_format="channels_last")`
conv1 = Convolution2D(32, 3, 3, activation='relu', border_mode='same', dim_ordering=dim_ordering)(conv1)
G:\Tensorflow\unet-tensorflow-keras-master\model.py:39: UserWarning: Update your `MaxPooling2D` call to the Keras 2 API: `MaxPooling2D(data_format="channels_last", pool_size=(2, 2))`
pool1 = MaxPooling2D(pool_size=(2, 2), dim_ordering=dim_ordering)(conv1)
G:\Tensorflow\unet-tensorflow-keras-master\model.py:40: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(64, (3, 3), activation="relu", padding="same", data_format="channels_last")`
conv2 = Convolution2D(64, 3, 3, activation='relu', border_mode='same', dim_ordering=dim_ordering)(pool1)
...
Looking forward to your reply
Best wishes to you
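A hedged note on the traceback above: between Keras 1 and Keras 2, `ZeroPadding2D` changed its `padding` argument from a flat 4-tuple to a pair of `(top, bottom)` / `(left, right)` pairs, and `dim_ordering` was renamed to `data_format`, which is exactly what the `ValueError` and the surrounding warnings point at. A tiny helper illustrating the conversion (the helper name is made up; only the tuple shape matters):

```python
def keras2_padding(ch, cw):
    """Convert Keras 1 flat padding (top, bottom, left, right) into the
    nested ((top, bottom), (left, right)) form that Keras 2 requires."""
    return ((ch[0], ch[1]), (cw[0], cw[1]))
```

So the failing line in model.py would presumably become something like `ZeroPadding2D(padding=((ch[0], ch[1]), (cw[0], cw[1])), data_format='channels_last')(conv9)`.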
x_batch, y_batch = next(train_generator)
feed_dict = { img: x_batch,
label: y_batch
}
This approach is very slow.
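A minimal stdlib-only sketch of the usual remedy for slow `feed_dict` training: prefetch batches on a background thread so the session never waits on Python-side I/O. `train_generator` is a stand-in for the repo's generator; in modern TensorFlow, `tf.data.Dataset` with `prefetch()` is the idiomatic replacement:

```python
import queue
import threading

def prefetch(generator, depth=4):
    """Yield items from `generator`, filled ahead by a worker thread."""
    q = queue.Queue(maxsize=depth)
    done = object()  # sentinel marking generator exhaustion

    def worker():
        for item in generator:
            q.put(item)
        q.put(done)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is done:
            break
        yield item
```

Usage would be `for x_batch, y_batch in prefetch(train_generator): ...`, so the next batch is being loaded while the current `sess.run` call executes.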
At line 136 in train.py, I added: dice = dice_coef_loss(y_batch[0], pred_map), but it doesn't seem to work. Could you please give me some information about how to use this metric? Thanks!
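One hedged possibility: if `dice_coef_loss` is written with Keras backend ops, it expects symbolic tensors and will not work on the numpy arrays (`y_batch`, `pred_map`) fetched from `sess.run`. A plain numpy Dice coefficient, assuming binary masks, sidesteps that:

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    """Dice coefficient for binary numpy masks: 2|A∩B| / (|A|+|B|)."""
    y_true = y_true.astype(np.float32).ravel()
    y_pred = y_pred.astype(np.float32).ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)
```

For a multi-class prediction you would first threshold or argmax `pred_map` into a binary mask per class, then average the per-class scores.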
I am trying to input data into this U-Net and I keep getting the error:
Traceback (most recent call last):
File "train.py", line 119, in
loss, pred_logits = sess.run([cross_entropy_loss, pred], feed_dict=feed_dict)
File "/Users/ishapuri/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/Users/ishapuri/anaconda/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 975, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (1, 288, 432, 4) for Tensor u'unet/input_1:0', which has shape '(?, 256, 256, 3)'
My images are PNG files with shape (288, 432, 3).
How do I fix this? What exact shape is the model expecting, and what is the first number in that shape (the position that shows a ? in it)?
Thanks!
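A hedged reading of the error: the `?` is the batch dimension (any batch size is accepted; the fed `1` means one image), and the trailing `4` means the PNG was loaded with its alpha channel, while the placeholder wants (?, 256, 256, 3). A stdlib+numpy sketch that drops alpha and nearest-neighbour resizes to the expected size (a real pipeline would more likely resize with PIL or OpenCV):

```python
import numpy as np

def prepare(img, size=256):
    """Drop the alpha channel and nearest-neighbour resize to (size, size, 3)."""
    img = img[..., :3]                    # RGBA -> RGB
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size    # source row for each output row
    cols = np.arange(size) * w // size    # source col for each output col
    return img[rows][:, cols]

# Stand-in for a loaded (288, 432, 4) RGBA image.
x = np.zeros((288, 432, 4), dtype=np.float32)
batch = prepare(x)[np.newaxis, ...]       # leading axis of 1 = batch size
```

Alternatively, passing `--imSize` matching your data (if train.py was run the same way) avoids the resize entirely.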
I want to use NIfTI data to train a UNet; here is a demo for a data layer.
Anyone who finds mistakes please make a suggestion.
Thanks
import numpy as np
import nibabel as nb

class DataLayer():
    def __init__(self, opt):
        # expected keys in opt, e.g.:
        # 'batch_size': 2,
        # 'learning_rate': 0.0001,
        # 'lr_decay': 0.5,
        # 'save_model_every': 100,
        # 'checkpoint_path': 'checkpoints/unet',
        # 'epoch': 5,
        # 'load_from_checkpoint': 'unet-654'
        self.batch_size = opt['batch_size']
        self.learning_rate = opt['learning_rate']
        self.lr_decay = opt['lr_decay']
        self.save_model_every = opt['save_model_every']
        self.checkpoint_path = opt['checkpoint_path']
        self.epoch = opt['epoch']
        self.load_from_checkpoint = opt['load_from_checkpoint']

    def get_iter_epoch(self):
        return self.epoch

    def load_batch(self):
        # read custom NIfTI data
        TrainFilePath = "H:/e_PublicData/H_LiverTumor2017/Training Batch 2/"  # data path
        Train_Case = 120
        data_name = TrainFilePath + 'volume-' + str(Train_Case) + '.nii'
        label_name = TrainFilePath + 'segmentation-' + str(Train_Case) + '.nii'
        Train_nii = nb.load(data_name)          # load data
        Train_Label_nii = nb.load(label_name)   # load label
        Train_data = Train_nii.get_data()
        Train_data = Train_data.astype(np.uint8)
        Train_Label_data = Train_Label_nii.get_data()
        Train_Label_data[Train_Label_data == 2] = 1  # merge label 2 into 1: background 0, ground truth 1
        [nx, ny, nz] = Train_data.shape
        X = np.zeros([nz, nx, ny, 1], dtype=float)
        Y = np.zeros([nz, nx, ny, 1], dtype=float)
        Train_data = Train_data.transpose(2, 0, 1)          # slices first: (nz, nx, ny)
        Train_Label_data = Train_Label_data.transpose(2, 0, 1)
        X[..., 0] = Train_data
        Y[..., 0] = Train_Label_data
        X_temp = X[0:2, ...]    # take the first 2 slices as a batch
        Y_temp = Y[0:2, ...]
        return X_temp, Y_temp
Hello Mr. Zhang, sorry to bother you.
I am a university student from China.
Could you please tell me if there is a dataset that can be used to train the model, or do I need to download the satellite images and train the model myself?
Thank you very much!!!
With the default value of 256 for --imSize, eval.py produces 768x256 images that comprise the validation img image, the validation gt image, and the output of the UNet model. The images look as expected; it's a really nice output, as one can compare results.
But when I change that value to 512, the output image is incorrect. It has the correct 1536x512 image size, but the validation img part is 256x512 and the validation gt part is also 256x512.
Excuse me, I'm new to TensorFlow and Keras, and I failed to create a data layer demo.
Would it be possible for you to provide a data-layer demo that reads images and the corresponding ground-truth masks?
I would appreciate your help.