
Comments (14)

jakeret avatar jakeret commented on July 17, 2024 2

For a binary classification task, tf_unet expects a boolean label array.

Something like this might help:

y_data = np.load(DATASET_FOLDER+"y_data.npy").astype(np.bool)
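One caveat if you are on a newer NumPy: np.bool was deprecated (NumPy 1.20) and later removed, so the plain built-in bool (or np.bool_) is the safe spelling there. A small stand-in example, since the real y_data.npy is not available here:

```python
import numpy as np

# np.bool no longer exists on recent NumPy; the built-in bool produces
# the same boolean dtype for the label array.
y_data = np.zeros((2, 4, 4)).astype(bool)  # stand-in for the real labels
print(y_data.dtype)  # bool
```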

from tf_unet.

stefat77 avatar stefat77 commented on July 17, 2024

Thanks for the answer :)

I cast my label arrays to booleans... but now I get the same error as before:

Loading dataset...

TRAIN data shape:  (1560, 128, 128, 2)
TRAIN labels shape (1560, 128, 128)
TEST data shape:  (120, 128, 128, 2)
TEST labels shape:  (120, 128, 128)
2017-06-23 17:27:44,948 Layers 4, features 64, filter size 3x3, pool size: 2x2
2017-06-23 17:27:47,566 Removing '/home/stefano/Dropbox/DeepWave/prediction'
2017-06-23 17:27:47,566 Removing '/home/stefano/Dropbox/DeepWave/unet_trained'
2017-06-23 17:27:47,566 Allocating '/home/stefano/Dropbox/DeepWave/prediction'
2017-06-23 17:27:47,566 Allocating '/home/stefano/Dropbox/DeepWave/unet_trained'
2017-06-23 17:27:47.567290: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-23 17:27:47.567318: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-23 17:27:47.567327: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-23 17:27:50,001 Verification error= 0.0%, loss= -0.5074
Traceback (most recent call last):
  File "Unet.py", line 48, in <module>
    path = trainer.train(data_provider, "./unet_trained", training_iters=training_iters, epochs=epochs, dropout=dropout, display_step=display_step, restore=restore)
  File "./tf_unet/unet.py", line 404, in train
    pred_shape = self.store_prediction(sess, test_x, test_y, "_init")
  File "./tf_unet/unet.py", line 457, in store_prediction
    img = util.combine_img_prediction(batch_x, batch_y, prediction)
  File "/home/stefano/Dropbox/DeepWave/tf_unet/util.py", line 106, in combine_img_prediction
    to_rgb(pred[..., 1].reshape(-1, ny, 1))), axis=1)
ValueError: all the input array dimensions except for the concatenation axis must match exactly

gt, data, and pred have the following shapes in combine_img_prediction:
(4, 128, 128, 2) --> gt
(4, 128, 128, 2) --> data
(4, 36, 36, 2) --> pred
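These shapes show why the concatenation fails: gt and data are 128x128 while pred is 36x36, so np.concatenate along axis 1 cannot line them up. A minimal sketch of one fix is to center-crop the larger arrays to the prediction's spatial size (tf_unet ships a similar helper, util.crop_to_shape, if I remember correctly; the function below is a hypothetical stand-in):

```python
import numpy as np

# Hypothetical center-crop helper: trims the spatial dims (axes 1 and 2)
# of an NHWC array down to a target shape, so gt/data match pred.
def crop_to_shape(data, shape):
    off_h = (data.shape[1] - shape[1]) // 2
    off_w = (data.shape[2] - shape[2]) // 2
    return data[:, off_h:off_h + shape[1], off_w:off_w + shape[2]]

gt = np.zeros((4, 128, 128, 2))
pred = np.zeros((4, 36, 36, 2))
print(crop_to_shape(gt, pred.shape).shape)  # (4, 36, 36, 2)
```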

EDIT: I also had some errors in the to_rgb function because some of my dataset images are completely full of zeros. In that function I now normalize img only if np.amax(img) != 0.
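The guard described in that edit can be sketched like this (a hypothetical stand-in for the normalization step inside to_rgb, not the library's actual code):

```python
import numpy as np

# Hypothetical guard: shift to a zero minimum, then divide by the maximum
# only when it is nonzero, so all-zero images don't cause a division by
# zero inside a to_rgb-style normalization.
def safe_normalize(img):
    img = img.astype(np.float32)
    img -= np.amin(img)
    max_val = np.amax(img)
    if max_val != 0:
        img /= max_val
    return img

print(safe_normalize(np.zeros((4, 4))).max())  # 0.0
print(safe_normalize(np.array([[0.0, 2.0]])))  # [[0. 1.]]
```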


jakeret avatar jakeret commented on July 17, 2024

Hmm, I think there is a bug in the to_rgb function. It looks like it only works if the data has 1 or 3 channels.


stefat77 avatar stefat77 commented on July 17, 2024

OK, I'll take a look...
Thank you


stefat77 avatar stefat77 commented on July 17, 2024

Hi, I modified combine_img_prediction to concatenate four images (my two channels plus the label plus the prediction)... but now I have another problem:
How can I obtain an output prediction shape equal to the input shape?
My data samples have shape 128x128x2, but the predictions have 88x88x2...


jakeret avatar jakeret commented on July 17, 2024

Great! Is this somehow reusable for others? If so, would you mind sending me a PR?

This is expected behaviour (see the original Ronneberger et al. paper): the network uses unpadded convolutions, so each output is smaller than its input. What works well is to mirror the edges of the images to create a larger input in order to compensate for the loss.
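The mirroring suggestion can be sketched with np.pad (a hedged example, not tf_unet code). The offset here, (128 - 88) // 2 = 20, comes from the shapes reported earlier in the thread; by the same valid-convolution arithmetic, a 168-pixel padded input comes back out of the 3-layer network at exactly 128:

```python
import numpy as np

# Mirror-pad an NHWC batch so the network's valid convolutions consume
# the reflected borders and the prediction still covers the original
# area. offset is (input_size - output_size) // 2, e.g. (128 - 88) // 2.
def mirror_pad(batch, offset):
    return np.pad(batch,
                  ((0, 0), (offset, offset), (offset, offset), (0, 0)),
                  mode="reflect")

x = np.zeros((4, 128, 128, 2), dtype=np.float32)
print(mirror_pad(x, 20).shape)  # (4, 168, 168, 2)
```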


stefat77 avatar stefat77 commented on July 17, 2024

I changed the net from:
net = unet.Unet(channels=generator.channels, n_class=generator.n_class, layers=3, features_root=64, cost="dice_coefficient")

to:
net = unet.Unet(channels=generator.channels, n_class=generator.n_class, layers=4, features_root=128, cost="dice_coefficient")

but now the output prediction has size 36x36x2... is that still normal?
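Yes: each resolution level applies two unpadded 3x3 convolutions (losing 4 pixels) around a 2x2 pool or deconv, so the shrinkage grows with the number of layers. A small sketch of that arithmetic (my reading of tf_unet's valid-convolution layout, so treat the exact layout as an assumption) reproduces both numbers from this thread:

```python
# Output size of a U-Net built from unpadded 3x3 convs (each pair costs
# 4 pixels), 2x2 max pools on the way down and 2x2 deconvs on the way up.
def unet_valid_output_size(in_size, layers):
    s = in_size
    for _ in range(layers - 1):  # down path: two convs, then pool
        s = (s - 4) // 2
    s -= 4                       # bottom level: convs only, no pool
    for _ in range(layers - 1):  # up path: deconv, then two convs
        s = s * 2 - 4
    return s

print(unet_valid_output_size(128, 3))  # 88 -> matches the earlier run
print(unet_valid_output_size(128, 4))  # 36 -> matches this run
```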


stefat77 avatar stefat77 commented on July 17, 2024

Great! Is this somehow reusable for others? If so, would you mind sending me a PR?

For now it is not reusable, but I can do it next week. These days I don't have much free time, but I would be glad to help :)


jakeret avatar jakeret commented on July 17, 2024

This seems to happen if the gradients explode (see #28). So far I haven't found a definitive solution to this.


stefat77 avatar stefat77 commented on July 17, 2024

Thank you for the answer... you are very patient.
I have another issue... this is my terminal output:

TRAIN data shape:  (1080, 256, 256, 2)
TRAIN labels shape (1080, 256, 256)
TEST data shape:  (120, 256, 256, 2)
TEST labels shape:  (120, 256, 256)
2017-06-27 17:01:24,657 Layers 3, features 64, filter size 3x3, pool size: 2x2
2017-06-27 17:01:26,164 Removing '/home/stefano/Dropbox/DeepWave/prediction'
2017-06-27 17:01:26,164 Removing '/home/stefano/Dropbox/DeepWave/unet_trained'
2017-06-27 17:01:26,164 Allocating '/home/stefano/Dropbox/DeepWave/prediction'
2017-06-27 17:01:26,164 Allocating '/home/stefano/Dropbox/DeepWave/unet_trained'
2017-06-27 17:01:26.165215: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-27 17:01:26.165233: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-27 17:01:26.165242: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-06-27 17:01:38,682 Verification error= 100.0%, loss= -0.4882
2017-06-27 17:01:39,196 Start optimization
2017-06-27 17:01:48,665 Iter 0, Minibatch Loss= -0.5830, Training Accuracy= 1.0000, Minibatch error= 0.0%
2017-06-27 17:02:05,571 Iter 2, Minibatch Loss= -0.7644, Training Accuracy= 1.0000, Minibatch error= 0.0%
2017-06-27 17:02:22,366 Iter 4, Minibatch Loss= -0.9726, Training Accuracy= 1.0000, Minibatch error= 0.0%
2017-06-27 17:02:39,085 Iter 6, Minibatch Loss= -0.9947, Training Accuracy= 1.0000, Minibatch error= 0.0%
2017-06-27 17:02:57,761 Iter 8, Minibatch Loss= -0.9972, Training Accuracy= 1.0000, Minibatch error= 0.0%
2017-06-27 17:03:16,034 Iter 10, Minibatch Loss= -0.9982, Training Accuracy= 1.0000, Minibatch error= 0.0%
2017-06-27 17:03:34,933 Iter 12, Minibatch Loss= -0.9987, Training Accuracy= 1.0000, Minibatch error= 0.0%
2017-06-27 17:03:54,922 Iter 14, Minibatch Loss= -0.5565, Training Accuracy= 0.5566, Minibatch error= 44.3%
2017-06-27 17:04:17,774 Iter 16, Minibatch Loss= -0.9991, Training Accuracy= 1.0000, Minibatch error= 0.0%
2017-06-27 17:04:35,185 Iter 18, Minibatch Loss= -0.8922, Training Accuracy= 0.8927, Minibatch error= 10.7%
2017-06-27 17:04:44,692 Epoch 0, Average loss: -0.8885, learning rate: 0.2000
2017-06-27 17:04:58,064 Verification error= 0.0%, loss= -0.9994
2017-06-27 17:04:59,293 Optimization Finished!
INFO:tensorflow:Restoring parameters from ./unet_trained/model.cpkt
2017-06-27 17:05:00,109 Restoring parameters from ./unet_trained/model.cpkt
2017-06-27 17:05:00,298 Model restored from file: ./unet_trained/model.cpkt
PREDICTION (20, 216, 216, 2)

I cannot understand why the training accuracy is 1.0000 at almost every iteration... and the minibatch error is 0%.

This is my code... maybe I did something wrong again:

print('Loading dataset...\n')
X_data = np.load(DATASET_FOLDER+"X_data.npy")
X_test = np.load(DATASET_FOLDER+"X_test.npy")
y_data = np.load(DATASET_FOLDER+"y_data.npy")
y_test = np.load(DATASET_FOLDER+"y_test.npy")

print("TRAIN data shape: ", X_data.shape)
print("TRAIN labels shape", y_data.shape)
print("TEST data shape: ", X_test.shape)
print("TEST labels shape: ", y_test.shape)

X_data = X_data.astype(np.float32)
y_data = y_data.astype(np.bool)
X_test = X_test.astype(np.float32)
y_test = y_test.astype(np.bool)

training_iters = 20
epochs = 1
dropout = 0.75 # Dropout, probability to keep units
display_step = 2
restore = False
 
generator = image_util.SimpleDataProvider(X_data, y_data, channels=2, n_class=2)
test_generator = image_util.SimpleDataProvider(X_test, y_test, channels=2, n_class=2)

net = unet.Unet(channels=generator.channels, n_class=generator.n_class, layers=3, features_root=64, cost="dice_coefficient")
    
trainer = unet.Trainer(net, optimizer="momentum", opt_kwargs=dict(momentum=0.2))
path = trainer.train(generator, "./unet_trained", training_iters=training_iters, epochs=epochs, dropout=dropout, display_step=display_step, restore=restore)

prediction = net.predict(path, X_test[0:20,:,:,:])
print("PREDICTION",prediction.shape)
     

About the dataset... it has these shapes:
TRAIN data shape: (1080, 256, 256, 2)
TRAIN labels shape (1080, 256, 256)
TEST data shape: (120, 256, 256, 2)
TEST labels shape: (120, 256, 256)

After conversion, X_test and X_data have float32 values between 0 and 255, and y_test and y_data are boolean matrices.

Thanks for your help.


stefat77 avatar stefat77 commented on July 17, 2024

I found the problem... the generator made with SimpleDataProvider creates a lot of blank images 0.o''

generator = image_util.SimpleDataProvider(X_data, y_data, channels=2, n_class=2)
s, t = generator(1000)

Almost every channel in s is a zero image, and almost every t image is composed of a full True matrix and a full False matrix.

EDIT:
The test_generator
test_generator = image_util.SimpleDataProvider(X_test, y_test, channels=2, n_class=2)
works properly and doesn't create blank images...
The only difference is the size of the two datasets:
test-->120 imgs
train-->over 1500 imgs
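A quick sanity check for this kind of symptom (a hypothetical helper, not part of tf_unet) is to count how many sampled images and labels carry no information at all:

```python
import numpy as np

# Count blank samples: images whose maximum is 0, and labels that
# contain only a single value (all True or all False).
def count_blank(images, labels):
    blank_imgs = sum(1 for img in images if np.amax(img) == 0)
    blank_lbls = sum(1 for lbl in labels if np.unique(lbl).size == 1)
    return blank_imgs, blank_lbls

s = np.zeros((3, 8, 8, 2), dtype=np.float32)  # stand-in generator output
t = np.zeros((3, 8, 8, 2), dtype=bool)
print(count_blank(s, t))  # (3, 3)
```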


jakeret avatar jakeret commented on July 17, 2024

From this I somehow conclude that the code is working fine and that there is some difference between the test and the train data (structure).


stefat77 avatar stefat77 commented on July 17, 2024

You are right! 👍 It's my fault.
I made a mistake when I created the dataset.


jakeret avatar jakeret commented on July 17, 2024

No worries. I'm closing this. Feel free to reopen if anything pops up again.

