lwchen6309 / unsupervised-image-segmentation-by-wnet-with-normalizedcut
A TensorFlow implementation of WNet for unsupervised image segmentation on the PASCAL VOC 2012 dataset
License: MIT License
Pattern ./data/VOCdevkit/VOC2012/SegmentationClass/2011_005667.png
Annotation file not found for 2010_001669 - Skipping
Pattern ./data/VOCdevkit/VOC2012/SegmentationClass/2010_001669.png
Number of files: 2913
Pickling ...
Initializing Batch Dataset Reader...
Traceback (most recent call last):
File "./unsupervised-image-segmentation-by-WNet-with-NormalizedCut/src/WNet_naive.py", line 194, in
train_dataset_reader, validation_dataset_reader = create_BatchDatset()
File "/content/unsupervised-image-segmentation-by-WNet-with-NormalizedCut/src/data_io/BatchDatsetReader_VOC.py", line 109, in create_BatchDatset
train_dataset = BatchDatset(data_record['training'], True)
File "/content/unsupervised-image-segmentation-by-WNet-with-NormalizedCut/src/data_io/BatchDatsetReader_VOC.py", line 127, in init
self.read_data_to_self(data_records)
File "/content/unsupervised-image-segmentation-by-WNet-with-NormalizedCut/src/data_io/BatchDatsetReader_VOC.py", line 135, in read_data_to_self
[resize_size, resize_size], interp='bilinear') for datum in data_records])
File "/content/unsupervised-image-segmentation-by-WNet-with-NormalizedCut/src/data_io/BatchDatsetReader_VOC.py", line 135, in
[resize_size, resize_size], interp='bilinear') for datum in data_records])
AttributeError: module 'scipy.misc' has no attribute 'imresize'
This function no longer exists in recent SciPy; I had to install scipy==1.0.0, and then training worked.
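As an alternative to pinning scipy==1.0.0, a drop-in stand-in can be written with Pillow, since `scipy.misc.imresize` was removed in SciPy 1.3. A minimal sketch, assuming uint8 image arrays and `size = [height, width]` as in the VOC reader:

```python
import numpy as np
from PIL import Image

def imresize(arr, size, interp='bilinear'):
    # Minimal stand-in for scipy.misc.imresize (removed in SciPy 1.3).
    # Assumes a uint8 HxW or HxWxC array and size = [height, width].
    resample = {'nearest': Image.NEAREST,
                'bilinear': Image.BILINEAR,
                'bicubic': Image.BICUBIC}[interp]
    img = Image.fromarray(np.asarray(arr).astype(np.uint8))
    # PIL expects (width, height), so the size pair is swapped here.
    return np.array(img.resize((size[1], size[0]), resample))
```

This skips the mode/scale handling of the original function, but covers the `interp='bilinear'` call in BatchDatsetReader_VOC.py.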
File "D:\deep learning codes\unsupervised-image-segmentation-by-WNet-with-NormalizedCut\src\UNet.py", line 179, in plot_segmentation_under_test_dir
colorized_test_preds = utils.batch_colorize_ndarray(test_preds,0, self.flags.num_class, self.flags.cmap)[:,:,:,:3]
IndexError: too many indices for array
Hi,
I have made a lot of changes so that this code is compatible with the latest TensorFlow version, but I am stuck at this:
Use `tf.cast` instead.
Traceback (most recent call last):
File "WNet_bright.py", line 186, in <module>
net = Wnet_bright(flags)
File "WNet_bright.py", line 87, in __init__
batch_soft_ncut = soft_ncut(self.image, image_segment, image_weights)
File "C:\IU\Spring 2020\CSCI-B 657 Computer Vision\Project\unsupervised\wnet\unsupervised-image-segmentation-by-WNet-with-NormalizedCut\src\soft_ncut.py", line 515, in soft_ncut
W_Ak = sparse_tensor_dense_tensordot(image_weights, image_segment, axes=2) #axes = [[2],[2]]
File "C:\IU\Spring 2020\CSCI-B 657 Computer Vision\Project\unsupervised\wnet\unsupervised-image-segmentation-by-WNet-with-NormalizedCut\src\soft_ncut.py", line 238, in sparse_tensor_dense_tensordot
with tf.name_scope(name, "SparseTensorDenseTensordot", [sp_a, b, axes]) as name:
TypeError: __init__() takes 2 positional arguments but 4 were given
Can you please help me solve this? Or provide a version that works with TensorFlow 2?
Thank you.
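For reference, the TypeError above comes from the old TF 1.x three-argument form `tf.name_scope(name, default_name, values)`, which TF 2.x removed; the 2.x version takes a single name string. A minimal sketch of the change (the op body here is illustrative, not the repo's tensordot):

```python
import tensorflow as tf

def scoped_add(a, b, name=None):
    # TF 1.x: with tf.name_scope(name, "SparseTensorDenseTensordot", [a, b]):
    # TF 2.x: tf.name_scope accepts only a single name string.
    with tf.name_scope(name or "SparseTensorDenseTensordot"):
        return tf.add(a, b)
```

Applying the same one-line change in sparse_tensor_dense_tensordot should get past this particular error, though other TF 1.x APIs in the file may need similar updates.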
Say this model has been trained for 500 epochs; can I resume training by loading the trained model?
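Checkpoint resuming is standard in TensorFlow. The repo itself uses TF 1.x `tf.train.Saver` with a logs directory, but here is a minimal TF 2.x sketch of the same idea (the `./logs` path and `step` variable are illustrative assumptions):

```python
import tensorflow as tf

step = tf.Variable(0, dtype=tf.int64)
ckpt = tf.train.Checkpoint(step=step)
manager = tf.train.CheckpointManager(ckpt, "./logs", max_to_keep=3)

# Restore the latest checkpoint if one exists, otherwise start fresh.
ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
    print("Resumed from", manager.latest_checkpoint)
else:
    print("Training from scratch")
```

In the TF 1.x code, the equivalent is `saver.restore(sess, ckpt.model_checkpoint_path)` after `tf.train.get_checkpoint_state(logs_dir)` finds a checkpoint, which matches the "Model restored..." message seen in the logs below.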
Here is the image of the errors I'm getting:
The first time I ran the code, it listed each of the data files and said they were not found. Now it just says "ValueError: need at least one array to stack", which also appeared before, right after the messages that the data files were missing.
I'm pretty new to using Python in Windows with Anaconda, so any ideas/help you can give are appreciated!
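The "need at least one array to stack" error usually means the file-listing step matched nothing, so an empty list reaches `np.stack`. A small guard (a hypothetical helper, not code from the repo) surfaces the real cause instead of the confusing NumPy error:

```python
import numpy as np

def stack_or_fail(images, what="images"):
    # np.stack raises an opaque ValueError on an empty list; fail
    # earlier with a message pointing at the data directory instead.
    if not images:
        raise FileNotFoundError(
            f"No {what} found - check the ./data/VOCdevkit/VOC2012 path.")
    return np.stack(images)
```

With a guard like this in read_data_to_self, a missing or mis-extracted VOC dataset fails with a clear message instead of the stack error.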
Initializing VOC2012 Batch Dataset Reader...
Found pickle file!
Initializing Batch Dataset Reader...
Initializing Batch Dataset Reader...
2019-08-27 22:44:45.364464: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.7
Traceback (most recent call last):
File "./unsupervised-image-segmentation-by-WNet-with-NormalizedCut/src/WNet_naive.py", line 200, in
valid_images, preds = net.visaulize_pred(validation_dataset_reader)
File "./unsupervised-image-segmentation-by-WNet-with-NormalizedCut/src/WNet_naive.py", line 135, in visaulize_pred
utils.save_image(valid_images[itr].astype(np.uint8), self.flags.logs_dir, name="inp_" + str(5+itr),mean=1)
TypeError: save_image() got an unexpected keyword argument 'mean'
In the file soft_ncut.py there is an error that says "the module 'TensorflowUtils' has no attribute 'weight_variable'".
Causing the problem is this line of code:
kernels = tf.cast(utils.weight_variable([3, 3, 3, NUM_OF_CLASSES], name="weight"), tf.float32)
Could you please let me know how I should define weight_variable?
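For reference, FCN-style TensorFlow utility modules typically define weight_variable as a truncated-normal-initialized variable. A plausible sketch under that assumption (the 0.02 stddev is a common default, not necessarily the repo's original value):

```python
import tensorflow as tf

def weight_variable(shape, stddev=0.02, name=None):
    # Truncated-normal initialization, as in common FCN/TensorflowUtils code.
    initial = tf.random.truncated_normal(shape, stddev=stddev)
    return tf.Variable(initial, name=name)
```

This matches the call site `utils.weight_variable([3, 3, 3, NUM_OF_CLASSES], name="weight")` above.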
File "D:\deep learning codes\unsupervised-image-segmentation-by-WNet-with-NormalizedCut\src\soft_ncut.py", line 456, in convert_to_batchTensor
new_indeces = tf.concat([tile_batch,tile_indeces],axis=1)
NameError: name 'tile_indeces' is not defined
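Without the original definition one can only guess, but the surrounding `tf.concat([tile_batch, tile_indeces], axis=1)` suggests tile_indeces should be the sparse matrix's (row, col) index pairs tiled once per batch element, paired with a batch-index column. A NumPy sketch of that construction (function and variable names are hypothetical):

```python
import numpy as np

def batch_sparse_indices(indices, batch_size):
    # indices: (nnz, 2) row/col pairs of a single sparse matrix.
    nnz = indices.shape[0]
    tile_indeces = np.tile(indices, (batch_size, 1))              # (B*nnz, 2)
    tile_batch = np.repeat(np.arange(batch_size), nnz)[:, None]   # (B*nnz, 1)
    # Mirrors: new_indeces = tf.concat([tile_batch, tile_indeces], axis=1)
    return np.concatenate([tile_batch, tile_indeces], axis=1)     # (B*nnz, 3)
```

The TF equivalent would use `tf.tile` and `tf.range`/`tf.repeat` to build the same (batch, row, col) index triples for a batched SparseTensor.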
In line 20 of TensorflowUtils.py, the method unprocess_image() is invoked, but there is no definition of this function in the code. Please add one.
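In FCN-style codebases, process_image typically subtracts a dataset mean pixel before the network and unprocess_image adds it back for visualization. A plausible definition under that assumption (not the repo's original code):

```python
import numpy as np

def unprocess_image(image, mean_pixel):
    # Assumed inverse of process_image(image, mean) = image - mean:
    # add the dataset mean back before saving or visualizing.
    return np.asarray(image) + mean_pixel
```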
Hello,
I have tried using the WNet code, but each time I try to train it, it says:
Initializing Batch Dataset Reader...
Traceback (most recent call last):
File "src\UNet.py", line 355, in
train_dataset_reader, validation_dataset_reader = create_BatchDatset()
File "D:\alex\Documents\Workspace py\seg\unsupervised-image-segmentation-by-WNet-with-NormalizedCut-master\src\data_io\BatchDatsetReader_VOC.py", line 109, in create_BatchDatset
train_dataset = BatchDatset(data_record['training'], True)
File "D:\alex\Documents\Workspace py\seg\unsupervised-image-segmentation-by-WNet-with-NormalizedCut-master\src\data_io\BatchDatsetReader_VOC.py", line 127, in init
self.read_data_to_self(data_records)
File "D:\alex\Documents\Workspace py\seg\unsupervised-image-segmentation-by-WNet-with-NormalizedCut-master\src\data_io\BatchDatsetReader_VOC.py", line 135, in read_data_to_self
[resize_size, resize_size], interp='bilinear') for datum in data_records])
File "D:\Programmes\Anaconda\lib\site-packages\numpy\core\shape_base.py", line 349, in stack
raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack
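A quick sanity check before training: count the files the dataset reader will actually see. Zero matches means the VOC archive is not extracted where the code expects it, which is exactly what produces the empty-stack error above. A hypothetical helper (paths are assumptions based on the logs):

```python
import glob
import os

def count_voc_images(root="./data"):
    # os.path.join uses the platform's separator, so the same check
    # works unchanged on both Windows and Linux.
    pattern = os.path.join(root, "VOCdevkit", "VOC2012", "JPEGImages", "*.jpg")
    return len(glob.glob(pattern))
```

Run it from the same working directory you launch training from; a relative `./data` path resolves against the current directory, which is a common source of "No files found" on Windows.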
Hi!
I was wondering whether you could upload the pre-trained weights so that we can get started with making improvements to the model from the get-go instead of training from scratch. Thanks!
Use standard file APIs to check for files with this prefix.
Model restored...
No files found
Traceback (most recent call last):
File "./unsupervised-image-segmentation-by-WNet-with-NormalizedCut/src/WNet_naive.py", line 190, in
test_images, preds = net.plot_segmentation_under_test_dir()
File "/content/unsupervised-image-segmentation-by-WNet-with-NormalizedCut/src/UNet.py", line 190, in plot_segmentation_under_test_dir
return test_images, test_preds
UnboundLocalError: local variable 'test_images' referenced before assignment
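The UnboundLocalError follows directly from the "No files found" message above it: test_images is only assigned inside the loop over test files, so with an empty test directory the return statement sees an unbound name. The usual fix is to bind the result names before the loop, sketched here with hypothetical names:

```python
def collect_predictions(paths, predict):
    # Bind the results before the loop so an empty test directory
    # returns empty lists instead of raising UnboundLocalError.
    test_images, test_preds = [], []
    for p in paths:
        test_images.append(p)
        test_preds.append(predict(p))
    return test_images, test_preds
```

In plot_segmentation_under_test_dir, the same pre-loop initialization (or an explicit error when no files match) would make the failure mode obvious.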