segcaps's Issues

RGB images with binary masks

Hi @Cheng-Lin-Li, thanks for your great effort.

I have a question about using a custom dataset.
Is it possible to train SegCapsR3 with 3-channel RGB images and a 1-channel grayscale mask?

UNet works with the same configuration (3-channel input + 1-channel mask), but SegCapsR3 doesn't.

SegCapsR3 fails with: ValueError: Error when checking target: expected out_recon to have shape (224, 224, 1) but got array with shape (224, 224, 3)

Here is the network summary. I haven't modified the network; I'm at the very beginning of this road and couldn't find a solution.
Regards,

Layer (type)                         Output Shape              Param #   Connected to             


input_1 (InputLayer)                 (None, 224, 224, 3)       0                                  
conv1 (Conv2D)                       (None, 224, 224, 16)      1216      input_1[0][0]            
reshape_1 (Reshape)                  (None, 224, 224, 1, 16)   0         conv1[0][0]              
primarycaps (ConvCapsuleLayer)       (None, 112, 112, 2, 16)   12832     reshape_1[0][0]          
conv_cap_2_1 (ConvCapsuleLayer)      (None, 112, 112, 4, 16)   25664     primarycaps[0][0]        
conv_cap_2_2 (ConvCapsuleLayer)      (None, 56, 56, 4, 32)     51328     conv_cap_2_1[0][0]       
conv_cap_3_1 (ConvCapsuleLayer)      (None, 56, 56, 8, 32)     205056    conv_cap_2_2[0][0]       
conv_cap_3_2 (ConvCapsuleLayer)      (None, 28, 28, 8, 64)     410112    conv_cap_3_1[0][0]       
conv_cap_4_1 (ConvCapsuleLayer)      (None, 28, 28, 8, 32)     409856    conv_cap_3_2[0][0]       
deconv_cap_1_1 (DeconvCapsuleLayer)  (None, 56, 56, 8, 32)     131328    conv_cap_4_1[0][0]       
up_1 (Concatenate)                   (None, 56, 56, 16, 32)    0         deconv_cap_1_1[0][0]     
                                                                         conv_cap_3_1[0][0]       
deconv_cap_1_2 (ConvCapsuleLayer)    (None, 56, 56, 4, 32)     102528    up_1[0][0]               
deconv_cap_2_1 (DeconvCapsuleLayer)  (None, 112, 112, 4, 16)   32832     deconv_cap_1_2[0][0]     
up_2 (Concatenate)                   (None, 112, 112, 8, 16)   0         deconv_cap_2_1[0][0]     
                                                                         conv_cap_2_1[0][0]       
deconv_cap_2_2 (ConvCapsuleLayer)    (None, 112, 112, 4, 16)   25664     up_2[0][0]               
deconv_cap_3_1 (DeconvCapsuleLayer)  (None, 224, 224, 2, 16)   8224      deconv_cap_2_2[0][0]     
up_3 (Concatenate)                   (None, 224, 224, 3, 16)   0         deconv_cap_3_1[0][0]     
                                                                         reshape_1[0][0]          
seg_caps (ConvCapsuleLayer)          (None, 224, 224, 1, 16)   272       up_3[0][0]               
input_2 (InputLayer)                 (None, 224, 224, 1)       0                                  
mask_1 (Mask)                        (None, 224, 224, 1, 16)   0         seg_caps[0][0]           
                                                                         input_2[0][0]            
reshape_2 (Reshape)                  (None, 224, 224, 16)      0         mask_1[0][0]             
recon_1 (Conv2D)                     (None, 224, 224, 64)      1088      reshape_2[0][0]          
recon_2 (Conv2D)                     (None, 224, 224, 128)     8320      recon_1[0][0]            
out_seg (Length)                     (None, 224, 224, 1)       0         seg_caps[0][0]           
out_recon (Conv2D)                   (None, 224, 224, 1)       129       recon_2[0][0]            

expected out_recon shape (512, 512, 1)

I'm training CapsNetR3 on my own dataset and get the following error:

Error when checking target: expected out_recon to have shape (512, 512, 1) but got array with shape (512, 512, 3)

My input shape is (512, 512, 3). Any clue or hint is highly appreciated.
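One hedged reading of both reports above: out_recon is a Conv2D with a single filter, so the decoder reconstructs a 1-channel image, while the training target is the 3-channel input itself. A minimal sketch of the shape rule Keras enforces (helper names here are mine, not the repo's):

```python
# Hypothetical sketch: the reconstruction head's channel count must match
# the number of input channels, or Keras rejects the target array.
def recon_output_shape(input_shape):
    """Shape the out_recon layer should produce: same height, width,
    and channel count as the network input."""
    h, w, channels = input_shape
    return (h, w, channels)

def target_matches(model_out_shape, target_shape):
    """Mimics Keras' target check: every dimension must agree."""
    return model_out_shape == target_shape

# With a 1-filter out_recon, an RGB target is rejected:
assert not target_matches((224, 224, 1), (224, 224, 3))
# Building out_recon with filters equal to the input channels fixes it:
assert target_matches(recon_output_shape((224, 224, 3)), (224, 224, 3))
```

Under that assumption, the fix would be to give the final reconstruction Conv2D as many filters as the input has channels (3 for RGB), so reconstruction output and reconstruction target agree.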

Final output and raw output are always the same, no matter what input images are given

@Cheng-Lin-Li Thank you for your nice work!

I have one confusing question. I can successfully run the code (both train and test on segcapsr3) using both MSCOCO17 and my own grayscale images. However, the final output and raw output images (stored in ../SegCaps/data/results/segcapsr3/split_0) are always the same, no matter which input images are given.

Any suggestions will be appreciated. The following is my testing command:

python3 ./main.py --test --Kfold 2 --net segcapsr3 --data_root_dir=data --loglevel 2 --which_gpus=-2 --gpus=0 --dataset mscoco17 --weights_path saved_models/segcapsr3/split-0_batch-1_shuff-1_aug-1_loss-dice_slic-1_sub--1_strid-1_lr-0.1_recon-131.072_model_20190918-151252.hdf5

OOM error when training on my own dataset

Hello, thanks for the code and documentation; they are easy to understand.

When I train the model on my own dataset (16 2D grayscale 512*512 PNG images, with black-and-white masks), it always fails with tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[16,1,4,512,512,2] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

I have 2 GTX 1080 GPUs with 8 GB dedicated memory each, and the options I use for training are:
python ./main.py --train --split_num=0 --batch_size=2 --aug_data=1 --loglevel=2 --net segcapsr3 --data_root_dir=data --which_gpus=-1 --gpus=2 --loss bce --dataset mscoco17

In addition, is there any way to produce test outputs in black-and-white instead of yellow-and-purple?
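For a sense of scale, the single tensor named in the OOM message is already large. Rough arithmetic, assuming float32 (4 bytes per element):

```python
from functools import reduce
import operator

def tensor_bytes(shape, dtype_bytes=4):
    """Bytes needed for one dense tensor of the given shape (float32 assumed)."""
    return reduce(operator.mul, shape, 1) * dtype_bytes

oom_shape = (16, 1, 4, 512, 512, 2)        # shape from the error message above
mib = tensor_bytes(oom_shape) / 2**20
print(mib)  # 128.0 MiB for this one intermediate tensor
```

Dynamic routing materializes several intermediates of this size per capsule layer, so an 8 GB card fills up quickly at 512x512 resolution; reducing the image size or batch size shrinks these tensors proportionally.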

Instruction for running the code from .jpg files

Hello, I have put my .jpg images and masks under the imgs and masks folders respectively. After running the code I get the error shown in the following screenshot. How should I run your code? My images are grayscale, and the OS is Ubuntu 16.04.
[Screenshot from 2019-07-04 18-05-42]

input size

Hello,
my data is 3D volumes with dimensions of (217, 181, 50). In the readme, you mentioned that the code can handle the input size itself, but I get this error:
"ValueError: could not broadcast input array from shape (217,181,50) into shape (512,512,1)".
Another issue: when trying to load my own data, load_2d_data is invoked, although my data is 3D.
Is there anything that I should take care of?
Thank you
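For what it's worth, the broadcast error suggests the 2D loader is copying the whole volume into a single (512, 512, 1) buffer. A hedged NumPy sketch (my own helper, not the repo's loader) of pre-splitting a (217, 181, 50) volume into zero-padded 2D slices of that size:

```python
import numpy as np

def volume_to_padded_slices(vol, out_hw=(512, 512)):
    """Split a (H, W, D) volume into D zero-padded (out_h, out_w, 1) slices."""
    h, w, depth = vol.shape
    out_h, out_w = out_hw
    assert h <= out_h and w <= out_w, "volume larger than target; crop or resize first"
    slices = np.zeros((depth, out_h, out_w, 1), dtype=vol.dtype)
    slices[:, :h, :w, 0] = np.moveaxis(vol, 2, 0)   # depth becomes the slice axis
    return slices

vol = np.random.rand(217, 181, 50).astype(np.float32)
slices = volume_to_padded_slices(vol)
print(slices.shape)  # (50, 512, 512, 1)
```

Each slice then matches the (512, 512, 1) input shape the 2D pipeline expects.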

reconstruction layer

Hi Cheng

I looked at your code and description. I just want to know why we need the reconstruction layer. What if we just use the original images and pixel-wise true labels as input and output?
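For context, in the SegCaps paper the reconstruction branch acts as a regularizer: the decoder must rebuild the input from the segmentation capsule vectors (masked to the target class), which pushes those vectors to retain pixel-level information rather than only what minimizes the segmentation loss. A toy NumPy sketch of such a masked reconstruction term (illustrative only; the helper name and weight value are mine, not the repo's exact loss):

```python
import numpy as np

def masked_recon_loss(image, recon, mask, weight=0.1):
    """Mean squared error between input and reconstruction, restricted to
    the foreground mask and down-weighted so the segmentation loss
    dominates. The weight value here is illustrative, not the repo's."""
    diff = (image - recon) * mask
    return weight * np.mean(diff ** 2)

img = np.ones((4, 4))
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0
assert masked_recon_loss(img, img, mask) == 0.0              # perfect reconstruction
assert masked_recon_loss(img, np.zeros_like(img), mask) > 0  # penalized
```

Dropping the branch is possible (training on images and labels alone), but then nothing encourages the capsule vectors to encode the input, and the paper reports the reconstruction term as part of its training objective.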

Best

Training image and mask error

Thank you for your implementation of SegCaps.
I train on my own dataset and get the error "Unable to load img or masks for 00001, too many indices for array, Skipping file".
Loading the MSCOCO dataset with 10 images works.
My images are 2D uint8 grayscale of (512,512), and the masks are the same 2D uint8 grayscale of (512,512). I checked the issues; someone said masks should be (512,512,1). Do my images and masks fit the input format? If not, what should I do?
Any advice would be appreciated.
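"too many indices for array" typically means code is indexing a channel axis that a plain (512, 512) mask doesn't have. A hedged NumPy guard (helper name is mine) that normalizes masks to (512, 512, 1):

```python
import numpy as np

def ensure_channel_axis(mask):
    """Give a 2D (H, W) mask the trailing channel axis, (H, W, 1), that
    channel-indexing code expects; leave already-3D masks unchanged."""
    if mask.ndim == 2:
        mask = mask[..., np.newaxis]
    return mask

m2 = np.zeros((512, 512), dtype=np.uint8)
assert ensure_channel_axis(m2).shape == (512, 512, 1)
m3 = np.zeros((512, 512, 1), dtype=np.uint8)
assert ensure_channel_axis(m3).shape == (512, 512, 1)
```

Running masks through such a normalization before saving (or inside the loader) would make (512, 512) and (512, 512, 1) inputs equivalent.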

Can not load images and masks

Unable to load img or masks for 43
index 3 is out of bounds for axis 2 with size 3
Skipping file

INFO 2019-07-23 21:14:04,744:
path_to_np=data/np_files/404.npz
INFO 2019-07-23 21:14:04,744:
Pre-made numpy array not found for 404.
Creating now...
DEBUG 2019-07-23 21:14:04,744: STREAM b'IHDR' 16 13
DEBUG 2019-07-23 21:14:04,744: STREAM b'pHYs' 41 9
DEBUG 2019-07-23 21:14:04,744: STREAM b'IDAT' 62 8192

This error keeps happening over and over and the training doesn't start; please help me. The output after the dotted line is another message that repeats for all of the images and masks, and yes, I do have them in the data/imgs and data/masks folders.
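"index 3 is out of bounds for axis 2 with size 3" suggests the loader reads a 4th channel (alpha?) from a 3-channel PNG; that reading is an assumption on my part. A hedged NumPy guard that pads or trims the channel axis before anything indexes it:

```python
import numpy as np

def with_n_channels(img, n=4):
    """Pad (with zeros) or trim the channel axis so img has exactly n
    channels; guards against indexing a channel the image lacks."""
    h, w, c = img.shape
    if c >= n:
        return img[:, :, :n]
    pad = np.zeros((h, w, n - c), dtype=img.dtype)
    return np.concatenate([img, pad], axis=2)

rgb = np.zeros((8, 8, 3), dtype=np.uint8)
assert with_n_channels(rgb).shape == (8, 8, 4)      # zero alpha channel added
assert with_n_channels(rgb, n=3).shape == (8, 8, 3) # already conforming
```

Alternatively, re-saving the PNGs with the channel count the loader expects avoids the guard entirely.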

bad mask

I ran the following command:
python3 ./main.py --test --Kfold 2 --net segcapsr3 --data_root_dir=data --loglevel 2 --which_gpus=-2 --gpus=0 --dataset mscoco17 --weights_path saved_models/segcapsr3/split-0_batch-1_shuff-1_aug-0_loss-dice_slic-1_sub--1_strid-1_lr-0.0001_recon-20.0_model_20180705-092846.hdf5
I attached the image, the reference mask, and the output mask (data/results/segcapsr3/split0/final_output):
[attached: train2, train2, train2_final_output]
What am I doing wrong ?

Creating MSCOCO dataset

How did you create the label vector (y) when using the model on the MSCOCO dataset? For each input image, MSCOCO has multiple output categories. How do you create the dataset in this case (i.e., the input vector x and the label vector y)?
Any help would be appreciated. Thank You.
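One common recipe for binary segmentation from MSCOCO (not necessarily the one used here) is to rasterize every annotation of the chosen category and merge them into a single 0/1 mask per image, so y has the same spatial shape as x with one foreground channel. A NumPy sketch of the merge step, assuming the per-annotation masks were already rasterized (e.g. with pycocotools' annToMask):

```python
import numpy as np

def union_mask(ann_masks):
    """Merge per-annotation binary masks into one 0/1 label image:
    a pixel is foreground if any annotation covers it."""
    stacked = np.stack(ann_masks, axis=0)
    return (stacked.max(axis=0) > 0).astype(np.uint8)

a = np.zeros((4, 4), np.uint8); a[:2, :2] = 1   # two toy annotations
b = np.zeros((4, 4), np.uint8); b[2:, 2:] = 1
y = union_mask([a, b])
assert y.sum() == 8 and y.max() == 1
```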

TypeError: deconv_length() missing 1 required positional argument: 'output_padding'

Traceback (most recent call last):
File "./main.py", line 281, in
main(arguments)
File "./main.py", line 95, in main
model_list = create_model(args=args, input_shape=net_input_shape, enable_decoder=True)
File "/home/nd/capsnet/SegCaps-master (copy)/utils/model_helper.py", line 29, in create_model
model_list = CapsNetR3(input_shape, args.num_class, enable_decoder)
File "/home/nd/capsnet/SegCaps-master (copy)/segcapsnet/capsnet.py", line 55, in CapsNetR3
name='deconv_cap_1_1')(conv_cap_4_1)
File "/home/nd/anaconda3/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in call
output = self.call(inputs, **kwargs)
File "/home/nd/capsnet/SegCaps-master (copy)/segcapsnet/capsule_layers.py", line 250, in call
out_height = deconv_length(self.input_height, self.scaling, self.kernel_size, self.padding)
TypeError: deconv_length() missing 1 required positional argument: 'output_padding'
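This is a Keras version mismatch: Keras 2.2.x gave deconv_length a required fifth output_padding argument, which the repo's four-argument call in capsule_layers.py doesn't supply. One hedged fix is to pass output_padding=None at the call site. For the configuration SegCaps actually uses (padding='same', no output padding), the formula reduces to dim_size * stride, as this standalone sketch shows (note that newer Keras renames the function to deconv_output_length with a different argument order, so check the signature before substituting):

```python
def deconv_length_same(dim_size, stride_size, kernel_size, padding,
                       output_padding=None):
    """Upsampled output length, mirroring Keras 2.2.x deconv_length for
    padding='same' with output_padding=None: dim_size * stride_size."""
    assert padding == 'same' and output_padding is None
    return dim_size * stride_size

# deconv_cap_1_1 upsamples the 28x28 grid with scaling=2:
assert deconv_length_same(28, 2, 4, 'same') == 56
```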

ValueError: No gradients provided for any variable

On running the code on Google Colab I am getting the following error; please help:

ValueError: in user code:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
    return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function  **
    outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
    return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step  **
    outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:757 train_step
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:498 minimize
    return self.apply_gradients(grads_and_vars, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:598 apply_gradients
    grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/utils.py:79 filter_empty_gradients
    ([v.name for _, v in grads_and_vars],))

ValueError: No gradients provided for any variable: ['conv1/kernel:0', 'conv1/bias:0', 'primarycaps/W:0', 'primarycaps/b:0', 'seg_caps/W:0', 'seg_caps/b:0', 'recon_1/kernel:0', 'recon_1/bias:0', 'recon_2/kernel:0', 'recon_2/bias:0', 'out_recon/kernel:0', 'out_recon/bias:0']

Training Performance Does Not Improve

Is this training result reasonable, and should I proceed to the end of the epochs?
It looks like dice_hard does not improve and the optimizer has reached a local minimum.

I use the MRI dataset from ISLES 2017 and have adjusted the data-loading process to not use K-fold.

Epoch 1/50
369/369 [==============================] - 1370s 4s/step - loss: 1.4192 - out_seg_loss: 1.2236 - out_recon_loss: 0.1956 - out_seg_dice_hard: 0.0746 - val_loss: 1.1187 - val_out_seg_loss: 1.0050 - val_out_recon_loss: 0.1137 - val_out_seg_dice_hard: 0.0304
Epoch 00001: val_out_seg_dice_hard improved from -inf to 0.03035, saving model to [my folder]

Epoch 2/50
369/369 [==============================] - 1371s 4s/step - loss: 1.1169 - out_seg_loss: 1.0563 - out_recon_loss: 0.0605 - out_seg_dice_hard: 0.0995 - val_loss: 1.0128 - val_out_seg_loss: 1.0010 - val_out_recon_loss: 0.0118 - val_out_seg_dice_hard: 0.0522
Epoch 00002: val_out_seg_dice_hard improved from 0.03035 to 0.05218, saving model to [my folder]

Epoch 3/50
369/369 [==============================] - 1365s 4s/step - loss: 1.0673 - out_seg_loss: 1.0443 - out_recon_loss: 0.0229 - out_seg_dice_hard: 0.1156 - val_loss: 1.0043 - val_out_seg_loss: 0.9994 - val_out_recon_loss: 0.0049 - val_out_seg_dice_hard: 0.0485
Epoch 00003: val_out_seg_dice_hard did not improve from 0.05218

Epoch 4/50
369/369 [==============================] - 1365s 4s/step - loss: 1.0133 - out_seg_loss: 0.9917 - out_recon_loss: 0.0216 - out_seg_dice_hard: 0.0873 - val_loss: 0.9998 - val_out_seg_loss: 0.9957 - val_out_recon_loss: 0.0041 - val_out_seg_dice_hard: 9.4607e-09
Epoch 00004: val_out_seg_dice_hard did not improve from 0.05218

Epoch 5/50
369/369 [==============================] - 1370s 4s/step - loss: 1.0076 - out_seg_loss: 0.9868 - out_recon_loss: 0.0207 - out_seg_dice_hard: 0.0623 - val_loss: 0.9991 - val_out_seg_loss: 0.9952 - val_out_recon_loss: 0.0039 - val_out_seg_dice_hard: 9.4697e-09
.....

Epoch 14/50
369/369 [==============================] - 1373s 4s/step - loss: 1.0047 - out_seg_loss: 0.9830 - out_recon_loss: 0.0217 - out_seg_dice_hard: 0.0644 - val_loss: 0.9982 - val_out_seg_loss: 0.9945 - val_out_recon_loss: 0.0038 - val_out_seg_dice_hard: 9.9502e-09
Epoch 00014: val_out_seg_dice_hard did not improve from 0.05218

Value Error

Hi everyone,

When I run the code, I get the error below. Could you please help me with it?
[error screenshot]

ValueError: Dimension 0 in both shapes must be equal, but are 3 and 16.

I am trying to use the Jupyter notebook 20180701-SegCapsR3-image-segmentation-with Color image input.ipynb, but while loading the model it throws an error:

ValueError: Dimension 0 in both shapes must be equal, but are 3 and 16. Shapes are [3,3,1,32] and [16,1,5,5]. for 'Assign' (op: 'Assign') with input shapes: [3,3,1,32], [16,1,5,5].

Any idea why it's not able to load the model?

Iteration Time Issue

Hi Cheng-Lin-Li,

I hope this message finds you well. I've been using your GitHub project and encountered an issue with iteration time. Specifically, I selected 10 photos of people for processing, but the ETA for one epoch is more than 80 hours. I wanted to reach out and confirm whether this duration is expected, or whether there might be some issue on my end.

I appreciate your time and assistance in resolving this matter. If you need any additional information or logs from my side, please let me know.

Thank you,
AiBing
path_to_np=data/np_files/train9.npz
1/10000 [..............................] - ETA: 89:09:12 - loss: 0.8261 - out_seg_loss: 0.5970 - out_recon_loss: 0.2291 - out_seg_dice_hard: 0.2155INFO 2023-12-24 13:36:55,324:
path_to_np=data/np_files/train4.npz
2/10000 [..............................] - ETA: 79:53:11 - loss: 0.7541 - out_seg_loss: 0.5197 - out_recon_loss: 0.2344 - out_seg_dice_hard: 0.2829INFO 2023-12-24 13:37:20,756:
path_to_np=data/np_files/train4.npz
3/10000 [..............................] - ETA: 77:02:40 - loss: 0.8524 - out_seg_loss: 0.6285 - out_recon_loss: 0.2239 - out_seg_dice_hard: 0.1892INFO 2023-12-24 13:37:46,455:
path_to_np=data/np_files/train9.npz
4/10000 [..............................] - ETA: 75:12:46 - loss: 0.9051 - out_seg_loss: 0.6828 - out_recon_loss: 0.2223 - out_seg_dice_hard: 0.1423INFO 2023-12-24 13:38:11,586:
path_to_np=data/np_files/train6.npz
5/10000 [..............................] - ETA: 74:24:26 - loss: 0.8601 - out_seg_loss: 0.6315 - out_recon_loss: 0.2285 - out_seg_dice_hard: 0.1833INFO 2023-12-24 13:38:37,227:
path_to_np=data/np_files/train8.npz
6/10000 [..............................] - ETA: 73:38:04 - loss: 0.7891 - out_seg_loss: 0.5567 - out_recon_loss: 0.2325 - out_seg_dice_hard: 0.3061INFO 2023-12-24 13:39:02,370:
path_to_np=data/np_files/train3.npz
7/10000 [..............................] - ETA: 73:05:20 - loss: 0.7929 - out_seg_loss: 0.5625 - out_recon_loss: 0.2304 - out_seg_dice_hard: 0.2835INFO 2023-12-24 13:39:27,537:
path_to_np=data/np_files/train7.npz
8/10000 [..............................] - ETA: 72:38:26 - loss: 0.7882 - out_seg_loss: 0.5554 - out_recon_loss: 0.2328 - out_seg_dice_hard: 0.2500INFO 2023-12-24 13:39:52,595:
path_to_np=data/np_files/train6.npz
9/10000 [..............................] - ETA: 72:22:19 - loss: 0.7823 - out_seg_loss: 0.5525 - out_recon_loss: 0.2298 - out_seg_dice_hard: 0.2298INFO 2023-12-24 13:40:17,919:
path_to_np=data/np_files/train9.npz
10/10000 [..............................] - ETA: 72:15:30 - loss: 0.7474 - out_seg_loss: 0.5154 - out_recon_loss: 0.2319 - out_seg_dice_hard: 0.2990INFO 2023-12-24 13:40:43,619:
path_to_np=data/np_files/train7.npz
11/10000 [..............................] - ETA: 72:02:35 - loss: 0.7496 - out_seg_loss: 0.5167 - out_recon_loss: 0.2329 - out_seg_dice_hard: 0.3056INFO 2023-12-24 13:41:08,827:
path_to_np=data/np_files/train8.npz
12/10000 [..............................] - ETA: 71:50:43 - loss: 0.7657 - out_seg_loss: 0.5331 - out_recon_loss: 0.2326 - out_seg_dice_hard: 0.2858INFO 2023-12-24 13:41:33,973:
path_to_np=data/np_files/train3.npz

Multiclass Segmentation

Hi Cheng-Lin-Li,

I found your modification of this repo really good, especially the adaptations for training on the MS COCO dataset. Nevertheless, did you also run it on images with more than 2 classes, beyond binary segmentation?

Thanks man!
Cheers,
Oushesh

test.py

script file: test.py --> function threshold_mask

Why did you use the threshold_otsu filter? During segmentation, normally you are not supposed to use any manually defined filter, since it makes the network lose valuable information.
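For context: Otsu's method is not a hand-tuned constant. It derives the threshold from the prediction's own histogram by maximizing between-class variance, and here it is applied to the network's output probabilities at test time, not inside the network. A minimal NumPy version (illustrative; the repo's test.py relies on an existing threshold_otsu implementation):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Data-driven threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                      # weight of class below threshold
    w1 = w0[-1] - w0                          # weight of class above threshold
    m0 = np.cumsum(hist * centers)
    mu0 = np.divide(m0, w0, out=np.zeros_like(m0), where=w0 > 0)
    mu1 = np.divide(m0[-1] - m0, w1, out=np.zeros_like(m0), where=w1 > 0)
    between = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
    return centers[np.argmax(between)]

# Two well-separated clusters: the threshold lands between them.
x = np.concatenate([np.full(100, 0.1), np.full(100, 0.9)])
t = otsu_threshold(x)
assert 0.1 < t < 0.9
```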

Problem with deconv_length from keras.utils.conv_utils

I'm trying to evaluate the code on segmentation of my 2D CT images. However, there is a problem loading deconv_length from keras.utils.conv_utils, or even from tensorflow.keras.utils. I googled the problem and replaced "deconv_length" with "deconv_output_length". Then an error occurs on the following line:

deconv_cap_1_1 = DeconvCapsuleLayer(kernel_size=4, num_capsule=8, num_atoms=32, upsamp_type='deconv', scaling=2, routings=3, padding='same', name='deconv_cap_1_1')(conv_cap_4_1)

Exception has occurred: AssertionError
Exception encountered when calling layer "deconv_cap_1_1" (type DeconvCapsuleLayer).

in user code:

File "d:\My_Codes\New_Cheng\segcapsnet\capsule_layers.py", line 250, in call  *
    out_height = deconv_output_length(self.input_height, self.scaling, self.kernel_size, self.padding)
File "C:\Users\ASUS\anaconda3\lib\site-packages\keras\utils\conv_utils.py", line 177, in deconv_output_length  **
    assert padding in {'same', 'valid', 'full'}

AssertionError: 

Call arguments received:
• input_tensor=tf.Tensor(shape=(None, 64, 64, 8, 32), dtype=float32)
• training=None
File "C:\Users\ASUS\AppData\Local\Temp\_autograph_generated_filekx3mg4bh.py", line 63, in tf__call
    ag__.if_stmt(ag__.ld(self).upsamp_type == 'resize', if_body_1, else_body_1, get_state_1, set_state_1, ('outputs',), 1)
File "C:\Users\ASUS\AppData\Local\Temp\_autograph_generated_filekx3mg4bh.py", line 55, in else_body_1
    ag__.if_stmt(ag__.ld(self).upsamp_type == 'subpix', if_body, else_body, get_state, set_state, ('outputs',), 1)
File "C:\Users\ASUS\AppData\Local\Temp\_autograph_generated_filekx3mg4bh.py", line 45, in else_body
    out_height = ag__.converted_call(ag__.ld(deconv_output_length), (ag__.ld(self).input_height, ag__.ld(self).scaling, ag__.ld(self).kernel_size, ag__.ld(self).padding), None, fscope)

During handling of the above exception, another exception occurred:

File "D:\My_Codes\New_Cheng\segcapsnet\capsnet.py", line 53, in CapsNetR3
deconv_cap_1_1 = DeconvCapsuleLayer(kernel_size=4, num_capsule=8, num_atoms=32, upsamp_type='deconv',
File "D:\My_Codes\New_Cheng\utils\model_helper.py", line 29, in create_model
model_list = CapsNetR3(input_shape, args.num_class, enable_decoder)
File "D:\My_Codes\New_Cheng\main.py", line 98, in main
model_list = create_model(args=args, input_shape=net_input_shape, enable_decoder=True)
File "D:\My_Codes\New_Cheng\main.py", line 284, in
main(arguments)

What does num_atoms mean?

What does num_atoms mean?
Does this mean the number of feature maps in each capsule?

Also, can I put another image instead of a mask file?
Must the mask values be 0 and 1?
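On the first question: in this implementation a capsule layer's output is laid out as (batch, height, width, num_capsule, num_atoms), so num_atoms is the length of each capsule's feature vector. It plays roughly the role that channels/feature maps play in a scalar CNN, but grouped into one vector per capsule type. A NumPy shape demo matching conv_cap_3_2's (None, 28, 28, 8, 64) entry from the model summary shown in the first issue above:

```python
import numpy as np

# Layout used throughout the model summary:
# (batch, height, width, num_capsule, num_atoms).
batch, h, w, num_capsule, num_atoms = 1, 28, 28, 8, 64
caps = np.random.rand(batch, h, w, num_capsule, num_atoms)

# The Length layer turns each capsule vector into a presence score
# by taking its L2 norm over the atoms axis.
presence = np.linalg.norm(caps, axis=-1)
assert presence.shape == (1, 28, 28, 8)
```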

Why transformation matrix shared by different child capsule types?

@Cheng-Lin-Li Thanks for sharing this wonderful work!

I have a question about the transformation matrix. The paper mentions that a different transformation matrix is used for each type of capsule. Does that mean both child and parent capsule types, or only parent capsule types? It seems to me that the transformation matrices are shared between different child capsule types. If so, why use multiple child capsule types, and what is the meaning of applying the same transformation matrix to different child capsule types?

I'd appreciate it a lot if you could help me!
Thank you!

dependencies version

Hi Cheng Lin Li,

Could you list all the dependency versions in requirements.txt as well?
I can't import print_summary from keras.utils, so I really need to know the Keras version that you used.

Thank you,
Ian

Training on own dataset

Hello,
First, thank you for your implementation of SegCaps.
I'm trying to train on a small dataset consisting of 100 pairs of grayscale images and 4-level ground-truth ("mask") images:

[pair-grey_mask image]
The original dataset is available.

100 pairs of images were copied into SegCaps/data/:

  • from: ...data/imgs/train0000000.png and ...data/masks/train0000000.png

  • to : ...data/imgs/train0000099.png and ...data/masks/train0000099.png

Then I tried to train SegCaps as follows. The given arguments are wrong; how could I modify them to train SegCaps on those images?

Thank you.

(DeepFish) jeanpat@Dell-T5500:~/Developpement/SegCaps$ python3 ./main.py --test --Kfold 2 --net segcapsr3 --data_root_dir=data --loglevel 2 --which_gpus=-2 --gpus=0 --dataset mscoco17 --weights_path data/saved_models/segcapsr3/overlap_test.hdf5
Using TensorFlow backend.
INFO 2018-08-25 17:55:49,226: 
No existing training, validate, test files...System will generate it.
basename=['train0000050.png']
basename=['train0000051.png']
basename=['train0000052.png']
(… 96 similar basename lines elided: train0000053.png through train0000099.png, then train0000000.png through train0000048.png …)
basename=['train0000049.png']
INFO 2018-08-25 17:55:49,245: 
Read image files...data/imgs/train0000077.png
Traceback (most recent call last):
  File "./main.py", line 276, in <module>
    main(arguments)
  File "./main.py", line 94, in main
    model_list = create_model(args=args, input_shape=net_input_shape, enable_decoder=True)
  File "/home/jeanpat/Developpement/SegCaps/utils/model_helper.py", line 29, in create_model
    model_list = CapsNetR3(input_shape, args.num_class, enable_decoder)
  File "/home/jeanpat/Developpement/SegCaps/segcapsnet/capsnet.py", line 55, in CapsNetR3
    name='deconv_cap_1_1')(conv_cap_4_1)
  File "/home/jeanpat/anaconda3/envs/DeepFish/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "/home/jeanpat/Developpement/SegCaps/segcapsnet/capsule_layers.py", line 250, in call
    out_height = deconv_length(self.input_height, self.scaling, self.kernel_size, self.padding)
TypeError: deconv_length() missing 1 required positional argument: 'output_padding'

PS:
The GPU is a GTX 960 with 4 GB of memory.

$ nvidia-smi 
Sun Aug 26 10:00:11 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.51                 Driver Version: 396.51                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 960     Off  | 00000000:03:00.0  On |                  N/A |
| 39%   29C    P8    12W / 130W |    312MiB /  4042MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
