AdaptationSeg's People

Contributors

yangzhang4065

AdaptationSeg's Issues

Class-wise evaluation IoU

Hi @YangZhang4065,
Thank you so much for your study. I could train and evaluate your code, but I couldn't obtain class-wise evaluation IoUs with test_FCN_DA.py. How can I get these metrics? Thank you.
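
(Not from the repo, but for reference: class-wise IoUs are usually read off a confusion matrix accumulated over all test images. A minimal sketch, where pred and gt are hypothetical flattened integer label arrays and num_classes is assumed to match the repo's class count:)

import numpy as np

def classwise_iou(pred, gt, num_classes):
    # Accumulate a confusion matrix over flattened prediction/label arrays.
    valid = (gt >= 0) & (gt < num_classes)  # skip void/ignore labels
    conf = np.bincount(num_classes * gt[valid] + pred[valid],
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(conf)  # true positives per class
    # IoU = TP / (TP + FP + FN) = diag / (row sum + column sum - diag)
    return tp.astype(np.float64) / (conf.sum(1) + conf.sum(0) - tp + 1e-10)

(The mean IoU reported by city_meanIU should then be the average of these per-class values over the evaluated classes.)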

Inconsistent Label Resizing during evaluation

Hi, Nice work! I have a query regarding your evaluation.

In your paper you have mentioned that: " Since we have to resize the images before feeding them to the segmentation network, we resize the output segmentation mask back to the original image size before running the evaluation against the groundtruth annotations. "

However, in warp_data.py, which is called by your evaluation code, it seems that the label is also resized to (320, 640).

Can you please clarify this inconsistency for me? Thanks!
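
(Not an official answer, but the paper's protocol can be matched by resizing the predicted mask back to the ground-truth resolution with nearest-neighbor interpolation, so class indices stay intact, instead of shrinking the label. A minimal sketch with hypothetical names:)

import numpy as np
from PIL import Image

def resize_pred_to_gt(pred_mask, gt_height, gt_width):
    # pred_mask: (H, W) integer label map at network resolution, e.g. (320, 640).
    # PIL's resize takes (width, height); NEAREST avoids interpolated label values.
    im = Image.fromarray(pred_mask.astype(np.uint8))
    return np.array(im.resize((gt_width, gt_height), Image.NEAREST))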

nan-loss after iteration 9.

train_val_FCN_DA.py:155: RuntimeWarning: divide by zero encountered in divide
SP_weight=avg_pixel_number/SP_pixelperSP_num
1/4543 [..............................] - ETA: 7:04:32 - loss: 1.4686 - output_loss: 1.249
2/4543 [..............................] - ETA: 5:56:23 - loss: 1.2655 - output_loss: 1.076
3/4543 [..............................] - ETA: 6:04:32 - loss: 1.2219 - output_loss: 1.064
4/4543 [..............................] - ETA: 7:24:51 - loss: 1.0931 - output_loss: 0.937
5/4543 [..............................] - ETA: 8:40:20 - loss: 1.1262 - output_loss: 0.974
6/4543 [..............................] - ETA: 10:07:33 - loss: 1.0875 - output_loss: 0.94
7/4543 [..............................] - ETA: 11:10:59 - loss: 1.0446 - output_loss: 0.90
8/4543 [..............................] - ETA: 11:22:12 - loss: 0.9914 - output_loss: 0.85
9/4543 [..............................] - ETA: 11:30:05 - loss: 0.9571 - output_loss: 0.82
10/4543 [..............................] - ETA: 11:22:53 - loss: 0.9300 - output_loss: 0.80
11/4543 [..............................] - ETA: 11:24:29 - loss: nan - output_loss: nan - o
12/4543 [..............................] - ETA: 11:17:30 - loss: nan - output_loss: nan - o

This happened after I manually modified the output_shape arguments (otherwise the code hits a dimension-mismatch problem):
out = Lambda(lambda x: x + 0., name='output', output_shape=(class_num + 1, nb_rows, nb_cols))(output)
out_2 = Lambda(lambda x: x + 0., name='output_2', output_shape=(class_num, 1, 1))(output)
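
(A guess at a fix for the divide-by-zero, not the author's solution: if some superpixels contain zero pixels after resizing, SP_pixelperSP_num has zeros, SP_weight becomes inf, and the loss can turn nan a few iterations later. A minimal guard around train_val_FCN_DA.py:155, reusing the script's own variables:)

import numpy as np

# Clamp per-superpixel pixel counts to at least 1 so empty superpixels get a
# finite weight, then zero those weights out entirely.
SP_weight = avg_pixel_number / np.maximum(SP_pixelperSP_num, 1)
SP_weight = np.where(SP_pixelperSP_num > 0, SP_weight, 0.)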

Guidance for downloading leftImg8bit_trainvaltest.zip and leftImg8bit_trainextra.zip

Hi Yang Zhang,

How are you doing?

I am wondering if the link in the following instruction is supposed to point to another web page? Currently it points to the Pillow installation page.

1, Download leftImg8bit_trainvaltest.zip and leftImg8bit_trainextra.zip in CityScape dataset here. (Require registration)

I am new to Pillow, so I don't know if it is possible to download the dataset with it.

Thank you,
Heng

What's the superpixel loss?

Thanks for your remarkable work! But I have a question:
What's the superpixel loss? Does it mean the segmentation loss on the target image, or the label distributions over local landmark superpixels? And how are the label distributions over local landmark superpixels produced?
Looking forward to your answer! Thanks!
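
(One plausible reading, hedged since only the authors can confirm the exact formulation: the superpixel loss matches the network's average prediction inside each landmark superpixel against a target label distribution inferred for that superpixel, e.g. with cross-entropy. A minimal numpy sketch under that assumption:)

import numpy as np

def superpixel_loss(prob_map, sp_mask, target_dist):
    # prob_map: (C, H, W) softmax output; sp_mask: (H, W) boolean mask of one
    # landmark superpixel; target_dist: (C,) inferred label distribution for it.
    avg_pred = prob_map[:, sp_mask].mean(axis=1)  # network's distribution inside the superpixel
    return -np.sum(target_dist * np.log(avg_pred + 1e-10))  # cross-entropy to the target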

I got an error when running train_val_FCN_DA.py

I set the data path folder and the .h5 file.
When I ran the train_val_FCN_DA.py code, I got this error.

Using Theano backend.
/home/tf/anaconda2/lib/python2.7/site-packages/keras/layers/core.py:633: UserWarning: output_shape argument not specified for layer output and cannot be automatically inferred with the Theano backend. Defaulting to output shape (None, 22, 320, 640) (same as input shape). If the expected output shape is different, specify it via the output_shape argument.
.format(self.name, input_shape))
/home/tf/anaconda2/lib/python2.7/site-packages/keras/layers/core.py:633: UserWarning: output_shape argument not specified for layer output_2 and cannot be automatically inferred with the Theano backend. Defaulting to output shape (None, 22, 320, 640) (same as input shape). If the expected output shape is different, specify it via the output_shape argument.
.format(self.name, input_shape))
Start loading files
Start training
Epoch 1/60
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/tf/anaconda2/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/home/tf/anaconda2/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/tf/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 612, in data_generator_task
generator_output = next(self._generator)
File "train_val_FCN_DA.py", line 144, in myGenerator
tar_idx=sample(range(len(cityscape_im_generator)),target_batch_size)
File "/home/tf/anaconda2/lib/python2.7/random.py", line 323, in sample
raise ValueError("sample larger than population")
ValueError: sample larger than population

Traceback (most recent call last):
File "train_val_FCN_DA.py", line 203, in
seg_model.fit_generator(myGenerator(),callbacks=[Validate_on_CityScape()], steps_per_epoch=steps_per_epoch, epochs=60)
File "/home/tf/anaconda2/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "/home/tf/anaconda2/lib/python2.7/site-packages/keras/engine/training.py", line 1877, in fit_generator
str(generator_output))
ValueError: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None
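
(Diagnosis, for anyone hitting the same thing: random.sample raises "sample larger than population" when target_batch_size exceeds the number of loaded CityScapes images, which usually means the image list came up empty because of a wrong data path. A hedged sanity check / clamp inside myGenerator(), using the script's own names, rather than the intended fix:)

from random import sample

assert len(cityscape_im_generator) > 0, 'no CityScapes images loaded -- check the data path'
tar_idx = sample(range(len(cityscape_im_generator)),
                 min(target_batch_size, len(cityscape_im_generator)))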

Logistic regression model

Hello @YangZhang4065,
Thanks for the nice work. I'm trying to train the model that estimates the global label distribution. Can you give more details of the architecture and training scheme?
Thanks!
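
(Not the author's code, but a common setup for this: a single softmax layer trained with cross-entropy against the soft ground-truth distributions; Keras's categorical_crossentropy accepts soft targets. A minimal sketch where feature_dim is an assumption and class_num follows the 22 classes seen elsewhere in this repo:)

from keras.models import Sequential
from keras.layers import Dense

# Multinomial logistic regression: image feature vector -> label distribution.
feature_dim, class_num = 4096, 22  # feature_dim is a placeholder
model = Sequential([Dense(class_num, activation='softmax', input_shape=(feature_dim,))])
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(features, label_distributions)  # targets: per-image histograms summing to 1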

Pretrained model?

Do you have a pretrained model for the final version of your model ((CC+I+SP) in the TPAMI paper), or for any of the variants?
Thank you.

ImportError: No module named city_meanIU

Hello YangZhang,
When I ran the train_val_FCN_DA.py code, I got this error:
gnss@gnss:/media/gnss/文档/mask/AdaptationSeg-master$ python train_val_FCN_DA.py
Using Theano backend.
Traceback (most recent call last):
File "train_val_FCN_DA.py", line 14, in
from city_meanIU import city_meanIU
ImportError: No module named city_meanIU

Get stuck before training

Hi,

I tried to run the code, but the training does not proceed even for one iteration. Actually, it does nothing after printing:

Start loading files
Start training
Epoch 1/100

It just sits there consuming memory.
Any idea what might cause this?

Thanks.

About Superpixel Landmarks

Hi Yang,

The code of your paper is elegant and easy to reproduce. But where can I find the code to train the multi-class SVM that yields the superpixel annotations of the target domain? Thanks!
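
(Not from this repo, but a minimal scikit-learn sketch of how such a multi-class SVM could be trained, assuming hypothetical arrays sp_features (one descriptor per superpixel) and sp_labels (dominant class of each superpixel) were already extracted:)

from sklearn.svm import LinearSVC

# One-vs-rest linear SVM over superpixel descriptors (e.g. color/texture features);
# labels are the dominant semantic class inside each source-domain superpixel.
svm = LinearSVC(C=1.0)
svm.fit(sp_features, sp_labels)
sp_pred = svm.predict(sp_features_target)  # predicted class per target-domain superpixel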

`image_mean` type problem

Hi, thanks for your great work! It's really impressive that you transfer segmentation from the virtual world to real-world scenes. But when I run your code, I have a small question about your image_mean type. It seems that you load images in RGB (https://github.com/YangZhang4065/AdaptationSeg/blob/master/warp_data.py#L10L25) and subtract the image mean value [103.939, 116.779, 123.68] from them (https://github.com/YangZhang4065/AdaptationSeg/blob/master/train_val_FCN_DA.py#L134).

However, the VGG network uses the mean value [103.939, 116.779, 123.68] in BGR order, not RGB (https://gist.github.com/ksimonyan/211839e770f7b538e2d8#description). I just wondered why you subtract a BGR image mean from RGB images. (Maybe your SYNTHIA images are saved in BGR order before being loaded into the model?)
I would be very grateful if you could resolve my doubt.
Thank you so much. :)
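
(If the images really are loaded as RGB, standard VGG preprocessing would flip them to BGR before subtracting that mean. A minimal sketch for a channels-first batch; the array name is hypothetical:)

import numpy as np

# batch: (N, 3, H, W) float array loaded in RGB order.
vgg_mean_bgr = np.array([103.939, 116.779, 123.68], dtype=np.float32)
batch_bgr = batch[:, ::-1, :, :].copy()        # flip RGB -> BGR along the channel axis
batch_bgr -= vgg_mean_bgr.reshape(1, 3, 1, 1)  # subtract the per-channel BGR mean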

SYNTHIA_FCN.h5 hdf5 file not found...

Dear Sir/Madam,

I was executing your test_FCN_DA.py. At line 30, I hit an exception at seg_model.load_weights('SYNTHIA_FCN.h5') because I don't know where to get SYNTHIA_FCN.h5.
Kindly suggest where to find it.

expected input_1 to have shape (None, 3, 320, 640) but got array with shape (1, 3, 1, 0)

I manually checked the sizes using print loaded_im.shape, loaded_label.shape, loaded_target_obj_pre.shape, and they look fine. I have no idea why it crashed. Any idea?
File "train_val_FCN_DA.py", line 207, in
seg_model.fit_generator(myGenerator(),callbacks=[Validate_on_CityScape()], steps_per_epoch=steps_per_epoch, epochs=60)
File "build/bdist.linux-x86_64/egg/keras/legacy/interfaces.py", line 87, in wrapper
File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 2110, in fit_generator
File "build/bdist.linux-x86_64/egg/keras/callbacks.py", line 85, in on_batch_begin
File "train_val_FCN_DA.py", line 193, in on_batch_begin
current_predicted_val=self.model.predict(loaded_val_im,batch_size=batch_size)
File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1765, in predict
File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 153, in _standardize_input_data
ValueError: Error when checking : expected input_1 to have shape (None, 3, 320, 640) but got array with shape (1, 3, 1, 0)
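
(The trailing 0 in (1, 3, 1, 0) suggests the validation images were never actually loaded, i.e. an empty array built from an empty file list. A hedged sanity check before the predict call, using the script's variable name:)

# In on_batch_begin, before self.model.predict(...):
assert loaded_val_im.ndim == 4 and loaded_val_im.shape[1:] == (3, 320, 640), \
    'validation images not loaded correctly: got shape %s' % (loaded_val_im.shape,)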

Need some help.

I'm sorry, but I just can't find where to download the dataset. Please give me some instructions.

Problem while calling create_vgg16_FCN in FCN_da.py

Hi @YangZhang4065 ,
I'm using create_vgg16_FCN to create the FCN model, but on calling the function I'm facing the following error:

Negative dimension size caused by subtracting 2 from 1 for '{{node max_pooling2d_21/MaxPool}} = MaxPoolT=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1], padding="VALID", strides=[1, 2, 2, 1]' with input shapes: [?,1,160,128].

I used padding='same' on the max-pooling layers in the create_vgg16_FCN function, and then the above error does not occur, but while loading the pretrained weights from Synthia_FCA I face an incompatible-dimensions issue.

Kindly let me know if I'm missing something.
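
(A hedged guess: the data_format="NHWC" in the error means Keras is running on TensorFlow with channels-last, while this repo's model and the SYNTHIA weights assume the Theano-style channels-first layout, so the input axes are read in the wrong order. Instead of changing the padding, switching the data format may fix both the pooling error and the weight-loading mismatch:)

from keras import backend as K

# Use the channels-first layout the model was written for (equivalently, set
# "image_data_format": "channels_first" in ~/.keras/keras.json).
K.set_image_data_format('channels_first')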

About color-constancy

Thanks for your great help. I wonder how to obtain the gamut-based color constancy method used in the paper. I have searched for a long time, but I cannot find it. Could you share the code for the color constancy step?
