
gatedconvolution's People

Contributors

avalonstrel


gatedconvolution's Issues

About irrmask_flist.txt

Thank you for your work!
Could you tell me what the irrmask flist file referenced in inpaint.yml looks like? How should I generate it?
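For reference, the other DATA_FLIST entries in the config are plain text files with one image path per line, so irrmask_flist.txt is presumably the same thing for the irregular-mask images. A minimal sketch of generating such a file follows; the mask directory and file extension are assumptions, not the repository's documented layout.

```python
# Hypothetical generator for irrmask_flist.txt, assuming the flist format is
# simply one mask-image path per line, like the other DATA_FLIST entries.
import glob

mask_dir = '../Data/MaskData'                        # assumed location of the mask images
mask_paths = sorted(glob.glob(mask_dir + '/*.png'))  # assumed file extension

with open(mask_dir + '/irrmask_flist.txt', 'w') as f:
    f.write('\n'.join(mask_paths) + '\n')
```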

bad result of the training

First of all, thanks for sharing the code.

I have trained for a while, reaching around 10+ epochs, and the results do not look good.
I am not sure whether I made a mistake in the configuration or elsewhere.
The results look very strange and do not seem to converge in the long run.

Any hints or suggestions would be much appreciated.
Thanks in advance.

(screenshot of the training results attached in the original issue)

TypeError: conv2d() got an unexpected keyword argument 'dilations'

When I run python3 train.py, the following error occurs:
[2019-01-15 07:49:19 @init.py:79] Set root logger. Unset logger with neuralgym.unset_logger().
[2019-01-15 07:49:19 @init.py:80] Saving logging to file: neuralgym_logs/20190115074918983527.
[2019-01-15 07:49:19 @config.py:92] ---------------------------------- APP CONFIG ----------------------------------
[2019-01-15 07:49:19 @config.py:119] WGAN_GP_LAMBDA: 10
[2019-01-15 07:49:19 @config.py:119] RANDOM_CROP: False
[2019-01-15 07:49:19 @config.py:119] FEATURE_LOSS: False
[2019-01-15 07:49:19 @config.py:119] VAL: True
[2019-01-15 07:49:19 @config.py:119] L1_LOSS: True
[2019-01-15 07:49:19 @config.py:119] MASKFROMFILE: False
[2019-01-15 07:49:19 @config.py:119] LOG_DIR: places2_256
[2019-01-15 07:49:19 @config.py:119] MASKDATASET: irrmask
[2019-01-15 07:49:19 @config.py:119] SPATIAL_DISCOUNTING_GAMMA: 0.9
[2019-01-15 07:49:19 @config.py:119] GAN_WITH_GUIDE: False
[2019-01-15 07:49:19 @config.py:119] DATASET: places2
[2019-01-15 07:49:19 @config.py:119] FEATURE_LOSS_ALPHA: 0.01
[2019-01-15 07:49:19 @config.py:119] DISCOUNTED_MASK: True
[2019-01-15 07:49:19 @config.py:119] HORIZONTAL_MARGIN: 0
[2019-01-15 07:49:19 @config.py:119] GAN: sn_pgan
[2019-01-15 07:49:19 @config.py:119] MODEL_RESTORE:
[2019-01-15 07:49:19 @config.py:119] GRAMS_LOSS_ALPHA: 50
[2019-01-15 07:49:19 @config.py:119] TRAIN_SPE: 10000
[2019-01-15 07:49:19 @config.py:119] AE_LOSS_ALPHA: 1.2
[2019-01-15 07:49:19 @config.py:119] WIDTH: 128
[2019-01-15 07:49:19 @config.py:119] VIZ_MAX_OUT: 10
[2019-01-15 07:49:19 @config.py:119] GRADIENT_CLIP: False
[2019-01-15 07:49:19 @config.py:119] MAXBRUSHWIDTH: 10
[2019-01-15 07:49:19 @config.py:119] GPU_ID: 3
[2019-01-15 07:49:19 @config.py:119] LOAD_VGG_MODEL: False
[2019-01-15 07:49:19 @config.py:119] MAXLENGTH: 40
[2019-01-15 07:49:19 @config.py:119] NUM_GPUS: 1
[2019-01-15 07:49:19 @config.py:119] TV_LOSS_ALPHA: 0.0
[2019-01-15 07:49:19 @config.py:119] PADDING: SAME
[2019-01-15 07:49:19 @config.py:119] MAX_ITERS: 1000000
[2019-01-15 07:49:19 @config.py:119] MAX_DELTA_WIDTH: 32
[2019-01-15 07:49:19 @config.py:119] GLOBAL_DCGAN_LOSS_ALPHA: 1.0
[2019-01-15 07:49:19 @config.py:119] COARSE_L1_ALPHA: 1.2
[2019-01-15 07:49:19 @config.py:119] VAL_PSTEPS: 1000
[2019-01-15 07:49:19 @config.py:119] GAN_LOSS_ALPHA: 0.001
[2019-01-15 07:49:19 @config.py:111] DATA_FLIST:
[2019-01-15 07:49:19 @config.py:119] celebahq: ['data/celeba_hq/train_shuffled.flist', 'data/celeba_hq/validation_static_view.flist']
[2019-01-15 07:49:19 @config.py:119] horse_mask: ['/unsullied/sharefs/linhangyu/Inpainting/Data/VOCData/voc_horse_bbox_train_flist.txt', '/unsullied/sharefs/linhangyu/Inpainting/Data/VOCData/voc_horse_bbox_val_flist.txt']
[2019-01-15 07:49:19 @config.py:119] horse: ['/unsullied/sharefs/linhangyu/Inpainting/Data/VOCData/voc_horse_train_flist.txt', '/unsullied/sharefs/linhangyu/Inpainting/Data/VOCData/voc_horse_val_flist.txt']
[2019-01-15 07:49:19 @config.py:119] celeba: ['data/celeba/train_shuffled.flist', 'data/celeba/validation_static_view.flist']
[2019-01-15 07:49:19 @config.py:119] places2: ['/data/data_256/place_train.list', '/data/data_256/place_val.flist']
[2019-01-15 07:49:19 @config.py:119] imagenet: ['data/imagenet/train_shuffled.flist', 'data/imagenet/validation_static_view.flist']
[2019-01-15 07:49:19 @config.py:119] irrmask: ['../Data/MaskData/irrmask_flist.txt', '../Data/MaskData/irrmask_flist.txt']
[2019-01-15 07:49:19 @config.py:119] RANDOM_SEED: False
[2019-01-15 07:49:19 @config.py:119] MAX_DELTA_HEIGHT: 32
[2019-01-15 07:49:19 @config.py:119] BATCH_SIZE: 16
[2019-01-15 07:49:19 @config.py:119] GRAMS_LOSS: False
[2019-01-15 07:49:19 @config.py:119] PRETRAIN_COARSE_NETWORK: False
[2019-01-15 07:49:19 @config.py:119] L1_LOSS_ALPHA: 1.2
[2019-01-15 07:49:19 @config.py:119] VERTICAL_MARGIN: 0
[2019-01-15 07:49:19 @config.py:119] HEIGHT: 128
[2019-01-15 07:49:19 @config.py:119] AE_LOSS: True
[2019-01-15 07:49:19 @config.py:119] TV_LOSS: False
[2019-01-15 07:49:19 @config.py:119] GAN_WITH_MASK: True
[2019-01-15 07:49:19 @config.py:119] IMG_SHAPES: [256, 256, 3]
[2019-01-15 07:49:19 @config.py:119] GRADIENT_CLIP_VALUE: 0.1
[2019-01-15 07:49:19 @config.py:119] STATIC_VIEW_SIZE: 30
[2019-01-15 07:49:19 @config.py:119] MAXVERTEX: 5
[2019-01-15 07:49:19 @config.py:119] VGG_MODEL_FILE: data/model_zoo/vgg16.npz
[2019-01-15 07:49:19 @config.py:119] GLOBAL_WGAN_LOSS_ALPHA: 1.0
[2019-01-15 07:49:19 @config.py:119] GRADS_SUMMARY: False
[2019-01-15 07:49:19 @config.py:119] MAXANGLE: 4.0
[2019-01-15 07:49:19 @config.py:94] --------------------------------------------------------------------------------
[2019-01-15 07:49:19 @gpus.py:20] Set env: CUDA_VISIBLE_DEVICES=[3].
[2019-01-15 07:49:20 @dataset.py:26] --------------------------------- Dataset Info ---------------------------------
[2019-01-15 07:49:20 @dataset.py:36] nthreads: 8
[2019-01-15 07:49:20 @dataset.py:36] file_length: 1599999
[2019-01-15 07:49:20 @dataset.py:36] enqueue_size: 32
[2019-01-15 07:49:20 @dataset.py:36] filetype: image
[2019-01-15 07:49:20 @dataset.py:36] random_crop: False
[2019-01-15 07:49:20 @dataset.py:36] fn_preprocess: None
[2019-01-15 07:49:20 @dataset.py:36] return_fnames: False
[2019-01-15 07:49:20 @dataset.py:36] queue_size: 256
[2019-01-15 07:49:20 @dataset.py:36] dtypes: [tf.float32]
[2019-01-15 07:49:20 @dataset.py:36] random: False
[2019-01-15 07:49:20 @dataset.py:36] index: 0
[2019-01-15 07:49:20 @dataset.py:36] shapes: [[256, 256, 3]]
[2019-01-15 07:49:20 @dataset.py:36] batch_phs: [<tf.Tensor 'Placeholder:0' shape=(?, 256, 256, 3) dtype=float32>]
[2019-01-15 07:49:20 @dataset.py:37] --------------------------------------------------------------------------------
[2019-01-15 07:49:24 @inpaint_model_gc.py:167] Set batch_predicted to x2.
Traceback (most recent call last):
File "train.py", line 79, in
images, masks, guides, config=config)
File "/code/GatedConvolution-master/inpaint_model_gc.py", line 219, in build_graph_with_losses
pos_neg = self.build_sn_pgan_discriminator(batch_pos_neg, training=training, reuse=reuse)
File "/code/GatedConvolution-master/inpaint_model_gc.py", line 134, in build_sn_pgan_discriminator
x = gen_snconv(x, cnum, 5, 2, name='conv1', training=training)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "/code/GatedConvolution-master/inpaint_ops.py", line 105, in gen_snconv
strides=[1, stride, stride, 1], dilations=[1, rate, rate, 1], padding=padding, name=name)
TypeError: conv2d() got an unexpected keyword argument 'dilations'

The TensorFlow version is 1.4.0.
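The dilations keyword for tf.nn.conv2d does not exist in TensorFlow 1.4; it was only added in a later 1.x release, which is why the call in inpaint_ops.py fails here. Besides upgrading TensorFlow, one possible workaround (a sketch under that assumption, not the repository's official fix) is to fall back to tf.nn.atrous_conv2d for the dilated layers:

```python
# Sketch of a TF-1.4-compatible replacement for the conv2d call in gen_snconv.
# Assumes the dilated layers are used with stride 1, which is what
# tf.nn.atrous_conv2d requires.
import tensorflow as tf

def conv2d_compat(x, w, stride, rate, padding='SAME', name=None):
    """2-D convolution with optional dilation that also runs on TF 1.4."""
    if rate > 1:
        # atrous_conv2d implements dilated convolution without relying on the
        # `dilations` keyword that TF 1.4's tf.nn.conv2d lacks.
        return tf.nn.atrous_conv2d(x, w, rate=rate, padding=padding, name=name)
    return tf.nn.conv2d(x, w, strides=[1, stride, stride, 1],
                        padding=padding, name=name)
```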

How to train the User-Guided model?

Dear author, I am very interested in the user-guided image inpainting part of your work. However, I found that if I train directly, no edge image is used. How should I train such a model, and which part of the code should I change?
Another question: is your HED a pre-trained model? Could you tell me where in the code to modify the HED model that has been converted to TensorFlow?
Looking forward to your reply.
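Purely as an illustration of the idea, not this repository's actual pipeline: in the user-guided setting described in the DeepFill v2 paper, the generator input is the masked image concatenated with the mask and an edge/sketch channel (for example a binarized HED edge map). A hypothetical sketch, with all names and shapes assumed:

```python
# Hypothetical illustration: building a 5-channel generator input from the
# masked image, the mask, and an edge-guide channel. Not the repository's code.
import tensorflow as tf

image = tf.placeholder(tf.float32, [None, 256, 256, 3])
mask  = tf.placeholder(tf.float32, [None, 256, 256, 1])   # 1 inside the holes
edge  = tf.placeholder(tf.float32, [None, 256, 256, 1])   # e.g. binarized HED output

incomplete = image * (1.0 - mask)                  # zero out the hole region
x = tf.concat([incomplete, mask, edge], axis=3)    # 5-channel generator input
```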

About inpaint.yml

Dear author, I am confused about two parameters in inpaint.yml:
TRAIN_SPE: 10000
MAX_ITERS: 1000000
When I set these two parameters to:
TRAIN_SPE: 3000
MAX_ITERS: 1000
training stopped after four iterations, and the results were also very bad.
What do these two parameters mean, and how should I determine the number of training iterations?
Thank you for your patience.
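For what it is worth, in neuralgym-style trainers TRAIN_SPE is typically the number of training steps counted as one "epoch" (used for logging and checkpointing) and MAX_ITERS is the total number of optimizer steps, so the epoch count is roughly their ratio. This reading is an assumption based on the library's conventions rather than a confirmed answer:

```python
# Illustrative arithmetic only, assuming the usual neuralgym meaning:
# TRAIN_SPE = steps per "epoch" (logging/checkpoint interval),
# MAX_ITERS = total optimizer steps for the whole run.
TRAIN_SPE = 10000
MAX_ITERS = 1000000

epochs = MAX_ITERS // TRAIN_SPE
print('total steps:', MAX_ITERS, '-> roughly', epochs, 'epochs of', TRAIN_SPE, 'steps')
```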
