
pix2pix-tensorflow

Based on pix2pix by Isola et al.

Article about this implementation

Interactive Demo

Tensorflow implementation of pix2pix. Learns a mapping from input images to output images, like the examples from the original paper.

This port is based directly on the torch implementation, and not on an existing Tensorflow implementation. It is meant to be a faithful implementation of the original work and so does not add anything. The processing speed on a GPU with cuDNN was equivalent to the Torch implementation in testing.

Setup

Prerequisites

  • Tensorflow 1.4.1

Recommended

  • Linux with Tensorflow GPU edition + cuDNN

Getting Started

# clone this repo
git clone https://github.com/affinelayer/pix2pix-tensorflow.git
cd pix2pix-tensorflow
# download the CMP Facades dataset (generated from http://cmp.felk.cvut.cz/~tylecr1/facade/)
python tools/download-dataset.py facades
# train the model (this may take 1-8 hours depending on GPU, on CPU you will be waiting for a bit)
python pix2pix.py \
  --mode train \
  --output_dir facades_train \
  --max_epochs 200 \
  --input_dir facades/train \
  --which_direction BtoA
# test the model
python pix2pix.py \
  --mode test \
  --output_dir facades_test \
  --input_dir facades/val \
  --checkpoint facades_train

The test run will output an HTML file at facades_test/index.html that shows input/output/target image sets.

If you have Docker installed, you can use the provided Docker image to run pix2pix without having to install the correct version of Tensorflow yourself:

# train the model
python tools/dockrun.py python pix2pix.py \
      --mode train \
      --output_dir facades_train \
      --max_epochs 200 \
      --input_dir facades/train \
      --which_direction BtoA
# test the model
python tools/dockrun.py python pix2pix.py \
      --mode test \
      --output_dir facades_test \
      --input_dir facades/val \
      --checkpoint facades_train

Datasets and Trained Models

The data format used by this program is the same as the original pix2pix format: the input image and the desired output image are placed side by side in a single combined image.

Some datasets have been made available by the authors of the pix2pix paper. To download those datasets, use the included script tools/download-dataset.py. There are also links to pre-trained models alongside each dataset; note that these pre-trained models require the current version of pix2pix.py:

  • facades: python tools/download-dataset.py facades
    400 images from the CMP Facades dataset. (31MB)
    Pre-trained: BtoA
  • cityscapes: python tools/download-dataset.py cityscapes
    2975 images from the Cityscapes training set. (113MB)
    Pre-trained: AtoB, BtoA
  • maps: python tools/download-dataset.py maps
    1096 training images scraped from Google Maps. (246MB)
    Pre-trained: AtoB, BtoA
  • edges2shoes: python tools/download-dataset.py edges2shoes
    50k training images from the UT Zappos50K dataset. Edges are computed by the HED edge detector + post-processing. (2.2GB)
    Pre-trained: AtoB
  • edges2handbags: python tools/download-dataset.py edges2handbags
    137K Amazon handbag images from the iGAN project. Edges are computed by the HED edge detector + post-processing. (8.6GB)
    Pre-trained: AtoB

The facades dataset is the smallest and easiest to get started with.

Creating your own dataset

Example: creating images with blank centers for inpainting

# Resize source images
python tools/process.py \
  --input_dir photos/original \
  --operation resize \
  --output_dir photos/resized
# Create images with blank centers
python tools/process.py \
  --input_dir photos/resized \
  --operation blank \
  --output_dir photos/blank
# Combine resized images with blanked images
python tools/process.py \
  --input_dir photos/resized \
  --b_dir photos/blank \
  --operation combine \
  --output_dir photos/combined
# Split into train/val set
python tools/split.py \
  --dir photos/combined

The folder photos/combined will now have train and val subfolders that you can use for training and testing.

Creating image pairs from existing images

If you have two directories a and b with corresponding images (same name, same dimensions, different data), you can combine them with process.py:

python tools/process.py \
  --input_dir a \
  --b_dir b \
  --operation combine \
  --output_dir c

This puts the images in a side-by-side combined image that pix2pix.py expects.
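For reference, the combine operation is conceptually just horizontal concatenation. A minimal sketch with Pillow (directory names follow the example above; tools/process.py remains the supported way to do this):

import os
from PIL import Image

os.makedirs("c", exist_ok=True)
for name in os.listdir("a"):
    img_a = Image.open(os.path.join("a", name))
    img_b = Image.open(os.path.join("b", name))
    # paste the two images side by side onto one canvas
    pair = Image.new("RGB", (img_a.width + img_b.width, img_a.height))
    pair.paste(img_a, (0, 0))
    pair.paste(img_b, (img_a.width, 0))
    pair.save(os.path.join("c", name))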

Colorization

For colorization, your images should ideally all be the same aspect ratio. You can resize and crop them with the resize command:

python tools/process.py \
  --input_dir photos/original \
  --operation resize \
  --output_dir photos/resized

No other processing is required; the colorization mode (see the Training section below) uses single images instead of image pairs.

Training

Image Pairs

For normal training with image pairs, you need to specify which directory contains the training images, and which direction to train on. The direction options are AtoB or BtoA.

python pix2pix.py \
  --mode train \
  --output_dir facades_train \
  --max_epochs 200 \
  --input_dir facades/train \
  --which_direction BtoA

Colorization

pix2pix.py includes special code to handle colorization with single images instead of pairs; using it looks like this:

python pix2pix.py \
  --mode train \
  --output_dir photos_train \
  --max_epochs 200 \
  --input_dir photos/train \
  --lab_colorization

In this mode, image A is the black and white image (lightness only), and image B contains the color channels of that image (no lightness information).
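For a rough picture of that split, here is a minimal sketch in Lab colorspace, assuming scikit-image is installed (pix2pix.py ships its own Tensorflow colorspace conversion and does not use this code; the path is a placeholder):

from skimage import color, io

rgb = io.imread("photos/train/example.jpg") / 255.0  # placeholder path
lab = color.rgb2lab(rgb)
lightness = lab[:, :, 0]  # the "A" side: L channel only
chroma = lab[:, :, 1:]    # the "B" side: a/b color channels, no lightness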

Tips

You can look at the loss and computation graph using tensorboard:

tensorboard --logdir=facades_train

If you wish to write in-progress pictures as the network is training, use --display_freq 50. This will update facades_train/index.html every 50 steps with the current training inputs and outputs.
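For example, to add this to the facades training run from the Getting Started section:

python pix2pix.py \
  --mode train \
  --output_dir facades_train \
  --max_epochs 200 \
  --input_dir facades/train \
  --which_direction BtoA \
  --display_freq 50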

Testing

Testing is done with --mode test. You should specify the checkpoint to use with --checkpoint; this should point to the output_dir that you created previously with --mode train:

python pix2pix.py \
  --mode test \
  --output_dir facades_test \
  --input_dir facades/val \
  --checkpoint facades_train

The testing mode will load some of the configuration options from the checkpoint provided, so you do not need to specify which_direction, for instance.

The test run will output an HTML file at facades_test/index.html that shows input/output/target image sets.

Code Validation

Validation of the code was performed on a Linux machine with a ~1.3 TFLOPS Nvidia GTX 750 Ti GPU and an Azure NC6 instance with a K80 GPU.

git clone https://github.com/affinelayer/pix2pix-tensorflow.git
cd pix2pix-tensorflow
python tools/download-dataset.py facades
sudo nvidia-docker run \
  --volume $PWD:/prj \
  --workdir /prj \
  --env PYTHONUNBUFFERED=x \
  affinelayer/pix2pix-tensorflow \
    python pix2pix.py \
      --mode train \
      --output_dir facades_train \
      --max_epochs 200 \
      --input_dir facades/train \
      --which_direction BtoA
sudo nvidia-docker run \
  --volume $PWD:/prj \
  --workdir /prj \
  --env PYTHONUNBUFFERED=x \
  affinelayer/pix2pix-tensorflow \
    python pix2pix.py \
      --mode test \
      --output_dir facades_test \
      --input_dir facades/val \
      --checkpoint facades_train

Comparison on facades dataset:

(comparison images: Input | Tensorflow | Torch | Target)

Unimplemented Features

The following models have not been implemented:

  • defineG_encoder_decoder
  • defineG_unet_128
  • defineD_pixelGAN

Citation

If you use this code for your research, please cite the paper this code is based on: Image-to-Image Translation Using Conditional Adversarial Networks:

@article{pix2pix2016,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  journal={arxiv},
  year={2016}
}

Acknowledgments

This is a port of pix2pix from Torch to Tensorflow. It also contains colorspace conversion code ported from Torch. Thanks to the Tensorflow team for making such a quality library! And special thanks to Phillip Isola for answering my questions about the pix2pix code.


pix2pix-tensorflow's Issues

How to get a good model, or how to judge whether I trained a good model?

I use my own data (face images) with different epochs and batch sizes to train some models, and then look at the discriminator_loss and generator_loss on Tensorboard.
With the setting batch size = 60, epochs = 50, the discriminator_loss increases while the generator_loss drops. That makes sense given my limited knowledge of GANs.
About the test results: my results on the validation image dataset are worse than the images generated while training. Why?
To find out what happened, I set batch size = 4 or 30 and epochs = 15, 50 or 100; then the discriminator_loss drops while the generator_loss increases. Meanwhile, the generator_loss is greater than the discriminator_loss.
So, my questions are:

  1. How do I judge whether I got a good model? Should I judge it based only on the loss?
  2. How can I get a good model?

What GPU are you using?

In your manual you mention:
(this may take 1-8 hours depending on GPU, on CPU you will be waiting for a bit)

May I ask what hardware is needed for this to finish within an hour? What hardware are you using? On a single K80 from AWS it looks like it's going to take about 6 hours.

DEPRECATED: Cannot create a version on the v1beta1 endpoint. Please use v1

Hi -
Thanks for this great work. I was able to train the model I needed with fantastic results. I'm now trying to get it into Google ML.

I was able to train and export my model, and I tried the following command

python server/tools/upload-model.py --bucket pix2pix-model --model_name pix2pix_model --model_dir models/myModel --credentials credential.json

and I got the following output

creating version v1
uploading export.meta.gz
uploading export.index.gz
uploading export.data-00000-of-00001.gz
Traceback (most recent call last):
  File "server/tools/upload-model.py", line 101, in <module>
    main()
  File "server/tools/upload-model.py", line 85, in main
    operation = ml.projects().models().versions().create(parent=model_path, body=version).execute()
  File "/home/apanagar/pix/lib/python3.5/site-packages/oauth2client/_helpers.py", line 133, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/home/apanagar/pix/lib/python3.5/site-packages/googleapiclient/http.py", line 840, in execute
    raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://ml.googleapis.com/v1beta1/projects/xxxxx-xxxx-123456/models/pix2pix_model/versions?alt=json returned "DEPRECATED: Cannot create a version on the v1beta1 endpoint. Please use v1.">


Am I doing something wrong? I tried changing the v1beta1 to v1 in the script but that resulted in another error. Apologies, I'm just getting started with Google's Cloud ML so it's possible I'm missing something obvious.

Where to find edge2cats pre-trained model?

Thanks for the implementation!

I noticed the dataset and trained models part in the repo, but I failed to find the pre-trained edge2cats model.
Where can I find it? Could you point me to it?

Process edges: OSError: [Errno 2] No such file or directory

Hi,
I'm getting the following error when trying to generate edges through process.py.
I pointed the hard-coded path to the HED Caffe version. What could be the problem? Thanks!

Traceback (most recent call last):
  File "tools/process.py", line 340, in <module>
    main()
  File "tools/process.py", line 304, in main
    process(src_path, dst_path)
  File "tools/process.py", line 233, in process
    dst = edges(src)
  File "tools/process.py", line 210, in edges
    subprocess.check_output(args, stderr=subprocess.STDOUT)
  File "/usr/lib/python2.7/subprocess.py", line 567, in check_output
    process = Popen(stdout=PIPE, *popenargs, **kwargs)
  File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib/python2.7/subprocess.py", line 1343, in _execute_child
    raise child_exception
OSError: [Errno 2] No such file or directory

Cache training data in memory to avoid disk latency

(I am new to ML and Tensorflow).

I am using Google ml-engine to train a network; the training data is stored in Google Cloud Storage. I might be wrong, but I noticed that the training inputs are not cached in RAM and are read from disk (in this case from GCS), so I think this is a big bottleneck for training.

Can the current implementation be updated to have an option to store all training (input) data in memory at the start of the program and somehow avoid disk reads while training?

From what I read this could be solved by using tf queues or actually uploading the training data to the GPU and using it from there?

PS: Using the ml-engine tier BASIC_GPU, I get a training rate of about 5 images/sec. Shouldn't this be higher considering it's a (half of a) K80 GPU? I tried using a complex_model_m_gpu (which has 4 K80 GPUs) and a better CPU but it only increased to 5.1 images/sec.

PS2: I have managed to upload my training files to the ml-engine instance (so they are local, on the same machine, not loaded from Google Cloud Storage) and the rate (for BASIC_GPU) went from 5 to 5.5 images/sec.
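For what it's worth, one generic way to keep decoded inputs in memory is the tf.data API; a sketch assuming TF >= 1.4 (this is not how pix2pix.py currently loads data, and the paths are placeholders):

import tensorflow as tf

# placeholder file list; gs:// URLs also work if TF is built with GCS support
filenames = tf.gfile.Glob("photos/train/*.png")

dataset = (tf.data.Dataset.from_tensor_slices(filenames)
           .map(lambda p: tf.image.decode_png(tf.read_file(p), channels=3))
           .cache()        # hold decoded images in memory after the first pass
           .shuffle(buffer_size=1000)
           .repeat()
           .batch(1))
images = dataset.make_one_shot_iterator().get_next()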

training / testing

Hello,

Thanks for this great work. My first experience with tensor flow. I have installed it without problem.

Can you tell us if there is a minimum training time?
Does training end, or do we need to end it manually?

Likewise for testing: is it possible to test quickly after a few minutes of training (just to check that it shows something)?

I interrupted the training after a few minutes

and then typed the testing command

and at the end I had this error

loading model from checkpoint
Traceback (most recent call last):
  File "pix2pix.py", line 800, in <module>
    main()
  File "pix2pix.py", line 716, in main
    saver.restore(sess, checkpoint)
  File "/Users/slucas/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1439, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/Users/slucas/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/Users/slucas/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/Users/slucas/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/Users/slucas/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Unable to get element from the feed as bytes.

and the files in this screenshots
(screenshot of the output directory)

Are they temporary files?

Or files to keep, which will be improved by further training/testing?

Different Input Channels

Hello,

I am a total newbie in deep learning and in working with image data. I am reusing your source for a project, and I am facing a problem with images having different channel counts: my dataset has both grayscale and color images. Can you suggest how I can handle the grayscale images in my dataset?
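One simple workaround (an assumption, not something the repo provides) is to convert everything to 3-channel RGB before building the paired images; grayscale pixels simply get three identical channels:

import glob
from PIL import Image

# placeholder input directory
for path in glob.glob("photos/original/*.jpg"):
    # grayscale images become 3-channel RGB in place
    Image.open(path).convert("RGB").save(path)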

How to set a suitable batch size to improve training time?

I use my own data to train the model. I have 30,000 samples. Now I am training my model on a Titan X, and it takes one hour per epoch with the default batch size of 1; I guess it would take 200 hours in my case.
As a result, I want to set a larger batch size, but I don't have any experience with it. I need some help from you guys.
Thank you so much!
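For reference, the batch size is exposed as a --batch_size flag (it defaults to 1, as the option dump quoted in a later issue shows), e.g.:

python pix2pix.py \
  --mode train \
  --output_dir photos_train \
  --max_epochs 200 \
  --input_dir photos/train \
  --which_direction AtoB \
  --batch_size 16

Note that larger batches change GAN training dynamics, so results may differ from the batch-size-1 defaults.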

Cloud ML Serving issue - input and filter must have the same depth: 1 vs 3

When following the instructions under "Cloud ML Serving" on this page (server path), the upload-model.py call works, but the second command (call to process-cloud.py) fails.

The response returned from request.execute() is a key "error" with value:

Prediction failed: Exception during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="input and filter must have the same depth: 1 vs 3 [[Node: generator/encoder_1/conv/Conv2D = Conv2D[T=DT_FLOAT, _output_shapes=[[1,128,128,64]], data_format="NHWC", padding="VALID", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/cpu:0"](generator/encoder_1/conv/Pad, generator/encoder_1/conv/filter/read)]]

I'm wondering if anyone has insights into what may be going wrong here.

AttributeError: 'module' object has no attribute 'cpu_count'

Hi, I am a graduate student currently working on a project related to this. I want to run this code to get a grasp of how the program works. However, some issues happened.

Here is what I got
(screenshot of the error)

Is there a possible fix for this? I have looked through all the information on Google related to this issue but cannot really find a fix.

Thanks for reading this post

Tips for avoiding exploding gradients in the generator?

When training, I'm getting NaN errors on some layers. The actual layer depends on my seed, but, for example, I just got the error:

Nan in summary histogram for: generator/decoder_7/batchnorm/offset/values

Sometimes I'll also get them on the encoder gradients. At least in those cases, inspecting the corresponding gradients via tensorboard/histograms shows huge values.

Did you come across this at all and any tips for how to adjust for these?

I'm training for colorization on my own dataset. The error tends to happen within the first 2000 images. Viewing the generated images in Tensorboard up to that point shows promising results. I've played around with varying the 0.2 parameter used in the leaky ReLU layers but am not seeing any obvious improvements.
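One generic mitigation, not something pix2pix.py does by default, is to clip gradients before applying them. A minimal sketch with a placeholder variable and loss:

import tensorflow as tf

x = tf.get_variable("x", shape=[10])   # placeholder variable
loss = tf.reduce_sum(tf.square(x))     # placeholder loss

optimizer = tf.train.AdamOptimizer(learning_rate=0.0002, beta1=0.5)
grads_and_vars = optimizer.compute_gradients(loss)
# cap each gradient's norm so one bad batch can't blow up the weights
clipped = [(tf.clip_by_norm(g, 5.0), v)
           for g, v in grads_and_vars if g is not None]
train_op = optimizer.apply_gradients(clipped)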

Output Images Size Larger Than 256x256?

Is there a way to specify the size of the output images to be anything other than 256x256?

Well, I guess this is a duplicate of #11? I understand the --aspect_ratio parameter but I'm looking to run process-local.py with an exported model to work with, say, 512x512 or 1024x1024 input/output pairs.

Thanks!

Use for 16 bit images

Hello,
I would like to use the architecture for 16-bit input/output images, but when I do so, the images are converted internally to 8-bit images. What changes should I make so it is compatible with 16-bit images?

Thanks,
Tanvir

Make it a library?

Thank you for providing the TensorFlow port and an excellent blog about it.

I was wondering whether you had considered making it a library?
For example I am using the model now and copied the code but if it was a library then I could use that instead. (I had to move some of the code to functions and pass things like the arguments to the functions)

How to change the size of output image?

The program outputs images at the default size of 256x256, but my expected output size is 70x70 or some other size (I want to change the size of the output images). I reviewed the original paper and realized the model would have a different layer structure depending on the output size. Should I change the structure of the model (the G and D layers)?
Thank you so much.
Thank you so much.

add channels of inputs

Hello, I want to input two images, that is, six channels, but I don't know how to achieve it. Can you suggest how I can add input channels? Thank you!

What do predict_fake_summary and predict_real_summary show?

So there are the predict_fake_summary and predict_real_summary in the Tensorboard Images tab (next to the input/output/target images).
From my understanding, these should show what parts of the input image are considered "real" by the discriminator and what parts of the fake (generated) image are considered "real".

The problem is that most of the time predict_fake is an entirely black image while predict_real is a blank white image. What should I learn from this?

Also, is there any way to output the generated image by the cGAN generator to tensorboard? Or is that output the actual final output of the network?

Training time for aerial to maps

Hi,

Apologies if this is not the right venue to ask this question, but I am just curious: for the Google Maps dataset (i.e., translating from street map to aerial image), how long does it generally take to train in order to get the (excellent) images presented in the paper? I am running experiments on my own dataset of similar 'visual complexity', and a rough estimate of how long the Google Maps dataset takes might help me diagnose my own experiments.

Thanks.

Discriminator's output shape

I'm looking into the code line by line.

I'm wondering why the discriminator's output has a shape of [batch, 30, 30, 1].

The implemented discriminator seems to be the 70x70 discriminator from the original Torch code.
For an input image of 256x256, I expected the discriminator's output to be of size (256-70)x(256-70),
but it turns out to be 30x30.

Strangely, the 1x1 discriminator in the original Torch code also has an output of size 30x30.

Am I missing something?

Please answer if anyone knows about this.
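For what it's worth, a 30x30 output is consistent with a 70x70 receptive field: each output pixel judges one 70x70 input patch, and neighbouring patches overlap. A small sketch of the receptive-field arithmetic, assuming 4x4 kernels and strides of [2, 2, 2, 1, 1] as in this discriminator:

# receptive field of one output pixel, walking backwards through the layers
kernel = 4
strides = [2, 2, 2, 1, 1]

rf = 1
for s in reversed(strides):
    rf = (rf - 1) * s + kernel
print(rf)  # 70 -> each of the 30x30 outputs sees a 70x70 input patch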

Batch size

Did you try to increase batch size in training phase? Can a bigger batch size affect discriminator results?

Image does not have 3 channels

I have a color image set as input, but I consistently trip this assertion:

assertion = tf.assert_equal(tf.shape(raw_input)[2], 3, message="image does not have 3 channels")

The error I see is:

tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [image does not have 3 channels]

Could you please elaborate on what the input image parameters are? Are distinct R, G, and B channels insufficient in an input JPEG? Is there a recommendation on post-processing JPEGs to fit the desired input image parameters?

These are the pertinent image properties as seen in IrfanView for my combined images:

  • Compression: JPEG, quality: 75, subsampling ON (2x2)
  • Resolution: 100 x 100 DPI
  • Original size: 2393 x 655 (1.57 MPixels) 3.65
  • Original colors: 16.7 Million (24 BitsPerPixel)
  • Number of unique colors: 6108 (Auto counted)

Things I've tried:

  • Using Latest commit d6f8e4c
  • Resizing the input images to 512x256
  • Converting the input images to PNG, also at 512x256

Key generator/encoder_8/batchnorm/offset/Adam_1 not found in checkpoint

Hi,
I tried to run the pre-trained model (facades).
Here's what I did: I downloaded the facades model from the BtoA link and extracted the archive to the facades_train directory (in the same pix2pix-tensorflow folder),
then placed a jpg image in the facades/val directory.
Note: Tensorflow version 1.0.1, Python version 3.5, Windows 7 64-bit.

Now when I run the following command
python pix2pix.py --mode test --output_dir facades_test --input_dir facades/val --checkpoint facades_train
I get the output below (sorry for posting such long output!)

loaded ndf = 64
loaded which_direction = BtoA
loaded lab_colorization = False
loaded ngf = 64
aspect_ratio = 1.0
batch_size = 1
beta1 = 0.5
checkpoint = facades_train
display_freq = 0
flip = False
gan_weight = 1.0
input_dir = facades/val
l1_weight = 100.0
lab_colorization = False
lr = 0.0002
max_epochs = None
max_steps = None
mode = test
ndf = 64
ngf = 64
output_dir = facades_test
output_filetype = png
progress_freq = 50
save_freq = 5000
scale_size = 256
seed = 1188493113
summary_freq = 100
trace_freq = 0
which_direction = BtoA
E c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\framework\op_kernel.cc:943] OpKernel ('op: "BestSplits" device_type: "CPU"') for unknown op: BestSplits
[... ten similar "unknown op" messages for CountExtremelyRandomStats, FinishedNodes, GrowTree, ReinterpretStringToFloat, SampleInputs, ScatterAddNdim, TopNInsert, TopNRemove, TreePredictions, and UpdateFertileSlots ...]
examples count = 1
parameter_count = 57183616
loading model from checkpoint
W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\framework\op_kernel.cc:993] Not found: Key generator/encoder_8/batchnorm/offset/Adam_1 not found in checkpoint
W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\framework\op_kernel.cc:993] Not found: Key generator/decoder_2/batchnorm/scale/Adam not found in checkpoint
[... similar "Not found" warnings for the Adam slot variables (.../Adam and .../Adam_1) of every generator and discriminator weight ...]
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1022, in _do_call
return fn(*args)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1004, in _run_fn
status, run_metadata)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\Lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.NotFoundError: Key generator/encoder_8/batchnorm/offset/Adam_1 not found in checkpoint
[[Node: save/RestoreV2_161 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_161/tensor_names, save/RestoreV2_161/shape_and_slices)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "pix2pix.py", line 810, in <module>
main()
File "pix2pix.py", line 726, in main
saver.restore(sess, checkpoint)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1428, in restore
{self.saver_def.filename_tensor_name: save_path})
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 767, in run
run_metadata_ptr)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key generator/encoder_8/batchnorm/offset/Adam_1 not found in checkpoint
[[Node: save/RestoreV2_161 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_161/tensor_names, save/RestoreV2_161/shape_and_slices)]]

Caused by op 'save/RestoreV2_161', defined at:
File "pix2pix.py", line 810, in <module>
main()
File "pix2pix.py", line 716, in main
saver = tf.train.Saver(max_to_keep=1)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1040, in __init__
self.build()
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 1070, in build
restore_sequentially=self._restore_sequentially)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 675, in build
restore_sequentially, reshape)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 402, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\training\saver.py", line 242, in restore_op
[spec.tensor.dtype])[0])
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 668, in restore_v2
dtypes=dtypes, name=name)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 763, in apply_op
op_def=op_def)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2327, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1226, in __init__
self._traceback = _extract_stack()

NotFoundError (see above for traceback): Key generator/encoder_8/batchnorm/offset/Adam_1 not found in checkpoint
[[Node: save/RestoreV2_161 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_161/tensor_names, save/RestoreV2_161/shape_and_slices)]]

How do I fix this error?
Thanks

Architecture Clarification

It's more of a question than an issue. Can you tell me why the graph has two discriminators, viz. real_discriminator and fake_discriminator?
I could not figure it out from the paper or your blog post.
Is this how you update the weights separately for images generated by the generator and for the real images?
In that case, are the real/fake inputs to the discriminator given at random or in a particular order?

By the way, thanks for the implementation!
Tanvir

How to access the intermediate activation layers?

Hi!
First of all: thanks for the code, very well-done and impressive results!

For my purposes, I'm looking to extract intermediate activation layers of the U-Net autoencoder for a given image.
Not exactly display filtering, but rather saving layer activations to files.

Is it possible? At which point in the code exactly should I start?
Any suggestions? (I'm new to TF)
Thanks!
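For reference, the general TF mechanism is to look the intermediate tensor up by name and fetch it in sess.run. A toy, self-contained sketch (the layer names here are illustrative, not pix2pix.py's actual ones; in pix2pix.py you would fetch something like a generator/encoder_* tensor inside its existing session):

import numpy as np
import tensorflow as tf

# toy graph standing in for the U-Net
x = tf.placeholder(tf.float32, [1, 4, 4, 3], name="input")
h = tf.layers.conv2d(x, 8, 3, padding="same", name="encoder_1")
y = tf.layers.conv2d(h, 3, 3, padding="same", name="decoder_1")

# fetch the intermediate activation by its tensor name
act = tf.get_default_graph().get_tensor_by_name("encoder_1/BiasAdd:0")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    value = sess.run(act, {x: np.zeros((1, 4, 4, 3), np.float32)})
    np.save("encoder_1_activations.npy", value)  # save activations to disk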

What features does this project have? (D and G structure)

The readme file says:
Unimplemented Features

The following models have not been implemented:
defineG_encoder_decoder
defineG_unet_128
defineD_pixelGAN
So I am a little bit confused: which features from the original paper does this project have?
I am a newcomer to GANs.
Thank you!

What should the D-loss and G-loss look like theoretically?

While I train my model, the D-loss and the G-loss show different tendencies with different batch size settings.
In other words, sometimes the D-loss increases while the G-loss drops, and sometimes the opposite.
So, what should the two losses look like in theory? How can I tell that I have a good model?

Can I specify the GPU id?

Hi!
I successfully ran the code and got good results, but I am afraid the code is being executed entirely on the CPU,
and I do have more than one GPU on the machine.

May I ask how I can run the code with the help of a GPU?
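For reference, Tensorflow honours the CUDA_VISIBLE_DEVICES environment variable, so one common way (not specific to this repo) to pin the run to a given GPU is:

CUDA_VISIBLE_DEVICES=0 python pix2pix.py \
  --mode train \
  --output_dir facades_train \
  --max_epochs 200 \
  --input_dir facades/train \
  --which_direction BtoA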

Clarification on license

Hi,

This is great work, thanks for sharing it! Could you clarify the license terms, please?

Best regards,
Agost

Generator loss keeps increasing in Map dataset using default parameters

Hi, I was running this code on the Maps dataset with the default parameters. However, the generator loss keeps increasing while the discriminator and L1 losses keep decreasing. The plots are as follows:
(plots: gen_loss, gen_l1_loss, dis_loss)
I wonder if this is the same case in your experiment. Since for a GAN model, I expect the generator loss should keep decreasing.

in pix2pix.py at line 85 TypeError: __init__() got multiple values for keyword argument 'dtype'

Tensorflow version: 0.12.1
Command: python pix2pix.py --mode train --output_dir facades_train --max_epochs 200 --input_dir facades/train --which_direction BtoA

Traceback:

Traceback (most recent call last):
File "pix2pix.py", line 697, in <module>
main()
File "pix2pix.py", line 531, in main
model = create_model(examples.inputs, examples.targets)
File "pix2pix.py", line 397, in create_model
outputs = create_generator(inputs, out_channels)
File "pix2pix.py", line 318, in create_generator
output = batchnorm(convolved)
File "pix2pix.py", line 85, in batchnorm
offset = tf.get_variable("offset", [channels], dtype=tf.float32, initializer=tf.zeros_initializer)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 988, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 890, in get_variable
custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 348, in get_variable
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 333, in _true_getter
caching_device=caching_device, validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 684, in _get_single_variable
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 226, in __init__
expected_shape=expected_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 303, in _init_from_args
initial_value(), name="initial_value", dtype=dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 673, in <lambda>
shape.as_list(), dtype=dtype, partition_info=partition_info)
TypeError: __init__() got multiple values for keyword argument 'dtype'

Max epochs not taken into consideration?

I started my training with "--max_epochs 5", but during training there is progress output like:
"progress epoch 47 step 14"

Why is there an epoch 47 if max_epochs was set to 5?

Edit: Oh, I think it's continuing from the last run (I don't remember whether I had set a checkpoint when I noticed this).

issue regarding the U-net decode

In the original paper, the authors suggest a U-Net decoder for the generator architecture, compared with the original encoder-decoder architecture.
(screenshot from the paper)
The encoder-decoder architecture consists of:
encoder:
C64-C128-C256-C512-C512-C512-C512-C512
decoder:
CD512-CD512-CD512-C512-C512-C256-C128-C64
where C stands for a Convolution-BatchNorm-ReLU layer.
CD stands for a Convolution-BatchNormDropout-ReLU layer with a dropout rate of 50%.

U-Net decoder:
CD512-CD1024-CD1024-C1024-C1024-C512-C256-C128
The U-Net architecture is identical except with skip connections between each layer i in the encoder and layer n−i in the decoder, where n is the total number of layers. The skip connections concatenate activations from layer i to layer n−i.

While reviewing the code, I found it is slightly different from what the original paper did (here ngf = 64):
layer_specs = [
    (a.ngf * 8, 0.5),  # decoder_8: [batch, 1, 1, ngf * 8] => [batch, 2, 2, ngf * 8 * 2] = 1024
    (a.ngf * 8, 0.5),  # decoder_7: [batch, 2, 2, ngf * 8 * 2] => [batch, 4, 4, ngf * 8 * 2] = 1024
    (a.ngf * 8, 0.5),  # decoder_6: [batch, 4, 4, ngf * 8 * 2] => [batch, 8, 8, ngf * 8 * 2] = 1024
    (a.ngf * 8, 0.0),  # decoder_5: [batch, 8, 8, ngf * 8 * 2] => [batch, 16, 16, ngf * 8 * 2] = 1024
    (a.ngf * 4, 0.0),  # decoder_4: [batch, 16, 16, ngf * 8 * 2] => [batch, 32, 32, ngf * 4 * 2] = 512
    (a.ngf * 2, 0.0),  # decoder_3: [batch, 32, 32, ngf * 4 * 2] => [batch, 64, 64, ngf * 2 * 2] = 256
    (a.ngf, 0.0),      # decoder_2: [batch, 64, 64, ngf * 2 * 2] => [batch, 128, 128, ngf * 2] = 128
]

num_encoder_layers = len(layers)
for decoder_layer, (out_channels, dropout) in enumerate(layer_specs):
    skip_layer = num_encoder_layers - decoder_layer - 1
    with tf.variable_scope("decoder_%d" % (skip_layer + 1)):
        if decoder_layer == 0:
            # first decoder layer doesn't have skip connections
            # since it is directly connected to the skip_layer
            input = layers[-1]
        else:
            input = tf.concat([layers[-1], layers[skip_layer]], axis=3)

        rectified = tf.nn.relu(input)
        # [batch, in_height, in_width, in_channels] => [batch, in_height*2, in_width*2, out_channels]
        output = deconv(rectified, out_channels)
        output = batchnorm(output)

        if dropout > 0.0:
            output = tf.nn.dropout(output, keep_prob=1 - dropout)

        layers.append(output)

# decoder_1: [batch, 128, 128, ngf * 2] => [batch, 256, 256, generator_outputs_channels]
with tf.variable_scope("decoder_1"):
    input = tf.concat([layers[-1], layers[0]], axis=3)
    rectified = tf.nn.relu(input)
    output = deconv(rectified, generator_outputs_channels)
    output = tf.tanh(output)
    layers.append(output)

So, did you use the original decoder or the U-Net decoder for the generator? I am a little confused.

NaN in outputs when adding custom loss

Hi, I have added a custom loss instead of L1, but I keep getting NaN values as output:

gen_loss_custom = tf.reduce_mean(tf.sqrt(tf.abs(tf.square(targets) - tf.square(outputs))))
Do you know what may be causing this issue?
Thanks in advance!
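One likely culprit, offered as an assumption rather than a confirmed diagnosis: the gradient of sqrt(x) is 1/(2*sqrt(x)), which blows up at x = 0, so pixels where targets equal outputs produce infinite gradients and then NaNs. A common hedge is a small epsilon inside the sqrt:

import tensorflow as tf

# placeholder tensors standing in for the model's targets and outputs
targets = tf.placeholder(tf.float32, [None, 256, 256, 3])
outputs = tf.placeholder(tf.float32, [None, 256, 256, 3])

eps = 1e-8  # keeps the sqrt gradient finite where targets == outputs
gen_loss_custom = tf.reduce_mean(
    tf.sqrt(tf.abs(tf.square(targets) - tf.square(outputs)) + eps))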

process-local.py + python3 = JSON issues

I cannot seem to get process-local.py working with Python 3. I admit I am new to Python, but it seems to have something to do with strings being treated as binary, e.g. b'...', so the JSON encoding fails.

Any ideas what to tweak in this code to get it running? Everything else for training/testing works fine with Python 3.

Thanks for the great work by the way, had a lot of fun trying it out!
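If the failure really is bytes-vs-str (an assumption; this sketch does not trace process-local.py itself), the usual Python 3 fix is to decode before handing data to the json module:

import json

data = b'{"key": "value"}'  # bytes, e.g. read from a socket or subprocess
obj = json.loads(data.decode("utf-8"))  # decode bytes to str first
print(obj["key"])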

Fixed 256x256 ?

How is it possible to use the model on different sized images e.g.: 480 x 640?
