
biggan-tensorflow's Issues

Google colab: GPU memory usage is close to the limit

My dataset is about 1000 128x128 images. How can I reduce GPU memory load?

Your GPU is close to its memory limit. You will not be able to use any additional memory in this session. Currently, 10.72 GB / 11.17 GB is being used. Would you like to terminate some sessions in order to free up GPU memory (state will be lost for those sessions)?

Using TensorFlow backend.
2019-01-15 05:44:59.725488: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-01-15 05:44:59.725958: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:04.0
totalMemory: 11.17GiB freeMemory: 11.10GiB
2019-01-15 05:44:59.725999: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-01-15 05:45:00.090022: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-01-15 05:45:00.090102: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2019-01-15 05:45:00.090124: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2019-01-15 05:45:00.090416: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2019-01-15 05:45:00.090487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10758 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7)

##### Information #####
# BigGAN 128
# gan type :  hinge
# dataset :  faces_resized
# dataset number :  1456
# batch_size :  2048
# epoch :  50
# iteration per epoch :  10000

##### Generator #####
# spectral normalization :  True
# learning rate :  5e-05

##### Discriminator #####
# the number of critic :  2
# spectral normalization :  True
# learning rate :  0.0002
WARNING:tensorflow:From /content/BigGAN-Tensorflow/BigGAN_128.py:215: shuffle_and_repeat (from tensorflow.contrib.data.python.ops.shuffle_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.shuffle_and_repeat(...)`.
WARNING:tensorflow:From /content/BigGAN-Tensorflow/BigGAN_128.py:216: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.map_and_batch(...)`.
WARNING:tensorflow:From /content/BigGAN-Tensorflow/BigGAN_128.py:217: prefetch_to_device (from tensorflow.contrib.data.python.ops.prefetching_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.prefetch_to_device(...)`.
---------
Variables: name (type shape) [size]
---------
discriminator/resblock_down_1/res1/batch_norm/gamma:0 (float32_ref 3) [3, bytes: 12]
discriminator/resblock_down_1/res1/batch_norm/beta:0 (float32_ref 3) [3, bytes: 12]
discriminator/resblock_down_1/res1/conv_0/kernel:0 (float32_ref 3x3x3x96) [2592, bytes: 10368]


...

generator/resblock_up_2/res2/batch_norm/gamma/dense/kernel:0 (float32_ref 20x192) [3840, bytes: 15360]
generator/resblock_up_2/res2/batch_norm/gamma/dense/bias:0 (float32_ref 192) [192, bytes: 768]
generator/resblock_up_2/res2/deconv_0/kernel:0 (float32_ref 3x3x192x192) [331776, bytes: 1327104]
...

generator/resblock_up_1/res2/deconv_0/kernel:0 (float32_ref 3x3x96x96) [82944, bytes: 331776]
generator/resblock_up_1/skip/deconv_0/kernel:0 (float32_ref 3x3x96x192) [165888, bytes: 663552]
generator/batch_norm/gamma:0 (float32_ref 96) [96, bytes: 384]
generator/batch_norm/beta:0 (float32_ref 96) [96, bytes: 384]
generator/G_logit/kernel:0 (float32_ref 3x3x96x3) [2592, bytes: 10368]
Total size of variables: 198818145
Total bytes of variables: 795272580
 [*] Reading checkpoints...
 [*] Failed to find a checkpoint
 [!] Load failed...
2019-01-15 05:46:08.449964: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.69GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-15 05:46:18.455636: W tensorflow/core/common_runtime/bfc_allocator.cc:267] Allocator (GPU_0_bfc) ran out of memory trying to allocate 396.09MiB.  Current allocation summary follows.
2019-01-15 05:46:18.455791: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (256): 	Total Chunks: 180, Chunks in use: 178. 45.0KiB allocated for chunks. 44.5KiB in use in bin. 3.9KiB client-requested in use in bin.
2019-01-15 05:46:18.455820: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (512): 	Total Chunks: 150, Chunks in use: 150. 92.8KiB allocated for chunks. 92.8KiB in use in bin. 82.9KiB client-requested in use in bin.
...

2019-01-15 05:46:18.456213: I tensorflow/core/common_runtime/bfc_allocator.cc:613] Bin for 396.09MiB was 256.00MiB, Chunk State: 
2019-01-15 05:46:18.456242: I tensorflow/core/common_runtime/bfc_allocator.cc:619]   Size: 371.91MiB | Requested Size: 81.00MiB | in_use: 0, prev:   Size: 396.09MiB | Requested Size: 396.09MiB | in_use: 1, next:   Size: 768.00MiB | Requested Size: 768.00MiB | in_use: 1
2019-01-15 05:46:18.456313: I tensorflow/core/common_runtime/bfc_allocator.cc:619]   Size: 384.00MiB | Requested Size: 384.00MiB | in_use: 0, prev:   Size: 768.00MiB | Requested Size: 768.00MiB | in_use: 1, next:   Size: 384.00MiB | Requested Size: 384.00MiB | in_use: 1
2019-01-15 05:46:18.456333: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x703f20000 of size 256
2019-01-15 05:46:18.456366: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x703f20100 of size 256
2019-01-15 05:46:18.456381: I tensorflow/core/common_runtime/bfc_allocator.cc:632] Chunk at 0x703f20200 of size 256
2019-01-15 05:46:18.456395: I tensorflow/core/common_runtime/bfc_allocator.cc:632] 

...

2019-01-15 05:46:18.484870: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 1818230784 totalling 1.69GiB
2019-01-15 05:46:18.484885: I tensorflow/core/common_runtime/bfc_allocator.cc:645] Sum Total of in-use chunks: 9.09GiB
2019-01-15 05:46:18.484906: I tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats: 
Limit:                 11281553818
InUse:                  9764617728
MaxInUse:              10167270912
NumAllocs:                    1328
MaxAllocSize:           1818230784

2019-01-15 05:46:18.485013: W tensorflow/core/common_runtime/bfc_allocator.cc:271] ***********************************__**************************************_****___********___****__
2019-01-15 05:46:18.485060: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at transpose_op.cc:199 : Resource exhausted: OOM when allocating tensor with shape[2048,3,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
2019-01-15 05:46:28.485851: W tensorflow/core/common_runtime/bfc_allocator.cc:267] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.00GiB.  Current allocation summary follows.
2019-01-15 05:46:28.485972: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (256): 	Total Chunks: 180, Chunks in use: 178. 45.0KiB allocated for chunks. 44.5KiB in use in bin. 3.9KiB client-requested in use in bin.
2019-01-15 05:46:28.486002: I tensorflow/core/common_runtime/bfc_allocator.cc:597] Bin (512): 	Total Chunks: 150, Chunks in use: 150. 92.8KiB allocated for chunks. 92.8KiB in use in bin. 82.9KiB client-requested in use in bin.
2019-01-15 05:46:28.486048: I tensorflow/core/common_runtime/bfc_allocator.cc:597] 
...
...

2019-01-15 05:46:39.427827: I tensorflow/core/common_runtime/bfc_allocator.cc:641] 1 Chunks of size 1818230784 totalling 1.69GiB
2019-01-15 05:46:39.427855: I tensorflow/core/common_runtime/bfc_allocator.cc:645] Sum Total of in-use chunks: 9.41GiB
2019-01-15 05:46:39.427892: I tensorflow/core/common_runtime/bfc_allocator.cc:647] Stats: 
Limit:                 11281553818
InUse:                 10099625472
MaxInUse:              10167270912
NumAllocs:                    1342
MaxAllocSize:           1818230784

2019-01-15 05:46:39.428321: W tensorflow/core/common_runtime/bfc_allocator.cc:271] ***********************************__**************************************_****___***************__
2019-01-15 05:46:39.429193: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at conv_grad_input_ops.cc:1050 : Resource exhausted: OOM when allocating tensor with shape[2048,768,16,16] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
2019-01-15 05:46:40.507376: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at iterator_ops.cc:1177 : Not found: Resource localhost/_0_OneShotIterator/N10tensorflow4data16IteratorResourceE does not exist.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[2048,3,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[{{node gradients/discriminator/resblock_down_1/skip/conv_0/Conv2D_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](discriminator/resblock_down_1/skip/conv_0/Pad, PermConstNHWCToNCHW-LayoutOptimizer)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[{{node add_2/_5}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7346_add_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 132, in <module>
    main()
  File "main.py", line 120, in main
    gan.train()
  File "/content/BigGAN-Tensorflow/BigGAN_128.py", line 302, in train
    _, summary_str, d_loss = self.sess.run([self.d_optim, self.d_sum, self.d_loss])
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 929, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1152, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1328, in _do_run
    run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py", line 1348, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[2048,3,130,130] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[{{node gradients/discriminator/resblock_down_1/skip/conv_0/Conv2D_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer}} = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](discriminator/resblock_down_1/skip/conv_0/Pad, PermConstNHWCToNCHW-LayoutOptimizer)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

	 [[{{node add_2/_5}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7346_add_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
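
Note (not an official reply): the log above shows TF_FORCE_GPU_ALLOW_GROWTH is already set, so memory growth is not the problem. The printed configuration shows batch_size : 2048, which is far more than a single K80 (about 11 GiB) can hold for a 128x128 BigGAN, and every OOM tensor in the log (e.g. shape[2048,3,130,130]) carries that batch dimension. Reducing the batch size is the first thing to try, for example (assuming main.py exposes a --batch_size flag matching the printed config; the flag name is inferred from the log, not verified):

python main.py --phase train --dataset faces_resized --gan_type hinge --img_size 128 --batch_size 16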

memory error

 [*] Reading checkpoints...
 [*] Failed to find a checkpoint
 [!] Load failed...
2020-04-15 20:34:47.014974: W tensorflow/core/framework/allocator.cc:107] Allocation of 6442450944 exceeds 10% of system memory.
2020-04-15 20:34:52.285245: W tensorflow/core/framework/allocator.cc:107] Allocation of 6442450944 exceeds 10% of system memory.
Can anyone help?

Resource exhausted error

Thanks for contributing this. About a minute into each training run I am receiving the following error, after which the program exits:
(1) Resource exhausted: OOM when allocating tensor with shape[256,192,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

Also, when initializing, the program reports the following:
 [*] Reading checkpoints...
 [*] Failed to find a checkpoint
 [!] Load failed...
But it continues to run.

I have reduced batch_size to 256 and img_size to 128, but the error persists. I am running TensorFlow version 1.14.0.

Any ideas?
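
Note (not an official reply): the "[*] Failed to find a checkpoint" lines are expected on a fresh run and are harmless. For the OOM itself, a rough estimate of why even batch_size 256 is too large (my own arithmetic, not the author's):

# one activation tensor of shape [256, 192, 64, 64] in float32 (4 bytes per element)
print(256 * 192 * 64 * 64 * 4 / float(1024 ** 3))   # about 0.75 GiB for a single tensor

A forward/backward pass keeps many activations of roughly this size alive at once, on top of about 0.74 GB of parameters plus two Adam slots per variable, so a single consumer GPU fills up quickly. Dropping batch_size much further (16 or 32) or reducing the channel width is the usual workaround.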

wgan-gp

Hi @taki0112

Thank you for your contribution. I am trying your code. What I am using is the following:

python main.py --dataset celebs --gan_type hinge --img_size 128

which works.

But when I try
python main.py --dataset celebs --gan_type wgan-gp --img_size 128 --critic_num 5

it gets stuck at
self.d_optim = tf.train.AdamOptimizer(self.d_learning_rate, beta1=self.beta1, beta2=self.beta2).minimize(self.d_loss, var_list=d_vars)

Did you test this?
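
Note that may explain the hang (not from the author): with gan_type wgan-gp the discriminator loss includes a gradient penalty, so minimize() has to build gradients of gradients through the whole discriminator, and for a BigGAN-sized graph that construction can take a very long time and look like a hang. A generic TF 1.x gradient-penalty sketch, just to show where the second-order gradients come from (not the repo's exact code):

import tensorflow as tf

def gradient_penalty(discriminator, real_images, fake_images, batch_size):
    # interpolate between real and fake samples
    alpha = tf.random_uniform([batch_size, 1, 1, 1], 0.0, 1.0)
    interpolated = alpha * real_images + (1.0 - alpha) * fake_images
    d_interp = discriminator(interpolated, reuse=True)
    # gradient of D with respect to its input; minimize() later differentiates
    # through this tf.gradients call again, i.e. second-order gradients
    grads = tf.gradients(d_interp, [interpolated])[0]
    slopes = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]))
    return 10.0 * tf.reduce_mean(tf.square(slopes - 1.0))

So the process may not actually be stuck; giving it much more time, or testing first with a smaller img_size, is worth trying.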

channel question

Dear author,
Thank you for your brilliant work.
I'm wondering whether the program can work if I train it on single-channel (grayscale) 128x128 images.
I only changed self.c_dim to 1 (in BigGAN_128.py), but it seems to raise some errors in the generator.
I want to figure out whether the error is caused by the image dimensions or by something else.
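
In case it helps (a guess, not a confirmed answer): changing self.c_dim only changes how many channels the networks expect; the input pipeline still decodes JPEGs as 3-channel images, so shapes will disagree somewhere downstream. For grayscale input the decode step also has to request a single channel. A sketch of what a utils.py-style image_processing function would look like then (the exact signature in the repo may differ; TF 1.x API):

import tensorflow as tf

def image_processing(filename, img_size=128):
    x = tf.read_file(filename)
    img = tf.image.decode_jpeg(x, channels=1)                # was channels=3
    img = tf.image.resize_images(img, [img_size, img_size])
    img = tf.cast(img, tf.float32) / 127.5 - 1.0             # scale to [-1, 1]
    return img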

Invalid JPEG data or crop window

I got an error when training:
Traceback (most recent call last):
  File "main.py", line 132, in <module>
    main()
  File "main.py", line 120, in main
    gan.train()
  File "/content/BigGAN-Tensorflow/BigGAN_128.py", line 302, in train
    _, summary_str, d_loss = self.sess.run([self.d_optim, self.d_sum, self.d_loss])
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 956, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1180, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1359, in _do_run
    run_metadata)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/client/session.py", line 1384, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid JPEG data or crop window, data size 32768
	 [[{{node DecodeJpeg}}]]
	 [[IteratorGetNext]]
	 [[RemoteCall]]
	 [[IteratorGetNext]]
Could you help me figure this out? Thanks.
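
Note (not an official reply): this error usually means one or more files in the dataset folder are truncated or are not actually JPEGs; "data size 32768" often indicates a partially downloaded file. A small pre-flight scan to find the offending files (a sketch; the dataset path is a placeholder, TF 1.x API):

import os
import tensorflow as tf

dataset_dir = './dataset/faces'   # hypothetical path, point this at your dataset folder
fn = tf.placeholder(tf.string)
img = tf.image.decode_jpeg(tf.read_file(fn), channels=3)

with tf.Session() as sess:
    for name in sorted(os.listdir(dataset_dir)):
        path = os.path.join(dataset_dir, name)
        try:
            sess.run(img, feed_dict={fn: path})
        except tf.errors.InvalidArgumentError:
            print('corrupt or non-JPEG file:', path)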

Error defining optimizers

I am trying to run the code exactly as published, on CIFAR-10.
I am running TensorFlow 1.12 on Ubuntu 16.04 with Python 3.6.7.
When I run the code, it seems to hang inside gan.build_model() at line 260
( self.g_optim = self.opt.minimize(self.g_loss, var_list=g_vars) )

After I quit (ctrl+c) I get many of these error messages:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ronslos/anaconda3/envs/pggan-venv/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 403, in _MaybeCompile
    xla_compile = op.get_attr("_XlaCompile")
  File "/home/ronslos/anaconda3/envs/pggan-venv/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2327, in get_attr
    raise ValueError(str(e))
ValueError: Operation 'gradients/discriminator_2/resblock_down_1_0/res1/batch_norm/FusedBatchNorm_grad/FusedBatchNormGrad' has no attr named '_XlaCompile'.

Please advise on how to solve this issue.
Thanks.
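
Note (not an official reply): the '_XlaCompile' ValueError is raised and caught inside TensorFlow's own gradient-construction code during normal operation; it only surfaces here because Ctrl+C interrupted that handler, so it is probably not the root cause. Building the optimizer graph for a model this size (with spectral normalization on every layer) can simply take several minutes. One way to check that it is slow rather than stuck is to time the graph construction around the repo's own line (names taken from the snippet quoted above):

import time

t0 = time.time()
self.g_optim = self.opt.minimize(self.g_loss, var_list=g_vars)
print('g_optim graph built in %.1f s' % (time.time() - t0))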

computation

Can you mention the training time, as well as the configuration of the device you trained the model on?

minimum GPU memory / working conda env

Hi,

Thanks for the great implementation.

Can I ask what kind of GPU you trained it on, and whether you have an idea of the minimum spec needed to run this model? I keep hitting memory errors, but then I only have an RTX 2070. I'm considering a Titan RTX (24 GB); do you think that would be enough? It would be sad to spend all that money and still not be able to use models like this!

Best wishes,

Mark

PS: in case it helps anyone else, the only way I got this working with GPU in a Conda env was with:

conda create -n [envname] python=3.6
conda activate [envname]
conda install -c anaconda tensorflow-gpu==1.9
pip install keras==2.2.2
(and install scipy, mkl)

It seems that there are problems with the conda-forge and anaconda versions of tensorflow-gpu==1.8; the above was the only thing that worked for me.
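
Note (not an official reply): a rough lower bound can be read off the variable dump in the first issue above; the 128x128 model's parameters alone are about 0.74 GB, and Adam keeps two extra slots per variable, before counting any activations (which scale with batch size):

params_bytes = 795272580            # "Total bytes of variables" from the log above
adam_slots = 2 * params_bytes       # Adam's first/second moment accumulators
print((params_bytes + adam_slots) / float(1024 ** 3), 'GiB before any activations')

So an 8 GB card like the RTX 2070 only works with small batch sizes, and a 24 GB Titan RTX gives much more headroom, but the default batch_size of 2048 is still far beyond any single card.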

TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string

Do I need to download the CelebA-HQ dataset or do something else before training on it?
I used Google Colab, python3, GPU:

!git clone https://github.com/taki0112/BigGAN-Tensorflow.git
!python main.py --phase train --dataset celebA-HQ --gan_type hinge

And I got an error

##### Information #####
# BigGAN 512
# gan type :  hinge
# dataset :  celebA-HQ
# dataset number :  0
# batch_size :  2048
# epoch :  50
# iteration per epoch :  10000

##### Generator #####
# spectral normalization :  True
# learning rate :  5e-05

##### Discriminator #####
# the number of critic :  2
# spectral normalization :  True
# learning rate :  0.0002
WARNING:tensorflow:From /content/BigGAN-Tensorflow/BigGAN_512.py:219: shuffle_and_repeat (from tensorflow.contrib.data.python.ops.shuffle_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.shuffle_and_repeat(...)`.
WARNING:tensorflow:From /content/BigGAN-Tensorflow/BigGAN_512.py:220: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.map_and_batch(...)`.
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 510, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1146, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 983, in _TensorTensorConversionFunction
    (dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("arg0:0", shape=(), dtype=float32)'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 132, in <module>
    main()
  File "main.py", line 113, in main
    gan.build_model()
  File "/content/BigGAN-Tensorflow/BigGAN_512.py", line 220, in build_model
    apply(map_and_batch(Image_Data_Class.image_processing, self.batch_size, num_parallel_batches=16, drop_remainder=True)).\
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1190, in apply
    dataset = transformation_func(self)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/experimental/ops/batching.py", line 667, in _apply_fn
    num_parallel_calls, drop_remainder)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/experimental/ops/batching.py", line 578, in __init__
    super(_MapAndBatchDataset, self).__init__(input_dataset, map_func)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 2611, in __init__
    map_func, "Dataset.map()", input_dataset)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1860, in __init__
    self._function.add_to_graph(ops.get_default_graph())
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/function.py", line 479, in add_to_graph
    self._create_definition_if_needed()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/function.py", line 335, in _create_definition_if_needed
    self._create_definition_if_needed_impl()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/function.py", line 344, in _create_definition_if_needed_impl
    self._capture_by_value, self._caller_device)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/function.py", line 864, in func_graph_from_py_func
    outputs = func(*func_graph.inputs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1794, in tf_data_structured_function_wrapper
    ret = func(*nested_args)
  File "/content/BigGAN-Tensorflow/utils.py", line 22, in image_processing
    x = tf.read_file(filename)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 531, in read_file
    "ReadFile", filename=filename, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 533, in _apply_op_helper
    (prefix, dtypes.as_dtype(input_arg.type).name))
TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.
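
Note (a guess from the output, not an official reply): the configuration prints "# dataset number : 0", i.e. no image files were found, so the dataset pipeline is built from an empty list and the mapped element ends up as a float32 scalar instead of a filename string, which is exactly what ReadFile then rejects. CelebA-HQ is not bundled with the repository; it has to be downloaded and placed where the loader looks for it. A quick sanity check (assuming a ./dataset/<name>/ layout; the path is an assumption, adjust it to wherever the repo's loader reads from):

import glob

files = glob.glob('./dataset/celebA-HQ/*')   # hypothetical layout
print(len(files), 'images found')            # must be greater than 0 before training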

input image size

Hi @taki0112, thank you for the wonderful project. I am fairly new to GAN training, and I was wondering whether the input images have to be a certain size. For example, I have 15k 1024x1024 images; do I have to scale them down to 512x512 to use BigGAN 512?
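
Note (not an official reply): the input pipeline appears to decode and resize each image on the fly, so 1024x1024 inputs should work with BigGAN 512, but pre-resizing them to 512x512 on disk cuts JPEG decode time and I/O considerably. A Pillow sketch for a flat folder of JPEGs (paths are placeholders):

import os
from PIL import Image

src, dst = './images_1024', './images_512'   # hypothetical paths
os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    img = Image.open(os.path.join(src, name)).convert('RGB')
    img.resize((512, 512), Image.LANCZOS).save(os.path.join(dst, name))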

About your change to original paper

After reading your code, I found that you have changed more than the two points you listed in the 'Issue' section. For example, your resblock_down is:
[screenshot of the repo's resblock_down code]
But the structure in the paper is:
[screenshot of the paper's ResBlock-down diagram]
It seems you did not include 'average pooling' in your code.

Since I am new to GANs, I want to ask on what basis you changed the structure. And after your change, is the result the same as in the paper?
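
For reference, this is what the paper's ResBlock-down looks like when written out, with average pooling on both the residual path and the skip path (a generic TF 1.x sketch for comparison only, not the repo's code; spectral normalization is omitted for brevity):

import tensorflow as tf

def resblock_down(x, channels, scope):
    # paper-style ResBlock-down: ReLU -> conv -> ReLU -> conv, then 2x2 average
    # pooling on the residual path; 1x1 conv + average pooling on the skip path
    with tf.variable_scope(scope):
        h = tf.nn.relu(x)
        h = tf.layers.conv2d(h, channels, 3, padding='same', name='conv_0')
        h = tf.nn.relu(h)
        h = tf.layers.conv2d(h, channels, 3, padding='same', name='conv_1')
        h = tf.layers.average_pooling2d(h, pool_size=2, strides=2)
        skip = tf.layers.conv2d(x, channels, 1, name='skip_conv')
        skip = tf.layers.average_pooling2d(skip, pool_size=2, strides=2)
        return h + skip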

Tensorflow v2

Hello, I use TensorFlow v2, and tensorflow.contrib doesn't exist in TensorFlow v2.

Can you help me?
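
A partial workaround sketch (my suggestion, not the author's): TF2 ships a v1 compatibility layer, but it does not include tf.contrib, so the contrib.data calls in BigGAN_128.py still have to be replaced with the tf.data.experimental equivalents that the deprecation warnings in the logs above already point to:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# tf.contrib.data replacements that exist in modern tf.data:
#   shuffle_and_repeat(n)           -> dataset.shuffle(n).repeat()
#   map_and_batch(fn, batch, ...)   -> dataset.map(fn, num_parallel_calls=...).batch(batch, drop_remainder=True)
#   prefetch_to_device(device)      -> dataset.apply(tf.data.experimental.prefetch_to_device(device))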
