manicman1999 / stylegan2-tensorflow-2.0
StyleGAN 2 in Tensorflow 2.0
License: MIT License
After a few minutes of training:

model.load(28)
n1 = noiseList(64)
n2 = nImage(64)
for i in range(50):
    print(i, end = '\r')
    model.generateTruncated(n1, noi = n2, trunc = i / 50, outImage = True, num = i)

Error:
2020-01-27 14:14:23.735268: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Invalid argument: input and filter must have the same depth: 1536 vs 96
[[{{node conv2d_mod_23/Conv2D}}]]
Traceback (most recent call last):
File "/Users/pig/PycharmProjects/StyleGAN2-Tensorflow-2.0/stylegan_two.py", line 646, in
model.generateTruncated(n1, noi = n2, trunc = i / 50, outImage = True, num = i)
File "/Users/pig/PycharmProjects/StyleGAN2-Tensorflow-2.0/stylegan_two.py", line 567, in generateTruncated
generated_images = self.GAN.GE.predict(w_space + [noi], batch_size = BATCH_SIZE)
File "/Users/pig/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 909, in predict
use_multiprocessing=use_multiprocessing)
File "/Users/pig/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 722, in predict
callbacks=callbacks)
File "/Users/pig/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 393, in model_iteration
batch_outs = f(ins_batch)
File "/Users/pig/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py", line 3740, in call
outputs = self._graph_fn(*converted_inputs)
File "/Users/pig/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1081, in call
return self._call_impl(args, kwargs)
File "/Users/pig/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1121, in _call_impl
return self._call_flat(args, self.captured_inputs, cancellation_manager)
File "/Users/pig/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat
ctx, args, cancellation_manager=cancellation_manager)
File "/Users/pig/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py", line 511, in call
ctx=ctx)
File "/Users/pig/anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: input and filter must have the same depth: 1536 vs 96
[[node conv2d_mod_23/Conv2D (defined at /anaconda3/envs/tf2/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_keras_scratch_graph_20999]
Function call stack:
keras_scratch_graph
Diving into GANs and deepfakes and Disneyfying my portrait 😄 I stumbled upon your GitHub project.
As I write kids' adventure stories in my spare time and make 360˚ worlds interactive, I wondered whether one could generate 360˚ landscapes from non-360˚ images.
Any ideas?
Equirectangular images look pretty weird, so stitching them is a daunting task for me. Perhaps impossible, it's hard to say.
Could a GAN help me?
Hi Mathew,
Thank you for your implementation and for sharing the code. I am trying to understand StyleGAN in general and your implementation in particular.
Starting from the NVIDIA introduction here: [https://www.youtube.com/watch?v=kSLJriaOumA], I saw a very cool animation in which changing some "inputs" (I assume), such as the coarse, middle, or fine styles (somehow represented by the face images), generates different image textures. I wonder where this is taken into account in your code. I saw that model.generateTruncated() takes n1 and n2 created from random functions. How do we drive the generation with coarse, middle, or fine styles from a set of images?
I hope my concern makes sense. I am trying to see whether StyleGAN can be applied to my application.
Best,
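
One way to steer coarse vs. fine styles with this codebase, assuming noiseList returns one latent vector per generator block (as the first snippet on this page suggests), is to place different latents at different positions in that list. A minimal style-mixing sketch; n_layers and the split point are assumptions, not values read from the repo:

import numpy as np

latent_size = 512
n_layers = 7   # assumption: one style input per generator block
split = 3      # blocks 0..2 take the "coarse" latent, the rest the "fine" one

def mixed_noise_list(batch):
    # Style mixing: one latent drives the coarse (low-resolution) blocks,
    # a second latent drives the middle/fine blocks.
    z_coarse = np.random.normal(size = [batch, latent_size]).astype('float32')
    z_fine = np.random.normal(size = [batch, latent_size]).astype('float32')
    return [z_coarse] * split + [z_fine] * (n_layers - split)

# n1 = mixed_noise_list(64)
# model.generateTruncated(n1, noi = nImage(64), trunc = 0.5, outImage = True, num = 0)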
from tensorflow.keras.models import model_from_json

with open("./Models/gen.json", "r") as json_file:
    model_json = json_file.read()
model = model_from_json(model_json, custom_objects={"Conv2DMod": Conv2DMod})
model.load_weights("./Models/gen_19.h5")
ValueError: bad marshal data (unknown type code)
I get this error when trying to load the pretrained models gen.json and genMA.json.
Versions:
tensorflow == 2.2.3
tf.keras.__version__ == 2.3.0-tf
I also tried higher versions of TensorFlow, like 2.5.0, and it still did not work.
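
For what it's worth, "bad marshal data" typically means the JSON contains Lambda layers whose functions were byte-serialized under a different Python version than the one now loading them. A hedged workaround sketch, assuming the repo's stylegan_two.py is importable and that gen_19.h5 matches the generator it builds: rebuild the architecture in code and load only the weights.

from stylegan_two import StyleGAN  # assumption: running next to the repo's script

# Rebuilding in code defines the Lambda layers fresh in the current Python
# version, so only the .h5 weights are loaded, not the marshaled JSON graph.
model = StyleGAN(lr = 0.0001, silent = True)
model.GAN.G.load_weights("./Models/gen_19.h5")  # assumption: weights map onto GAN.G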
In the StyleGAN 2 paper they state that they use a "non-saturating logistic loss"; is there any particular reason you opted for hinge loss in this implementation? For reference, I believe the original loss functions for config-f are G_logistic_ns_pathreg for the generator and D_logistic_r1 for the discriminator (https://github.com/NVlabs/stylegan2/blob/master/training/loss.py).
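
For comparison, a minimal sketch of the two loss families (regularizers omitted; the sign conventions depend on how the discriminator's raw scores are oriented, so treat this as illustrative rather than a drop-in for this repo):

import tensorflow as tf

# Non-saturating logistic loss, as in the StyleGAN2 paper
# (without the path-length and R1 regularizer terms):
def g_logistic_ns(fake_scores):
    return tf.reduce_mean(tf.nn.softplus(-fake_scores))

def d_logistic(real_scores, fake_scores):
    return tf.reduce_mean(tf.nn.softplus(fake_scores)) \
         + tf.reduce_mean(tf.nn.softplus(-real_scores))

# Hinge loss, the family this implementation uses:
def g_hinge(fake_scores):
    return -tf.reduce_mean(fake_scores)

def d_hinge(real_scores, fake_scores):
    return tf.reduce_mean(tf.nn.relu(1.0 - real_scores)) \
         + tf.reduce_mean(tf.nn.relu(1.0 + fake_scores))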
I'm wondering why the generator loss is defined as gen_loss = K.mean(fake_output); shouldn't it maybe be gen_loss = K.mean(real_output), to confuse the discriminator?
nvm
Hello, I was just wondering if instead of this:

with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
    ...
gradients_of_generator = gen_tape.gradient(gen_loss, self.GAN.GM.trainable_variables)
gradients_of_discriminator = disc_tape.gradient(disc_loss, self.GAN.D.trainable_variables)

you could do this:

with tf.GradientTape() as tape:
    ...
gradients_of_generator = tape.gradient(gen_loss, self.GAN.GM.trainable_variables)
gradients_of_discriminator = tape.gradient(disc_loss, self.GAN.D.trainable_variables)

Would that work? In this case, is it not a waste of GPU memory to use 2 tapes?
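
Not quite as written: a non-persistent GradientTape is released after its first gradient() call, so the second call would raise a RuntimeError. A single tape works only if it is made persistent, a sketch:

import tensorflow as tf

# A single tape can back two gradient() calls only when persistent = True.
with tf.GradientTape(persistent = True) as tape:
    ...  # forward passes computing gen_loss and disc_loss

gradients_of_generator = tape.gradient(gen_loss, self.GAN.GM.trainable_variables)
gradients_of_discriminator = tape.gradient(disc_loss, self.GAN.D.trainable_variables)
del tape  # release the held intermediates once both gradients are taken

Note that a persistent tape keeps all intermediate activations alive until it is deleted, so the two-tape version is not obviously more wasteful: either way the forward activations for both losses have to be stored.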
Idk why, but my images are getting generated like this.
These images are generated with the evaluate function while training. I want to generate fresh images; how do I do that? I've tried calling the model.generateTruncated function, but it gives the error "'ndarray' object is not callable".
Can someone help? I'm new to all this.
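
An "'ndarray' object is not callable" error usually means the name was rebound to an array somewhere along the way (that is a guess about your session, not something the message proves). For fresh images, the call pattern from the first snippet on this page should work on a freshly constructed model:

# Sketch reusing the call pattern shown at the top of this page:
model.load(28)                  # assumption: 28 is your latest checkpoint number
n1 = noiseList(64)              # per-block latent vectors
n2 = nImage(64)                 # per-pixel noise image
model.generateTruncated(n1, noi = n2, trunc = 0.5, outImage = True, num = 0)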
The implementation of the official StyleGAN2 runs into this problem:

#error This file requires compiler and library support for the ISO C++ 2011 standard. \
This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.

Do you meet the same problem? Thanks
I got this error when loading the images. The full error is:
Traceback (most recent call last):
File "stylegan_two.py", line 643, in <module>
model.evaluate(0)
File "stylegan_two.py", line 517, in evaluate
generated_images = self.GAN.GMA.predict(n1 + [n2, trunc], batch_size = BATCH_SIZE)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py", line 1629, in predict
tmp_batch_outputs = self.predict_function(iterator)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 871, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 726, in _initialize
*args, **kwds))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py", line 3206, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1478 predict_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1468 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1461 run_step **
outputs = model.predict_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:1434 predict_step
return self(x, training=False)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:998 __call__
input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py:207 assert_input_compatibility
' input tensors. Inputs received: ' + str(inputs))
ValueError: Layer model_3 expects 8 input(s), but it received 9 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(16, 512) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(16, 512) dtype=float32>, <tf.Tensor 'IteratorGetNext:2' shape=(16, 512) dtype=float32>, <tf.Tensor 'IteratorGetNext:3' shape=(16, 512) dtype=float32>, <tf.Tensor 'IteratorGetNext:4' shape=(16, 512) dtype=float32>, <tf.Tensor 'IteratorGetNext:5' shape=(16, 512) dtype=float32>, <tf.Tensor 'IteratorGetNext:6' shape=(16, 512) dtype=float32>, <tf.Tensor 'IteratorGetNext:7' shape=(16, 256, 256, 1) dtype=float32>, <tf.Tensor 'IteratorGetNext:8' shape=(16, 1) dtype=float32>]
Is it because the blocks are written in functional form while self.generator is a sequential model?
When I change def generator(self) to:
self.S = Dense(512, input_shape = [latent_size])
self.S = LeakyReLU(0.2)(self.S)
self.S = Dense(512)(self.S)
self.S = LeakyReLU(0.2)(self.S)
self.S = Dense(512)(self.S)
self.S = LeakyReLU(0.2)(self.S)
self.S = Dense(512)(self.S)
self.S = LeakyReLU(0.2)(self.S)
it says self.S can't be converted to a tensor.
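
The snippet above assigns a layer object to self.S (Dense(512, ...) is never called on a tensor) and then calls the next layer on that object, mixing the Sequential and functional styles. The functional style needs a symbolic Input and a Model wrapper; a minimal sketch of the mapping network in that form, reusing the names from the snippet:

from tensorflow.keras.layers import Dense, Input, LeakyReLU
from tensorflow.keras.models import Model

inp = Input(shape = [latent_size])   # symbolic tensor, not a layer object
x = Dense(512)(inp)
x = LeakyReLU(0.2)(x)
for _ in range(3):                   # three more Dense + LeakyReLU pairs
    x = Dense(512)(x)
    x = LeakyReLU(0.2)(x)
self.S = Model(inputs = inp, outputs = x)

Separately, the ValueError above says the model was built with 8 Inputs but predict received 9 tensors (seven (16, 512) latents, the (16, 256, 256, 1) noise image, and the (16, 1) truncation scalar), so the model being called also appears to be missing an Input for the truncation value.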
Wondering what this would take to implement?
My guess is I'd need to replace the lambda layers and basically minimize the loss between what the generator creates from a latent vector and the target image?
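
That is the usual recipe for projecting an image into the latent space: freeze the generator and optimize the latent by gradient descent on a reconstruction loss. A minimal sketch, assuming generator is a callable Keras model taking a latent batch and target_image is a preprocessed batch of one:

import tensorflow as tf

latent = tf.Variable(tf.random.normal([1, 512]))       # latent being optimized
opt = tf.keras.optimizers.Adam(learning_rate = 0.01)

for step in range(500):
    with tf.GradientTape() as tape:
        fake = generator(latent, training = False)      # generator stays frozen
        loss = tf.reduce_mean(tf.square(fake - target_image))
    grads = tape.gradient(loss, [latent])
    opt.apply_gradients(zip(grads, [latent]))

The official projector goes further, using a perceptual (LPIPS) loss and noise regularization instead of plain pixel MSE.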
Hi, I am training with this source code and I'd like to duplicate your results first.
Would it be possible to get your dataset so I can train it myself?
generated_images = self.GAN.GM.predict(n1 + [n2], batch_size = BATCH_SIZE)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 909, in predict
use_multiprocessing=use_multiprocessing)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 722, in predict
callbacks=callbacks)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_arrays.py", line 393, in model_iteration
batch_outs = f(ins_batch)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/backend.py", line 3740, in call
outputs = self._graph_fn(*converted_inputs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1081, in call
return self._call_impl(args, kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1121, in _call_impl
return self._call_flat(args, self.captured_inputs, cancellation_manager)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 1224, in _call_flat
ctx, args, cancellation_manager=cancellation_manager)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/function.py", line 511, in call
ctx=ctx)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/eager/execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "", line 3, in raise_from
tensorflow.python.framework.errors_impl.UnimplementedError: The Conv2D op currently only supports the NHWC tensor format on the CPU. The op was given the format: NCHW
[[node model_1/conv2d_mod/Conv2D (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1751) ]] [Op:__inference_keras_scratch_graph_11413]
Function call stack:
keras_scratch_graph
It seems Conv2D does not accept the NCHW data format. I tried to force it to run on the GPU (with tf.device('/gpu:1'): ...), but it did not work.
I also tried different TF versions (2.0, 2.3), even the Docker image for TF 2.0, and all ran into the same issue.
Does anyone know how to get around this?
Thanks
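
The CPU kernel for Conv2D only implements NHWC, so the NCHW path inside the modulated convolution will only run on a GPU. If you need CPU inference, one option (a sketch of the general fix, not a switch the repo provides) is to keep tensors in NHWC around the conv call:

import tensorflow as tf

# CPU-friendly conv sketch: the CPU kernel requires data_format = 'NHWC'.
x = tf.random.normal([1, 64, 64, 96])    # NHWC: batch, height, width, channels
w = tf.random.normal([3, 3, 96, 96])     # kH, kW, in_channels, out_channels
y = tf.nn.conv2d(x, w, strides = 1, padding = 'SAME', data_format = 'NHWC')

# If an upstream op hands you NCHW, transpose to NHWC before the conv:
x_nchw = tf.transpose(x, [0, 3, 1, 2])   # pretend upstream produced NCHW
x_back = tf.transpose(x_nchw, [0, 2, 3, 1])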
Is there way to dump the official model weights and load into this version?
I tried running this on Google Colab, and it gives this error when I try to run the model:
TypeError: Dimension value must be integer or None or have an index method, got 256.0
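
This TypeError usually means a Python float (from true division) reached an argument that must be an integer, such as a layer width or reshape dimension; halving 512 with / yields 256.0. That is a guess at the cause here, but floor division is the standard fix:

im_size = 512

# Keep downstream layer sizes integral:
half = im_size // 2    # 256 (int), whereas im_size / 2 gives 256.0 (float)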
I almost gave up; it keeps throwing error after error when running with tf.distribute.MirroredStrategy().
ValueError: in user code:
<ipython-input-43-fa18d8117b40>:554 train_step *
gradients_of_generator = gen_tape.gradient(gen_loss, self.GAN.GM.trainable_variables)
C:\Users\ThuSi\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\backprop.py:1064 gradient **
flat_sources = [_handle_or_self(x) for x in flat_sources]
C:\Users\ThuSi\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\backprop.py:1064 <listcomp>
flat_sources = [_handle_or_self(x) for x in flat_sources]
C:\Users\ThuSi\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\eager\backprop.py:729 _handle_or_self
return x.handle
C:\Users\ThuSi\AppData\Roaming\Python\Python38\site-packages\tensorflow\python\distribute\values.py:639 handle
raise ValueError("`handle` is not available outside the replica context"
ValueError: `handle` is not available outside the replica context or a `tf.distribute.Strategy.update()` call.
Can you make this work with TensorFlow 2.4.1 and above with tf.distribute.MirroredStrategy()?
Thanks,
Steve
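
The "`handle` is not available outside the replica context" error means the tape/gradient code is being called directly instead of through strategy.run. A minimal sketch of the shape a distributed step would need; build_model and train_step are placeholders, since this repo was not written with MirroredStrategy in mind:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = build_model()   # hypothetical: variables/optimizers created in scope

@tf.function
def distributed_train_step(batch):
    # All gradient work must run inside the replica context:
    per_replica_loss = strategy.run(train_step, args = (batch,))
    return strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica_loss, axis = None)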