
logo-gen's People

Contributors: alex-sage

logo-gen's Issues

Clustering model

Dear alex-sage,

I was wondering whether you could upload your code/pretrained model for clustering the logos into different subgroups.
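
In case it helps while waiting for a reply: the paper clusters the logos in a learned feature space (e.g. CNN or autoencoder features) before training. A minimal generic sketch of that kind of step — not the authors' code, and the feature extractor is assumed to exist already:

    # Generic sketch (not the authors' code): cluster logos by running k-means
    # on pre-computed feature vectors from a CNN or autoencoder.
    import numpy as np
    from sklearn.cluster import KMeans

    # Stand-in for real features: replace with an (N, D) array of features
    # extracted from your logo images.
    features = np.random.rand(1000, 64).astype(np.float32)

    kmeans = KMeans(n_clusters=16, random_state=0).fit(features)
    cluster_labels = kmeans.labels_      # one cluster id per logo
    print(np.bincount(cluster_labels))   # rough cluster sizes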

Not able to load DCGAN pre-trained weights

Hi @alex-sage,
I am trying to generate the logo dataset using the pre-trained weights of DCGAN and WGAN. When I run main.py (dcgan), it gives a tensor shape mismatch error:
Assign requires shapes of both tensors to match. lhs shape= [5,5,3,64] rhs shape= [5,5,3,456]
[[Node: save/Assign_47 = Assign[T=DT_FLOAT, _class=["loc:@generator/g_h4/w"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](generator/g_h4/w, save/RestoreV2:47)]]
I have also tried changing a few parameters to match the dimensions of the model parameters and the checkpoint parameters, but it still does not work.
Could you please provide the exact configuration with which you trained your model?
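
Until the exact configuration is known, one way to diagnose this is to compare the shapes stored in the checkpoint against the variables the model builds; the mismatch in the last dimension (64 vs. 456) suggests the checkpoint was trained with a wider, label-conditioned generator than the default flags build. A rough sketch, assuming TensorFlow 1.x and a hypothetical checkpoint prefix path (this is not the repository's own tooling):

    import tensorflow as tf

    # Hypothetical checkpoint prefix; replace with your own path.
    ckpt_path = "checkpoint/DCGAN.model-100000"

    reader = tf.train.NewCheckpointReader(ckpt_path)
    ckpt_shapes = reader.get_variable_to_shape_map()

    # Print the checkpoint's generator weight shapes, e.g. generator/g_h4/w,
    # and compare them with the shapes reported in the error message.
    for name, shape in sorted(ckpt_shapes.items()):
        if name.startswith("generator/"):
            print(name, shape)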

GPU Reliance

I am trying to plan out possible points of failure in the future.

At the moment I am running the code on a High Performance Computer, as I do not have access to a GPU. The HPC is command-line only, which means GUI development there would not be feasible. Within the GUI, the ability to generate new images would definitely be needed, and in that case a GPU would also be needed. Is there a way to remove this GPU reliance?

Or alternatively (and perhaps a better-phrased question): what would be the best interface between a GUI and the GAN model?
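
For illustration only: one common arrangement (not something this repository provides) is to keep the model on whichever machine has the GPU and expose generation over HTTP, so the GUI becomes a thin client that can run anywhere. Inference can often run on CPU as well, just much more slowly, although ops configured for NCHW layout may be GPU-only. A rough sketch, where generate_images(n) is a hypothetical helper wrapping the TensorFlow session:

    # Minimal sketch of a generation service; generate_images() is a
    # hypothetical helper that runs the GAN session and returns a list of
    # PIL images -- it is not part of this repository.
    import base64
    import io

    from flask import Flask, jsonify, request

    from my_gan_wrapper import generate_images  # hypothetical module

    app = Flask(__name__)

    @app.route("/generate")
    def generate():
        n = int(request.args.get("n", 1))
        images = generate_images(n)  # list of PIL.Image objects
        payload = []
        for img in images:
            buf = io.BytesIO()
            img.save(buf, format="PNG")
            payload.append(base64.b64encode(buf.getvalue()).decode("ascii"))
        return jsonify(images=payload)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)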

Running inception_score.py raises a ValueError

Traceback (most recent call last):
File "/anaconda2/envs/untitled/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1626, in _create_c_op
c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shape must be rank 2 but is rank 1 for 'MatMul' (op: 'MatMul') with input shapes: [2048], [2048,1008].

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/Users/kawsorandy/TensorFlow/WGAN/tflib/inception_score.py", line 96, in
_init_inception()
File "/Users/kawsorandy/TensorFlow/WGAN/tflib/inception_score.py", line 92, in _init_inception
logits = tf.matmul(tf.squeeze(pool3), w)
File "/anaconda2/envs/untitled/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 2053, in matmul
a, b, transpose_a=transpose_a, transpose_b=transpose_b, name=name)
File "/anaconda2/envs/untitled/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 4560, in mat_mul
name=name)
File "/anaconda2/envs/untitled/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/anaconda2/envs/untitled/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/anaconda2/envs/untitled/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3272, in create_op
op_def=op_def)
File "/anaconda2/envs/untitled/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1790, in init
control_input_ops)
File "/anaconda2/envs/untitled/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1629, in _create_c_op
raise ValueError(str(e))
ValueError: Shape must be rank 2 but is rank 1 for 'MatMul' (op: 'MatMul') with input shapes: [2048], [2048,1008].

Process finished with exit code 1
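
A workaround commonly reported for this inception_score.py script (a sketch, not an official fix): on newer TensorFlow versions, tf.squeeze(pool3) drops the batch dimension as well, leaving a rank-1 tensor, so keeping an explicit rank-2 shape before the matmul avoids the error.

    # In _init_inception(), replace
    #     logits = tf.matmul(tf.squeeze(pool3), w)
    # with a version that keeps the batch dimension, e.g.
    logits = tf.matmul(tf.reshape(pool3, [-1, 2048]), w)
    # (squeezing only the spatial axes, tf.squeeze(pool3, [1, 2]), also works)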

Using the interpolate function in Vector.py

Hi,

I am trying to get my head round how the interpolate function works exactly.

z_start and z_stop I understand: these are the two vectors we are interpolating between.

The y_start and y_stop parameters, however, less so. If I just want to interpolate in the latent space and ignore the class labels, I would have thought the command would be:

z = vec.gen_z()
vec.interpolate(z_start=z[0], z_stop=z[1])

However, I end up raising this error in the sample_z function:

 No constant label set, please set first or call this function with parameter y

I've experimented a bit, but I cannot seem to get the function to work without a y parameter, and even then I am not sure what format it should be in.

Do you mind explaining the correct use of this?
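
For reference, and to be clear that this is a generic sketch rather than the actual vector.py implementation: latent-space interpolation usually just samples points along the line between two latent vectors and decodes each one, with the labels y handled separately for conditional models.

    import numpy as np

    def lerp(z_start, z_stop, steps=10):
        """Linear interpolation between two latent vectors (generic sketch,
        not the repository's vector.py implementation)."""
        alphas = np.linspace(0.0, 1.0, steps)[:, None]      # shape (steps, 1)
        return (1.0 - alphas) * z_start + alphas * z_stop   # shape (steps, dim)

    z0, z1 = np.random.randn(128), np.random.randn(128)
    z_path = lerp(z0, z1, steps=8)   # feed each row to the generator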

Possible bug with __init__ in model.py

In the parameter declaration of __init__ in model.py from line 16 onwards, I think sample_dir is missing, which causes an error when I run it. Not sure if this is down to my setup or not.

Love the paper and the code btw!

missing file

[warren@cernet dcgan]$ python main.py
Traceback (most recent call last):
File "main.py", line 4, in
from model import DCGAN
File "/home/warren/logo-gen-master/dcgan/model.py", line 11, in
from utils import *
File "/home/warren/logo-gen-master/dcgan/utils.py", line 22, in
import file_handling as fh
ImportError: No module named file_handling

There are some files missing from your project. I found that they are listed in the .gitignore; could you please upload those files? Thanks.

Unable to load checkpoints to DCGAN architecture for inference purposes

Hello there. It seems that, whatever I try to change in the code, I can't figure out why the checkpoints are not loaded properly.

discriminator/d_h3_lin/bias:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 9451908
Total bytes of variables: 37807632
[*] Reading checkpoints...
[*] Failed to find a checkpoint
[*] 0
Traceback (most recent call last):
File "C:\Users\Joao Marcelo\Anaconda3\envs\oficinaintegracao\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
return fn(*args)
File "C:\Users\Joao Marcelo\Anaconda3\envs\oficinaintegracao\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\Joao Marcelo\Anaconda3\envs\oficinaintegracao\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Attempting to use uninitialized value generator/g_h4/b
[[{{node generator/g_h4/b/read}}]]
[[concat_3/_5]]
(1) Failed precondition: Attempting to use uninitialized value generator/g_h4/b
[[{{node generator/g_h4/b/read}}]]
0 successful operations.
0 derived errors ignored.
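
Since the log prints "[*] Failed to find a checkpoint" just before the FailedPreconditionError, the restore step is being skipped and the variables stay uninitialized. A quick sanity check on the checkpoint directory (a sketch, assuming TensorFlow 1.x; the path is a placeholder):

    import tensorflow as tf

    checkpoint_dir = "checkpoint/your_run"  # placeholder: the directory the code looks in

    # Should print a prefix like .../DCGAN.model-XXXXX; None means the loader
    # cannot see the checkpoint (wrong directory, or the 'checkpoint' index
    # file that lists model paths is missing or points elsewhere).
    print(tf.train.latest_checkpoint(checkpoint_dir))
    print(tf.train.get_checkpoint_state(checkpoint_dir))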

hdf5 format

I want to train on my own dataset. If I use the HDF5 format, where should I change the code?

Using vector.py show_image causes uninitialized value Generator.Input

Hi,

I am trying to run vector.py to do a whole host of latent space exploration for a dissertation project.

I have been trying to get my head around exactly how to pass the correct arguments, and have eventually reached this:

The last few lines of logo_wgan.py:

    with tf.Session(config=tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)) as session:
        if args.load_config is not None:
            print("creating WGAN")
            wgan = WGAN(session, load_config=args.load_config)
            vec = Vector(wgan)
            vec.show_random(save=True)
        else:
            wgan = WGAN(session, config_dict=arg_dict, train=args.train)
        if args.train:
            wgan.train()

The command being run:

python logo_wgan.py --load_config 'settings'

I've had to put the config.json file in runs/settings/config.json, as logo_wgan.py prefixes runs/ to the load_config parameter; this is probably my own lack of understanding.

I also had to change line 63 in vector.py to:

    if self.cfg.N_LABELS > 0:
        # with h5py.File(self.cfg.DATA) as hf:
        #     probs = hf[self.cfg.LABELS].attrs['probs']
        # number = np.random.choice(range(self.cfg.N_LABELS), size=size, replace=True, p=probs)
        number = np.random.randint(0, self.cfg.N_LABELS, size)

as otherwise I got an error finding the HDF5 data:

    Traceback (most recent call last):
  File "logo_wgan.py", line 657, in <module>
    vec.show_random(save=True)
  File "/data/aca15pk/College/LLD-icon-sharp_rc_128/vector.py", line 41, in show_random
    y = self.gen_y(size=size)
  File "/data/aca15pk/College/LLD-icon-sharp_rc_128/vector.py", line 62, in gen_y
    probs = hf[self.cfg.LABELS].attrs['probs']
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1496871545397/work/h5py/_objects.c:2846)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1496871545397/work/h5py/_objects.c:2804)
  File "/home/aca15pk/.conda/envs/py2-gpu/lib/python2.7/site-packages/h5py/_hl/group.py", line 169, in __getitem__
    oid = h5o.open(self.id, self._e(name), lapl=self._lapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1496871545397/work/h5py/_objects.c:2846)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/home/ilan/minonda/conda-bld/h5py_1496871545397/work/h5py/_objects.c:2804)
  File "h5py/h5o.pyx", line 190, in h5py.h5o.open (/home/ilan/minonda/conda-bld/h5py_1496871545397/work/h5py/h5o.c:3740)
KeyError: 'Unable to open object (Component not found)'

With these changes I get the code to run, and after some time I get this:

  File "logo_wgan.py", line 657, in <module>
    vec.show_random(save=True)
  File "/data/aca15pk/College/LLD-icon-sharp_rc_128/vector.py", line 42, in show_random
    self.show_z(z, y, shape=shape, border=border, enum=enum, res=res, save=save)
  File "/data/aca15pk/College/LLD-icon-sharp_rc_128/vector.py", line 142, in show_z
    self.show(self.sample_z(z, y), shape=shape, enum=enum, border=border, res=res, save=save)
  File "/data/aca15pk/College/LLD-icon-sharp_rc_128/vector.py", line 100, in sample_z
    samples = self.wgan.sample(z_i, y_i)
  File "logo_wgan.py", line 265, in sample
    self._init_sampler()
  File "logo_wgan.py", line 250, in _init_sampler
    self.sampler = self.Generator(self.cfg, n_samples=0, labels=self.y, noise=self.z, is_training=self.t_train)
  File "/data/aca15pk/College/LLD-icon-sharp_rc_128/tflib/architectures.py", line 23, in Generator_Resnet_32
    output = lib.ops.linear.Linear('Generator.Input', 128 + add_dim, 4 * 4 * cfg.DIM_G, noise)
  File "/data/aca15pk/College/LLD-icon-sharp_rc_128/tflib/ops/linear.py", line 110, in Linear
    weight_values
  File "/data/aca15pk/College/LLD-icon-sharp_rc_128/tflib/__init__.py", line 25, in param
    param = tf.Variable(*args, **kwargs)
  File "/home/aca15pk/.conda/envs/py2-gpu/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 199, in __init__
    expected_shape=expected_shape)
  File "/home/aca15pk/.conda/envs/py2-gpu/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 330, in _init_from_args
    self._snapshot = array_ops.identity(self._variable, name="read")
  File "/home/aca15pk/.conda/envs/py2-gpu/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1400, in identity
    result = _op_def_lib.apply_op("Identity", input=input, name=name)
  File "/home/aca15pk/.conda/envs/py2-gpu/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/home/aca15pk/.conda/envs/py2-gpu/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/aca15pk/.conda/envs/py2-gpu/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value Generator.Input/Generator.Input.W
	 [[Node: Generator.Input/Generator.Input.W/read = Identity[T=DT_FLOAT, _class=["loc:@Generator.Input/Generator.Input.W"], _device="/job:localhost/replica:0/task:0/gpu:0"](Generator.Input/Generator.Input.W)]]

This is the error I don't understand.

I appreciate I have put in a lot of information; I'm just trying to be as clear as possible. If there is a more correct way (there probably is) to use vector.py, I would love to know. If I work it out, I'd be happy to write a README for it to help out.
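
For what it's worth, the FailedPreconditionError means the sampler's Generator.Input variable was created after variables were initialized/restored, so it never received a value. A generic TF 1.x way to see which variables are in that state (a debugging sketch, not the intended vector.py workflow):

    # Names of variables the session has not initialized (TF 1.x).
    uninit = set(session.run(tf.report_uninitialized_variables()))
    print(uninit)

    # For debugging only: give those variables (random) initial values.
    # For meaningful samples they must instead be restored from the trained checkpoint.
    missing = [v for v in tf.global_variables()
               if v.name.split(':')[0] in uninit or v.name.split(':')[0].encode() in uninit]
    session.run(tf.variables_initializer(missing))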

Add requirements.txt

Hi!

Would you be so kind as to add a file specifying the environment? It would greatly facilitate reproduction. It can easily be generated, e.g. with:

pip freeze > requirements.txt

Or, at least, share which versions of Python and TensorFlow you used.

Thanks!

Prepare my own dataset

Hi @alex-sage, I want to use your code to train on a dataset of my own; the icons are collected from the Noun Project. Is there any documentation or code showing how to convert the icons to the HDF5 file format?
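
There doesn't appear to be a documented conversion script, but as a generic starting point, packing a folder of icons into an HDF5 file with h5py might look like the sketch below. The dataset key ('data'), image size, and layout are assumptions; they should be matched to how the provided LLD HDF5 files are organised.

    import glob
    import h5py
    import numpy as np
    from PIL import Image

    paths = sorted(glob.glob("icons/*.png"))
    images = np.stack([
        np.asarray(Image.open(p).convert("RGB").resize((32, 32)), dtype=np.uint8)
        for p in paths
    ])  # shape (N, 32, 32, 3)

    with h5py.File("my_icons.hdf5", "w") as hf:
        # 'data' is an assumed key -- match it to what the training code reads.
        hf.create_dataset("data", data=images, compression="gzip")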

ValueError: too many values to unpack (expected 2) in hdf5_images.py

1. Download LLD-logo.hdf5 (13 GB).
2. Change the path and run python hdf5_images.py.
3. Error:

Traceback (most recent call last):
  File "hdf5_images.py", line 45, in <module>
    train_gen, valid_gen = load(64)
ValueError: too many values to unpack (expected 2)

4. I have tried a lot of things, but it didn't work. Why? I'm tearing my hair out.
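
The message just means load(64) is not returning exactly two objects in this code path. A quick generic check of what it actually returns before unpacking (a sketch):

    result = load(64)       # same call as in hdf5_images.py
    print(type(result))
    try:
        print(len(result))  # how many items would need to be unpacked
    except TypeError:
        pass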

Generator Interface in the paper

In the original paper, figure 2 shows a generator interface. I am not sure whether this was just conceptual in the end, or whether it was developed at all. It would be super useful to ground the project I am working on in something similar to this :)

ValueError: Dimensions must be equal, but are 8 and 128 for 'Generator.1.Shortcut/Conv2D' (op: 'Conv2D') with input shapes: [?,128,8,8], [1,1,128,128].

Hi, I was running your WGAN code with the pretrained model, but it didn't succeed; the error is the one in the title.

    def Generator_Resnet_32(cfg, n_samples, labels, noise=None, is_training=True):
        if noise is None:
            noise = tf.random_normal([n_samples, 128])
        add_dim = 0
        if cfg.LAYER_COND:
            y = labels
            noise = tflib.ops.concat.concat([noise, y], 1)
            add_dim = cfg.N_LABELS
        output = lib.ops.linear.Linear('Generator.Input', 128 + add_dim, 4 * 4 * cfg.DIM_G, noise)
        output = tf.reshape(output, [-1, cfg.DIM_G, 4, 4])

In conv2d.py, the shapes of the filters and the inputs are what trigger the problem.

    def Generator_Resnet_32(cfg, n_samples, labels, noise=None, is_training=True):
        if noise is None:
            noise = tf.random_normal([n_samples, 128])
        add_dim = 0
        if cfg.LAYER_COND:
            y = labels
            noise = tflib.ops.concat.concat([noise, y], 1)
            add_dim = cfg.N_LABELS
        output = lib.ops.linear.Linear('Generator.Input', 128 + add_dim, 4 * 4 * cfg.DIM_G, noise)
        output = tf.reshape(output, [-1, cfg.DIM_G, 4, 4])
        print(output)
        output = ResidualBlock(cfg, 'Generator.1', cfg.DIM_G, cfg.DIM_G, 3, output, resample='up', labels=labels,
                               is_training=is_training)
        print('Generator_Resnet_32 !!!')

The code can't get past the ResidualBlock call. What should I do? Can you give me some advice?
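
For what it's worth, the shapes in the error ([?, 128, 8, 8] input against a [1, 1, 128, 128] filter, with "8 and 128" reported unequal) look like a channels-first (NCHW) tensor being run through a convolution that assumes channels-last (NHWC), so it reads 8 as the channel count instead of 128. A generic TF 1.x graph-mode illustration of the difference (not the repository's conv2d wrapper):

    import tensorflow as tf

    x_nchw = tf.zeros([1, 128, 8, 8])   # channels-first: 128 channels on an 8x8 map
    w = tf.zeros([1, 1, 128, 128])      # 1x1 convolution, 128 -> 128 channels

    # OK: the op is told the input is channels-first (GPU-only in most TF builds).
    ok = tf.nn.conv2d(x_nchw, w, strides=[1, 1, 1, 1], padding='SAME',
                      data_format='NCHW')

    # Fails at graph construction with "Dimensions must be equal, but are 8 and 128":
    # with the default NHWC layout the last axis (8) is treated as the channel count.
    # bad = tf.nn.conv2d(x_nchw, w, strides=[1, 1, 1, 1], padding='SAME')

So it may be worth checking which data format the configuration and conv ops expect, and whether running on CPU instead of GPU changes that.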

What hardware is needed to perform this logo-gen task

What computing power is needed to train this network (CPU, video card, RAM)?
How powerful does the PC need to be?
What software is needed (for example CUDA, and so on)?
How long does the network need to train (a day, two days, a month, two months)?

Thank you.
