vdumoulin/discgen
Code for the "Discriminative Regularization for Generative Models" paper.
License: MIT License
File "/usr/local/bin/fuel-convert", line 11, in <module>
  load_entry_point('fuel==0.2.0', 'console_scripts', 'fuel-convert')()
File "/usr/local/lib/python2.7/dist-packages/fuel/bin/fuel_convert.py", line 69, in main
  output_paths = convert_function(**args_dict)
File "/usr/local/lib/python2.7/dist-packages/fuel/converters/celeba.py", line 198, in convert_celeba
  directory, output_directory, output_filename)
File "/usr/local/lib/python2.7/dist-packages/fuel/converters/base.py", line 45, in wrapped
  return f(directory, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/fuel/converters/celeba.py", line 139, in convert_celeba_64
  h5file = _initialize_conversion(directory, output_path, (64, 64))
File "/usr/local/lib/python2.7/dist-packages/fuel/converters/celeba.py", line 41, in _initialize_conversion
  skiprows=2, usecols=tuple(range(1, 41)))
File "/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 806, in loadtxt
  vals = [vals[i] for i in usecols]
IndexError: list index out of range
My gut feeling is that this is a problem with fuel. Could you please suggest a workaround?
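A hedged sketch of what the traceback suggests, not fuel's actual code: the converter calls `numpy.loadtxt(..., skiprows=2, usecols=tuple(range(1, 41)))` on `list_attr_celeba.txt`, which assumes every data row has a filename plus 40 attribute columns. If the attribute file is truncated (a common result of an interrupted download) or uses an unexpected delimiter, some row has fewer columns and indexing `usecols` into it fails exactly as shown. The snippet below reproduces that failure mode on a fabricated truncated row; a practical first step is to re-download the attribute file and check that its data rows really have 41 whitespace-separated fields.

```python
import numpy as np
from io import StringIO

# Fabricated example of a truncated attribute file: two header lines,
# then a data row with only 3 attribute columns instead of 40.
truncated = "202599\nheader line\n000001.jpg 1 -1 1\n"

try:
    np.loadtxt(StringIO(truncated), dtype=str, skiprows=2,
               usecols=tuple(range(1, 41)))
    parses_cleanly = True
except (IndexError, ValueError):
    # Older numpy raises IndexError (as in the traceback above);
    # newer numpy's rewritten loadtxt raises ValueError instead.
    parses_cleanly = False

print("file parses cleanly:", parses_cleanly)
```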
Thanks,
Arghya
I want to ask why you set log_sigma to zero rather than learning it through the network.
Also, you set mu_theta directly to the reconstructed or sampled image. Why don't you sample from the Gaussian distribution with mean mu_theta and log standard deviation log_sigma?
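One way to see why fixing log_sigma at zero is a common choice (a general observation about Gaussian decoders, not a statement about this repo's specific reasoning): with p(x | z) = N(x; mu_theta, exp(log_sigma)^2) and log_sigma = 0, the per-pixel negative log-likelihood reduces to 0.5 * (x - mu_theta)^2 plus a constant, i.e. plain squared-error reconstruction. The check below verifies that identity numerically.

```python
import numpy as np

def gaussian_nll(x, mu, log_sigma):
    """Per-element negative log-likelihood of N(x; mu, exp(log_sigma)^2)."""
    return (0.5 * np.log(2 * np.pi) + log_sigma
            + 0.5 * (x - mu) ** 2 / np.exp(2 * log_sigma))

x, mu = 0.7, 0.4
nll = gaussian_nll(x, mu, log_sigma=0.0)

# With log_sigma = 0, the NLL is the squared-error term plus a constant.
mse_term = 0.5 * (x - mu) ** 2
const = 0.5 * np.log(2 * np.pi)
print(np.isclose(nll, mse_term + const))  # True
```

As for outputting mu_theta directly: sampling from N(mu_theta, I) would only add unit-variance pixel noise on top of the mean image, so displaying the mean is the standard way to show a clean reconstruction or sample.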
This code has been working great for me. Currently, the discriminative regularization in this codebase uses the loss from the batch norm layers of the ConvolutionalSequence, but the paper covers the more general case of taking losses from other layers of the classifier.
I'd like to add losses at the higher layers: the MLP and y-hat. I'm curious whether anyone else is interested in working on this with me. The discriminative regularization parts are some of the trickier parts of building the training computation graph, and y-hat currently doesn't have a named node in the classifier graph.
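To make the general form concrete, here is a hedged numpy sketch of the discriminative-regularization term from the paper, not this repo's Blocks/Theano graph: penalize the mismatch between classifier activations computed on the real image x and on the reconstruction x_hat, summed over a chosen set of layers (which could include the MLP features or y-hat). The toy classifier, the layer choice, and the weights lambda_l are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_classifier(x, weights):
    """Return the activation of every layer of a small random MLP
    (a stand-in for the real classifier's named intermediate nodes)."""
    feats = []
    h = x
    for w in weights:
        h = np.tanh(h @ w)
        feats.append(h)
    return feats

def disc_reg(x, x_hat, weights, lambdas):
    """Weighted sum of squared activation mismatches across layers."""
    feats_x = toy_classifier(x, weights)
    feats_r = toy_classifier(x_hat, weights)
    return sum(lam * np.mean((a - b) ** 2)
               for lam, a, b in zip(lambdas, feats_x, feats_r))

weights = [rng.standard_normal((8, 8)) for _ in range(3)]
x = rng.standard_normal(8)

# A perfect reconstruction incurs zero penalty at every layer.
print(disc_reg(x, x, weights, lambdas=[1.0, 1.0, 1.0]))  # 0.0
```

In the actual Theano graph, the analogue of `feats` would be pulling the named intermediate variables out of the classifier's computation graph, which is why y-hat needing a named node matters for extending this.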