Comments (6)
@MarlinSchaefer Thanks for raising the issue. I'll see if I get time today to take a look at this. Will send another message when I've figured out what's going on.
from vitamin_c.
@MarlinSchaefer Right, so it's probably a shape issue you're dealing with. I checked my code and I think you may have the detector and sample rate dimensions switched around. Your x_data should end up with a shape of (number of training samples, number of parameters to infer), and your y_data_noisy/y_data_noisefree arrays should have the shape (number of training samples, sample rate, number of detectors).
Also, I'm not sure if this will break the code, but it's probably safe to also make sure that your test data has an extra dimension at the beginning for the number of test samples (even if you're using just 1 test sample). i.e. (number of test samples, number of parameters to infer) for x_data_test and (number of test samples, sample rate, number of detectors) for y_data_test_noisy/y_data_test_noisefree.
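To illustrate the shapes described above, a minimal numpy sketch (the array names, the sample rate of 256 and the 3 detectors are taken from this thread; `n_train`, `n_test` and `n_pars` are placeholder values, not from the repo):

```python
import numpy as np

n_train, n_test = 1000, 1    # number of training/test samples (placeholders)
n_pars = 2                   # number of parameters to infer (placeholder)
sample_rate, n_det = 256, 3  # values mentioned in this thread

# Training data: parameters and time series
x_data = np.zeros((n_train, n_pars))
y_data_noisy = np.zeros((n_train, sample_rate, n_det))

# Test data keeps a leading sample axis even for a single test sample
x_data_test = np.zeros((n_test, n_pars))
y_data_test_noisy = np.zeros((n_test, sample_rate, n_det))

print(x_data.shape, y_data_noisy.shape)            # (1000, 2) (1000, 256, 3)
print(x_data_test.shape, y_data_test_noisy.shape)  # (1, 2) (1, 256, 3)
```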
@hagabbar I've tested switching the channels, so for 1000 signals I now have the shape (1000, 256, 3). However, I still get the same error.
I've also tested adding an extra dimension to the test data. Doing so causes the code to crash when reshaping the y-data (line 722 in run_vitamin). It seems load_data assembles the data assuming it is a single sample.
I've looked a bit more into this, but I'm having a hard time understanding everything your code does.
The error occurs on line 618 in CVAE_model.py, where you try to reshape a tensor to [batch_size]. I'm not familiar with tensorflow, so what is this code supposed to do, and why use reshape instead of flatten?
Could this maybe just be a version issue?
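For context on the reshape-vs-flatten question above: in numpy (and similarly in TensorFlow) a reshape to a 1-D target is equivalent to flattening, so the choice is mostly stylistic; a tiny sketch with a placeholder array:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)  # placeholder 2-D array

# Reshaping to a known 1-D length and flattening give the same result
assert np.array_equal(np.reshape(a, (6,)), a.flatten())
# -1 lets reshape infer the length, avoiding a hard-coded size
assert np.array_equal(np.reshape(a, (-1,)), a.flatten())
```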
@hagabbar I've had a closer look at the errors/code again. So I've found one problem and was able to resolve it. However, that caused further problems.
So in line 618 of CVAE_model.py there is the line:
con = tf.reshape(tf.math.reciprocal(temp_var_r2_sky),[bs_ph])
I'm not sure exactly why this reshape is needed or what it is aiming to achieve. However, the shapes don't match: temp_var_r2_sky has shape (batch size, number of inferred sky parameters), which in my case is (128, 2). So I would guess that the reshape should be either
con = tf.reshape(tf.math.reciprocal(temp_var_r2_sky),[bs_ph, sky_len])
or
con = tf.reshape(tf.math.reciprocal(temp_var_r2_sky),[bs_ph * sky_len])
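A minimal numpy sketch of the shape problem described above (128 and 2 are the batch size and number of sky parameters from this thread; numpy's reshape behaves like tf.reshape for this purpose):

```python
import numpy as np

bs_ph, sky_len = 128, 2
temp_var_r2_sky = np.ones((bs_ph, sky_len))

# Reshaping (128, 2) to [128] fails: 256 elements can't fit into 128 slots
try:
    np.reshape(1.0 / temp_var_r2_sky, (bs_ph,))
except ValueError as e:
    print("reshape to [bs_ph] fails:", e)

# Either of the guessed alternatives is a valid reshape
con_2d = np.reshape(1.0 / temp_var_r2_sky, (bs_ph, sky_len))    # (128, 2)
con_1d = np.reshape(1.0 / temp_var_r2_sky, (bs_ph * sky_len,))  # (256,)
```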
Either of them passes the reshape but crashes on line 626:
reconstr_loss_sky = von_mises_fisher.log_prob(tf.math.l2_normalize(xyz_unit,axis=1))
This is due to a shape mismatch between loc_sky and scale_sky from VI_decoder_r2.py, and this is where I can't follow the shapes anymore. To my mind the scale and the loc of a normal distribution should have the same shape, but in the code loc_sky is explicitly set to have at least one more dimension than scale_sky. The comments say this is due to the 3rd sky parameter (polarization or distance, I'm guessing), but I'm not sure what the reason for this is.
Marlin, I think this is because the sky parameters output from the decoder are designed to be 3D in the sense that they are modelled using the Fisher Von Mises distribution which describes a Gaussian-like blob of probability on the 2-sphere (sky). The Tensorflow probability functions model this with a single variance parameter (so a single blob-width on the 2D sky) but it uses 3 location parameters to define a 3D unit vector pointing towards the centre of the blob. I think we have the decoder output 3 numbers for the location and then we either normalise it to be a unit vector or it normalises it inside the Fisher Von Mises function itself. Does this make sense?
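A small numpy sketch of the parameterisation described above: the decoder's 3 raw location numbers are normalised to a unit mean direction on the 2-sphere, while the spread is a single concentration scalar (this matches how TensorFlow Probability's `VonMisesFisher` is parameterised via `mean_direction` and `concentration`; the raw values below are placeholders, not decoder outputs from the repo):

```python
import numpy as np

# Raw decoder outputs (placeholder values): 3 numbers for location, 1 for spread
raw_loc_sky = np.array([0.3, -1.2, 0.5])  # shape (3,): unnormalised direction
scale_sky = 4.0                            # single concentration (blob width)

# Normalise the location to a unit vector pointing at the blob centre on the sky
mean_direction = raw_loc_sky / np.linalg.norm(raw_loc_sky)

print(mean_direction.shape)            # (3,)
print(np.linalg.norm(mean_direction))  # unit norm (≈ 1.0)
```

So the location genuinely carries one more axis of numbers (a 3-vector) than the scalar concentration, which is why loc_sky and scale_sky have different shapes by design.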