mlforhealthlabpub's People

Contributors

bcebere, drshushen, robsdavis

mlforhealthlabpub's Issues

PATE-GAN: Teachers

Description

Upon inspecting PATE-GAN, I believe there might be an issue with the teacher discriminators. Algorithm 1 on page 7 of the paper states that the teachers are classifiers meant to be trained continuously over the whole training procedure (lines 9-10 in Algorithm 1). Instead, L203 defines classifiers (logistic regressions) which are (re-)fit from scratch at each iteration; this may prevent them from learning continuously.

No data for RadialGAN

Hello, I'm trying to run RadialGAN on your example to better understand the algorithm and perhaps use it in my paper.
Could you please provide the relevant data? I couldn't find it in the drive.

'RandomSurvivalForest' object has no attribute 'event_times_'

Excuse me, when I use survivalquilts I get the error message 'RandomSurvivalForest' object has no attribute 'event_times_'; the error is raised on line 45 of class_underlyingmodels. Also, could you please provide the example data (the preprocessed METABRIC dataset)? Thank you.
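One possible cause, offered as an assumption rather than a confirmed diagnosis: newer scikit-survival releases renamed the fitted attribute `event_times_` to `unique_times_`, so code written against one version breaks on the other. A small compatibility helper could bridge the two:

```python
def get_event_times(model):
    # Try the old attribute name first, then the newer one.
    for attr in ("event_times_", "unique_times_"):
        if hasattr(model, attr):
            return getattr(model, attr)
    raise AttributeError(
        "model exposes neither event_times_ nor unique_times_"
    )
```

If the version mismatch is indeed the cause, pinning the scikit-survival version the repository was developed against would be the cleaner fix.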

Code with MIMIC-III data

Hi, thank you for providing the CartPole example using the AVRIL algorithm.

I wonder whether you could also provide the code for testing the AVRIL algorithm on MIMIC-III data? I am curious how you generated "inputs, targets, a_dim, s_dim", as in the CartPole example usage.

Also, how did you merge the different data frames into a final version for training? I know the dataset requires credentialed access, but I am really interested in the code, since I have already obtained the dataset by application.

I believe sharing this would benefit many of us! Thank you very much!
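Purely as an illustration of the kind of preprocessing being asked about (every name here is a hypothetical stand-in, not the lab's actual pipeline), per-episode trajectories of (state, action) pairs could be flattened into arrays shaped like those the CartPole example consumes:

```python
import numpy as np

def trajectories_to_arrays(trajs, s_dim, a_dim):
    """trajs: list of trajectories, each a list of (state_vector, action_int)."""
    inputs = np.concatenate([np.stack([s for s, _ in t]) for t in trajs])
    targets = np.concatenate([np.array([a for _, a in t]) for t in trajs])
    return inputs.astype(np.float32), targets.astype(np.int64), a_dim, s_dim

# Toy usage with two trajectories of length 2 and 1:
trajs = [[(np.zeros(4), 0), (np.ones(4), 1)], [(np.ones(4), 1)]]
inputs, targets, a_dim, s_dim = trajectories_to_arrays(trajs, s_dim=4, a_dim=2)
```

The actual MIMIC-III pipeline would additionally need the merging and feature-extraction steps the issue asks about, which only the authors can confirm.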

How can this algorithm be used for classification? (attentive state-space)

I have read the article Attentive State-Space Modeling of Disease Progression and I think the algorithm is very valuable, but I am still not sure how the authors apply this probabilistic model to classification. Is it possible to use Bayes' theorem to classify, selecting the category with the highest probability? I was wondering whether the model.get_likelihood function can be used to calculate the likelihood of the observation sequence under each category separately.
Looking forward to your reply! Thanks!
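The scheme the question proposes can be sketched as follows, with dummy stand-ins for the fitted per-class models; `get_likelihood` here is assumed to return a log-likelihood, which the actual API may or may not do:

```python
import numpy as np

def classify(sequence, models, log_priors):
    """Bayes' rule over class-conditional models: argmax_c log p(x|c) + log p(c)."""
    scores = {c: m.get_likelihood(sequence) + log_priors[c]
              for c, m in models.items()}
    return max(scores, key=scores.get)

# Toy stand-ins for two class-conditional sequence models:
class _Dummy:
    def __init__(self, loglik):
        self.loglik = loglik
    def get_likelihood(self, sequence):
        return self.loglik  # a real model would score the sequence

models = {"stable": _Dummy(-10.0), "progressing": _Dummy(-3.0)}
log_priors = {"stable": np.log(0.5), "progressing": np.log(0.5)}
pred = classify([0.1, 0.4, 0.9], models, log_priors)
```

This requires fitting one attentive state-space model per class on that class's sequences; whether that matches the authors' intended usage is exactly what the issue asks.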

PATE-GAN: Data Partition

Description

Upon inspecting PATE-GAN, I believe there is an indexing bug on L190, which means that the first data partition is being fed to every teacher. This defeats the purpose of partitioning and weakens the privacy guarantee.
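A minimal reconstruction of the suspected bug (illustrative, not the repository's exact code): if the loop index is ignored when selecting a shard, every teacher trains on the same data.

```python
import numpy as np

data = np.arange(12).reshape(6, 2)
partitions = np.array_split(data, 3)   # 3 disjoint teacher shards
k = len(partitions)

# Suspected behaviour: the loop index is ignored, so every teacher
# sees the first shard.
buggy_views = [partitions[0] for _ in range(k)]

# Intended behaviour: teacher i trains only on its own shard, which is
# what the PATE privacy analysis assumes.
fixed_views = [partitions[i] for i in range(k)]
```

PATE's privacy accounting relies on each record influencing at most one teacher, so feeding the same shard to all teachers breaks that assumption.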

PATE-GAN: Processing and Metadata

Description

Upon inspecting PATE-GAN, I noticed a couple of potential issues:

  • The model expects already processed/scaled data (the processing is done outside of the model, i.e., here), and as a consequence the model doesn’t return synthetic data in the original scale.
  • The data bounds (min/max values) are directly extracted from the data in a non-DP way (here), which might lead to privacy vulnerabilities as shown in previous work.
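On the first point, a sketch of the assumed workflow (not the repository's code): when scaling happens outside the model, the caller must keep the fitted scaler and invert it, otherwise synthetic samples stay in the scaled range. The fit step also illustrates the second point, since the bounds are read straight off the data.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

real = np.array([[10.0, 200.0],
                 [20.0, 400.0],
                 [30.0, 600.0]])
scaler = MinMaxScaler().fit(real)    # min/max taken directly from the data (non-DP)

synthetic_scaled = np.array([[0.5, 0.5]])               # what the model emits
synthetic = scaler.inverse_transform(synthetic_scaled)  # back to original units
```

A DP-aware alternative would use fixed, publicly known bounds (or bounds released through a DP mechanism) instead of data-derived min/max values.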

Supervised Loss Computation

I have a question regarding the implementation of the supervised loss. It is computed in the code as:

    # 2. Supervised loss
    G_loss_S = tf.losses.mean_squared_error(H[:,1:,:], H_hat_supervise[:,1:,:])

where H_hat_supervise is computed as:

    H_hat_supervise = supervisor(H, T)

Shouldn't the supervised loss instead be computed on the latent representation coming from the generator (at least this is how I understood the paper)?

    E_hat = generator(Z, T)
    H_hat = supervisor(E_hat, T)
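To make the two candidate targets concrete, here is a numpy sketch with dummy stand-ins for the networks (an illustration of the question, not TimeGAN itself):

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def supervisor(x):          # dummy one-step predictor standing in for the network
    return 0.9 * x

rng = np.random.default_rng(0)
H = rng.normal(size=(8, 10, 4))      # embedding of real data
E_hat = rng.normal(size=(8, 10, 4))  # generator output on noise Z

# As implemented: the supervisor is run on the real-data embedding H.
H_hat_supervise = supervisor(H)
loss_as_implemented = mse(H[:, 1:, :], H_hat_supervise[:, 1:, :])

# As the question reads the paper: the supervisor is run on E_hat.
H_hat = supervisor(E_hat)
loss_alternative = mse(H[:, 1:, :], H_hat[:, 1:, :])
```

The two losses drive different things: the first trains the supervisor to predict the next step within real embeddings, while the second would push the generator's latent sequences toward the real ones. Which the authors intended is exactly what the issue asks.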
