gamenet's People

Contributors

sjy1203

gamenet's Issues

about the LEAP baseline

I also found the open-source original Theano version of LEAP (KDD 2017):
https://github.com/neozhangthe1/AutoPrescribe
I have noticed that the two papers share authors. Can I just use your LEAP implementation instead of the Theano version to test the LEAP model on my data?
Looking forward to your response, thank you!

about reproducing the performance

I want to reproduce LEAP and GAMENet. My scripts:
%run train_Leap.py
%run train_Leap.py --resume_path final.model --eval

%run train_GAMENet.py --model_name GAMENet-ddi --ddi
%run train_GAMENet.py --model_name GAMENet-ddi --ddi --resume_path final.model --eval

I chose final.model for the test predictions. Here are my results:

  • LEAP (reproduced) test: DDI Rate 0.0699, Jaccard 0.4438, PRAUC 0.6362, AVG_PRC 0.6364, AVG_RECALL 0.6158, AVG_F1 0.6064, avg # of medications 19.
  • GAMENet test (no DDI): DDI Rate 0.0778, Jaccard 0.5151, PRAUC 0.7644, AVG_PRC 0.6748, AVG_RECALL 0.6943, AVG_F1 0.6704.
  • GAMENet test (with DDI): DDI Rate 0.0775, Jaccard 0.5153, PRAUC 0.7678, AVG_PRC 0.6765, AVG_RECALL 0.6923, AVG_F1 0.6705.

They are not the same as in your paper, but they show a similar trend. Is this normal, or is something wrong?

Looking forward to your responses, thank you!

code error

I am confused about the training and test process: why do you use c^t_d, c^t_p, and c^t_m to predict c^t_m?

for adm_idx, adm in enumerate(input):
    # the model is fed the visit history up to and including the current admission
    target_output1 = model(input[:adm_idx+1])
    # build the multi-hot ground-truth vector over the medication vocabulary
    y_gt_tmp = np.zeros(voc_size[2])
    y_gt_tmp[adm[2]] = 1
    y_gt.append(y_gt_tmp)

I guess you used the wrong index when slicing the input here.
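
For context, here is a toy illustration of what that slice passes to the model. The per-visit layout [diagnosis_codes, procedure_codes, medication_codes] is inferred from adm[2] and voc_size[2] in the snippet above, and all values are made up:

# toy data: each admission is [diagnosis_codes, procedure_codes, medication_codes]
input = [
    [[1, 2], [7], [10, 11]],   # visit 0
    [[3],    [8], [12]],       # visit 1  <- current admission when adm_idx == 1
]
adm_idx = 1
print(input[:adm_idx + 1])     # history plus the CURRENT visit, its medications included
print(input[:adm_idx])         # strictly previous visits only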

Preprocessing data error

@sjy1203
EDA.ipynb
I tried running EDA.ipynb but got the error "name 'ndc_rxnorm_file' is not defined". Upon further investigation, I found that there is a mismatch in the file name. Please note that.
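
As a quick workaround, the variable can simply be defined before the cell that uses it. This is only a sketch: the file name below is an assumption and should be replaced with whatever NDC-to-RxNorm mapping file is actually present in the data folder.

# define the missing variable near the top of EDA.ipynb, before it is first used
ndc_rxnorm_file = './ndc2rxnorm_mapping.txt'   # hypothetical name; point it at the mapping file shipped in data/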

about tuning the hyperparameters

  1. Have you tried mini-batch training in your code?
  2. Does the LEAP baseline implement the version without reinforcement learning?
  3. When tuning the LEAP baseline, what are the important hyperparameters? Can you list the grid-search ranges for the learning rate and the number of epochs? (A rough grid-search sketch follows this list.)
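
The sketch below only illustrates the kind of grid search being asked about; train_leap() is a hypothetical wrapper around the training loop in train_Leap.py, and the ranges are illustrative guesses, not the authors' settings.

import itertools
import random

def train_leap(lr, n_epochs):
    # placeholder standing in for a full run of train_Leap.py with these
    # hyperparameters; it should return a validation metric such as Jaccard
    return random.random()

best = (-1.0, None, None)
for lr, n_epochs in itertools.product([1e-4, 5e-4, 1e-3], [20, 40, 60]):
    val_jaccard = train_leap(lr, n_epochs)
    if val_jaccard > best[0]:
        best = (val_jaccard, lr, n_epochs)

print('best (val Jaccard, lr, epochs):', best)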

Thank you!

can't find the dnc.py file

Hello! Thank you for sharing the code. When reading your implementation of the RETAIN and DMNC methods, I found that the dnc.py file is missing. Could you send me a copy of this file? My email is [email protected]. Thank you very much! Looking forward to your reply. With best regards!

about the loss

In your paper, the coefficient of the multi-label margin loss is 0.1; however, in the code it is 0.01. I am wondering which value is the better setting.
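
For reference, a minimal sketch of how such a weighted combination of binary cross-entropy and multi-label margin loss can be written in PyTorch; the 0.9 weight on the BCE term and the tensor shapes are illustrative assumptions rather than a verbatim copy of the repository's training script.

import torch
import torch.nn.functional as F

n_med = 145                              # illustrative medication vocabulary size
logits = torch.randn(1, n_med)           # stand-in model output
y = torch.zeros(1, n_med)
y[0, [3, 17, 42]] = 1                    # multi-hot ground truth

loss_bce = F.binary_cross_entropy_with_logits(logits, y)

# multilabel_margin_loss expects the positive class indices, padded with -1
target_idx = torch.full((1, n_med), -1, dtype=torch.long)
target_idx[0, :3] = torch.tensor([3, 17, 42])
loss_multi = F.multilabel_margin_loss(torch.sigmoid(logits), target_idx)

alpha = 0.01                             # value in the released code; the paper states 0.1
loss = 0.9 * loss_bce + alpha * loss_multi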

About data records_final.pkl

The length of the file I generated by running EDA.py is inconsistent with the records_final.pkl given in the data folder. May I ask how to generate a file with the same length as the provided one? Thank you for your advice.
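
A quick way to compare the two pickles is sketched below; the relative paths are assumptions, and dill is used because the training scripts appear to load these files with it (plain pickle should also work for nested lists of integers).

import dill

# compare the regenerated pickle against the one shipped in the repository
with open('records_final.pkl', 'rb') as f:
    mine = dill.load(f)
with open('data/records_final.pkl', 'rb') as f:
    provided = dill.load(f)

print(len(mine), len(provided))   # number of patients in each version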

about drug code mapping files

Thank you very much for the code!
Regarding the data source, I am a bit confused. The drug code mapping files already exist under the data directory. Could the source of these files be provided?

data_gamenet.pkl

I can't find the file data_gamenet.pkl.
Please help me with this one.

MIMIC dataset

Hi, I am unable to get the MIMIC dataset. Please help me with this.

about comparing to top-k baselines

  1. I read your paper and code. Are the measures average precision, average recall, and F1 actually the same as the traditional precision, recall, and F1 in the top-k item recommendation scenario?

  2. I want to run the LEAP baseline and compare it with other top-k medicine recommendation methods (similar to the traditional top-k item recommendation design). In LEAP and in your paper, the length of the predicted medication set is decided automatically by the model, so how should I compare results between LEAP-style models and fixed top-k models? (See the sketch after this list.)
    Do you think it is fair to simply take the best top-k results and compare them with the LEAP-style models?
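
The sketch below only illustrates the comparison being asked about: scoring a variable-length prediction set against a fixed top-k cut of the same scores. It is not the repository's evaluation code, and all values are made up.

import numpy as np

y_true = {3, 17, 42, 90}              # ground-truth medication indices
scores = np.random.rand(145)          # stand-in per-medication scores

def prf(pred, truth):
    # precision / recall / F1 for a predicted set of medication indices
    tp = len(pred & truth)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

model_set = {i for i, s in enumerate(scores) if s > 0.5}   # length decided by the model
topk_set = set(np.argsort(scores)[-10:])                   # fixed top-10 cut

print('model-decided set:', prf(model_set, y_true))
print('fixed top-10 cut :', prf(topk_set, y_true))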

Looking forward to your response, thank you!

Regarding the selection of the top-40 severity drug-drug interactions

Why has this line of code in data/EDA.ipynb:

ddi_most_pd = ddi_most_pd.iloc[-40:,:]

been used for getting the top-40 severity DDIs?
This code selects the side effects with the fewest occurrences in the DDI file, so I would like to know how it corresponds to the top-severity DDIs.
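
As a toy illustration (the column names are assumptions, not the notebook's), whether .iloc[-40:, :] keeps the most- or least-frequent DDI types depends entirely on how ddi_most_pd was sorted just before the slice:

import pandas as pd

ddi_most_pd = pd.DataFrame({'side_effect': ['A', 'B', 'C', 'D'],
                            'count': [500, 12, 3, 90]})

asc = ddi_most_pd.sort_values('count', ascending=True)
print(asc.iloc[-2:, :])    # tail of an ascending sort -> the two MOST frequent rows

desc = ddi_most_pd.sort_values('count', ascending=False)
print(desc.iloc[-2:, :])   # tail of a descending sort -> the two LEAST frequent rows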
