
membership-inference-evaluation's Issues

Cannot reproduce the accuracy reported in the paper with the code on Texas100

Hi Liwei,

I ran texas_undefend.py in the adversarial regularization/training_code directory and got 97.68% training accuracy and 47.47% testing accuracy. My own implementation reaches similar numbers after 20 training epochs. How can I reproduce the numbers reported in the paper (81.0% training accuracy and 52.3% testing accuracy)?

Many thanks!

Difficulty training model with adversarial regularization follow-up

Hi @lwsong,

Thank you for your suggestions. I tried them out but am still struggling to replicate the results. I have some follow-up questions that I'd like to ask you.

  1. What is the adversarial regularization term that you used for your reported results in Table 2?

  2. The loss used for the inference model in the published code is the MSE between predictions in the range [0, 1] and the binary feature (training data/reference data). My loss converges immediately to 0.25 and does not decrease in any further epochs. Did you also find this to be the case, or did the inference model perform better at some point during training?

  3. In Algorithm 1 from the Nasr et al. paper, the classifier learns from a training set (D) and the inference model learns from the same training set (D) and a disjoint reference set from the same population (D’). Later in the paper, though, the authors state that D^A (the subset of training-set members known to the attacker) and D’^A are used for training the inference model. Could you please help me understand how D^A and D’^A are used during training?

[Screenshot: Algorithm 1 from the Nasr et al. paper]

  4. Are the attack accuracies for a model trained with adversarial regularization reported in Table 2 (Purchase100: 67.6% undefended, 51.6% defended; Texas100: 63.0% undefended, 51.0% defended) computed from your own experiments, or are these numbers taken from the Nasr et al. paper?

  5. Were you able to replicate the results of Nasr et al. for CIFAR-100?
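On the second question above, the 0.25 plateau is exactly what a collapsed inference model produces: with binary membership targets, a constant prediction of 0.5 has a squared error of 0.25 on every example, so the MSE sticks at 0.25 no matter what the data looks like. A minimal sketch (the sample counts are illustrative):

```python
# A collapsed inference model that outputs 0.5 for every point.
# With binary membership labels (1 = training data, 0 = reference
# data), the error is (0.5 - y)^2 = 0.25 for every example, so the
# MSE plateaus at exactly 0.25 regardless of the inputs.
labels = [1] * 50 + [0] * 50          # 50 members, 50 non-members
preds = [0.5] * len(labels)           # degenerate constant predictor
mse = sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(labels)
print(mse)  # 0.25
```

So a loss stuck at 0.25 is not noise around a useful optimum; it is the signature of the attack model giving up and predicting the prior.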

Thanks again for your time!

Difficulty training model with adversarial regularization

Hi,

I really enjoyed your paper. In particular, I think that your method of quantifying the membership inference vulnerability per data point is an excellent step towards understanding practical privacy guarantees.

I am trying to replicate your results using TensorFlow, but I am unable to train a model with adversarial regularization. During training, my inference model quickly converges to predicting 0.5 for both training and reference data points and does not improve in later epochs. I also tried warm-starting: training the classifier alone for a few epochs and only introducing the inference model once the classifier is more stable, but that did not help either.
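For reference, the alternating min-max schedule from Nasr et al. can be sketched in plain NumPy. This is only an illustrative toy, not the released code: the logistic models, synthetic data, hyperparameters, and the exact regularizer gradient are all my assumptions. It shows the structure being discussed: several inference-model ascent steps per classifier step, with the classifier penalized by λ times the attack's score on its own training points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real datasets (all shapes, sizes, and
# hyperparameters here are illustrative assumptions).
d = 5
X_train = rng.normal(0.5, 1.0, (64, d))             # members (D)
y_train = (X_train.sum(axis=1) > 2.5).astype(float)
X_ref = rng.normal(0.0, 1.0, (64, d))               # reference set (D')
y_ref = (X_ref.sum(axis=1) > 0.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def inf_features(X, y, w):
    """Attack features: classifier confidence plus the true label."""
    return np.stack([sigmoid(X @ w), y], axis=1)

w_cls = np.zeros(d)        # logistic "classifier"
w_inf = np.zeros(2)        # logistic "inference model"
lam, lr, k = 1.0, 0.1, 5   # lambda, learning rate, attack steps/epoch

for epoch in range(50):
    # (1) k ascent steps for the inference model: members from D get
    # target 1, reference points from D' get target 0.
    F = np.vstack([inf_features(X_train, y_train, w_cls),
                   inf_features(X_ref, y_ref, w_cls)])
    m = np.concatenate([np.ones(len(X_train)), np.zeros(len(X_ref))])
    for _ in range(k):
        p = sigmoid(F @ w_inf)
        w_inf += lr * F.T @ (m - p) / len(m)   # maximize log-likelihood

    # (2) one descent step for the classifier: cross-entropy plus
    # lambda times the attack's mean score on the training members.
    p_cls = sigmoid(X_train @ w_cls)
    grad_ce = X_train.T @ (p_cls - y_train) / len(y_train)
    p_mem = sigmoid(inf_features(X_train, y_train, w_cls) @ w_inf)
    # d(mean attack score)/d(w_cls) via the chain rule through the
    # confidence feature only (the label feature has zero gradient)
    coef = p_mem * (1 - p_mem) * w_inf[0] * p_cls * (1 - p_cls)
    grad_reg = X_train.T @ coef / len(y_train)
    w_cls -= lr * (grad_ce + lam * grad_reg)
```

The ratio of attack steps to classifier steps (k) and λ are the two knobs this setup exposes; the values above are placeholders, not the paper's settings.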

After experiencing these issues, I wanted to train a model using the released code to make sure that I can replicate the results of the original paper. I tried to use the following repository (https://github.com/SPIN-UMass/ML-Privacy-Regulization), but the code doesn’t run off-the-shelf and I’m not sure of the correct modifications to make.

Could you please advise me on how you were able to train the models for your paper with adversarial regularization? Did you use the released code or did you re-implement the procedure yourself?

Thank you for your time!
