Hi,
I really enjoyed your paper. In particular, I think that your method of quantifying the membership inference vulnerability per data point is an excellent step towards understanding practical privacy guarantees.
I am trying to replicate your results in TensorFlow, but I am unable to train a model with adversarial regularization. During training, my inference model quickly converges to predicting 0.5 for both training and reference data points, and stops improving in later epochs. I have also tried pre-training the classifier for a few epochs before starting to train the inference model, once the classifier is more stable, but the problem persists.
After running into these issues, I wanted to train a model using the released code to make sure I can reproduce the results of the original paper. I tried the following repository (https://github.com/SPIN-UMass/ML-Privacy-Regulization), but the code does not run off the shelf, and I am not sure which modifications are needed.
Could you please advise me on how you trained the models for your paper with adversarial regularization? Did you use the released code, or did you re-implement the procedure yourself?
Thank you for your time!