
batchbald_redux's Introduction

Hi there 👋

I have recently finished my PhD ("DPhil" with AIMS CDT) in Machine Learning at OATML (at the University of Oxford). Here is my quick online CV 🤗

https://www.blackhc.net @blackhc LinkedIn Google Scholar


🎓 Education & 💼 Industry Experience

  1. DPhil Computer Science

    University of Oxford, supervised by Prof Yarin Gal, Oxford, UK, Oct 2018 -- Summer 2023
    Deep active learning and data subset selection using information theory and Bayesian neural networks.

  2. Research Engineer (Intern)

    Opal Camera, Remote, Oct 2022 -- Dec 2022
    Validation Pipeline for Gesture Control System.

  3. Resident Fellow

    Newspeak House, London, UK, Jan 2018 -- Jul 2018
    AI & Politics event series, science communication.

  4. Performance Research Engineer

    DeepMind, London, UK, Oct 2016 -- Aug 2017
    TensorFlow performance improvements (custom CUDA kernels) & profiling (including “Neural Episodic Control”); automated agent regression testing.

  5. Software Engineer

    Google, Zürich, CH, Jul 2013 -- Sep 2016
    App & testing infrastructure; latency optimization; front-end development (Dart/GWT).

  6. MSc Computer Science

    Technische Universität München, München, DE, Sep 2009 -- Oct 2012
    Thesis “Assisted Object Placement”.

  7. BSc Mathematics

    Technische Universität München, München, DE, Sep 2009 -- Mar 2012
    Thesis “Discrete Elastic Rods”.

  8. BSc Computer Science

    Technische Universität München, München, DE, Sep 2007 -- Sep 2009
    Thesis “Multi-Tile Terrain Rendering with OGL/Equalizer”.

🧑‍🔬 Research

📚 Publications

Conference Proceedings

[1] J. Mukhoti*, A. Kirsch*, J. van Amersfoort, P. H. Torr, and Y. Gal, "Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty," CVPR, 2023.

[2] F. Bickford Smith*, A. Kirsch*, S. Farquhar, Y. Gal, A. Foster, and T. Rainforth, "Prediction-Oriented Bayesian Active Learning," AISTATS, 2023.

[3] S. Mindermann*, J. M. Brauner*, M. T. Razzak*, A. Kirsch, et al., "Prioritized Training on Points that are Learnable, Worth Learning, and not yet Learnt," ICML, 2022.

[4] A. Jesson*, P. Tigas*, J. van Amersfoort, A. Kirsch, U. Shalit, and Y. Gal, "Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data," NeurIPS, 2021.

[5] A. Kirsch*, J. van Amersfoort*, and Y. Gal, "BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning," NeurIPS, 2019.

Journal Articles

[6] A. Kirsch, "Black-Box Batch Active Learning for Regression," TMLR, 2023.

[7] A. Kirsch, "Does ‘Deep Learning on a Data Diet’ reproduce? Overall yes, but GraNd at Initialization does not," TMLR, 2023.

[8] A. Kirsch*, S. Farquhar*, P. Atighehchian, A. Jesson, F. Branchaud-Charron, and Y. Gal, "Stochastic Batch Acquisition: A Simple Baseline for Deep Active Learning," TMLR, 2023.

[9] A. Kirsch and Y. Gal, "A Note on ‘Assessing Generalization of SGD via Disagreement’," TMLR, 2022.

[10] A. Kirsch and Y. Gal, "Unifying Approaches in Data Subset Selection via Fisher Information and Information-Theoretic Quantities," TMLR, 2022.

Workshop Papers

[11] D. Tran, J. Liu, M. W. Dusenberry, et al., "Plex: Towards Reliability using Pretrained Large Model Extensions," Principles of Distribution Shifts & First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward, ICML 2022.

[12] A. Kirsch, J. Kossen, and Y. Gal, "Marginal and Joint Cross-Entropies & Predictives for Online Bayesian Inference, Active Learning, and Active Sampling," Updatable Machine Learning, ICML 2022.

[13] A. Kirsch, J. Mukhoti, J. van Amersfoort, P. H. Torr, and Y. Gal, "On Pitfalls in OoD Detection: Entropy Considered Harmful," Uncertainty in Deep Learning, 2021.

[14] A. Kirsch, T. Rainforth, and Y. Gal, "Active Learning under Pool Set Distribution Shift and Noisy Data," SubSetML, 2021.

[15] A. Kirsch*, S. Farquhar*, and Y. Gal, "A Simple Baseline for Batch Active Learning with Stochastic Acquisition Functions," SubSetML, 2021.

[16] A. Kirsch and Y. Gal, "A Practical & Unified Notation for Information-Theoretic Quantities in ML," SubSetML, 2021.

[17] A. Kirsch, C. Lyle, and Y. Gal, "Scalable Training with Information Bottleneck Objectives," Uncertainty in Deep Learning, 2020.

[18] A. Kirsch, C. Lyle, and Y. Gal, "Learning CIFAR-10 with a Simple Entropy Estimator Using Information Bottleneck Objectives," Uncertainty in Deep Learning, 2020.


📝 Reviewing

NeurIPS 2019 (Top Reviewer), AAAI 2020, AAAI 2021, ICLR 2021, NeurIPS 2021 (Outstanding Reviewer), NeurIPS 2022, TMLR, CVPR 2023.


🎯 Interests & Skills

Active Learning, Subset Selection, Information Theory, Information Bottlenecks, Uncertainty Quantification, Python, PyTorch, Jax, C++, CUDA, TensorFlow.


batchbald_redux's People

Contributors

blackhc, dependabot[bot], leengit, y0ast


batchbald_redux's Issues

Question: (active) training loop

Hi @BlackHC,

Thanks a lot for creating this amazing library. I have a question regarding the (active) training loop in the example here:

At the end of each iteration of the while loop, we acquire the new data via

active_learning_data.acquire(candidate_batch.indices)

That is, active_learning_data.training_dataset will contain new instances. This is where my doubt comes in:

how are the acquired instances then included in the train_loader?

The train_loader is defined outside the while loop and does not change throughout training. It seems to me that the model always sees the same data in the training loop:

# Train
for data, target in tqdm(train_loader, desc="Training", leave=False):
    data = data.to(device=device)
    target = target.to(device=device)
    ...

Shouldn't the model be able to train on the "new" (extended) training set, i.e., shouldn't the train_loader be redefined at each iteration?
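For concreteness, here is a toy sketch (not code from the repository) of what I mean by rebuilding the loader each round; the dataset and the index bookkeeping below are hypothetical stand-ins for ActiveLearningData and the acquisition step:

import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Hypothetical stand-ins for the notebook's dataset and ActiveLearningData.
full_dataset = TensorDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))
train_indices = list(range(10))       # initial labelled set
pool_indices = list(range(10, 100))   # unlabelled pool

for round_idx in range(3):
    # Pretend the acquisition function picked the first 5 pool points.
    acquired, pool_indices = pool_indices[:5], pool_indices[5:]
    train_indices += acquired

    # Rebuilding the loader here means each round trains on the extended set.
    train_loader = DataLoader(Subset(full_dataset, train_indices), batch_size=16, shuffle=True)
    for data, target in train_loader:
        pass  # training step would go here
    print(f"round {round_idx}: trained on {len(train_indices)} points")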

To provide a bit of context: I am trying to write a model leveraging both batchbald_redux and baal. Out of curiosity, can you comment on the differences between the two libraries and why you chose to reimplement some things (e.g., ActiveLearningData)? I am trying to combine them and get the best of each one. Also, I have seen that they include an implementation of BatchBALD taken from the code you provided originally with the paper; is there an advantage in using that implementation vs this one? Well, I guess these are bonus questions, feel free to ignore :)

Thanks a lot in advance for your attention.

Best,
Pietro

Unsure whether the behavior is expected

Hi,
I am exploring possible applications of BatchBALD. I am using the get_batchbald_batch function from https://github.com/BlackHC/batchbald_redux/blob/master/01_batchbald.ipynb

Here is an example: we have 3 samples, each with 2 MC inferences over 4 classes. The first two samples are identical, while the third is completely different yet has a slightly lower BALD score (due to the smaller fluctuation of its probabilities across MC runs):

import torch
from batchbald_redux.batchbald import get_batchbald_batch  # import path assumed; the function is defined in the linked notebook

log_probs_N_K_C = torch.Tensor([
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.1, 0.2, 0.3, 0.4], [0.15, 0.15, 0.3, 0.4]],
    [[0.4, 0.3, 0.2, 0.1], [0.4, 0.3, 0.16, 0.14]],
]).log()

I hoped BatchBALD would be useful in such a case, since it selects diverse instances for the batch. Yet it queries the two duplicate examples:

get_batchbald_batch(log_probs_N_K_C, batch_size=2, num_samples=3) outputs [0, 1].
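Concretely (candidate_batch.indices is the same field used for acquisition in the example notebook):

candidate_batch = get_batchbald_batch(log_probs_N_K_C, batch_size=2, num_samples=3)
print(candidate_batch.indices)  # [0, 1] -- the two identical samples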

I wonder whether this is the expected behavior of the algorithm or whether there may be a bug in the code.
Thank you for your attention!

Change prob to log_prob

Even though the name makes it obvious that probabilities are expected by most functions, it would be better to change the API to use log probabilities.

This would mean that log_softmax outputs can be passed directly without having to call .exp_().

It's too easy to forget that, and it leads to NaNs that take time to debug.
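As a small illustration of the failure mode (generic PyTorch, not code from this repository): an entropy-style term computed on values that are already log-probabilities ends up taking the log of negative numbers and silently produces NaNs.

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)
log_probs = F.log_softmax(logits, dim=-1)

# Forgetting the conversion: log_probs are negative, so log() of them is NaN.
wrong = log_probs * torch.log(log_probs)

# Remembering the conversion back to probabilities.
probs = log_probs.exp()
right = probs * torch.log(probs)

print(torch.isnan(wrong).any().item())  # True
print(torch.isnan(right).any().item())  # False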

Redundant line of code

Hey! Awesome piece of work. I've just been going through the code and I'm curious what these two statements are for:

shared_conditinal_entropies = conditional_entropies_N[candidate_indices].sum()

scores_N -= conditional_entropies_N + shared_conditinal_entropies

It seems to me that they won't actually change which candidate is selected as the maximum, since they just add the same scalar to every element of the vector scores_N.
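A quick numerical check of that intuition (toy values, not the repository's code): shifting every score by the same scalar leaves the argmax unchanged.

import torch

scores_N = torch.tensor([0.3, 0.9, 0.5])
offset = 1.234  # stands in for the shared scalar term

# Subtracting the same scalar from every entry does not change the argmax.
print(scores_N.argmax().item())             # 1
print((scores_N - offset).argmax().item())  # 1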
