We should be able to request specific GPU architectures for the benchmarking through the CERN TechLab TWiki. So once we have basic GPU functionality and tests then we can book a week and do testing.
Maxime Reis has followed up with me regarding how much time we can get for benchmarking:
> Most of the nodes with GPUs are shared, and for benchmarking this obviously won't do. Exclusive access can be arranged for short periods of time, and I'd say a day to a week should be manageable. More than that, we'll have to discuss, and it also depends on which GPU you'd like to benchmark.
So hopefully we can do some testing on other GPU machines and then do a full benchmarking run on the TechLab cluster.
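Since exclusive node access will be limited to short windows, it's worth having the timing harness ready in advance. A minimal sketch of the kind of wall-clock benchmark helper we could run on the TechLab nodes (`benchmark` is a hypothetical helper, not part of pyhf; the warmup pass matters on GPU backends, where the first call pays one-time kernel compilation and data-transfer costs):

```python
import statistics
import time


def benchmark(fn, *args, repeats=5, warmup=1):
    """Time fn(*args) over several repeats and return the median wall time in seconds.

    Warmup calls are run first and discarded, so one-time costs
    (JIT compilation, GPU kernel launch setup) don't skew the result.
    """
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return statistics.median(times)
```

The median is reported rather than the mean so that a single slow repeat on a shared node doesn't distort the comparison between backends.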
@ivukotic might be able to help give us access to some GPU clusters?
From the ATLAS Machine Learning Forum mailing list:
> IBM has provided a small GPU cluster to CERN OpenLab for ML studies by the different experiments. They are planning to host a training workshop (one full day between May 28 and June 8, excluding June 7) to help people understand the cluster and how to use it. ATLAS is not the main customer here, but we can have a number of slots for ATLAS people.
>
> One of the big benefits of IBM hardware is their NVLink, which provides much higher bandwidth between CPU/GPU and, more critically, GPU/GPU. Intel has recently improved CPU/GPU bandwidth, but not touched GPU/GPU. As such, IBM seems keen to demonstrate the potential of increased GPU/GPU bandwidth, which would require large-scale networks/etc. which exploit multiple GPUs at once.
>
> If you think you might have now, or will have soon, an ML application with a large enough network which will gain from efficient multi-GPU training, then this training workshop is probably of interest to you.
I will write up an application and submit us.
I have confirmed with the SMU HPC Admins that I can use M2's (SMU's Tier3) GPUs for testing and development. So we'll have access to up to 36 nodes with NVIDIA GPUs. 👍
At the moment the environment that the SMU HPC admins were able to set up only fully supports an optimized GPU build of TensorFlow. So I'll start there and then move to PyTorch.
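Given that only one backend is fully supported per machine right now, the test setup will need to pick whichever backend is actually installed before calling `pyhf.set_backend`. A minimal sketch of that selection logic, using only the standard library (`pick_backend` is a hypothetical helper written for illustration, not part of pyhf):

```python
import importlib.util


def pick_backend(preferred=("tensorflow", "torch", "numpy")):
    """Return the first backend from `preferred` that is importable.

    Mirrors the plan above: try the optimized TensorFlow GPU build
    first, fall back to PyTorch (import name "torch"), and finally
    to the always-available NumPy backend.
    """
    for name in preferred:
        if importlib.util.find_spec(name) is not None:
            return name
    raise RuntimeError("no tensor backend available")
```

The returned name can then be mapped to the corresponding pyhf backend (e.g. `pyhf.set_backend("tensorflow")` or `pyhf.set_backend("pytorch")`), keeping the benchmark scripts portable across the SMU, TechLab, and laptop environments.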
I'm getting access to NCSA's Hardware-Accelerated Learning (HAL) cluster, which should be a perfect environment to do hardware acceleration studies at scale (and probably make the Blue Waters team happier than having me mess around there). Thanks to @msneubauer for setting this in motion.
2020 update: There are two GPU enabled machines that I can use for testing at the moment:
- My laptop (NVIDIA GeForce GTX 1650 Max-Q 4GB)
- The Neubauer Group firmware and deep learning machine (@markusatkinson is the effective sys admin for this) (NVIDIA GeForce RTX 2080 Ti 11GB — memory can be expanded)
For dev work I will be using the GPUs on my laptop, but I will use our dedicated machine for all benchmarks.
Can we talk with the UChicago folks (/cc @fizisist, @LincolnBryant, @robrwg, @ivukotic) as well about perhaps getting access to some machines for CI purposes? Or will the Neubauer group allow the DL machine to be used for that?
> Or will the Neubauer group allow the DL machine to be used for that?
I think that the DL machine we have is a great candidate for dedicated benchmarking studies, but I'm not sure if we can guarantee that the GPUs we have in there can be reserved for CI. The primary purpose of this machine is firmware development and testing with FPGAs and then deep learning studies with the GPUs, which gets first priority.
> you can create a private JupyterLab instance with a GPU attached to it.

@ivukotic So do I understand you correctly that we can have that GPU indefinitely for hardware acceleration tests with our CI? If so, that's fantastic. I just wasn't aware that this was an option.
You can’t get it indefinitely. But you can do reasonable scale studies.
Right, okay, this makes more sense. :) @kratsg's question was about CI, but this is still good, as it gives us multiple sites to do hardware acceleration tests. The public view of the ATLAS ML Platform doesn't say, so can you give us information on the GPUs that you have available so that we can include that in the studies?
Closing as this has been solved given local machines that the pyhf dev team has access to (in addition to the ATLAS ML Platform).