
phase-space-sampling's Introduction

Phase-space sampling of large datasets

Installation for NREL HPC users

  1. module load openmpi/4.1.0/gcc-8.4.0
  2. conda activate /projects/mluq/condaEnvs/uips

Installation for developers

  1. conda create --name uips python=3.10
  2. conda activate uips
  3. git clone https://github.com/NREL/Phase-space-sampling.git
  4. cd Phase-space-sampling
  5. pip install -e .

Test

bash tutorials/run2D.sh

Installation for users

pip install uips

Test

import os

import numpy as np
from prettyPlot.parser import parse_input_file
from prettyPlot.plotting import plt, pretty_labels, pretty_legend

from uips import UIPS_INPUT_DIR
from uips.wrapper import downsample_dataset_from_input
import uips.utils.parallel as par

ndat = int(1e5)  # number of points in the synthetic dataset
nSampl = 100  # number of downsampled points to request
# Generate a 2D standard-normal dataset and save it to disk
ds = np.random.multivariate_normal([0, 0], np.eye(2), size=ndat)
np.save("norm.npy", ds)

# Start from the packaged 2D input file and override a few entries
inpt = parse_input_file(os.path.join(UIPS_INPUT_DIR, "input2D"))
inpt["dataFile"] = "norm.npy"
inpt["nWorkingData"] = f"{ndat} {ndat}"
inpt["nEpochs"] = "5 20"
inpt["nSamples"] = f"{nSampl}"

# Run the downsampling; returns a dict mapping each requested number of
# samples to the file containing the corresponding downsampled dataset
best_files = downsample_dataset_from_input(inpt)


# Only the root rank plots the result
if par.irank == par.iroot:
    downsampled_ds = {}
    for nsamp in best_files:
        downsampled_ds[nsamp] = np.load(best_files[nsamp])["data"]
    fig = plt.figure()
    plt.plot(ds[:, 0], ds[:, 1], "o", color="k", label="full DS")
    plt.plot(
        downsampled_ds[nSampl][:, 0],
        downsampled_ds[nSampl][:, 1],
        "o",
        color="r",
        label="downsampled",
    )
    pretty_labels("", "", 14)
    pretty_legend()
    plt.savefig("normal_downsample.png")
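Because the wrapper is MPI-aware (note the par.irank == par.iroot guard around the plotting), the same script can also be launched in parallel; assuming it is saved as downsample_example.py (hypothetical name):

mpiexec -np 4 python downsample_example.py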

Purpose

The purpose of the tool is to perform a smart downselection of a large number of datapoints. Typically, large numerical simulations generate billions, or even trillions, of datapoints. However, there may be redundancy in the dataset, which unnecessarily inflates the memory and computing requirements. Here, redundancy is defined as closeness in feature space. The method is called phase-space sampling.

Running the example

bash tutorials/run2D.sh: Example of downsampling a 2D combustion dataset.

  1. Perform the downsampling: mpiexec -np 4 python tests/main_from_input.py -i inputs/input2D
  2. Plot the loss function for each flow iteration: python postProcess/plotLoss.py -i inputs/input2D
  3. Visualize the samples: python postProcess/visualizeDownSampled_subplots.py -i inputs/input2D

All figures are saved under the folder Figures.

Parallelization

The code is GPU+MPI-parallelized: (a) the dataset is loaded and shuffled in parallel, (b) the probability evaluation (the most expensive step) is done in parallel, (c) the downsampling is done in parallel, and (d) only the training is offloaded to a GPU, if one is available. Memory usage of the root processor is higher than that of the other processors, since it alone is in charge of the normalizing flow training and the sampling-probability adjustment. To run the code in parallel: mpiexec -np num_procs python tests/main_from_input.py -i inputs/input2D.

In the code, arrays with the suffix _ denote data distributed over the processors.
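As an illustration of this convention, here is a minimal, hypothetical sketch (not code from the package) showing how a rank-local shard data_ might be produced with mpi4py:

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nprocs = comm.Get_size()

if rank == 0:
    # The full dataset exists on the root rank only
    data = np.random.rand(1000, 2)
    chunks = np.array_split(data, nprocs, axis=0)
else:
    chunks = None

# The trailing underscore marks the rank-local portion of the array
data_ = comm.scatter(chunks, root=0)
print(f"rank {rank} holds {data_.shape[0]} rows")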

The computation of the nearest neighbor distance is parallelized using the sklearn implementation. It is accelerated on systems where hyperthreading is enabled (your laptop, but NOT the Eagle HPC).

When using GPU+MPI-parallelism on Eagle, you need to specify the number of MPI tasks (srun -n 36 python tests/main_from_input.py). When using MPI-parallelism alone on Eagle, you do not need to specify the number of MPI tasks (srun python tests/main_from_input.py).

Running on a GPU accelerates execution by only ~30% for the examples provided here. Running with many MPI tasks linearly decreases both the execution time of the probability evaluation and the memory requirement per core.

Parallelization tested with up to 36 cores on Eagle.

Parallelization tested with up to 4 cores on MacOS Monterey v12.7.1.

Data

We provide the data for running a 2D downsampling example. The data is located at data/combustion2DToDownsampleSmall.npy.

Assumptions

The dataset to downsample has size $N \times d$ where $N \gg d$. The first dimension ($N$) is called the number of samples and the second dimension ($d$) is the dimension of the samples. The code has been successfully tested in up to 89 dimensions and there is no inherent limit for $d$.
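To make the expected layout concrete (an illustrative snippet, not part of the package):

import numpy as np

ds = np.load("norm.npy")  # e.g., the dataset created in the test above
N, d = ds.shape  # N samples of dimension d
assert N > d, "UIPS expects many more samples than dimensions (N >> d)"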

Hyperparameters

All hyperparameters can be controlled via an input file (see tutorials/run2D.sh). We recommend fixing the number of flow calculation iterations to 2. When increasing the number of dimensions, we recommend adjusting the hyperparameters. A 2-dimensional example (inputs/input2D) and an 11-dimensional example (inputs/highdim/input11D) are provided to guide the user.

Sanity checks

The nearest neighbor distance

It may not be obvious how to evaluate whether the obtained phase-space samples are uniformly distributed. During execution, a mean dist is displayed; this is the average distance of each data point to its nearest neighbor. The higher the distance, the more uniformly distributed the dataset. The distance is first shown for a random-sampling baseline, then at every iteration. The mean distance should be higher than for the random-sampling case, and the second iteration should give a better mean distance than the first. A warning message is displayed if the second flow iteration did not improve the sampling, and an error message is displayed if the last flow iteration did not improve the sampling compared to the random case.

The computational cost of the nearest neighbor computation scales as $N^2$, where $N$ is the number of samples. The cost is not prohibitive when computed on the downsampled data, since the downsampled dataset is small. In cases where a very large number of phase-space samples is requested, it may be necessary to disable the neighbor-distance calculation.
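The check itself can be sketched with scikit-learn (an illustrative snippet, not the package's exact implementation; the filename is hypothetical):

import numpy as np
from sklearn.neighbors import NearestNeighbors

samples = np.load("downsampled.npy")
# n_neighbors=2 because the nearest neighbor of each point is itself
nn = NearestNeighbors(n_neighbors=2).fit(samples)
dist, _ = nn.kneighbors(samples)
mean_dist = dist[:, 1].mean()  # higher => more uniform coverage
print(f"mean nearest-neighbor distance: {mean_dist:.4f}")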

The normalizing flow loss

During training of the normalizing flow, the negative log-likelihood is displayed. The user should ensure that the normalizing flow has learned something useful about the distribution by checking that the loss is close to converged. The loss history is written as a CSV file in the folder TrainingLog. The loss of the second training iteration should be higher than that of the first iteration. If this is not the case, or if more iterations are needed, the normalizing flow may need to be trained to tighter convergence. A warning message is issued in that case.

A script is provided to visualize the losses. Execute python postProcess/plotLoss.py -i inputs/input2D, where input2D is the name of the input file used to perform the downsampling.

Example 2D

Suppose one wants to downsample a dataset where $N=10^7$ and $d=2$. First, the code estimates the probability map of the data in order to identify where redundant data points are located. An example dataset (left) and the associated probability map (right) are shown below.

Next, the code uses the probability map to define a sampling probability that downselects samples spanning the feature space uniformly. The probability map is obtained by training a Neural Spline Flow, whose implementation was obtained from the Neural Spline Flow repository. The number of samples in the final dataset can be controlled via the input file.
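The underlying idea can be sketched as follows (an illustrative snippet, not the package's implementation): given a density estimate for each point, keep each point with probability inversely proportional to its density, so that densely populated regions are thinned out.

import numpy as np

def inverse_density_select(p, n_target, seed=0):
    """Keep each point with probability ~ 1/p, scaled so that roughly
    n_target points survive. p: per-point density estimates."""
    rng = np.random.default_rng(seed)
    w = 1.0 / np.maximum(p, 1e-12)  # inverse-density weights
    prob = np.minimum(n_target * w / w.sum(), 1.0)
    return rng.random(len(p)) < prob  # boolean mask over the dataset

For instance, kept = ds[inverse_density_select(p, n_target=100)] retains about 100 points that span the feature space more uniformly than a random subset would.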

For comparison, a random sampling gives the following result

Example 11D

The input file is provided in inputs/highdim/input11D.

Data efficient ML

The folder data-efficientML is NOT necessary for using the phase-space sampling package. It only contains the code necessary to reproduce the results shown in the paper referenced below.

Formatting

Code formatting and import sorting are done automatically with black and isort.

Fix imports and format: pip install black isort; bash fixFormat.sh

Spelling is checked, but not automatically fixed, using codespell.

Reference

Published version (open access)
Preprint version (open access)


@article{hassanaly2023UIPS,
  title={Uniform-in-Phase-Space Data Selection with Iterative Normalizing Flows},
  author={Hassanaly, Malik and Perry, Bruce A. and Mueller, Michael E. and Yellapantula, Shashank},
  journal={Data-centric Engineering},
  pages={e11},
  volume={4},
  year={2023},
  publisher={Cambridge University Press}
}

An overview presentation is available in documentation/methodOverview.pptx.

Contact

Malik Hassanaly: (malik.hassanaly!at!nrel!gov)

phase-space-sampling's People

Contributors

dmadisetti, malihass


phase-space-sampling's Issues

Shuffling is broken on large datasets

Hello,

The shuffling flag causes an integer overflow in MPI communication on large datasets. Stack trace:

Traceback (most recent call last):
  File "main_iterative.py", line 75, in <module>
    data_to_downsample_, working_data, nFullData = prepareData(inpt)
  File "utils/dataUtils.py", line 109, in prepareData
    par.parallel_shuffle(data_to_downsample_)
  File "utils/parallel.py", line 341, in parallel_shuffle
    Ashuffled_ = odd_even_sort(Ashuffled_)
  File "utils/parallel.py", line 333, in odd_even_sort
    data = exchange(data, rank, rank + 1)
  File "utils/parallel.py", line 315, in exchange
    comm.send(localdata, dest=recvrank)
  File "mpi4py/MPI/Comm.pyx", line 1406, in mpi4py.MPI.Comm.send
  File "mpi4py/MPI/msgpickle.pxi", line 211, in mpi4py.MPI.PyMPI_send
  File "mpi4py/MPI/msgpickle.pxi", line 147, in mpi4py.MPI.pickle_dump
  File "mpi4py/MPI/msgbuffer.pxi", line 50, in mpi4py.MPI.downcast
OverflowError: integer 2335285847 does not fit in 'int'

Deleted dependency detected

I'm a Cyber Security researcher and the developer of PackjGuard [1], a tool to address open-source software supply chain attacks.

Issue

During my research, I found that this repo is vulnerable to attack due to a dependency that has been deleted from the public PyPI registry.

Details

Specifically, the file https://github.com/NREL/Phase-space-sampling/blob/328b0a4352f4428f234a2c9f0bf4a0ce8509cfdd/requirements.txt lists prettyplot as one of the dependencies. However, that package has been deleted from the public PyPI registry. As such, an external bad actor can claim the name and register a malicious package, which would then be installed by pip install, resulting in arbitrary remote code execution.

Impact

Not only are apps/services that use the https://github.com/NREL/Phase-space-sampling repo code vulnerable to this attack; users of your open-source GitHub repo could also fall victim.

You could read more about such attacks here: https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610

Remediation

Please manually register a placeholder prettyplot package on PyPI immediately, or remove the prettyplot dependency from https://github.com/NREL/Phase-space-sampling/blob/328b0a4352f4428f234a2c9f0bf4a0ce8509cfdd/requirements.txt, to fix this vulnerability.

To automatically fix such issues in future, please install PackjGuard Github app [1].

Thanks!

  1. PackjGuard is a Github app that monitors your repos 24x7, detects vulnerable/malicious/risky open-source dependencies, and creates pull requests for auto remediation: https://github.com/marketplace/packjguard
