
PyTorch/CSPRNG


torchcsprng is a PyTorch C++/CUDA extension that provides:

  • AES 128-bit encryption/decryption in two modes: ECB and CTR
  • cryptographically secure pseudorandom number generators for PyTorch

Design

torchcsprng generates a random 128-bit key on the CPU using one of its generators and runs AES-128 in CTR mode either on the CPU or on the GPU using CUDA to generate a random 128-bit state, then applies a transformation function to map it to target tensor values. This approach is based on Parallel Random Numbers: As Easy as 1, 2, 3 (John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw, D. E. Shaw Research). It makes torchcsprng both crypto-secure and parallel on CUDA and CPU.

CSPRNG architecture
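
The counter-based idea can be sketched in a few lines of Python. This is an illustrative sketch only, not the library's actual C++/CUDA implementation; aes128_encrypt_block is a hypothetical placeholder for AES-128 encryption of a single 16-byte block:

def counter_mode_blocks(key, n_blocks, aes128_encrypt_block):
    # Each 128-bit block depends only on (key, counter), so blocks can be
    # computed independently and in parallel; a transformation function then
    # maps the random bits to values of the target dtype.
    blocks = []
    for counter in range(n_blocks):
        ctr_bytes = counter.to_bytes(16, byteorder='little')
        blocks.append(aes128_encrypt_block(key, ctr_bytes))
    return blocks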

Advantages:

  • The user can choose either a seed-based (for testing) or a random-device-based (fully crypto-secure) generator
  • One generator instance works for both CPU and CUDA tensors (because the encryption key is always generated on the CPU)
  • CPU random number generation is also parallel (unlike the default PyTorch CPU generator)

Features

torchcsprng 0.2.0 exposes a new API for tensor encryption/decryption. The API is dtype-agnostic, so a tensor of any dtype can be encrypted and the result can be stored in a tensor of any dtype. The encryption key can also be a tensor of any dtype. Currently torchcsprng supports the AES cipher with a 128-bit key in two modes: ECB and CTR. A short usage sketch follows the parameter list below.

  • torchcsprng.encrypt(input: Tensor, output: Tensor, key: Tensor, cipher: string, mode: string)
    • input can be any CPU or CUDA tensor of any dtype and size in bytes (zero padding is used to make its size in bytes divisible by the block size in bytes)
    • output can have any dtype, must be on the same device as input, and must have the size in bytes of input rounded up to the block size in bytes (16 bytes for AES-128)
    • key can have any dtype, must be on the same device as input, and must have a size in bytes equal to 16 for AES-128
    • cipher currently accepts only one value, "aes128"
    • mode can be either "ecb" or "ctr"
  • torchcsprng.decrypt(input: Tensor, output: Tensor, key: Tensor, cipher: string, mode: string)
    • input can be any CPU or CUDA tensor of any dtype whose size in bytes is divisible by the block size in bytes (16 bytes for AES-128)
    • output can have any dtype but must be on the same device as input and have the same size in bytes as input
    • key can have any dtype, must be on the same device as input, and must have a size in bytes equal to 16 for AES-128
    • cipher currently accepts only one value, "aes128"
    • mode can be either "ecb" or "ctr"
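
A minimal usage sketch of the two calls above. The sizes and dtypes are assumptions chosen so that the 32-byte plaintext is exactly two 16-byte blocks and no zero padding is needed; it illustrates the constraints listed above rather than reproducing an official example:

import torch
import torchcsprng as csprng

key = torch.empty(16, dtype=torch.uint8).random_(0, 256)   # 16 bytes: an AES-128 key
initial = torch.empty(8, dtype=torch.float32).normal_()    # 8 * 4 = 32 bytes of plaintext
encrypted = torch.empty(32, dtype=torch.uint8)              # same 32 bytes, i.e. two AES blocks
csprng.encrypt(initial, encrypted, key, "aes128", "ctr")
decrypted = torch.empty_like(initial)                        # 32 bytes again; any dtype is allowed
csprng.decrypt(encrypted, decrypted, key, "aes128", "ctr")
assert (decrypted == initial).all()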

torchcsprng exposes two methods to create crypto-secure and non-crypto-secure PRNGs:

  • create_random_device_generator(token: string=None): crypto-secure, no seed. See std::random_device and its constructor. The libstdc++ implementation expects token to name the source of random bytes. Possible token values include "default", "rand_s", "rdseed", "rdrand", "rdrnd", "/dev/urandom", "/dev/random", "mt19937", and an integer string specifying the seed of the mt19937 engine (token values other than "default" are only valid for certain targets). If token=None, a new std::random_device object is constructed with an implementation-defined token.
  • create_mt19937_generator(seed: int=None): not crypto-secure, seed-based. See std::mt19937 and its constructor. Constructs a mersenne_twister_engine object and initializes its internal state sequence to pseudo-random values. If seed=None, the engine is seeded with default_seed.

All of the aforementioned PRNGs support the following methods:

Kernel CUDA CPU
random_() yes yes
random_(to) yes yes
random_(from, to) yes yes
uniform_(from, to) yes yes
normal_(mean, std) yes yes
cauchy_(median, sigma) yes yes
log_normal_(mean, std) yes yes
geometric_(p) yes yes
exponential_(lambda) yes yes
randperm(n) yes* yes
  * the calculations are done on the CPU and the result is copied to the CUDA device
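
All of these methods take a torchcsprng generator through PyTorch's standard generator keyword argument. A brief sketch (see Getting Started below for more complete examples):

import torch
import torchcsprng as csprng

gen = csprng.create_random_device_generator()
torch.empty(5).uniform_(0.0, 1.0, generator=gen)     # crypto-secure uniform floats on CPU
torch.randperm(10, device='cuda', generator=gen)     # computed on CPU, then copied to CUDA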

Installation

CSPRNG works with Python 3.6-3.9 on the following operating systems and can be used with PyTorch tensors on the following devices:

Tensor Device Type Linux macOS MS Windows
CPU Supported Supported Supported
CUDA Supported Not Supported Supported since 0.2.0

The table below lists the corresponding PyTorch, CSPRNG, Python, and CUDA versions.

PyTorch CSPRNG Python CUDA
1.8.0 0.2.0 3.7-3.9 10.1, 10.2, 11.1
1.7.1 0.1.4 3.6-3.8 9.2, 10.1, 10.2
1.7.0 0.1.3 3.6-3.8 9.2, 10.1, 10.2
1.6.0 0.1.2 3.6-3.8 9.2, 10.1, 10.2

Binary Installation

Anaconda:

  • Linux/Windows, CUDA 10.1: conda install torchcsprng cudatoolkit=10.1 -c pytorch -c conda-forge
  • Linux/Windows, CUDA 10.2: conda install torchcsprng cudatoolkit=10.2 -c pytorch -c conda-forge
  • Linux/Windows, CUDA 11.1: conda install torchcsprng cudatoolkit=11.1 -c pytorch -c conda-forge
  • Linux/Windows, CPU only: conda install torchcsprng cpuonly -c pytorch -c conda-forge
  • macOS, CPU only: conda install torchcsprng -c pytorch

pip:

  • Linux/Windows, CUDA 10.1: pip install torchcsprng==0.2.0+cu101 torch==1.8.0+cu101 -f https://download.pytorch.org/whl/cu101/torch_stable.html
  • Linux/Windows, CUDA 10.2: pip install torchcsprng==0.2.0 torch==1.8.0 -f https://download.pytorch.org/whl/cu102/torch_stable.html
  • Linux/Windows, CUDA 11.1: pip install torchcsprng==0.2.0+cu111 torch==1.8.0+cu111 -f https://download.pytorch.org/whl/cu111/torch_stable.html
  • Linux/Windows, CPU only: pip install torchcsprng==0.2.0+cpu torch==1.8.0+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
  • macOS, CPU only: pip install torchcsprng torch

Nightly builds:

Anaconda:

  • Linux/Windows, CUDA 10.1: conda install torchcsprng cudatoolkit=10.1 -c pytorch-nightly -c conda-forge
  • Linux/Windows, CUDA 10.2: conda install torchcsprng cudatoolkit=10.2 -c pytorch-nightly -c conda-forge
  • Linux/Windows, CUDA 11.1: conda install torchcsprng cudatoolkit=11.1 -c pytorch-nightly -c conda-forge
  • Linux/Windows, CPU only: conda install torchcsprng cpuonly -c pytorch-nightly -c conda-forge
  • macOS, CPU only: conda install torchcsprng -c pytorch-nightly

pip:

  • Linux/Windows, CUDA 10.1: pip install --pre torchcsprng -f https://download.pytorch.org/whl/nightly/cu101/torch_nightly.html
  • Linux/Windows, CUDA 10.2: pip install --pre torchcsprng -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html
  • Linux/Windows, CUDA 11.1: pip install --pre torchcsprng -f https://download.pytorch.org/whl/nightly/cu111/torch_nightly.html
  • Linux/Windows, CPU only: pip install --pre torchcsprng -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
  • macOS, CPU only: pip install --pre torchcsprng -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html

From Source

torchcsprng is a Python C++/CUDA extension that depends on PyTorch. To build CSPRNG from source you need Python (>= 3.7) with PyTorch (>= 1.8.0) installed and a C++ compiler (gcc/clang on Linux, Xcode on macOS, Visual Studio on MS Windows). To build torchcsprng, run the following:

python setup.py install

By default, GPU support is built if CUDA is found and torch.cuda.is_available() is True. Additionally, you can force building GPU support by setting the FORCE_CUDA=1 environment variable, which is useful when building a Docker image.
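
For example (assuming a POSIX shell):

FORCE_CUDA=1 python setup.py install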

Getting Started

The torchcsprng API is available in the torchcsprng module:

import torch
import torchcsprng as csprng

Create crypto-secure PRNG from /dev/urandom:

urandom_gen = csprng.create_random_device_generator('/dev/urandom')

Create empty boolean tensor on CUDA and initialize it with random values from urandom_gen:

torch.empty(10, dtype=torch.bool, device='cuda').random_(generator=urandom_gen)
tensor([ True, False, False,  True, False, False, False,  True, False, False],
       device='cuda:0')

Create empty int16 tensor on CUDA and initialize it with random values in range [0, 100) from urandom_gen:

torch.empty(10, dtype=torch.int16, device='cuda').random_(100, generator=urandom_gen)
tensor([59, 20, 68, 51, 18, 37,  7, 54, 74, 85], device='cuda:0',
       dtype=torch.int16)

Create non-crypto-secure MT19937 PRNG:

mt19937_gen = csprng.create_mt19937_generator()
torch.empty(10, dtype=torch.int64, device='cuda').random_(torch.iinfo(torch.int64).min, to=None, generator=mt19937_gen)
tensor([-7584783661268263470,  2477984957619728163, -3472586837228887516,
        -5174704429717287072,  4125764479102447192, -4763846282056057972,
         -182922600982469112,  -498242863868415842,   728545841957750221,
         7740902737283645074], device='cuda:0')

Create crypto-secure PRNG from default random device:

default_device_gen = csprng.create_random_device_generator()
torch.randn(10, device='cuda', generator=default_device_gen)
tensor([ 1.2885,  0.3240, -1.1813,  0.8629,  0.5714,  2.3720, -0.5627, -0.5551,
        -0.6304,  0.1090], device='cuda:0')

Create non-crypto-secure MT19937 PRNG with seed:

mt19937_gen = csprng.create_mt19937_generator(42)
torch.empty(10, device='cuda').geometric_(p=0.2, generator=mt19937_gen)
tensor([ 7.,  1.,  8.,  1., 11.,  3.,  1.,  1.,  5., 10.], device='cuda:0')

Recreate MT19937 PRNG with the same seed:

mt19937_gen = csprng.create_mt19937_generator(42)
torch.empty(10, device='cuda').geometric_(p=0.2, generator=mt19937_gen)
tensor([ 7.,  1.,  8.,  1., 11.,  3.,  1.,  1.,  5., 10.], device='cuda:0')

Contributing

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us.

License

torchcsprng is BSD 3-clause licensed. See the LICENSE file in the repository for details.

Copyright © 2020 Meta Platforms, Inc


csprng's Issues

Conda channel not found

UnavailableInvalidChannel: HTTP 404 NOT FOUND for channel pytorch/torchcsprng https://conda.anaconda.org/pytorch/torchcsprng

The channel is not accessible or is invalid.

You will need to adjust your conda configuration to proceed.
Use conda config --show channels to view your configuration's current state,
and use conda config --show-sources to view config file locations.

CUDA 11.0 support

Feature

Does csprng support CUDA 11.0?
If not, are you planning to support CUDA 11.0 in the future? If so when?

Alternatives

Can one build csprng manually or install a nightly to get CUDA 11.0 support?

Additional Information

See pytorch/opacus/issues/88 for more information.

Improve test_geometric

Investigate how to test the geometric distribution properly; the current chi-square-based version is not reliable.

Windows CUDA build fails

This was introduced in pytorch/pytorch#40675

[1/1] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin\nvcc -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\tools\miniconda3\envs\env3.8\lib\site-packages\torch\include -IC:\tools\miniconda3\envs\env3.8\lib\site-packages\torch\include\torch\csrc\api\include -IC:\tools\miniconda3\envs\env3.8\lib\site-packages\torch\include\TH -IC:\tools\miniconda3\envs\env3.8\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\include" -IC:\tools\miniconda3\envs\env3.8\include -IC:\tools\miniconda3\envs\env3.8\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c C:\Users\circleci\project\torch_csprng\csrc\csprng.cu -o C:\Users\circleci\project\build\temp.win-amd64-3.8\Release\Users\circleci\project\torch_csprng\csrc\csprng.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_50,code=compute_50 --expt-extended-lambda -Xcompiler -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=torch_csprng -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: C:/Users/circleci/project/build/temp.win-amd64-3.8/Release/Users/circleci/project/torch_csprng/csrc/csprng.obj 
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin\nvcc -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IC:\tools\miniconda3\envs\env3.8\lib\site-packages\torch\include -IC:\tools\miniconda3\envs\env3.8\lib\site-packages\torch\include\torch\csrc\api\include -IC:\tools\miniconda3\envs\env3.8\lib\site-packages\torch\include\TH -IC:\tools\miniconda3\envs\env3.8\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\include" -IC:\tools\miniconda3\envs\env3.8\include -IC:\tools\miniconda3\envs\env3.8\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c C:\Users\circleci\project\torch_csprng\csrc\csprng.cu -o C:\Users\circleci\project\build\temp.win-amd64-3.8\Release\Users\circleci\project\torch_csprng\csrc\csprng.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_35,code=sm_35 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_50,code=compute_50 --expt-extended-lambda -Xcompiler -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=torch_csprng -D_GLIBCXX_USE_CXX11_ABI=0
cl : Command line warning D9002 : ignoring unknown option '-fopenmp'
csprng.cu
C:/tools/miniconda3/envs/env3.8/lib/site-packages/torch/include\c10/util/ThreadLocalDebugInfo.h(12): warning: modifier is ignored on an enum specifier

C:/tools/miniconda3/envs/env3.8/lib/site-packages/torch/include\ATen/core/boxing/impl/boxing.h(128): warning: integer conversion resulted in a change of sign

C:/tools/miniconda3/envs/env3.8/lib/site-packages/torch/include\ATen/record_function.h(18): warning: modifier is ignored on an enum specifier

C:/tools/miniconda3/envs/env3.8/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(483): error: a member with an in-class initializer must be const

C:/tools/miniconda3/envs/env3.8/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(496): error: a member with an in-class initializer must be const

C:/tools/miniconda3/envs/env3.8/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(510): error: a member with an in-class initializer must be const

C:/tools/miniconda3/envs/env3.8/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(523): error: a member with an in-class initializer must be const

ZeroDivisionError: float division by zero in test_cpu_parallel

@unittest.skipIf(torch.get_num_threads() < 2, "requires multithreading CPU")
    def test_cpu_parallel(self):
        urandom_gen = csprng.create_random_device_generator('/dev/urandom')
    
        def measure(size):
            t = torch.empty(size, dtype=torch.float32, device='cpu')
            start = time.time()
            for i in range(10):
                t.normal_(generator=urandom_gen)
            finish = time.time()
            return finish - start
    
        time_for_1K = measure(1000)
        time_for_1M = measure(1000000)
        # Pessimistic check that parallel execution gives >= 1.5 performance boost
>       self.assertTrue(time_for_1M/time_for_1K < 1000 / min(1.5, torch.get_num_threads()))
E       ZeroDivisionError: float division by zero

test\test_csprng.py:308: ZeroDivisionError

A grab bag of nits :)

  1. What is the argument to create_random_device_generator_with_token? Is it just any string? Maybe good to document that in README.md

  2. In Design section of README.md, " on CPU or CUDA to" -> "on CPU or on GPU using CUDA"

  3. In csprng.cu - my_random_kernel_cuda is not a good token

  4. Line 93 of aes.cuh - I don't know if the C/C++ standards say that unsigned always means unsigned int - maybe good to be explicit

  5. test_geometric has some commented-out lines - is this maybe tracked in another issue?

[Feature Request] Add ECB mode for AES

AES in CTR mode is great, but for some applications ECB mode would be very useful.
I know part of this work is inspired from https://github.com/kokke/tiny-AES-c which implements CTR, ECB and CBC, so I don't know how hard it would be to have support for ECB.

I'm not sure what the exact torch API would look like; I don't even know if it's OK to add a new function to torch. It could also probably be a csprng function instead, with a name inspired by the kokke/tiny-AES-c syntax.

[Feature Request] Add API support for getting the initial 128-bit key used by AES

In CrypTen, we would need multiple parties to share the same source of randomness. In seed-based generators, this is done by parties sharing the same seed. In a crypto-secure RNG, this would require obtaining the initial 128-bit key passed to AES.

It would be helpful to have an API to retrieve the 128-bit key:

urandom_gen = csprng.create_random_device_generator('/dev/urandom')
initial_key = urandom_gen.get_aes_key()
gen2 = csprng.create_random_device_generator()
gen2.set_aes_key(initial_key)

Package aes128_key_tensor and create_const_generator

There was previously a commit that supported generating the AES key explicitly and distributing it to other machines to ensure that all machines use the same source of randomness. It seems this commit is not packaged into the current 0.2.0 version of torchcsprng, so one has to build from source in order to use this functionality. Are there any plans to include it in a later packaged version of torchcsprng? Thanks!

OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.

Windows + py39 + pip:

pip install --no-cache-dir --pre torchcsprng -f https://download.pytorch.org/whl/test/cu111/torch_test.html
python test_csprng.py -v
test_exponential_kstest (__main__.TestCSPRNG) ... OMP: Error #15: Initializing libiomp5md.dll, but found libiomp5md.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.

PyPI Windows 0.2.1 Wheels are not CPU

Is it expected that the builds for 0.2.1 are GPU, whereas the 0.2.0 ones are CPU on PyPI?

0.2.1 on PyPI broken:

PS C:\dev\SyMPC> pip install torchcsprng==0.2.1
PS C:\dev\SyMPC> python
Python 3.8.6 (tags/v3.8.6:db45529, Sep 23 2020, 15:52:53) [MSC v.1927 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchcsprng
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\me\.virtualenvs\SyMPC-pSsChgge\lib\site-packages\torchcsprng\__init__.py", line 9, in <module>
    from torchcsprng._C import *
ImportError: DLL load failed while importing _C: The specified module could not be found.

0.2.0 on PyPI working:

PS C:\dev\SyMPC> pip install torchcsprng==0.2.0
PS C:\dev\SyMPC> python
Python 3.8.6 (tags/v3.8.6:db45529, Sep 23 2020, 15:52:53) [MSC v.1927 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchcsprng

torchcsprng==0.2.1+cpu on torch repo Working:

PS C:\dev\SyMPC> pip install torchcsprng==0.2.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torchcsprng==0.2.1+cpu
  Using cached https://download.pytorch.org/whl/cpu/torchcsprng-0.2.1%2Bcpu-cp37-cp37m-win_amd64.whl (167 kB)
Requirement already satisfied: torch==1.8.1 in c:\users\me\.virtualenvs\sympc-psschgge\lib\site-packages (from torchcsprng==0.2.1+cpu) (1.8.1)
Requirement already satisfied: typing-extensions in c:\users\me\.virtualenvs\sympc-psschgge\lib\site-packages (from torch==1.8.1->torchcsprng==0.2.1+cpu) (3.10.0.0)
Requirement already satisfied: numpy in c:\users\me\.virtualenvs\sympc-psschgge\lib\site-packages (from torch==1.8.1->torchcsprng==0.2.1+cpu) (1.20.3)
Installing collected packages: torchcsprng
Successfully installed torchcsprng-0.2.1+cpu
PS C:\dev\SyMPC> python
Python 3.7.9 (tags/v3.7.9:13c94747c7, Aug 17 2020, 18:58:18) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchcsprng

Compilation issue pulling latest pytorch nightly

Nightly broken on cu92.
It seems like nvcc on cu92 cannot correctly compile c10::optional - see the comparison between the
working version (https://app.circleci.com/pipelines/github/pytorch/csprng/528/workflows/b34c1fe5-be2f-4d6d-8a81-c7f081cebc5a)
and
broken version (https://app.circleci.com/pipelines/github/pytorch/csprng/527/workflows/9726d14f-9e63-4bc4-989e-2fddcc24c936)

Possibly due to the change in pytorch/pytorch#47015

See fix in: pytorch/pytorch#48257

pip install with set_offset not implemented error

/tmp/pip-req-build-_41_wxtb/torchcsprng/csrc/cpu/../kernels_commons.h:42:8: error: ‘void CSPRNGGeneratorImpl::set_offset(uint64_t)’ marked ‘override’, but does not override
42 | void set_offset(uint64_t offset) override { throw std::runtime_error("not implemented"); }
| ^~~~~~~~~~
/tmp/pip-req-build-_41_wxtb/torchcsprng/csrc/cpu/../kernels_commons.h:43:12: error: ‘uint64_t CSPRNGGeneratorImpl::get_offset() const’ marked ‘override’, but does not override
43 | uint64_t get_offset() const override { throw std::runtime_error("not implenented"); }

Setup CMake build

Set up a CMake build that compiles CPU code with the CPU compiler and only CUDA code with nvcc.

Support randperm

To provide real DP guarantees, we need to certify that batches are also shuffled with a CSPRNG. This means we need to support passing a torchcsprng generator to the DataLoader. If you try to do that, it calls randperm and dies.

To repro:

import torchvision
import torchvision.transforms as tfms

train_ds = torchvision.datasets.CIFAR10('.', train=True, download=True, transform=tfms.ToTensor())

from torch.utils.data import DataLoader

train_dl = DataLoader(train_ds, batch_size=8, shuffle=True)

import torchcsprng as prng
generator = prng.create_random_device_generator("/dev/urandom")

train_dl = DataLoader(train_ds, batch_size=8, shuffle=True, generator=generator)

x, y = next(iter(train_dl))

Error message:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-18-22755f335b09> in <module>()
----> 1 x, y = next(iter(train_dl))
      2 x.shape

4 frames
/usr/local/lib/python3.6/dist-packages/torch/utils/data/sampler.py in __iter__(self)
    108             rand_tensor = torch.randint(high=n, size=(self.num_samples,), dtype=torch.int64, generator=self.generator)
    109             return iter(rand_tensor.tolist())
--> 110         return iter(torch.randperm(n, generator=self.generator).tolist())
    111 
    112     def __len__(self):

RuntimeError: Could not run 'aten::randperm.generator_out' with arguments from the 'UNKNOWN_TENSOR_TYPE_ID' backend. 'aten::randperm.generator_out' is only available for these backends: [CPU, CUDA, Autograd, Profiler, Tracer].

Use scalar_t as input of aes::encrypt

Hi!
I'm trying to see what's need to be done for #77 [ECB mode for AES].
One key thing is that I need to feed aes::encrypt with data from the input tensor, accessed through scalar_t* data, instead of an idx counter.

So concretely I need to change:

aes::block_t block;
memset(&block, 0, aes::block_t_size);
block.x = idx;
aes::encrypt(reinterpret_cast<uint8_t*>(&block), key);

And instead of idx, I should provide the correct pointer to a part of data and be able to cast this scalar_t* to a uint8_t*.

Any hints on how I could do this? I'm not familiar with the PyTorch C++ codebase. Thanks!

Symbol not found during dlopen (Python3.7)

I've been following the instructions for building the pytext documentation with Python 3.7 (and 3.8) on a Mac (Catalina 10.15.6)
at https://pytext.readthedocs.io/en/master/hacking_pytext.html#creating-documentation, and I'm running into the following error during "make html":

Creating file /Users/mikekg/pytext/pytext/docs/source/modules/modules.rst.
WARNING:root:This caffe2 python run failed to load cuda module:No module named 'caffe2.python.caffe2_pybind11_state_gpu',and AMD hip module:No module named 'caffe2.python.caffe2_pybind11_state_hip'.Will run in CPU only mode.
Install apex from https://github.com/NVIDIA/apex/.

Exception occurred:
File "/Users/mikekg/Library/Python/3.7/lib/python/site-packages/torchcsprng/__init__.py", line 10, in
from torchcsprng._C import *
ImportError: dlopen(/Users/mikekg/Library/Python/3.7/lib/python/site-packages/torchcsprng/_C.cpython-37m-darwin.so, 2): Symbol not found: __ZN3c104impl23ExcludeDispatchKeyGuardC1ENS_11DispatchKeyE
Referenced from: /Users/mikekg/Library/Python/3.7/lib/python/site-packages/torchcsprng/_C.cpython-37m-darwin.so
Expected in: /Users/mikekg/Library/Python/3.7/lib/python/site-packages/caffe2/python/../../torch/lib/libc10.dylib
in /Users/mikekg/Library/Python/3.7/lib/python/site-packages/torchcsprng/_C.cpython-37m-darwin.so
The full traceback has been saved in /var/folders/_4/5prdm06n7_xcqvmrxj4gt69m0000gn/T/sphinx-err-kx0fy1j6.log, if you want to report the issue to the developers.

Support torch==1.9

Hi!
Do you have any plans to support PyTorch 1.9?

We at opacus use csprng to generate cryptographically secure noise, and ideally we want to make it available to people using the latest PyTorch version.
Additionally, the missing support creates conflicts with the latest versions of packages like torchvision, which makes testing quite tricky.
