zama-ai / concrete-numpy

Concrete-Numpy: A library to turn programs into their homomorphic equivalent.
License: Other
ERROR: Could not find a version that satisfies the requirement concrete-numpy (from versions: none)
ERROR: No matching distribution found for concrete-ml
What happened/what you expected to happen?
Step by step procedure someone should follow to trigger the bug:
print("Minimal POC to reproduce the bug")
Attach all generated artifacts here (generated in the .artifacts
directory by default, see documentation for more detailed instructions).
Hello, after reading the operation manual and experimenting, I want to apply NumPy's `.sum` and `.array` functions to ciphertexts. Is this feasible?
python3 sanity_check.py gives Fatal Python error: Illegal instruction
Valgrind output
0xC5 0xF8 0x77 is AVX vzeroupper
at 0xEF86AB9: concrete_core_ffi::utils::get_mut_checked (in /usr/local/lib/python3.10/dist-packages/concrete_compiler.libs/libConcretelangRuntime-aaaa6abd.so)
https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment
AVX is not supported in Docker under Rosetta emulation.
Make a Docker M1 variant available with concrete_compiler built without AVX.
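A quick way to tell whether the current environment exposes AVX (e.g. inside a Docker container running under Rosetta) is to inspect the CPU feature flags. A minimal Linux-only sketch, not part of concrete-numpy:

```python
def cpu_has_avx():
    # Read CPU feature flags from /proc/cpuinfo (Linux only).
    # Under Rosetta emulation the "avx" flag is absent, which is why the
    # AVX-built concrete_compiler crashes with an illegal instruction.
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return "avx" in line.split()
    except OSError:
        pass
    return False  # non-Linux or unreadable: assume no AVX

print("AVX available:", cpu_has_avx())
```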
When compiled, large models that would produce an executable object requiring more than 2 GB of virtual memory fail during linking with the following error (example: Concrete-ML 0.6.1, Concrete-Numpy 0.9.0):
File /usr/local/lib/python3.8/dist-packages/concrete/compiler/library_support.py:155, in LibrarySupport.compile(self, mlir_program, options)
150 if not isinstance(options, CompilationOptions):
151 raise TypeError(
152 f"options must be of type CompilationOptions, not {type(options)}"
153 )
154 return LibraryCompilationResult.wrap(
--> 155 self.cpp().compile(mlir_program, options.cpp())
156 )
RuntimeError: Can't emit artifacts: Command failed:ld --shared -o /tmp/tmpXXXXXXXX/sharedlib.so /tmp/tmpXXXXXXXX.module-0.mlir.o /usr/local/lib/python3.8/dist-packages/concrete_compiler.libs/libConcretelangRuntime-14f67b9a.so -rpath=/usr/local/lib/python3.8/dist-packages/concrete_compiler.libs --disable-new-dtags 2>&1
Code:256
/tmp/tmpXXXXXXXX.module-0.mlir.o: in function `main':
LLVMDialectModule:(.text+0x65): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x8dc9): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x8e06): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x8fa9): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xb0e4): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xb121): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xd69a): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xd87c): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xddf7): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x100f3): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x10130): additional relocation overflows omitted from the output
/tmp/tmpXXXXXXXX/sharedlib.so: PC-relative offset overflow in PLT entry for `_dfr_start'
Enable compilation of models exceeding the 2 GB virtual memory address limit.
According to `man ld`, the following flags might help to solve this issue:
--no-keep-memory
ld normally optimizes for speed over memory usage by caching the symbol tables of input files in memory.
This option tells ld to instead optimize for memory usage, by rereading the symbol tables as necessary.
This may be required if ld runs out of memory space while linking a large executable.
--large-address-aware
If given, the appropriate bit in the "Characteristics" field of the COFF header is set to indicate
that this executable supports virtual addresses greater than 2 gigabytes. This should be used
in conjunction with the /3GB or /USERVA=value megabytes switch in the "[operating systems]"
section of the BOOT.INI. Otherwise, this bit has no effect. [This option is specific to PE targeted
ports of the linker]
These flags have a performance cost and might not be a suitable default, as large models are not necessarily the target for the Concrete libraries.
From the user's point of view, the flag(s) could be made available through options/arguments when calling the compiler in Concrete-Numpy (and Concrete-ML).
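As an interim workaround sketch, the failing link command from the log above could be re-run by hand with one of the suggested flags appended. The `/tmp` paths are the placeholders from the log, and whether `--no-keep-memory` actually resolves the relocation overflow is untested:

```python
import shlex

# Link command copied from the error message above, with one flag inserted.
base_cmd = (
    "ld --shared -o /tmp/tmpXXXXXXXX/sharedlib.so "
    "/tmp/tmpXXXXXXXX.module-0.mlir.o "
    "/usr/local/lib/python3.8/dist-packages/concrete_compiler.libs/libConcretelangRuntime-14f67b9a.so "
    "-rpath=/usr/local/lib/python3.8/dist-packages/concrete_compiler.libs "
    "--disable-new-dtags"
)
cmd = shlex.split(base_cmd)
cmd.insert(1, "--no-keep-memory")  # suggested by `man ld` for memory-heavy links
print(" ".join(cmd))
```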
loc("-":5:10): error: 'FHELinalg.dot_eint_int' op operand #0 must be , but got 'tensor<3x4x!FHE.eint<12>>'
Traceback (most recent call last):
File "/Users/sbhamad/github.com/sbhamad/poetry-demo/poetry_demo/main.py", line 79, in <module>
circuit = compiler.compile(inputset)
File "/Users/sbhamad/Library/Caches/pypoetry/virtualenvs/poetry-demo-7E09WmKy-py3.9/lib/python3.9/site-packages/concrete/numpy/compilation/compiler.py", line 515, in compile
circuit = Circuit(self.graph, mlir, self.configuration)
File "/Users/sbhamad/Library/Caches/pypoetry/virtualenvs/poetry-demo-7E09WmKy-py3.9/lib/python3.9/site-packages/concrete/numpy/compilation/circuit.py", line 55, in __init__
self.server = Server.create(mlir, input_signs, output_signs, self.configuration)
File "/Users/sbhamad/Library/Caches/pypoetry/virtualenvs/poetry-demo-7E09WmKy-py3.9/lib/python3.9/site-packages/concrete/numpy/compilation/server.py", line 149, in create
compilation_result = support.compile(mlir, options)
File "/Users/sbhamad/Library/Caches/pypoetry/virtualenvs/poetry-demo-7E09WmKy-py3.9/lib/python3.9/site-packages/concrete/compiler/library_support.py", line 155, in compile
self.cpp().compile(mlir_program, options.cpp())
RuntimeError: Caught an unknown exception!
I'm trying to simply multiply `w` by `i`, then add `b` to the resulting NumPy ndarray, where `i` is encrypted.
#!/usr/bin/env python
import concrete.numpy as cnp
import numpy as np
n_bits = 5
inputs = [
[1.0, 2.0, 3.0, 2.5],
[2.0, 5.0, -1.0, 2.0],
[-1.5, 2.7, 3.3, -0.8],
]
weights = [
[0.2, 0.8, -0.5, 1.0],
[0.5, -0.91, 0.26, -0.5],
[-0.26, -0.27, 0.17, 0.87],
]
biases = [2.0, 3.0, 0.5]
layer_outputs = np.dot(inputs, np.array(weights).T) + biases
print(np.array(inputs).shape)
print(np.array(weights).T.shape)
print(np.array(biases).shape)
print(layer_outputs)
def quantize_matrix(matrix):
    # Asymmetric quantization over n_bits (set above).
    max_X = np.max(matrix)
    min_X = np.min(matrix)
    max_q_value = 2**n_bits - 1
    value_range = max_X - min_X  # renamed: `range` shadowed the builtin
    scale = value_range / max_q_value
    Zp = np.round((-min_X * max_q_value) / value_range)  # zero point
    q_X = (np.round(matrix / scale) + Zp).astype(np.int64)
    print("quantized matrix is:", q_X)
    return q_X

def multiply_weight_with_encrypted_input(i, w, b):
    return np.dot(i, w) + b

compiler = cnp.Compiler(
    multiply_weight_with_encrypted_input,
    {"i": "encrypted", "w": "clear", "b": "clear"},
)
q_inputs = quantize_matrix(inputs)
q_weights_transposed = quantize_matrix(weights).T
q_biases = quantize_matrix(biases)
print(q_inputs.shape, q_weights_transposed.shape, q_biases.shape)
inputset = [
    (
        np.random.randint(0, 2**n_bits, size=q_inputs.shape),
        np.random.randint(0, 2**n_bits, size=q_weights_transposed.shape),
        np.random.randint(0, 2**n_bits, size=q_biases.shape),
    )
    for _ in range(30)
]
# print("inputset looks like this:::::: ", inputset)
circuit = compiler.compile(inputset)
result = circuit.encrypt_run_decrypt(q_inputs, q_weights_transposed, q_biases)
print("homomorphically evaluated final result is: ", result)
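The "unknown exception" may or may not be precision-related, but it is worth checking how wide the accumulator of `np.dot(i, w) + b` must be for this inputset. A plaintext estimate in plain NumPy, mirroring the `n_bits` and row length (4) above:

```python
import numpy as np

n_bits = 5  # same as in the script above
# Worst case for one output element of np.dot(i, w) + b: the dot product
# sums 4 products of (2**n_bits - 1) * (2**n_bits - 1), plus one bias term.
max_input = 2**n_bits - 1
max_val = max_input * max_input * 4 + max_input
required_bits = int(np.ceil(np.log2(max_val + 1)))
print(max_val, required_bits)  # 3875 12
```

If 12 bits exceeds what the targeted concrete-numpy version supports for this operation, that would explain a compilation failure; this is a hypothesis to check, not a confirmed diagnosis.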
This would be very useful because it's very hard to extract the problematic MLIR if the compiler crashes during compilation.
Hi,
I'm interested in the float constraint (floats cannot be used as input/output, only in intermediate state). Do you plan on eventually removing this constraint, or is it here to stay? I ask because I have looked at concrete-lib, and there you can use floats. An additional question: how soon will it be possible to perform operations with encrypted constants, dot products with both operands encrypted, etc.?
Thank you, great work you're doing!
PyCharm cannot install the package via pip: ERROR: Cannot install concrete-numpy==0.2.0, concrete-numpy==0.2.1, concrete-numpy==0.3.0, concrete-numpy==0.4.0 and concrete-numpy==0.5.0 because these package versions have conflicting dependencies.
Depending on the function evaluated using FHE, the following warning is printed multiple times per evaluation :
WARNING: You are currently using the software variant of
concrete-csprng
which does not have access to a hardware source of randomness. To ensure the security of your application, please arrange to provide a secret by using the concrete_csprng::set_soft_rdseed_secret
function.
Is it possible to disable this warning when using the PyPI package concrete-numpy, as it floods the notebook cell outputs? I don't see any reference to a fix in the documentation.
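Until there is an official switch, one possible workaround is to silence stderr at the file-descriptor level while compiling, since the warning is emitted by native code and bypasses Python's own streams. A hedged sketch of that generic OS-level trick (not a concrete-numpy API):

```python
import contextlib
import os

@contextlib.contextmanager
def suppress_native_stderr(target=os.devnull):
    # Temporarily point file descriptor 2 at `target`, so warnings printed
    # by native libraries (not just Python code) are silenced too.
    saved_fd = os.dup(2)
    try:
        with open(target, "w") as sink:
            os.dup2(sink.fileno(), 2)
        yield
    finally:
        os.dup2(saved_fd, 2)
        os.close(saved_fd)
```

Wrapping the noisy call, e.g. `with suppress_native_stderr(): circuit = compiler.compile(inputset)`, keeps the notebook output clean, at the cost of hiding any other stderr output from that call.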
The IPython kernel dies when the .compile() method runs/completes.
On MacOS with Docker.
Ran docker pull of the latest concrete-numpy image.
Ran docker from the terminal with docker run --rm -it -p 8888:8888 <image_id>.
When running the first concrete-numpy example, the kernel dies and restarts when the .compile() method is called:
import concrete.numpy as cnp

def add(x, y):
    return x + y

compiler = cnp.Compiler(add, {"x": "encrypted", "y": "encrypted"})
inputset = [(2, 3), (0, 0), (1, 6), (7, 7), (7, 1), (3, 2), (6, 1), (1, 7), (4, 5), (5, 4)]

print("Compiling...")
circuit = compiler.compile(inputset)

print("Generating keys...")  # Never printed: the kernel crashes and restarts
circuit.keygen()

examples = [(3, 4), (1, 2), (7, 7), (0, 0)]
for example in examples:
    encrypted_example = circuit.encrypt(*example)
    encrypted_result = circuit.run(encrypted_example)
    result = circuit.decrypt(encrypted_result)
    print(f"Evaluation of {' + '.join(map(str, example))} homomorphically = {result}")
Notebook throws this
The kernel appears to have died. It will restart automatically.
Unfortunately jupyter server logs from the running docker container don't seem to show anything useful beyond this:
2023-03-10 22:54:58 [I 07:54:58.381 NotebookApp] KernelRestarter: restarting kernel (1/5), keep random ports
2023-03-10 22:54:58 WARNING:root:kernel 2f2a2a20-f70b-467d-add5-50da66cc979f restarted
$ docker pull zamafhe/concrete-numpy:v1.0.0
$ docker run --rm -it -p 8888:8888 zamafhe/concrete-numpy:v1.0.0
This is an awesome project, and I want to try to develop a few unsupervised algorithms not yet in Concrete-ML and to tackle some of the bounties you have.
I followed the steps to install concrete-numpy but am getting an error.
-> pip install concrete-numpy==0.8.0
Output:
Collecting concrete-numpy==0.8.0
  Using cached concrete_numpy-0.8.0-py3-none-any.whl (68 kB)
Collecting matplotlib<4.0.0,>=3.5.1
  Using cached matplotlib-3.6.2-cp39-cp39-macosx_10_12_x86_64.whl (7.3 MB)
Collecting torch<2.0.0,>=1.10.2
  Using cached torch-1.13.0-cp39-none-macosx_10_9_x86_64.whl (137.9 MB)
Collecting networkx<3.0.0,>=2.6.3
  Using cached networkx-2.8.8-py3-none-any.whl (2.0 MB)
Collecting numpy<2.0.0,>=1.21.0
  Using cached numpy-1.23.5-cp39-cp39-macosx_10_9_x86_64.whl (18.1 MB)
ERROR: Could not find a version that satisfies the requirement concrete-compiler<0.20.0,>=0.19.0 (from concrete-numpy) (from versions: 0.1.1, 0.1.2, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.14.0, 0.15.0, 0.16.0)
ERROR: No matching distribution found for concrete-compiler<0.20.0,>=0.19.0
Hello,
I am trying to get the table lookups tutorial working, but I think I have a problem with the inputset, or there is something else that I am doing wrong. This is what I do:
squared = hnp.LookupTable([i ** 2 for i in range(4)])
cubed = hnp.LookupTable([i ** 3 for i in range(4)])

table = hnp.MultiLookupTable([
    [squared, cubed],
    [squared, cubed],
    [squared, cubed],
])

def f(x):
    return table[x]

t_lookup_inputset = [
    np.array([[0, 0], [0, 1], [0, 2]], dtype=np.uint8),
    np.array([[1, 1], [1, 2], [1, 3]], dtype=np.uint8),
    np.array([[2, 1], [2, 2], [2, 3]], dtype=np.uint8),
]

compiler = hnp.NPFHECompiler(
    f, {"x": "encrypted"}  # was {"x": x}: `x` is undefined here; the value must be "encrypted"
)

print('compiling the circuit')
table_circuit = compiler.compile_on_inputset(t_lookup_inputset)
print('compile done')
I understand that the inputset should have 10 elements, but for the sake of simplicity I use just these 3. After that I run:
inp = np.array([[0, 1], [1, 1], [1, 2]], dtype=np.uint8)
print(table_circuit.encrypt_run_decrypt(inp))
I get "RuntimeError: argument #0 is not a tensor" during the encryption step. Could you help me a bit with understanding how the inputset works for these table lookups and advise what the best inputset for this case would be?
Thank you!
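For reference, here is what the multi-lookup is meant to compute in the clear: each column of the input is indexed into its own table. A plain-NumPy sketch (a hypothetical helper, not the concrete-numpy API), useful for checking expected outputs against the circuit:

```python
import numpy as np

squared = np.array([i ** 2 for i in range(4)])
cubed = np.array([i ** 3 for i in range(4)])

def multi_lookup(x):
    # Column 0 is indexed into `squared`, column 1 into `cubed`,
    # mirroring the [squared, cubed] rows of the MultiLookupTable above.
    out = np.empty_like(x, dtype=np.int64)
    out[:, 0] = squared[x[:, 0]]
    out[:, 1] = cubed[x[:, 1]]
    return out

inp = np.array([[0, 1], [1, 1], [1, 2]], dtype=np.uint8)
print(multi_lookup(inp))
```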
What happened/what you expected to happen? Running the example program fails.
import numpy as np
import concrete.numpy as hnp
def f(x):
    # astype goes back to the integer world
    return np.fabs(50 * (2 * np.sin(x) * np.cos(x))).astype(np.uint32)
compiler = hnp.NPFHECompiler(f, {"x": "encrypted"})
circuit = compiler.compile_on_inputset(range(64))
print(circuit.encrypt_run_decrypt(3) == f(3))
print(circuit.encrypt_run_decrypt(0) == f(0))
print(circuit.encrypt_run_decrypt(1) == f(1))
print(circuit.encrypt_run_decrypt(10) == f(10))
print(circuit.encrypt_run_decrypt(60) == f(60))
print("All good!")
I noticed that the link to the table lookups tutorial in the README file was broken. Here's the fix: