
concrete-numpy's Issues

Not installing with pip in Google Colab

ERROR: Could not find a version that satisfies the requirement concrete-numpy (from versions: none)
ERROR: No matching distribution found for concrete-ml

Summary

What happened/what you expected to happen?

Description

  • versions affected:
  • python version:
  • config (optional: HW, OS):
  • workaround (optional): if you have a way to work around the issue
  • proposed fix (optional): if you have a way to fix the issue

Step by step procedure someone should follow to trigger the bug:

minimal POC to trigger the bug

print("Minimal POC to reproduce the bug")

Artifacts

Attach all generated artifacts here (generated in the .artifacts directory by default, see documentation for more detailed instructions).

Logs or output

Docker M1 Illegal instruction

Summary

python3 sanity_check.py gives Fatal Python error: Illegal instruction

Description

  • versions affected: concrete-numpy>0.8.0
  • python version: python3.9-3.11
  • config (optional: HW, OS): M1

Details

Valgrind output

  • unhandled instruction bytes: 0xC5 0xF8 0x77 0xC3 0xBF 0x27 0x0 0x0 0x0 0xBE

0xC5 0xF8 0x77 is AVX vzeroupper

at 0xEF86AB9: concrete_core_ffi::utils::get_mut_checked (in /usr/local/lib/python3.10/dist-packages/concrete_compiler.libs/libConcretelangRuntime-aaaa6abd.so)

Environment

https://developer.apple.com/documentation/apple-silicon/about-the-rosetta-translation-environment
AVX is not supported in Docker under Rosetta emulation.
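
As a quick check, a minimal diagnostic sketch (assuming a Linux container where /proc/cpuinfo is available) can confirm whether the CPU visible inside the container advertises AVX at all:

# Minimal diagnostic sketch: check whether the CPU visible inside the
# container advertises AVX. Under Rosetta 2 emulation the 'avx' flag is
# typically absent, which is consistent with the SIGILL on vzeroupper above.
with open("/proc/cpuinfo") as cpuinfo:
    for line in cpuinfo:
        if line.startswith("flags"):
            print("AVX available:", "avx" in line.split())
            break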

Suggestion

Make a Docker M1 variant available, with concrete_compiler built without AVX.

Allow large models to be compiled, avoiding `RuntimeError: Can't emit artifacts`

Summary

During compilation, large models that would produce an executable object requiring more than 2 GB of virtual memory fail during linking with the following error (example with Concrete-ML 0.6.1 and Concrete-Numpy 0.9.0):

File /usr/local/lib/python3.8/dist-packages/concrete/compiler/library_support.py:155, in LibrarySupport.compile(self, mlir_program, options)
	150 if not isinstance(options, CompilationOptions):
	151     raise TypeError(
	152         f"options must be of type CompilationOptions, not {type(options)}"
	153     )
	154 return LibraryCompilationResult.wrap(
--> 155     self.cpp().compile(mlir_program, options.cpp())
	156 )

RuntimeError: Can't emit artifacts: Command failed:ld --shared -o /tmp/tmpXXXXXXXX/sharedlib.so /tmp/tmpXXXXXXXX.module-0.mlir.o /usr/local/lib/python3.8/dist-packages/concrete_compiler.libs/libConcretelangRuntime-14f67b9a.so -rpath=/usr/local/lib/python3.8/dist-packages/concrete_compiler.libs --disable-new-dtags 2>&1
Code:256
/tmp/tmpXXXXXXXX.module-0.mlir.o: in function `main':
LLVMDialectModule:(.text+0x65): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x8dc9): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x8e06): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x8fa9): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xb0e4): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xb121): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xd69a): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xd87c): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0xddf7): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x100f3): relocation truncated to fit: R_X86_64_PC32 against `.data.rel.ro'
LLVMDialectModule:(.text+0x10130): additional relocation overflows omitted from the output
/tmp/tmpXXXXXXXX/sharedlib.so: PC-relative offset overflow in PLT entry for `_dfr_start'

Problem to solve

Enable compilation of models exceeding 2GB virtual memory address limit.

Proposals

According to man ld, the following flags might help solve this issue:

       --no-keep-memory
           ld normally optimizes for speed over memory usage by caching the symbol tables of input files in memory. 
           This option tells ld to instead optimize for memory usage, by rereading the symbol tables as necessary.
           This may be required if ld runs out of memory space while linking a large executable.

       --large-address-aware
           If given, the appropriate bit in the "Characteristics" field of the COFF header is set to indicate
           that this executable supports virtual addresses greater than 2 gigabytes.  This should be used
           in conjunction with the /3GB or /USERVA=value megabytes switch in the "[operating systems]"
           section of the BOOT.INI. Otherwise, this bit has no effect.  [This option is specific to PE targeted
           ports of the linker]

The flags have a performance cost and might not be a suitable default, as large models are not necessarily the primary target of the Concrete libraries.

From the user's point of view, the flag(s) could be made available through options/arguments when calling the compiler in Concrete-Numpy (and Concrete-ML).
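
As a rough way to tell whether a given model runs into this limit (a hedged sketch, not part of the Concrete API; the object path is a hypothetical stand-in for the temporary file from the log above), the size of the emitted object can be compared against the ±2 GiB reach of R_X86_64_PC32 relocations:

# Hedged sketch: compare the emitted object file size against the +/-2 GiB
# reach of R_X86_64_PC32 relocations, which is what the "relocation
# truncated to fit" errors above indicate. The path is a hypothetical
# stand-in for the temporary object produced during compilation.
import os

PC32_REACH = 2 ** 31  # signed 32-bit PC-relative displacement
obj_path = "/tmp/tmpXXXXXXXX.module-0.mlir.o"  # hypothetical, taken from the log
size = os.path.getsize(obj_path)
print(f"object size: {size / 2**30:.2f} GiB "
      f"({'exceeds' if size >= PC32_REACH else 'within'} PC32 reach)")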

np.dot between matrices results in compilation error

error: 'FHELinalg.dot_eint_int' op operand #0 must be , but got 'tensor<3x4x!FHE.eint<12>>'
loc("-":5:10): error: 'FHELinalg.dot_eint_int' op operand #0 must be , but got 'tensor<3x4x!FHE.eint<12>>'
Traceback (most recent call last):
  File "/Users/sbhamad/github.com/sbhamad/poetry-demo/poetry_demo/main.py", line 79, in <module>
    circuit = compiler.compile(inputset)
  File "/Users/sbhamad/Library/Caches/pypoetry/virtualenvs/poetry-demo-7E09WmKy-py3.9/lib/python3.9/site-packages/concrete/numpy/compilation/compiler.py", line 515, in compile
    circuit = Circuit(self.graph, mlir, self.configuration)
  File "/Users/sbhamad/Library/Caches/pypoetry/virtualenvs/poetry-demo-7E09WmKy-py3.9/lib/python3.9/site-packages/concrete/numpy/compilation/circuit.py", line 55, in __init__
    self.server = Server.create(mlir, input_signs, output_signs, self.configuration)
  File "/Users/sbhamad/Library/Caches/pypoetry/virtualenvs/poetry-demo-7E09WmKy-py3.9/lib/python3.9/site-packages/concrete/numpy/compilation/server.py", line 149, in create
    compilation_result = support.compile(mlir, options)
  File "/Users/sbhamad/Library/Caches/pypoetry/virtualenvs/poetry-demo-7E09WmKy-py3.9/lib/python3.9/site-packages/concrete/compiler/library_support.py", line 155, in compile
    self.cpp().compile(mlir_program, options.cpp())
RuntimeError: Caught an unknown exception!

I'm trying to simply multiply w by i and then add b to the resulting NumPy ndarray, where i is encrypted.

#!/usr/bin/env python

import concrete.numpy as cnp
import numpy as np

n_bits = 5

inputs = [
    [1.0, 2.0, 3.0, 2.5],
    [2.0, 5.0, -1.0, 2.0],
    [-1.5, 2.7, 3.3, -0.8],
]
weights = [
    [0.2, 0.8, -0.5, 1.0],
    [0.5, -0.91, 0.26, -0.5],
    [-0.26, -0.27, 0.17, 0.87],
]
biases = [2.0, 3.0, 0.5]
layer_outputs = np.dot(inputs, np.array(weights).T) + biases
print(np.array(inputs).shape)
print(np.array(weights).T.shape)

print(np.array(biases).shape)


print(layer_outputs)


def quantize_matrix(matrix):
    # Quantize the matrix to unsigned n_bits integers using an affine
    # scale/zero-point mapping.
    max_X = np.max(matrix)
    min_X = np.min(matrix)

    max_q_value = 2**n_bits - 1
    value_range = max_X - min_X
    scale = value_range / max_q_value
    Zp = np.round((-min_X * max_q_value) / value_range)
    q_X = np.round(matrix / scale) + Zp
    print("quantized matrix is : ", np.rint(q_X).astype(np.int64))
    return np.rint(q_X).astype(np.int64)


def multiply_weight_with_encrypted_input(i, w, b):
    return np.dot(i, w) + b
    # return


compiler = cnp.Compiler(
    multiply_weight_with_encrypted_input,
    {"i": "encrypted", "w": "clear", "b": "clear"},
)

q_inputs = quantize_matrix(inputs)
q_weights_transposed = quantize_matrix(weights).T
q_biases = quantize_matrix(biases)

print(q_inputs.shape, q_weights_transposed.shape, q_biases.shape)


inputset = [
    (
        np.random.randint(0, 2**n_bits, size=q_inputs.shape),
        np.random.randint(0, 2**n_bits, size=q_weights_transposed.shape),
        np.random.randint(0, 2**n_bits, size=q_biases.shape),
    )
    for _ in range(30)
]


# print("inputset looks like this:::::: ", inputset)
circuit = compiler.compile(inputset)


result = circuit.encrypt_run_decrypt(q_inputs, q_weights_transposed, q_biases)
print("homomorphically evaluated final result is: ", result)

Float usage

Hi,

I'm interested in the float constraint (floats cannot be used as inputs/outputs, only in intermediate values). Do you plan on eventually removing this constraint, or is it here to stay? I ask because I have looked at the Concrete library, and there floats can be used. An additional question: how soon will it be possible to perform operations with encrypted constants, dot products with both operands encrypted, etc.?

Thank you, great work you're doing!

Unable to install through PIP

In PyCharm, installation via the pip command fails with: ERROR: Cannot install concrete-numpy==0.2.0, concrete-numpy==0.2.1, concrete-numpy==0.3.0, concrete-numpy==0.4.0 and concrete-numpy==0.5.0 because these package versions have conflicting dependencies.

[Question]: Disabling concrete-csprng warning

Depending on the function evaluated using FHE, the following warning is printed multiple times per evaluation:

WARNING: You are currently using the software variant of concrete-csprng which does not have access to a hardware source of randomness. To ensure the security of your application, please arrange to provide a secret by using the concrete_csprng::set_soft_rdseed_secret function.

Is it possible to disable it when using the PyPI package concrete-numpy, as it floods the notebook cell output? I don't see any reference to a fix in the documentation.
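
One possible workaround (a minimal sketch, assuming the warning is written straight to the process's stderr by the native library rather than through Python's warnings machinery) is to temporarily redirect file descriptor 2 around the FHE calls:

# Hedged sketch: silence output written to the process's stderr by
# temporarily pointing file descriptor 2 at /dev/null. Note that this also
# hides genuine errors emitted while the redirection is active.
import contextlib
import os

@contextlib.contextmanager
def suppress_stderr():
    saved = os.dup(2)                          # keep the original stderr
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, 2)                    # point fd 2 at /dev/null
        yield
    finally:
        os.dup2(saved, 2)                      # restore stderr
        os.close(devnull)
        os.close(saved)

# usage (hypothetical circuit object):
# with suppress_stderr():
#     result = circuit.encrypt_run_decrypt(x)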

Ipykernel dies when .compile() method runs/completes

Summary

Ipykernel dies when .compile() method runs/completes

Description

On macOS with Docker.
Ran docker pull of the latest concrete-numpy image.
Ran Docker from a terminal with docker run --rm -it -p 8888:8888 <image_id>

When running the first concrete-numpy example, the kernel dies and restarts when the .compile() method is called:

import concrete.numpy as cnp

def add(x, y):
    return x + y

compiler = cnp.Compiler(add, {"x": "encrypted", "y": "encrypted"})
inputset = [(2, 3), (0, 0), (1, 6), (7, 7), (7, 1), (3, 2), (6, 1), (1, 7), (4, 5), (5, 4)]

print(f"Compiling...")
circuit = compiler.compile(inputset)

print(f"Generating keys...") #Never printed, kernel crashes and restarts
circuit.keygen()

examples = [(3, 4), (1, 2), (7, 7), (0, 0)]
for example in examples:
    encrypted_example = circuit.encrypt(*example)
    encrypted_result = circuit.run(encrypted_example)
    result = circuit.decrypt(encrypted_result)
    print(f"Evaluation of {' + '.join(map(str, example))} homomorphically = {result}")

Notebook throws this
The kernel appears to have died. It will restart automatically.

Unfortunately jupyter server logs from the running docker container don't seem to show anything useful beyond this:

2023-03-10 22:54:58 [I 07:54:58.381 NotebookApp] KernelRestarter: restarting kernel (1/5), keep random ports
2023-03-10 22:54:58 WARNING:root:kernel 2f2a2a20-f70b-467d-add5-50da66cc979f restarted

Steps to reproduce

  1. $ docker pull zamafhe/concrete-numpy:v1.0.0
  2. $ docker run --rm -it -p 8888:8888 zamafhe/concrete-numpy:v1.0.0
  3. Open Jupyter Notebook from token authenticated link provided in docker log
  4. Run example python code above.

Misc Details

This is an awesome project, and I want to try to develop a few unsupervised algorithms not yet in Concrete-ML and tackle some of the bounties you have open.

Could not find a version that satisfies the requirement concrete-compiler

Summary

Followed the steps to install concrete-numpy but got an error.

Description

  • versions affected: 0.8.0
  • python version: tried on 3.9
  • config (optional: HW, OS): Mac Monterey

Step by step procedure someone should follow to trigger the bug:

-> pip install concrete-numpy==0.8.0

Output:

Collecting concrete-numpy==0.8.0
  Using cached concrete_numpy-0.8.0-py3-none-any.whl (68 kB)
Collecting matplotlib<4.0.0,>=3.5.1
  Using cached matplotlib-3.6.2-cp39-cp39-macosx_10_12_x86_64.whl (7.3 MB)
Collecting torch<2.0.0,>=1.10.2
  Using cached torch-1.13.0-cp39-none-macosx_10_9_x86_64.whl (137.9 MB)
Collecting networkx<3.0.0,>=2.6.3
  Using cached networkx-2.8.8-py3-none-any.whl (2.0 MB)
Collecting numpy<2.0.0,>=1.21.0
  Using cached numpy-1.23.5-cp39-cp39-macosx_10_9_x86_64.whl (18.1 MB)
ERROR: Could not find a version that satisfies the requirement concrete-compiler<0.20.0,>=0.19.0 (from concrete-numpy) (from versions: 0.1.1, 0.1.2, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.14.0, 0.15.0, 0.16.0)
ERROR: No matching distribution found for concrete-compiler<0.20.0,>=0.19.0
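
To see whether any concrete-compiler>=0.19 wheel matches this machine at all, a hedged diagnostic sketch (using the packaging library, which pip itself depends on) can print the platform tags pip would accept:

# Hedged diagnostic sketch: print the platform/ABI tags pip would accept for
# this interpreter. If no concrete-compiler>=0.19 wheel on PyPI carries one
# of these tags (e.g. only manylinux wheels are published), pip reports
# "No matching distribution found".
from packaging.tags import sys_tags

for tag in list(sys_tags())[:10]:  # the first, most specific tags
    print(tag)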

Inputset for table lookups

Hello,

I am trying to get the table lookups tutorial working, but I think I have a problem with the inputset, or there is something else that I am doing wrong. This is what I do:

import concrete.numpy as hnp  # older concrete-numpy API used in this issue
import numpy as np

squared = hnp.LookupTable([i ** 2 for i in range(4)])
cubed = hnp.LookupTable([i ** 3 for i in range(4)])

table = hnp.MultiLookupTable([
    [squared, cubed],
    [squared, cubed],
    [squared, cubed],
])

def f(x):
    return table[x]

t_lookup_inputset = [
    np.array([[0, 0], [0, 1], [0, 2]], dtype=np.uint8),
    np.array([[1, 1], [1, 2], [1, 3]], dtype=np.uint8),
    np.array([[2, 1], [2, 2], [2, 3]], dtype=np.uint8),
]

compiler = hnp.NPFHECompiler(
    f, {"x": x}
)
print('compiling the circuit')
table_circuit = compiler.compile_on_inputset(t_lookup_inputset)
print('compile done')

I understand that the inputset should have 10 elements, but for the sake of simplicity I do just these 3. And after I run:

inp = np.array([[0, 1], [1, 1], [1, 2]], dtype=np.uint8)
print(table_circuit.encrypt_run_decrypt(inp))

I get the "RuntimeError: argument #0is not a tensor" during the encryption step. Could you help me a bit with understanding how the inputset works for these table lookups and advise what the best inputset for this case would be?

Thank you!

AttributeError: 'FHECircuit' object has no attribute 'encrypt_run_decrypt'

Summary

What happened/what you expected to happen? Running the example program below raises the AttributeError in the title.

Description

import numpy as np
import concrete.numpy as hnp

# Function using floating point values, converted back to integers at the end
def f(x):
    return np.fabs(50 * (2 * np.sin(x) * np.cos(x))).astype(np.uint32)
    # astype is to go back to the integer world

# Compiling with x encrypted
compiler = hnp.NPFHECompiler(f, {"x": "encrypted"})
circuit = compiler.compile_on_inputset(range(64))

print(circuit.encrypt_run_decrypt(3) == f(3))
print(circuit.encrypt_run_decrypt(0) == f(0))
print(circuit.encrypt_run_decrypt(1) == f(1))
print(circuit.encrypt_run_decrypt(10) == f(10))
print(circuit.encrypt_run_decrypt(60) == f(60))

print("All good!")

  • versions affected:
  • python version:
  • config (optional: HW, OS): ubuntu
  • workaround (optional): if you have a way to work around the issue
  • proposed fix (optional): if you have a way to fix the issue

Step by step procedure someone should follow to trigger the bug:

minimal POC to trigger the bug

print("Minimal POC to reproduce the bug")

Artifacts

Attach all generated artifacts here (generated in the .artifacts directory by default, see documentation for more detailed instructions).

Logs or output
