
he-transformer's People

Contributors

adam-dziedzic, bidisha94, dnat112, fboemer, jopasserat, lnguyen-nvn, lorepieri8, mlayou, r-kellerm, rsandler00, sfblackl-intel, yxlao


he-transformer's Issues

Closing socket in client fails

Double-close bug: it seems that a socket is closed more than once.

[INFO] 2020-09-21T22:37:53z src/aby/aby_client_executor.cpp 149  Client executing circuit took 12370us
[INFO] 2020-09-21T22:37:53z src/aby/aby_client_executor.cpp 149  Client executing circuit took 13556us
[INFO] 2020-09-21T22:37:53z src/seal/he_seal_client.cpp 408      Client handling message
[INFO] 2020-09-21T22:37:53z src/seal/he_seal_client.cpp 225      Client handling result
[INFO] 2020-09-21T22:37:53z src/tcp/tcp_client.cpp 47    Closing socket
[INFO]2020-09-21T22:37:53z src/seal/he_seal_client.cpp 458     Client waiting for results

raw results:  [-5.89267436e+10  4.44481962e+11 -4.42048643e+11 -5.75075647e+11 -1.04892478e+12  5.67507878e+11  7.58654632e+11 -4.30618116e+11 -8.90601275e+11  1.05293565e+12]

[INFO] 2020-09-21T22:37:55z src/tcp/tcp_client.cpp 47  Closing socket
Exception occurred:  shutdown: Bad file descriptor
Segmentation fault (core dumped)

"Bad file descriptor" on close usually means the descriptor has already been closed. This is often because of a double-close bug in some completely unrelated section of the program.

https://stackoverflow.com/questions/7732726/bad-file-descriptor-closing-boost-socket
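The usual guard against a double-close is to make the close path idempotent. A minimal Python sketch of the idea (the real fix would live in src/tcp/tcp_client.cpp; the SafeSocket wrapper here is purely illustrative):

```python
import socket

class SafeSocket:
    """Wraps a socket so that close() is idempotent.

    A second close() call becomes a no-op instead of operating on an
    already-released file descriptor (the "Bad file descriptor" case).
    """

    def __init__(self, sock):
        self._sock = sock
        self._closed = False

    def close(self):
        if self._closed:
            return  # already closed; avoid the double-close
        self._closed = True
        try:
            self._sock.shutdown(socket.SHUT_RDWR)
        except OSError:
            pass  # peer may have closed first; shutdown can fail benignly
        self._sock.close()

s = SafeSocket(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
s.close()
s.close()  # safe: second call is a no-op
```

In C++ the equivalent is to set the descriptor to an invalid value (e.g. -1) immediately after closing it and to check for that value before closing again.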

It can be reproduced by running the examples ax.py and pyclient.py in he-transformer/examples:
Client:

(venv-tf-py3) $ python $HE_TRANSFORMER/examples/pyclient.py --port 35000
Namespace(hostname='localhost', port=35000)
[WARN] 2020-09-23T14:12:33z src/seal/seal_util.cpp 39   Parameter selection does not enforce minimum security level
[WARN] 2020-09-23T14:12:33z src/seal/seal_util.cpp 39   Parameter selection does not enforce minimum security level
[WARN] 2020-09-23T14:12:33z src/seal/seal_util.cpp 39   Parameter selection does not enforce minimum security level
[INFO] 2020-09-23T14:12:33z src/seal/he_seal_client.cpp 458     Client waiting for results
results [2.999999761581421, 6.0, 9.0, 12.0]
Segmentation fault (core dumped)

Server:

(venv-tf-py3) :~/code/he-transformer/examples$ python $HE_TRANSFORMER/examples/ax.py \
>   --backend=HE_SEAL \
>   --enable_client=yes \
>   --port 35000
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
config graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
    min_graph_nodes: -1
    custom_optimizers {
      name: "ngraph-optimizer"
      parameter_map {
        key: "client_parameter_name:0"
        value {
          s: "client_input"
        }
      }
      parameter_map {
        key: "device_id"
        value {
          s: ""
        }
      }
      parameter_map {
        key: "enable_client"
        value {
          s: "True"
        }
      }
      parameter_map {
        key: "encryption_parameters"
        value {
          s: ""
        }
      }
      parameter_map {
        key: "ngraph_backend"
        value {
          s: "HE_SEAL"
        }
      }
      parameter_map {
        key: "port"
        value {
          s: "35000"
        }
      }
    }
  }
}

2020-09-23 14:12:19.941460: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz
2020-09-23 14:12:19.943009: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x30509d0 executing computations on platform Host. Devices:
2020-09-23 14:12:19.943056: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
[WARN] 2020-09-23T14:12:19z src/seal/seal_util.cpp 39   Parameter selection does not enforce minimum security level
[WARN] 2020-09-23T14:12:19z src/seal/seal_util.cpp 39   Parameter selection does not enforce minimum security level
2020-09-23 14:12:19.989515: I /home/dockuser/code/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/ngraph_bridge/grappler/ngraph_optimizer.cc:239] NGraph using backend: HE_SEAL
2020-09-23 14:12:20.014063: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
[WARN] 2020-09-23T14:12:20z src/seal/seal_util.cpp 39   Parameter selection does not enforce minimum security level
[WARN] 2020-09-23T14:12:20z src/seal/seal_util.cpp 39   Parameter selection does not enforce minimum security level
[WARN] 2020-09-23T14:12:20z src/seal/seal_util.cpp 39   Parameter selection does not enforce minimum security level
Result:  [[ 2.8052179e+17  4.1158964e+17 -4.8061976e+17 -3.6207904e+17]]

Segmentation fault with GC in ReLU unit test


When running the C++ unit tests, the program works fine if we use the contents of box 2 (i.e., no GC) for ReLU operations.
However, if we use the contents of box 1 (i.e., GC is enabled), the program segfaults.

What should I do?

Dependencies

Additional dependencies that I think need to be added to the "Dependencies" section:

  • git
  • openssl
  • some subset of: autoconf, autogen, libtool

Also, I think g++ needs to be listed above cmake.

MNIST client-server example not working

Hi,

I am testing the client-server MNIST example. I'm seeing one of two behaviours: either both the client and the server hang without returning any answers, or one of them is killed. For more details, please see the logs below.

I have also tested the basic client-server example for matrix addition and multiplication (defined here), and it worked fine. Any thoughts on what's wrong with the MNIST client-server example? Is the system running out of RAM and killing the client process?

python test.py --backend=HE_SEAL \
               --model_file=models/cryptonets.pb \
               --enable_client=true \
               --encryption_parameters=$HE_TRANSFORMER/configs/he_seal_ckks_config_N13_L8.json

(venv-tf-py3) user1@ubuntu:~/nGraph/he-transformer/examples/MNIST$ python test.py --backend=HE_SEAL --model_file=models/cryptonets.pb --enable_client=true --encryption_parameters=$HE_TRANSFORMER/configs/he_seal_ckks_config_N13_L8.json
[...]
Model restored
loaded model
nodes ['import/input', 'import/convd1_1/kernel', 'import/convd1_1/bias', 'import/convd1_1/Conv2D', 'import/convd1_1/BiasAdd', 'import/activation/mul', 'import/Reshape/shape', 'import/Reshape', 'import/squash_fc_1/kernel', 'import/squash_fc_1/bias', 'import/squash_fc_1/MatMul', 'import/squash_fc_1/BiasAdd', 'import/activation_1/mul', 'import/output/kernel', 'import/output/bias', 'import/output/MatMul', 'import/output/BiasAdd']
2020-04-07 15:09:38.781680: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2496095000 Hz
2020-04-07 15:09:38.781958: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1693e40 executing computations on platform Host. Devices:
2020-04-07 15:09:38.782003: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
[WARN] 2020-04-07T19:09:38z src/seal/seal_util.cpp 39	Parameter selection does not enforce minimum security level
[WARN] 2020-04-07T19:09:38z src/seal/seal_util.cpp 39	Parameter selection does not enforce minimum security level
2020-04-07 15:09:38.830897: I /home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/ngraph_bridge/grappler/ngraph_optimizer.cc:239] NGraph using backend: HE_SEAL
2020-04-07 15:09:38.858285: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
[WARN] 2020-04-07T19:09:38z src/seal/seal_util.cpp 39	Parameter selection does not enforce minimum security level
[WARN] 2020-04-07T19:09:38z src/seal/seal_util.cpp 39	Parameter selection does not enforce minimum security level
2020-04-07 15:09:38.916244: I /home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/ngraph_bridge/grappler/ngraph_optimizer.cc:239] NGraph using backend: HE_SEAL
[WARN] 2020-04-07T19:09:38z src/seal/seal_util.cpp 39	Parameter selection does not enforce minimum security level
[WARN] 2020-04-07T19:09:38z src/seal/seal_util.cpp 39	Parameter selection does not enforce minimum security level
python pyclient_mnist.py --batch_size=1024 \
                         --encrypt_data_str=encrypt
(venv-tf-py3) user1@ubuntu:~/nGraph/he-transformer/examples/MNIST$ python pyclient_mnist.py --batch_size=1024 --encrypt_data_str=encrypt
[...]
[WARN] 2020-04-07T19:09:47z src/seal/seal_util.cpp 39	Parameter selection does not enforce minimum security level
Killed

When I start the client before the server, it complains that the connection is refused. Once the server is started, that error is no longer displayed, but the client either keeps waiting endlessly for the result from the server, or is killed.

[INFO] 2020-04-07T19:32:36z src/tcp/tcp_client.cpp 76	error connecting to server: Connection refused
[INFO] 2020-04-07T19:32:36z src/tcp/tcp_client.cpp 82	Trying to connect again
[INFO] 2020-04-07T19:32:36z src/tcp/tcp_client.cpp 76	error connecting to server: Connection refused
[INFO] 2020-04-07T19:32:38z src/tcp/tcp_client.cpp 82	Trying to connect again

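The client-side retry behaviour seen in the log above can be sketched as a bounded retry loop with backoff. This is a standalone Python illustration, not the actual C++ TCP client; the helper name connect_with_retry is hypothetical:

```python
import socket
import time

def connect_with_retry(host, port, retries=10, delay=0.5):
    """Try to connect to (host, port), retrying on connection errors.

    Returns the connected socket, or re-raises the last error once the
    retry budget is exhausted, rather than looping forever.
    """
    last_err = None
    for attempt in range(retries):
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError as err:
            last_err = err
            print(f"error connecting to server: {err}; trying to connect again")
            time.sleep(delay * (attempt + 1))  # linear backoff between attempts
    raise last_err
```

A bounded retry like this would at least surface a hard failure to the user instead of the client silently waiting or being killed.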
Inconsistent clang versions

The Dockerfile Dockerfile.he_transformer.ubuntu1804 installs clang-9. The installation script build-he-transformer-and-test.sh, however, configures the CMake environment variables to use clang-6.0/clang++-6.0. This leads to an error when trying to build he-transformer using the Makefile.

This can be reproduced as follows:

git clone https://github.com/IntelAI/he-transformer
cd he-transformer/contrib/docker
make build_clang OS_ID=ubuntu1804

bazel version check error

Hi all, when I compile the project, a bazel version check error occurs at the very start of "make -j install". My bazel version is 0.25.0, and the suggested version is between 0.24.1 and 0.25.2. Maybe the bug is at the line below:
he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py:41
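build_ngtf.py performs a bazel version check; since 0.25.0 lies inside the supported range 0.24.1–0.25.2, a correct numeric check should accept it. A minimal sketch of such a range check (the helper name bazel_version_ok is hypothetical, not the actual code in build_ngtf.py):

```python
def parse_version(v):
    """Turn a dotted version string like '0.25.0' into a comparable tuple (0, 25, 0)."""
    return tuple(int(part) for part in v.split("."))

def bazel_version_ok(version, minimum="0.24.1", maximum="0.25.2"):
    """Check that `version` falls inside the supported range, inclusive."""
    return parse_version(minimum) <= parse_version(version) <= parse_version(maximum)
```

If a version inside the range is rejected, a likely culprit is a lexicographic string comparison instead of a numeric one (e.g. "0.25.0" vs "0.24.1" compared character by character).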

Understanding Convolution Implementation

I am trying to understand how the memory allocations are performed.
Specifically, why is there a line
seal::MemoryPoolHandle pool = seal::MemoryPoolHandle::ThreadLocal()
defined in seal/kernel/convolution_seal.cpp? This pool is never utilised. Is this just an artifact?

Building ext_seal fails with "no type named 'index_type'"

Hi,
I installed the library on Ubuntu 20.04. I ran cmake .. -DCMAKE_CXX_COMPILER=clang++-6.0 -DNGRAPH_HE_ABY_ENABLE=ON and then make install. When it comes to building ext_seal, the following error occurs, even though I have installed GSL in the default location (/usr/local/).
The details are as follows:

[ 14%] Performing configure step for 'ext_seal'
-- The CXX compiler identification is Clang 6.0.1
-- The C compiler identification is GNU 9.3.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/clang++-6.0 - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Build type (CMAKE_BUILD_TYPE): RelWithDebInfo
-- Microsoft SEAL debug mode: OFF
-- Library build type (SEAL_LIB_BUILD_TYPE): Static_PIC
-- Looking for C++ include x86intrin.h
-- Looking for C++ include x86intrin.h - found
-- Found MSGSL: /usr/local/include
-- Found ZLIB: /usr/lib/x86_64-linux-gnu/libz.so (found suitable exact version "1.2.11")
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - yes
-- Found Threads: TRUE
-- Configuring done
-- Generating done
-- Build files have been written to: /mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal-build
[ 14%] Performing build step for 'ext_seal'
Scanning dependencies of target seal
[  2%] Building CXX object CMakeFiles/seal.dir/seal/batchencoder.cpp.o
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:235:53: error: no type named 'index_type' in
      'gsl::span<const unsigned long, 18446744073709551615>'
        using index_type = decltype(values_matrix)::index_type;
                           ~~~~~~~~~~~~~~~~~~~~~~~~~^
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:239:43: error: unknown type name 'index_type'
                values_matrix[static_cast<index_type>(i)];
                                          ^
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:280:53: error: no type named 'index_type' in
      'gsl::span<const long, 18446744073709551615>'
        using index_type = decltype(values_matrix)::index_type;
                           ~~~~~~~~~~~~~~~~~~~~~~~~~^
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:284:44: error: unknown type name 'index_type'
                (values_matrix[static_cast<index_type>(i)] < 0) ?
                                           ^
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:285:76: error: unknown type name 'index_type'
                (modulus + static_cast<uint64_t>(values_matrix[static_cast<index_type>(i)])) :
                                                                           ^
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:286:65: error: unknown type name 'index_type'
                static_cast<uint64_t>(values_matrix[static_cast<index_type>(i)]);
                                                                ^
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:452:51: error: no type named 'index_type' in
      'gsl::span<unsigned long, 18446744073709551615>'
        using index_type = decltype(destination)::index_type;
                           ~~~~~~~~~~~~~~~~~~~~~~~^
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:474:37: error: unknown type name 'index_type'
            destination[static_cast<index_type>(i)] = temp_dest[matrix_reps_index_map_[i]];
                                    ^
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:497:51: error: no type named 'index_type' in
      'gsl::span<long, 18446744073709551615>'
        using index_type = decltype(destination)::index_type;
                           ~~~~~~~~~~~~~~~~~~~~~~~^
/mnt/e/GitHub/he-transformer-master/build/ext_seal/src/ext_seal/native/src/seal/batchencoder.cpp:521:37: error: unknown type name 'index_type'
            destination[static_cast<index_type>(i)] = (curr_value > plain_modulus_div_two) ?
                                    ^
10 errors generated.

Hope for your guidance!

Invalid config setting false

I am trying to run the examples to test the MNIST network. The CPU backend works but the HE-SEAL backend outputs the following:

terminate called after throwing an instance of 'ngraph::CheckFailure'
  what():  Check 'valid_config_settings.find(lower_setting) != valid_config_settings.end()' failed at /home/d/Documents/he-transformer/src/seal/he_seal_backend.cpp:123:
Invalid config setting false

Aborted (core dumped)

for both the "Plaintext" and "Encrypted" cases.

Cannot encrypt self-trained MobileNet model.

I trained a MobileNet V2 model and want to run it in the encrypted domain.

First, I used train_image_classifier.py in Tensorflow/models/research/slim to train a model.
The input is a 224 * 224 * 3 image.

Secondly, I used freeze_graph.py to get a pb file.

All related files can be downloaded below: https://drive.google.com/drive/folders/1e0Ix0oj_sAv4PzxBVyNoTVcMMU1pF5ZP?usp=sharing

However, the execution time when running the model in the encrypted domain is the same as when running it in the plaintext domain.

In addition, even when I set NGRAPH_HE_LOG_LEVEL=3, it didn't show any log output such as the encryption parameters.

So I suspect that he-transformer isn't actually running the model in the encrypted domain.

Is there any way to fix this problem?

Thanks!

NgraphOptimizer failed: Could not create backend

I'm trying to run the Python examples included with the library. I got the error below complaining that the NgraphOptimizer could not create the backend, even though the file ngraph_bridge/libcpu_backend.so being searched for is stored at the referenced path. Please see the logs below.

I get a similar error when running the example with the HE_SEAL backend.

Just wanted to check if there is an easy way to solve this, without having to rebuild the project from scratch.

(venv-tf-py3) user1@ubuntu:~/nGraph/he-transformer/build$ python $HE_TRANSFORMER/examples/ax.py --backend=CPU
[...]
config graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
    min_graph_nodes: -1
    custom_optimizers {
      name: "ngraph-optimizer"
      parameter_map {
        key: "device_id"
        value {
          s: ""
        }
      }
      parameter_map {
        key: "enable_client"
        value {
          s: "False"
        }
      }
      parameter_map {
        key: "encryption_parameters"
        value {
          s: ""
        }
      }
      parameter_map {
        key: "ngraph_backend"
        value {
          s: "CPU"
        }
      }
    }
  }
}

2020-04-01 15:40:19.915572: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2496085000 Hz
2020-04-01 15:40:19.916403: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x17e5d40 executing computations on platform Host. Devices:
2020-04-01 15:40:19.916483: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2020-04-01 15:40:19.920397: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] NgraphOptimizer failed: Internal: Could not create backend of type CPU. Got exception: Unable to find backend 'CPU' as file '/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libcpu_backend.so'
Open error message '/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libmklml_intel.so: cannot read file data'
2020-04-01 15:40:19.920878: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Result:  [[2. 3. 4. 5.]]
(venv-tf-py3) user1@ubuntu:~/nGraph/he-transformer/build$ ls -lastrh /home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libcpu_backend.so
23M -rw-r--r-- 1 user1 user1 23M Mar 29 14:03 /home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libcpu_backend.so
(venv-tf-py3) user1@ubuntu:~/nGraph/he-transformer/build$ 

Logs for the HE_SEAL backend example.

(venv-tf-py3) user1@ubuntu:~/nGraph/he-transformer/build$ python $HE_TRANSFORMER/examples/ax.py --backend=HE_SEAL
[...]
config graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
    min_graph_nodes: -1
    custom_optimizers {
      name: "ngraph-optimizer"
      parameter_map {
        key: "device_id"
        value {
          s: ""
        }
      }
      parameter_map {
        key: "enable_client"
        value {
          s: "False"
        }
      }
      parameter_map {
        key: "encryption_parameters"
        value {
          s: ""
        }
      }
      parameter_map {
        key: "ngraph_backend"
        value {
          s: "HE_SEAL"
        }
      }
    }
  }
}

2020-04-01 16:00:24.728277: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2496085000 Hz
2020-04-01 16:00:24.728886: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x1b40a00 executing computations on platform Host. Devices:
2020-04-01 16:00:24.728972: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
[WARN] 2020-04-01T20:00:24z src/seal/seal_util.cpp 39	Parameter selection does not enforce minimum security level
[WARN] 2020-04-01T20:00:24z src/seal/seal_util.cpp 39	Parameter selection does not enforce minimum security level
2020-04-01 16:00:24.785613: I /home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/ngraph_bridge/grappler/ngraph_optimizer.cc:239] NGraph using backend: HE_SEAL
2020-04-01 16:00:24.788364: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] NgraphOptimizer failed: Internal: Could not create backend of type CPU. Got exception: Unable to find backend 'CPU' as file '/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libcpu_backend.so'
Open error message '/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libmklml_intel.so: cannot read file data'
2020-04-01 16:00:24.788931: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Result:  [[2. 3. 4. 5.]]
(venv-tf-py3) user1@ubuntu:~/nGraph/he-transformer/build$ 

Building Docker image with Ubuntu 16.04 fails

I receive the following error messages about missing packages when trying to build he-transformer by calling make build_clang in contrib/docker/:

E: Unable to locate package yapf3
E: Unable to locate package python3-yapf

support for argmax operation via MPC

Would it be possible to add the argmax operation so that it works in the same way as ReLU and MaxPool via MPC? Other operations that are frequently used and require execution via MPC are sigmoid and softmax.
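For intuition, argmax can be decomposed into the same pairwise comparisons that an MPC backend already uses for ReLU and MaxPool. A plaintext Python sketch of that decomposition (each comparison and select here would become a secure comparison and multiplexer in the actual protocol):

```python
def argmax_via_comparisons(values):
    """Compute argmax using only pairwise comparisons and selects.

    Keeps a running (max, index) pair and updates it with one
    comparison per element, which mirrors how a garbled-circuit
    backend could build argmax from its comparison primitive.
    """
    best_idx = 0
    best_val = values[0]
    for i in range(1, len(values)):
        is_greater = values[i] > best_val  # secure comparison in MPC
        # In MPC, these selects would be realised as multiplexer gates.
        best_val = values[i] if is_greater else best_val
        best_idx = i if is_greater else best_idx
    return best_idx
```

Sigmoid and softmax are different in kind: they need non-linear evaluation (exponentials) rather than pure comparisons, so they are typically handled via polynomial approximation or a dedicated MPC sub-protocol.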

Building ngraph-tf fails

When trying to build he-transformer, I received the following error:

...
ERROR: /home/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/python/BUILD:336:1: C++ compilation of rule '//tensorflow/python:bfloat16_lib' failed (Exit 1)
tensorflow/python/lib/core/bfloat16.cc: In lambda function:
tensorflow/python/lib/core/bfloat16.cc:615:32: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     if (types.size() != ufunc->nargs) {
                                ^
tensorflow/python/lib/core/bfloat16.cc: In function 'bool tensorflow::{anonymous}::Initialize()':
tensorflow/python/lib/core/bfloat16.cc:634:36: error: no match for call to '(tensorflow::{anonymous}::Initialize()::__lambda0) (const char [6], <unresolved overloaded function type>, const std::array<int, 3ul>&)'
                       compare_types)) {
                                    ^
tensorflow/python/lib/core/bfloat16.cc:607:27: note: candidate is:
   auto register_ufunc = [&](const char* name, PyUFuncGenericFunction fn,
                           ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note: tensorflow::{anonymous}::Initialize()::__lambda0
                             const std::array<int, 3>& types) {
                                                            ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note:   no known conversion for argument 2 from '<unresolved overloaded function type>' to 'PyUFuncGenericFunction {aka void (*)(char**, const long int*, const long int*, void*)}'
tensorflow/python/lib/core/bfloat16.cc:638:36: error: no match for call to '(tensorflow::{anonymous}::Initialize()::__lambda0) (const char [10], <unresolved overloaded function type>, const std::array<int, 3ul>&)'
                       compare_types)) {
                                    ^
tensorflow/python/lib/core/bfloat16.cc:607:27: note: candidate is:
   auto register_ufunc = [&](const char* name, PyUFuncGenericFunction fn,
                           ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note: tensorflow::{anonymous}::Initialize()::__lambda0
                             const std::array<int, 3>& types) {
                                                            ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note:   no known conversion for argument 2 from '<unresolved overloaded function type>' to 'PyUFuncGenericFunction {aka void (*)(char**, const long int*, const long int*, void*)}'
tensorflow/python/lib/core/bfloat16.cc:641:77: error: no match for call to '(tensorflow::{anonymous}::Initialize()::__lambda0) (const char [5], <unresolved overloaded function type>, const std::array<int, 3ul>&)'
   if (!register_ufunc("less", CompareUFunc<Bfloat16LtFunctor>, compare_types)) {
                                                                             ^
tensorflow/python/lib/core/bfloat16.cc:607:27: note: candidate is:
   auto register_ufunc = [&](const char* name, PyUFuncGenericFunction fn,
                           ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note: tensorflow::{anonymous}::Initialize()::__lambda0
                             const std::array<int, 3>& types) {
                                                            ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note:   no known conversion for argument 2 from '<unresolved overloaded function type>' to 'PyUFuncGenericFunction {aka void (*)(char**, const long int*, const long int*, void*)}'
tensorflow/python/lib/core/bfloat16.cc:645:36: error: no match for call to '(tensorflow::{anonymous}::Initialize()::__lambda0) (const char [8], <unresolved overloaded function type>, const std::array<int, 3ul>&)'
                       compare_types)) {
                                    ^
tensorflow/python/lib/core/bfloat16.cc:607:27: note: candidate is:
   auto register_ufunc = [&](const char* name, PyUFuncGenericFunction fn,
                           ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note: tensorflow::{anonymous}::Initialize()::__lambda0
                             const std::array<int, 3>& types) {
                                                            ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note:   no known conversion for argument 2 from '<unresolved overloaded function type>' to 'PyUFuncGenericFunction {aka void (*)(char**, const long int*, const long int*, void*)}'
tensorflow/python/lib/core/bfloat16.cc:649:36: error: no match for call to '(tensorflow::{anonymous}::Initialize()::__lambda0) (const char [11], <unresolved overloaded function type>, const std::array<int, 3ul>&)'
                       compare_types)) {
                                    ^
tensorflow/python/lib/core/bfloat16.cc:607:27: note: candidate is:
   auto register_ufunc = [&](const char* name, PyUFuncGenericFunction fn,
                           ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note: tensorflow::{anonymous}::Initialize()::__lambda0
                             const std::array<int, 3>& types) {
                                                            ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note:   no known conversion for argument 2 from '<unresolved overloaded function type>' to 'PyUFuncGenericFunction {aka void (*)(char**, const long int*, const long int*, void*)}'
tensorflow/python/lib/core/bfloat16.cc:653:36: error: no match for call to '(tensorflow::{anonymous}::Initialize()::__lambda0) (const char [14], <unresolved overloaded function type>, const std::array<int, 3ul>&)'
                       compare_types)) {
                                    ^
tensorflow/python/lib/core/bfloat16.cc:607:27: note: candidate is:
   auto register_ufunc = [&](const char* name, PyUFuncGenericFunction fn,
                           ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note: tensorflow::{anonymous}::Initialize()::__lambda0
                             const std::array<int, 3>& types) {
                                                            ^
tensorflow/python/lib/core/bfloat16.cc:608:60: note:   no known conversion for argument 2 from '<unresolved overloaded function type>' to 'PyUFuncGenericFunction {aka void (*)(char**, const long int*, const long int*, void*)}'
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 400.244s, Critical Path: 123.78s
INFO: 6738 processes: 6738 local.
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully

As the provided Dockerfiles (see contrib/docker) also failed to build he-transformer, I built my own Dockerfile:

# ******************************************************************************
# Copyright 2018-2020 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ******************************************************************************

# Source: https://github.com/IntelAI/he-transformer/blob/master/contrib/docker/Dockerfile.he_transformer.ubuntu1804

# Environment to build and unit-test he-transformer
# with g++ 7.4.0
# with clang++ 9.0.1
# with python 3.6.8
# with cmake 3.14.4

FROM ubuntu:18.04

RUN apt-get update && apt-get install -y \
    python3-pip virtualenv python3-dev \
    git \
    unzip wget \
    sudo \
    bash-completion \
    build-essential cmake \
    software-properties-common \
    git \
    wget patch diffutils libtinfo-dev \
    autoconf libtool \
    doxygen graphviz \
    yapf3 python3-yapf \
    python python-dev python3 python3-dev \
    libomp-dev

RUN python3.6 -m pip install pip --upgrade && \
    pip3 install -U --user pip six 'numpy<1.19.0' wheel setuptools mock 'future>=0.17.1' && \
    pip3 install -U --user keras_applications --no-deps && \
    pip3 install -U --user keras_preprocessing --no-deps && \
    rm -rf /usr/bin/python && \
    ln -s /usr/bin/python3.6 /usr/bin/python

# Install clang-9
RUN wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
RUN apt-add-repository "deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-9 main"
RUN apt-get update && apt install -y clang-9 clang-tidy-9 clang-format-9

RUN apt-get update && apt-get install -y gcc-4.8 g++-4.8
RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 60 --slave /usr/bin/g++ g++ /usr/bin/g++-4.8 && \
  update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 60 --slave /usr/bin/g++ g++ /usr/bin/g++-7

RUN apt-get clean autoclean && apt-get autoremove -y

# For ngraph-tf integration testing
RUN pip3 install --upgrade pip setuptools virtualenv==16.1.0

# SEAL requires newer version of CMake
RUN pip3 install cmake --upgrade

# Get bazel for ng-tf
RUN wget https://github.com/bazelbuild/bazel/releases/download/0.25.2/bazel-0.25.2-installer-linux-x86_64.sh
RUN chmod +x ./bazel-0.25.2-installer-linux-x86_64.sh
RUN bash ./bazel-0.25.2-installer-linux-x86_64.sh
WORKDIR /home

# *** end of Dockerfile from IntelAI/he-transformer repository ****************

ENV HE_TRANSFORMER /home/he-transformer

# Build HE-Transformer
# https://github.com/IntelAI/he-transformer#1-build-he-transformer
WORKDIR /home
RUN git clone https://github.com/IntelAI/he-transformer.git he-transformer

WORKDIR $HE_TRANSFORMER
# this is the same as the original cmake file but adds --verbose_build to the bazel build command
COPY ngraph-tf.cmake /home/he-transformer/cmake/ngraph-tf.cmake

RUN mkdir build && \
    cd build && \
    cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=clang++-9 -DCMAKE_C_COMPILER=clang-9 -Werror
WORKDIR $HE_TRANSFORMER/build
RUN env VERBOSE=1 make -j 144 install

# Build the Python bindings for client
# https://github.com/IntelAI/he-transformer#1c-python-bindings-for-client
# RUN cd $HE_TRANSFORMER/build && \
#     source external/venv-tf-py3/bin/activate && \
#     make install python_client && \
#     pip install python/dist/pyhe_client-*.whl && \
#     python3 -c "import pyhe_client"

CMD ["/bin/bash"]

As you can see in the Dockerfile, it uses the master's latest commit to build he-transformer.

@fboemer Do you have any suggestions as to what the reason could be? Many thanks for your help!
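For what it's worth, the bfloat16.cc errors above are the classic symptom of building TensorFlow 1.14 against numpy >= 1.19, whose C API changed the ufunc loop signature that `PyUFuncGenericFunction` expects. The Dockerfile pins 'numpy<1.19.0' for the `--user` install, but the bazel build uses whatever numpy is visible inside the virtualenv that build_ngtf.py creates. A small sanity check (hypothetical helper, not part of he-transformer) to run in that environment before building:

```python
# Returns True when `version` predates numpy 1.19, whose C-API change breaks
# TensorFlow 1.14's bfloat16.cc build. Hypothetical pre-build check.
def numpy_ok_for_tf114(version):
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) < (1, 19)

print(numpy_ok_for_tf114("1.18.5"))  # the version range pinned above -> True
print(numpy_ok_for_tf114("1.19.0"))  # -> False
```

If this reports False for the numpy inside the build virtualenv, downgrading it there before re-running make is worth trying.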

Problem with make install

Using local nGraph source in directory /home/ubuntu/he-transformer/build/ext_ngraph/src/ext_ngraph
Source location: /home/ubuntu/he-transformer/build/ext_ngraph/src/ext_ngraph
Running COMMAND: cmake -DNGRAPH_INSTALL_PREFIX=/home/ubuntu/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts -DNGRAPH_USE_CXX_ABI=1 -DNGRAPH_DEX_ONLY=TRUE -DNGRAPH_DEBUG_ENABLE=NO -DNGRAPH_UNIT_TEST_ENABLE=NO -DNGRAPH_TARGET_ARCH=native -DNGRAPH_TUNE_ARCH=native -DNGRAPH_TBB_ENABLE=FALSE -DNGRAPH_DISTRIBUTED_ENABLE=OFF -DNGRAPH_TOOLS_ENABLE=YES -DNGRAPH_GPU_ENABLE=NO -DNGRAPH_PLAIDML_ENABLE=NO -DNGRAPH_INTELGPU_ENABLE=NO /home/ubuntu/he-transformer/build/ext_ngraph/src/ext_ngraph
CMake Error: The source directory "/home/ubuntu/he-transformer/build/ext_ngraph/src/ext_ngraph" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
Traceback (most recent call last):
File "/home/ubuntu/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 525, in <module>
main()
File "/home/ubuntu/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 402, in main
build_ngraph(build_dir, ngraph_src_dir, ngraph_cmake_flags, verbosity)
File "/home/ubuntu/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/tools/build_utils.py", line 86, in build_ngraph
command_executor(cmake_cmd, verbose=True)
File "/home/ubuntu/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/tools/build_utils.py", line 60, in command_executor
raise Exception("Error running command: " + cmd)
Exception: Error running command: cmake -DNGRAPH_INSTALL_PREFIX=/home/ubuntu/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts -DNGRAPH_USE_CXX_ABI=1 -DNGRAPH_DEX_ONLY=TRUE -DNGRAPH_DEBUG_ENABLE=NO -DNGRAPH_UNIT_TEST_ENABLE=NO -DNGRAPH_TARGET_ARCH=native -DNGRAPH_TUNE_ARCH=native -DNGRAPH_TBB_ENABLE=FALSE -DNGRAPH_DISTRIBUTED_ENABLE=OFF -DNGRAPH_TOOLS_ENABLE=YES -DNGRAPH_GPU_ENABLE=NO -DNGRAPH_PLAIDML_ENABLE=NO -DNGRAPH_INTELGPU_ENABLE=NO /home/ubuntu/he-transformer/build/ext_ngraph/src/ext_ngraph
CMakeFiles/ext_ngraph_tf.dir/build.make:112: recipe for target 'ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build' failed
make[2]: *** [ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build] Error 1
CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/ext_ngraph_tf.dir/all' failed
make[1]: *** [CMakeFiles/ext_ngraph_tf.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2

What should I do to solve this problem?
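An empty ext_ngraph source directory (no CMakeLists.txt) usually means the external-project download of nGraph failed or was interrupted while its stamp files still mark the step as done, so a plain re-run of make skips the fetch. A cleanup sketch (paths follow the build layout in the log above; this is not code from the repo):

```python
# Removes the possibly half-downloaded external-project trees so that a
# re-run of `make` fetches nGraph and ngraph-tf again. Adjust `build_dir`
# to point at your he-transformer build directory.
import shutil
from pathlib import Path

def clean_external_projects(build_dir):
    removed = []
    for name in ("ext_ngraph", "ext_ngraph_tf"):
        path = Path(build_dir) / name
        if path.exists():
            shutil.rmtree(path)
            removed.append(name)
    return removed
```

After the cleanup, re-run cmake and make from the build directory so the download steps execute again.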

Problem with make check_gcc

OS_ID=ubuntu1804
DOCKERFILE=Dockerfile.he_transformer.ubuntu1804
RUN_AS_USER_SCRIPT=/home/dockuser/he-transformer-test/contrib/docker/run_as_ubuntu_user.sh
RM_CONTAINER=true
cd "/home/ubuntu/he-transformer"/contrib/docker
mkdir "/home/ubuntu/he-transformer/contrib/docker/.build-02f908ff7ea565679487255cc5db855e8a11bdbe_2_ubuntu1804" || true
mkdir: cannot create directory "/home/ubuntu/he-transformer/contrib/docker/.build-02f908ff7ea565679487255cc5db855e8a11bdbe_2_ubuntu1804": File exists
sed -e 's/(FROM he_transformer.*)/\1:02f908ff7ea565679487255cc5db855e8a11bdbe_2/' Dockerfile.he_transformer.ubuntu1804 > "/home/ubuntu/he-transformer/contrib/docker/.build-02f908ff7ea565679487255cc5db855e8a11bdbe_2_ubuntu1804/Dockerfile.build_he_transformer_ubuntu1804"
OS_ID=ubuntu1804
/home/ubuntu/he-transformer/contrib/docker/.build-02f908ff7ea565679487255cc5db855e8a11bdbe_2_ubuntu1804
export CONTEXTDIR=/home/ubuntu/he-transformer/contrib/docker/.build-02f908ff7ea565679487255cc5db855e8a11bdbe_2_ubuntu1804;export DOCKER_TAG=build_he_transformer_ubuntu1804;./make-dimage.sh
CONTEXTDIR=/home/ubuntu/he-transformer/contrib/docker/.build-02f908ff7ea565679487255cc5db855e8a11bdbe_2_ubuntu1804
CONTEXTDIR=/home/ubuntu/he-transformer/contrib/docker/.build-02f908ff7ea565679487255cc5db855e8a11bdbe_2_ubuntu1804

Building docker image build_he_transformer_ubuntu1804:2021-11-14T13-51-16-08-00 from Dockerfile /home/ubuntu/he-transformer/contrib/docker/.build-02f908ff7ea565679487255cc5db855e8a11bdbe_2_ubuntu1804/Dockerfile.build_he_transformer_ubuntu1804, context .

Sending build context to Docker daemon 4.608kB
Step 1/21 : FROM ubuntu:18.04
---> 5a214d77f5d7
Step 2/21 : ARG DEBIAN_FRONTEND=noninteractive
---> Using cache
---> e4709e2439ef
Step 3/21 : RUN apt-get update && apt-get install -y python3-pip virtualenv python3-numpy python3-dev python3-wheel git unzip wget sudo bash-completion build-essential make cmake software-properties-common wget patch diffutils libtinfo-dev autoconf libtool doxygen graphviz yapf3 python3-yapf libmpfr-dev libgmp-dev libssl-dev
---> Using cache
---> 3bd42b6ff72c
Step 4/21 : RUN apt-get update && apt-get install -y software-properties-common && add-apt-repository -y ppa:ubuntu-toolchain-r/test && apt-get update && apt-get install -y vim vim-gnome && apt-get install -y gcc-8 g++-8 gcc-8-base && update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 100 && update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-8 100
---> Using cache
---> 544b8b2ff44e
Step 5/21 : RUN wget -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
---> Using cache
---> aa27dca361a1
Step 6/21 : RUN apt-add-repository "deb http://apt.llvm.org/bionic/ llvm-toolchain-bionic-9 main"
---> Using cache
---> 830c3f4c0ba7
Step 7/21 : RUN apt-get update && apt install -y clang-9 clang-tidy-9 clang-format-9
---> Using cache
---> 3ff7ac7f85b6
Step 8/21 : RUN apt-get clean autoclean && apt-get autoremove -y
---> Using cache
---> 84b53e7ba784
Step 9/21 : RUN pip3 install --upgrade pip setuptools virtualenv==16.1.0
---> Using cache
---> a783b33abb40
Step 10/21 : RUN pip3 install cmake --upgrade
---> Using cache
---> 535ef6606acc
Step 11/21 : RUN cmake --version
---> Using cache
---> b027128298de
Step 12/21 : RUN make --version
---> Using cache
---> 1429c10fe9d7
Step 13/21 : RUN gcc --version
---> Using cache
---> 216875042b9c
Step 14/21 : RUN clang++-9 --version
---> Using cache
---> 4b15bd64f0da
Step 15/21 : RUN c++ --version
---> Using cache
---> 4edb1c78cdc4
Step 16/21 : RUN python3 --version
---> Using cache
---> d3a933e64118
Step 17/21 : RUN virtualenv --version
---> Using cache
---> 02c6aec250d5
Step 18/21 : RUN wget https://github.com/bazelbuild/bazel/releases/download/0.25.2/bazel-0.25.2-installer-linux-x86_64.sh
---> Using cache
---> f4c430dde777
Step 19/21 : RUN chmod +x ./bazel-0.25.2-installer-linux-x86_64.sh
---> Using cache
---> 972573e98d81
Step 20/21 : RUN bash ./bazel-0.25.2-installer-linux-x86_64.sh
---> Using cache
---> e28495effe39
Step 21/21 : WORKDIR /home
---> Using cache
---> 5111450e260e
Successfully built 5111450e260e
Successfully tagged build_he_transformer_ubuntu1804:2021-11-14T13-51-16-08-00

Docker image build completed

docker tag build_he_transformer_ubuntu1804:latest build_he_transformer_ubuntu1804:02f908ff7ea565679487255cc5db855e8a11bdbe_2

Building for CPU support only.

docker run --rm=true --tty
-v "/home/ubuntu/he-transformer:/home/dockuser/he-transformer-test"

--env BUILD_SUBDIR=BUILD-GCC
--env CMAKE_OPTIONS_EXTRA=""
--env OS_ID="ubuntu1804"
--env PARALLEL=22
--env THIRD_PARTY_CACHE_DIR=
--env CMD_TO_RUN='build_gcc'
--env RUN_UID="0"
--env RUN_CMD="/home/dockuser/he-transformer-test/contrib/docker/build-he-transformer-and-test.sh"
"build_he_transformer_ubuntu1804:02f908ff7ea565679487255cc5db855e8a11bdbe_2"
sh -c "cd /home/dockuser; /home/dockuser/he-transformer-test/contrib/docker/run_as_ubuntu_user.sh"
adduser: The UID 0 is already in use.
Makefile:154: recipe for target 'build_gcc' failed
make: *** [build_gcc] Error 1
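The adduser failure happens because make was invoked as root: the run script passes the caller's UID into the container (RUN_UID="0" in the docker run line above) and tries to create a dockuser account with it, but UID 0 already belongs to root inside the container. Running make check_gcc as a regular user avoids this. The guard amounts to (a sketch, not code from the repo):

```python
# The docker run scripts create `dockuser` inside the container with the
# invoking user's UID; UID 0 collides with the existing root account.
import os

def choose_run_uid(uid=None):
    uid = os.getuid() if uid is None else uid
    if uid == 0:
        raise RuntimeError("run `make check_gcc` as a non-root user")
    return uid
```
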

Question about mechanics of the he-transformer library

Hi,

The purpose of this question is to understand the overall structure of the he-transformer library, and specifically how he-transformer modifies a regular tensor into one that is capable of computing homomorphically on encrypted data.

In lines 51-52 of the code below, a new config is defined (possibly with an HE-SEAL backend to enable homomorphic encryption), and a TensorFlow session is constructed from that config.

It is not clear, however, how the session built from this config (line 52) interacts with the SEAL library to replace regular operations with homomorphic ones. Can you indicate which parts of the he-transformer library do that?

# Get input / output tensors
x_input = tf.compat.v1.get_default_graph().get_tensor_by_name(
    FLAGS.input_node)
y_output = tf.compat.v1.get_default_graph().get_tensor_by_name(
    FLAGS.output_node)

# Create configuration to encrypt input
FLAGS, unparsed = server_argument_parser().parse_known_args()
config = server_config_from_flags(FLAGS, x_input.name)

with tf.compat.v1.Session(config=config) as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    start_time = time.time()
    y_hat = y_output.eval(feed_dict={x_input: x_test})
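A rough sketch of the flow, to the best of my understanding: server_config_from_flags does not modify the tensors themselves. It attaches the ngraph-bridge graph rewriter to the session config and names HE_SEAL as the backend; at session run time the rewriter encapsulates supported subgraphs and dispatches them to the HE_SEAL backend in src/seal (loaded from libhe_seal_backend.so), where each nGraph op runs as a kernel in src/seal/kernel (e.g. divide_seal.cpp) operating on SEAL ciphertexts. Illustratively — the parameter names below are assumptions, not the exact ngraph-bridge API; the real code builds a tf.ConfigProto:

```python
# Illustrative only: the shape of what server_config_from_flags contributes
# to the session config. Keys here are assumptions for illustration.
def he_seal_rewriter_params(encryption_parameters):
    return {
        "ngraph_backend_name": "HE_SEAL",
        "encryption_parameters": encryption_parameters,
    }
```
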

he-transformer Docker image build error on macOS

When I try to build the Docker image with make check_gcc,

E: Unable to locate package yapf3
#5 33.57 E: Unable to locate package python3-yapf

this error comes out. Does anyone know how to fix this issue?

Could not create backend of type HE_SEAL

I tried to run Cryptonets-Relu in the encrypted domain, and my command is:

python test.py --batch_size=100 \
               --backend=HE_SEAL \
               --model_file=Cryptonets-Relu/models/cryptonets-relu.pb \
               --encrypt_server_data=true \
               --encryption_parameters=$HE_TRANSFORMER/configs/he_seal_ckks_config_N13_L7.json

However, I got an error saying it could not create a backend of type HE_SEAL.

The following is the output:

/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Model restored
2019-12-24 10:38:51.822072: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2299840000 Hz
2019-12-24 10:38:51.826112: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55f036c3e220 executing computations on platform Host. Devices:
2019-12-24 10:38:51.826163: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): ,
2019-12-24 10:38:51.886259: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] NgraphOptimizer failed: Internal: Could not create backend of type HE_SEAL. Got exception: Unable to find backend 'HE_SEAL' as file '/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/ngraph_bridge/libhe_seal_backend.so'
Open error message '/auto/phd/07/whcjimmy/workspace/he-transformer-intel/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.7/site-packages/ngraph_bridge/libhe_seal_backend.so: undefined symbol: deflateInit_
'
2019-12-24 10:38:51.888014: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
total time(s) 0.089
Error count 1 of 100 elements.
Accuracy: 0.99

It seems there is an undefined symbol deflateInit_.

Is there any way to fix this problem?

Thanks.
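The unresolved deflateInit_ comes from zlib: libhe_seal_backend.so references the symbol but either was not linked against libz or cannot resolve it at load time, so dlopen fails. A quick diagnostic (a sketch; assumes a Linux system where zlib is installed as libz.so.1):

```python
# Checks whether the missing symbol exists in the system zlib. If it does,
# the backend library simply was not linked with -lz, or libz is not
# visible to the loader in the Python process.
import ctypes
import ctypes.util

def system_zlib_has_deflate_init():
    name = ctypes.util.find_library("z") or "libz.so.1"
    libz = ctypes.CDLL(name)
    return hasattr(libz, "deflateInit_")
```

If this returns True, rebuilding the backend so that it links zlib explicitly (or preloading libz before starting Python) is one workaround to try.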

Install/Build failure -- tensorflow rule compilation error

Hi,

When running make install I get the error below. Any idea what's causing it and how to fix it?

Thanks.

ERROR: /home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/core/kernels/BUILD:3255:1: C++ compilation of rule '//tensorflow/core/kernels:matrix_square_root_op' failed (Exit 4)
gcc: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-7/README.Bugs> for instructions.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 4483.868s, Critical Path: 238.51s
INFO: 3757 processes: 3757 local.
FAILED: Build did NOT complete successfully
Traceback (most recent call last):
  File "/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 525, in <module>
    main()
  File "/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 328, in main
    target_arch, verbosity)
  File "/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/tools/build_utils.py", line 232, in build_tensorflow
    command_executor(cmd)
  File "/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/tools/build_utils.py", line 60, in command_executor
    raise Exception("Error running command: " + cmd)
Exception: Error running command: bazel build --config=opt --config=noaws --config=nohdfs --config=noignite --config=nokafka --config=nonccl //tensorflow/tools/pip_package:build_pip_package
CMakeFiles/ext_ngraph_tf.dir/build.make:112: recipe for target 'ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build' failed
make[2]: *** [ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build] Error 1
CMakeFiles/Makefile2:464: recipe for target 'CMakeFiles/ext_ngraph_tf.dir/all' failed
make[1]: *** [CMakeFiles/ext_ngraph_tf.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
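"gcc: internal compiler error: Killed (program cc1plus)" is almost always the kernel's OOM killer terminating a compile job: the TensorFlow build runs many cc1plus processes in parallel and exhausts RAM. Reducing bazel's parallelism (or adding swap) usually gets past it. A sketch for picking a job count from available memory, assuming roughly 2 GB per C++ compile job (a rule of thumb, not a measured figure):

```python
# Picks a bazel/make job count from total physical memory, assuming each
# C++ compile job may need about `gb_per_job` gigabytes.
import os

def safe_job_count(gb_per_job=2):
    pages = os.sysconf("SC_PHYS_PAGES")
    page_size = os.sysconf("SC_PAGE_SIZE")
    total_gb = pages * page_size / (1024 ** 3)
    return max(1, int(total_gb // gb_per_job))
```

The resulting number can then be passed to bazel via --jobs (or to make via -j) instead of letting the build use every core.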

Minimum security level?

For all the examples I try to run, including the simple matrix multiply example in the "examples" folder, I get an output which says

[WARN] 2020-02-23T08:18:04z src/seal/seal_util.cpp 36 Parameter selection does not enforce minimum security level

I have tried using every parameter setting in the "configs" folder.

Is this the expected behavior?
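As far as I can tell, the warning is emitted by src/seal/seal_util.cpp whenever the encryption parameters in effect do not request an enforced security level (security_level 0, which is also what applies when no level is enforced), so SEAL skips checking the parameters against the 128-bit HE-standard bounds. Configs whose coefficient moduli fit the standard can set the field to 128 instead. A sketch of such a file — field names follow the JSON files in configs/, but the modulus bit-widths below are illustrative, not a vetted parameter set:

```python
# Writes an encryption-parameter file that asks for an enforced 128-bit
# security level. The coeff_modulus bit-widths are illustrative only.
import json

params = {
    "scheme_name": "HE_SEAL",
    "poly_modulus_degree": 8192,
    "security_level": 128,
    "coeff_modulus": [30, 24, 24, 24, 30],
}

with open("my_he_seal_config.json", "w") as f:
    json.dump(params, f, indent=2)
```

Passing this file via --encryption_parameters should make the backend enforce the stated level rather than warn.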

Softmax operation supported only for plaintext, not for encrypted data

tf.softmax is supported by he-transformer, but it gives incorrect results when a client sends encrypted data to a server: the server-side implementation of softmax does not contact the client to compute it.

The he-transformer logs show that softmax is not executed collaboratively with the client, so the final result on the client side is incorrect.

We would like softmax to be supported in the same way as ReLU and MaxPool (via MPC).

Log from the he-transformer:
[WARN] 2020-09-17T17:26:03z src/seal/kernel/divide_seal.cpp 44 Dividing ciphertext / ciphertext without client is not privacy-preserving

Logs:
he_client_softmax.log
he_client_with_client_encryption.log
he_client_without_client_encryption.log
he_server_logits.log
he_server_softmax.log
he_server_softmax_with_client_encryption.log
he_server_softmax_without_client_encryption.log
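Until softmax is supported via MPC, a common interim workaround is to stop the served graph at the logits and apply softmax on the client after decryption, which also keeps the division out of the ciphertext domain (a sketch, not he-transformer code):

```python
# Client-side softmax over decrypted logits, numerically stabilized by
# subtracting the max before exponentiating.
import math

def client_softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

Since softmax is monotonic, this also leaves the predicted class unchanged relative to the raw logits.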

Building from Docker on clean machine fails

In trying to build this project using the docker images, with the following commands:

make check_gcc OS_ID=ubuntu1804 RM_CONTAINER=false
and
make check_gcc OS_ID=ubuntu1804

on a clean machine with Ubuntu 20.04 and Docker version 19.03.12,

I run into the error pasted below. I tried using the details in #45 to resolve this issue but have not been able to build the Docker image on our clean machine.

Installing collected packages: ngraph-tensorflow-bridge
Successfully installed ngraph-tensorflow-bridge-0.22.0rc3
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/
python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1
type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (
1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/
python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1
type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (
1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/
python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1
type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (
1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/
python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1
type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (
1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/
python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1
type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (
1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/
python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1
type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (
1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
ARTIFACTS location: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts
Loading virtual environment from: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3
Loading virtual environment from: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3
PIP location
Target Arch: native
Building TensorFlow from source
PYTHON_BIN_PATH: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/bin/python
SOURCE DIR: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow
ARTIFACTS DIR: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow
TF Wheel: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow/tensorflow-1.14.0-cp36-cp36m-linux_x86_64.whl
PYTHON_BIN_PATH: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/bin/python
SOURCE DIR: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow
ARTIFACTS DIR: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow
Cannot remove: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow/libtensorflow_cc.so.1
Copying bazel-bin/tensorflow/libtensorflow_cc.so.1 to /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow
Loading virtual environment from: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3
LIB: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/tensorflow
CXX_ABI: 1
Using local nGraph source in directory /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph/src/ext_ngraph
Source location: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph/src/ext_ngraph
Running COMMAND: cmake -DNGRAPH_INSTALL_PREFIX=/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts -DNGRAPH_USE_CXX_ABI=1 -DNGRAPH_DEX_ONLY=TRUE -DNGRAPH_DEBUG_ENABLE=NO -DNGRAPH_UNIT_TEST_ENABLE=NO -DNGRAPH_TARGET_ARCH=native -DNGRAPH_TUNE_ARCH=native -DNGRAPH_TBB_ENABLE=FALSE -DNGRAPH_DISTRIBUTED_ENABLE=OFF -DNGRAPH_TOOLS_ENABLE=YES -DNGRAPH_GPU_ENABLE=NO -DNGRAPH_PLAIDML_ENABLE=NO -DNGRAPH_INTELGPU_ENABLE=NO /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph/src/ext_ngraph
Running COMMAND: make -j20
Running COMMAND: make install
TF_SRC_DIR: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow
Loading virtual environment from: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3
Source location: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf
OUTPUT WHL FILE: ngraph_tensorflow_bridge-0.22.0rc3-py2.py3-none-manylinux1_x86_64.whl
OUTPUT WHL DST: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/ngraph_tensorflow_bridge-0.22.0rc3-py2.py3-none-manylinux1_x86_64.whl
SUCCESSFULLY generated wheel: ngraph_tensorflow_bridge-0.22.0rc3-py2.py3-none-manylinux1_x86_64.whl
PWD: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake
Running COMMAND: cp -r /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/python /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/tensorflow
Loading virtual environment from: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3
Version information
TensorFlow version: 1.14.0
C Compiler version used in building TensorFlow: 7.5.0
nGraph bridge version: b'0.22.0-rc3'
nGraph version used for this build: b'0.28.0-rc.1+d2cd873'
TensorFlow version used for this build: v1.14.0-0-g87989f6959
CXX11_ABI flag used for this build: 1
nGraph bridge built with Grappler: True
nGraph bridge built with Variables and Optimizers Enablement: False
nGraph bridge built with Distributed Build: 0
Build successful
cd /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf && /usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake -E touch /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build
[100%] Performing install step for 'ext_ngraph_tf'
cd /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf && ln -fs /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3 /home/dockuser/he-transformer-test/BUILD-GCC/external
cd /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf && /usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake -E touch /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-install
[100%] Completed 'ext_ngraph_tf'
/usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake -E make_directory /home/dockuser/he-transformer-test/BUILD-GCC/CMakeFiles
/usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake -E touch /home/dockuser/he-transformer-test/BUILD-GCC/CMakeFiles/ext_ngraph_tf-complete
/usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake -E touch /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-done
make[2]: Leaving directory '/home/dockuser/he-transformer-test/BUILD-GCC'
[100%] Built target ext_ngraph_tf
make[1]: Leaving directory '/home/dockuser/he-transformer-test/BUILD-GCC'
/usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake -E cmake_progress_start /home/dockuser/he-transformer-test/BUILD-GCC/CMakeFiles 0
make -f CMakeFiles/Makefile2 preinstall
make[1]: Entering directory '/home/dockuser/he-transformer-test/BUILD-GCC'
make[1]: Nothing to be done for 'preinstall'.
make[1]: Leaving directory '/home/dockuser/he-transformer-test/BUILD-GCC'
Install the project...
/usr/local/lib/python3.6/dist-packages/cmake/data/bin/cmake -P cmake_install.cmake
-- Install configuration: "RelWithDebInfo"
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libdnnl.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libngraph_test_util.a
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libngraph.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libmklml_intel.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libcpu_backend.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libiomp5.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libngraph_bridge.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libnop_backend.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libinterpreter_backend.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libngraph_bridge_device.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libgcpu_backend.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/include/ngraph_backend_manager.h
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/include/ngraph_log.h
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/include/ngraph/version.hpp
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libz.so.1.2.11
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libz.so.1
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libz.so
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/zlibstatic.dir
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/minigzip.dir
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/minigzip.dir/test
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/CheckTypeSize
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/example.dir
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/example.dir/test
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/minigzip64.dir
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/minigzip64.dir/test
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/3.18.2
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/3.18.2/CompilerIdC
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/3.18.2/CompilerIdC/tmp
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/CMakeTmp
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/zlib.dir
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/example64.dir
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/CMakeFiles/example64.dir/test
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libz.a
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libhe_seal_backend.so
-- Set runtime path of "/home/dockuser/he-transformer-test/BUILD-GCC/external/lib/libhe_seal_backend.so" to "$ORIGIN"
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libhe_seal_backend.so
-- Set runtime path of "/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libhe_seal_backend.so" to "$ORIGIN"
-- Installing: /home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/lib/libhe_seal_backend.so
-- Set runtime path of "/home/dockuser/he-transformer-test/BUILD-GCC/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/artifacts/lib/libhe_seal_backend.so" to "$ORIGIN"
-- Installing: /usr/local/lib/libhe_seal_backend.so
CMake Error at src/cmake_install.cmake:153 (file):
file INSTALL cannot copy file "/home/dockuser/he-transformer-test/BUILD-GCC/src/libhe_seal_backend.so" to "/usr/local/lib/libhe_seal_backend.so": Success.
Call Stack (most recent call first):
cmake_install.cmake:83 (include)

Makefile:104: recipe for target 'install' failed
make: *** [install] Error 1
make: *** [Makefile:157: build_gcc] Error 2
$USER@equus:~/Documents/he-transformer/contrib/docker$ ls
build_docker_image.sh Dockerfile.he_transformer.fedora28 Makefile
build-he-transformer-and-test.sh Dockerfile.he_transformer.ubuntu1604 ngraph-tf.cmake
CMakeLists.txt Dockerfile.he_transformer.ubuntu1804 README.md
docker_cleanup.sh fix_numpy_for_tf.patch run_as_centos_user.sh
Dockerfile make-dimage.sh run_as_fedora_user.sh
Dockerfile.he_transformer.centos74 make_docker_image.sh run_as_ubuntu_user.sh
$USER@equus:~/Documents/he-transformer/contrib/docker$

There was a problem when I installed; what should I do to solve it?

Starting local Bazel server and connecting to it...
INFO: An error occurred during the fetch of repository 'curl'
INFO: Call stack for the definition of repository 'curl':

  • /home/lol/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/workspace.bzl:474:5
  • /home/lol/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/WORKSPACE:94:1
    ERROR: /home/lol/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow/tensorflow/core/platform/cloud/BUILD:108:1: no such package '@curl//': index out of range (index is 0, but sequence has 0 elements) and referenced by '//tensorflow/core/platform/cloud:curl_http_request'
    ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: no such package '@curl//': index out of range (index is 0, but sequence has 0 elements)
    INFO: Elapsed time: 10.983s
    INFO: 0 processes.
    FAILED: Build did NOT complete successfully (299 packages loaded, 7016 targets
    configured)
    Traceback (most recent call last):
    File "/home/lol/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 524, in
    main()
    File "/home/lol/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 328, in main
    target_arch, verbosity)
    File "/home/lol/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/tools/build_utils.py", line 231, in build_tensorflow
    command_executor(cmd)
    File "/home/lol/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/tools/build_utils.py", line 60, in command_executor
    raise Exception("Error running command: " + cmd)
    Exception: Error running command: bazel build --config=opt --config=noaws --config=nohdfs --config=noignite --config=nokafka --config=nonccl //tensorflow/tools/pip_package:build_pip_package
    CMakeFiles/ext_ngraph_tf.dir/build.make:85: recipe for target 'ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build' failed
    make[2]: *** [ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build] Error 1
    CMakeFiles/Makefile2:339: recipe for target 'CMakeFiles/ext_ngraph_tf.dir/all' failed
    make[1]: *** [CMakeFiles/ext_ngraph_tf.dir/all] Error 2
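
One possible recovery (an assumption, not a confirmed fix): the "index out of range (index is 0, but sequence has 0 elements)" message for '@curl' typically appears after the archive fetch was interrupted, leaving bazel's external-repository state inconsistent, so expunging the cache and re-running the build forces a fresh fetch:

```shell
# Hedged recovery attempt; paths are taken from the log above.
cd /home/lol/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/tensorflow
bazel clean --expunge
# then re-run `make install` from the he-transformer build directory
```

If the fetch keeps failing, check network access to the TensorFlow download mirrors before retrying.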

Build Docker with ABY

To build ABY, g++ version >= 8.4 is required. However, the docker version of Ubuntu 18.04 currently ships with g++ 7.5 by default. Thus, g++-8 should be installed as part of the docker image build.
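
A minimal sketch of what that could look like in the Ubuntu 18.04 Dockerfile (package names as shipped by Ubuntu 18.04; the update-alternatives priority of 80 is an arbitrary choice):

```dockerfile
# Install g++-8 alongside the default 7.5 toolchain and make it the default,
# so ABY's g++ >= 8 requirement is satisfied inside the image.
RUN apt-get update && apt-get install -y gcc-8 g++-8 && \
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 80 && \
    update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-8 80
```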

Moreover, I encountered a few problems while building the library inside docker. The main issue was that the Boost library did not appear to be fully built, which then caused ABY's build to fail. I built the Boost library fully by hand and was then able to build ABY. Finally, I ran the ABY tests and all of them passed; the basic Python example examples/ax.py also worked.
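
For reference, "building Boost fully by hand" can be sketched as follows (the version directory and install prefix are assumptions; adjust them to the source tree the he-transformer build downloaded):

```shell
# Build and install Boost from an unpacked source tree.
cd boost_1_69_0
./bootstrap.sh --prefix=/usr/local
./b2 -j"$(nproc)" install
```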

Build failure when building the docker images

Hi All,

I'm trying to build the docker images in https://github.com/IntelAI/he-transformer/tree/master/contrib/docker on an Ubuntu 16.04 machine. When I run the command make check_gcc, it gives the following error.

Building docker image build_he_transformer_ubuntu1604:2020-05-31T12-42-17+05-30 from Dockerfile /home/kasun/Documents/MSc/Research/he-transformer/contrib/docker/.build-49879416a5b659edc1ca5b31493bc659685ae483_2_ubuntu1604/Dockerfile.build_he_transformer_ubuntu1604, context .
 
invalid argument "build_he_transformer_ubuntu1604:2020-05-31T12-42-17+05-30" for t=build_he_transformer_ubuntu1604:2020-05-31T12-42-17+05-30: Error parsing reference: "build_he_transformer_ubuntu1604:2020-05-31T12-42-17+05-30" is not a valid repository/tag: invalid reference format
See 'docker build --help'.
Makefile:141: recipe for target 'build_docker_image' failed
make: *** [build_docker_image] Error 125

Any idea on a possible cause for this?
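
One likely cause (an assumption based on Docker's tag grammar, which only allows letters, digits, '_', '.', and '-'): the '+05-30' timezone suffix in the generated timestamp is not a legal tag character, so Docker rejects the whole reference. Sanitizing the timestamp before it is used as a tag sidesteps the error:

```shell
RAW_TAG="2020-05-31T12-42-17+05-30"
# Map every character Docker tags disallow to '-'; only [A-Za-z0-9_.-]
# survive, so the '+' that triggers "invalid reference format" is replaced.
SAFE_TAG=$(printf '%s' "$RAW_TAG" | tr -c 'A-Za-z0-9_.-' '-')
echo "$SAFE_TAG"
# -> 2020-05-31T12-42-17-05-30
```

Applying the same substitution where the tag is composed (build_docker_image.sh appears in the directory listing for contrib/docker) should let `make check_gcc` get past this point.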

Problem compiling HE-Transformer

Hello,

I am trying to build HE-transformer for several days now, but I fail to do so.

I tried using several approaches and every approach seems to take around 5 hours, can you please advise me on the best and fastest way to build HE-Transformer?

What I did?

  1. On a Linux Ubuntu 19, x86-64 platform, I followed the steps listed in the README file directly. I tried different compilers (GCC, clang-9/10/12), but the compilation (usually of TensorFlow) always failed.
  2. I tried using the make check_gcc and make check_clang scripts. However, both failed to compile.
  3. I tried using the dockers of MarbleHE/SoK (artifacts of the "SoK: Fully Homomorphic Encryption Tools & Compilers" paper) - no luck. After carefully reading their docker code, I observed that they had hacked the HE-Transformer CMake system. Later on, I noticed in their wiki that:

nGraphHE... does not compile ;)

  4. I tried building the Ubuntu 18.04 docker image on a Linux laptop and also on a Windows laptop with docker support for Linux through a VMM - success on both.
  5. Then I tried again following the README instructions. At first, I received:

CMakeFiles/ext_boost.dir/build.make:97: recipe for target 'boost/src/ext_boost-stamp/ext_boost-download' failed
make[2]: *** [boost/src/ext_boost-stamp/ext_boost-download] Error 1
CMakeFiles/Makefile2:420: recipe for target 'CMakeFiles/ext_boost.dir/all' failed
make[1]: *** [CMakeFiles/ext_boost.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
Switched to a new branch '3.4.5'

I reran make install and received

CXX google/protobuf/compiler/csharp/csharp_field_base.lo
CXX google/protobuf/compiler/csharp/csharp_repeated_message_field.lo
error: RPC failed; curl 56 GnuTLS recv error (-9): A TLS packet with unexpected length was received.
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
Traceback (most recent call last):
File "/home/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 525, in
main()
File "/home/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_ngtf.py", line 324, in main
tf_version)
File "/home/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/tools/build_utils.py", line 484, in download_repo
os.chdir(target_name)
FileNotFoundError: [Errno 2] No such file or directory: 'tensorflow'
CMakeFiles/ext_ngraph_tf.dir/build.make:85: recipe for target 'ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build' failed
make[2]: *** [ext_ngraph_tf/src/ext_ngraph_tf-stamp/ext_ngraph_tf-build] Error 1
CMakeFiles/Makefile2:212: recipe for target 'CMakeFiles/ext_ngraph_tf.dir/all' failed
make[1]: *** [CMakeFiles/ext_ngraph_tf.dir/all] Error 2

Rerunning make install again results in:

[ 40%] Completed 'ext_ngraph'
[ 40%] Built target ext_ngraph
Makefile:135: recipe for target 'all' failed
make: *** [all] Error 2

In the next attempts I always got the same error:

ALPN, server did not agree to a protocol
Server certificate:
subject: CN=*.bintray.com
start date: Sep 26 00:00:00 2019 GMT
expire date: Nov 9 12:00:00 2021 GMT
subjectAltName: host "dl.bintray.com" matched cert's "*.bintray.com"
issuer: C=US; O=DigiCert Inc; OU=www.digicert.com; CN=GeoTrust RSA CA 2018
SSL certificate verify ok.
[5 bytes data]
GET /boostorg/release/1.69.0/source/boost_1_69_0.tar.gz HTTP/1.1
Host: dl.bintray.com
User-Agent: curl/7.75.0
Accept: */*
[5 bytes data]
Mark bundle as not supporting multiuse
HTTP/1.1 403 Forbidden
Server: nginx
Date: Mon, 24 May 2021 16:32:21 GMT
Content-Type: text/plain
Content-Length: 10
Connection: keep-alive
ETag: "5c408590-a"
The requested URL returned error: 403
Closing connection 0
CMakeFiles/ext_boost.dir/build.make:97: recipe for target 'boost/src/ext_boost-stamp/ext_boost-download' failed
make[2]: *** [boost/src/ext_boost-stamp/ext_boost-download] Error 1
CMakeFiles/Makefile2:420: recipe for target 'CMakeFiles/ext_boost.dir/all' failed
make[1]: *** [CMakeFiles/ext_boost.dir/all] Error 2
Makefile:135: recipe for target 'all' failed
make: *** [all] Error 2
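
The repeated 403 is not transient: dl.bintray.com was decommissioned in 2021, so the Boost 1.69.0 URL baked into the ext_boost download step will never succeed again. A sketch of a workaround, assuming the Boost project's archive host still serves the release (verify the host before patching the URL used by the ext_boost external project):

```shell
BOOST_VERSION=1.69.0
# Boost tarball names replace dots with underscores, e.g. boost_1_69_0.tar.gz
TARBALL="boost_$(printf '%s' "$BOOST_VERSION" | tr '.' '_').tar.gz"
URL="https://archives.boost.io/release/${BOOST_VERSION}/source/${TARBALL}"
echo "$URL"
# fetch manually with: curl -fLO "$URL"
```

Once downloaded, either update the URL in the ext_boost download step or point it at the local tarball.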

I would really appreciate help on how to proceed.

Thanks,
Nir Drucker

Build failure because of the 'ext_protobuf' extension

I got the following error while building he-transformer. It seems to affect the 'ext_protobuf' external project. Any suggestions on how to fix it?

user1@ubuntu:~/nGraph/he-transformer/build$ make install
[  5%] Built target ext_ngraph
[  9%] Built target ext_zlib
[ 13%] Built target ext_ngraph_tf
[ 18%] Built target ext_seal
[ 18%] Performing configure step for 'ext_protobuf'
+ mkdir -p third_party/googletest/m4
+ autoreconf -f -i -Wall,no-obsolete
./autogen.sh: 37: ./autogen.sh: autoreconf: not found
CMakeFiles/ext_protobuf.dir/build.make:107: recipe for target 'protobuf/stamp/ext_protobuf-configure' failed
make[2]: *** [protobuf/stamp/ext_protobuf-configure] Error 127
CMakeFiles/Makefile2:194: recipe for target 'CMakeFiles/ext_protobuf.dir/all' failed
make[1]: *** [CMakeFiles/ext_protobuf.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
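
"autoreconf: not found" means the GNU autotools that protobuf's autogen.sh shells out to are not installed. On Ubuntu the usual fix (package names for 16.04/18.04) is:

```shell
sudo apt-get update
# autoconf provides autoreconf; automake and libtool are also used by
# protobuf's autogen.sh.
sudo apt-get install -y autoconf automake libtool
```

After installing, re-run `make install` from the he-transformer build directory so the ext_protobuf configure step can get past autogen.sh.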

I have tried to ignore the problem above, but it's preventing the examples from running.
I get the same error when using the --backend=HE_SEAL option.

(venv-tf-py3) user1@ubuntu:~/nGraph/he-transformer$ python examples/ax.py --backend=CPU
[...]
config graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
    min_graph_nodes: -1
    custom_optimizers {
      name: "ngraph-optimizer"
      parameter_map {
        key: "device_id"
        value {
          s: ""
        }
      }
      parameter_map {
        key: "enable_client"
        value {
          s: "False"
        }
      }
      parameter_map {
        key: "encryption_parameters"
        value {
          s: ""
        }
      }
      parameter_map {
        key: "ngraph_backend"
        value {
          s: "CPU"
        }
      }
    }
  }
}

2020-03-29 19:39:10.337638: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2496085000 Hz
2020-03-29 19:39:10.339407: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x29fc300 executing computations on platform Host. Devices:
2020-03-29 19:39:10.339488: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
2020-03-29 19:39:10.346982: E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] NgraphOptimizer failed: Internal: Could not create backend of type CPU. Got exception: Unable to find backend 'CPU' as file '/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libcpu_backend.so'
Open error message '/home/user1/nGraph/he-transformer/build/ext_ngraph_tf/src/ext_ngraph_tf/build_cmake/venv-tf-py3/lib/python3.6/site-packages/ngraph_bridge/libmklml_intel.so: cannot read file data'
2020-03-29 19:39:10.347551: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Result:  [[2. 3. 4. 5.]]
