
tensorflow-on-arm's Introduction

Tensorflow-on-arm

Inspired by tensorflow-on-raspberry-pi. A tool to compile TensorFlow for ARM.

Dependencies

apt-get install openjdk-8-jdk automake autoconf
apt-get install curl zip unzip libtool swig libpng-dev zlib1g-dev pkg-config git g++ wget xz-utils

# For python2.7
apt-get install python-numpy python-dev python-pip python-mock

# If using a virtual environment, omit the --user argument
pip install -U --user keras_applications==1.0.8 --no-deps
pip install -U --user keras_preprocessing==1.1.0 --no-deps

# For python3
apt-get install python3-numpy python3-dev python3-pip python3-mock

# If using a virtual environment, omit the --user argument
pip3 install -U --user keras_applications==1.0.8 --no-deps
pip3 install -U --user keras_preprocessing==1.1.0 --no-deps
pip3 install portpicker
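
To make sure these Python-side dependencies are importable before starting a build, a quick sanity check such as the following can help (a minimal sketch; it only verifies that the packages installed above can be imported):

python3 -c "import numpy, keras_applications, keras_preprocessing; print('build dependencies OK')"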

TensorFlow on Raspberry Pi

It's officially supported!

Python wheels for TensorFlow are officially supported. This repository also maintains up-to-date TensorFlow wheels for Raspberry Pi.

Installation

Check out the official TensorFlow website for more information.
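
For example, on a recent Raspbian image the official package can usually be installed straight from pip (shown as a sketch; availability depends on your Python version and OS release):

pip3 install tensorflow
python3 -c "import tensorflow as tf; print(tf.__version__)"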

Cross-compilation

Make sure you add the ARM architecture to your package manager. Here is how to add it on Debian-based distributions:

dpkg --add-architecture armhf
echo "deb [arch=armhf] http://httpredir.debian.org/debian/ buster main contrib non-free" >> /etc/apt/sources.list

If you want to compile with Python support:

# For python2.7
apt-get install libpython-all-dev:armhf

# For python3
apt-get install libpython3-all-dev:armhf

Using Docker

Python 3.7

cd build_tensorflow/
docker build -t tf-arm -f Dockerfile .
docker run -it -v /tmp/tensorflow_pkg/:/tmp/tensorflow_pkg/ --env TF_PYTHON_VERSION=3.7 tf-arm ./build_tensorflow.sh configs/<conf-name> # rpi.conf, rk3399.conf ...

Python 3.8

cd build_tensorflow/
docker build -t tf-arm -f Dockerfile.bullseye .
docker run -it -v /tmp/tensorflow_pkg/:/tmp/tensorflow_pkg/ --env TF_PYTHON_VERSION=3.8 tf-arm ./build_tensorflow.sh configs/<conf-name> # rpi.conf, rk3399.conf ...

Edit the configuration file to tweak settings such as Bazel resources, the board model, and others.

See configuration file examples in: build_tensorflow/configs/
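
As an illustration only, a board config is a small shell-style file of key=value settings that the build script sources. The key names below are illustrative guesses (CROSSTOOL_NAME does appear in the repository's scripts); consult the files in build_tensorflow/configs/ for the real ones:

# hypothetical config sketch - check configs/rpi.conf etc. for the actual variable names
TF_VERSION="v1.13.1"                  # TensorFlow tag to build (illustrative)
BAZEL_VERSION="0.19.2"                # Bazel release to use (illustrative)
CROSSTOOL_NAME="arm-linux-gnueabihf"  # target triplet of the cross toolchain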

Finally, compile TensorFlow.

cd build_tensorflow/
chmod +x build_tensorflow.sh
TF_PYTHON_VERSION=3.5 ./build_tensorflow.sh <path-of-config> [noclean]
# The optional [noclean] argument omits 'bazel clean' before building for debugging purposes.
# If no output errors, the pip package will be in the directory: /tmp/tensorflow_pkg/
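
Once the build finishes, the wheel from /tmp/tensorflow_pkg/ is copied to the board and installed there. A typical sequence might look like this (the hostname and wheel filename are placeholders):

# on the build host (replace pi@raspberrypi and the wheel name with your own)
scp /tmp/tensorflow_pkg/tensorflow-*.whl pi@raspberrypi:~/
# on the target board
pip3 install --user tensorflow-*.whl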

tensorflow-on-arm's People

Contributors

adrianghc, clemensvonschwerin, droidicus, lhelontra, mattn, photoszzt


tensorflow-on-arm's Issues

Better SoC to run Tensorflow

Hi Leonardo:
Might the RK3399 be a better SoC to run TensorFlow? As far as I know, an RK3399 variant featuring an NPU will be ready soon. So will you move development to the RK3399?

Failed to build bazel on VMware machine with 2GB RAM

The following bash code doesn't work when I try to build bazel-0.10.0 with only 2GB RAM or less:

./compile.sh || {
log_failure_msg "error when compile bazel"
exit 1
}
chmod +x output/bazel
mv output/bazel $BAZEL_BIN

Output is:

Building Bazel from scratch../usr/lib/jvm/java-8-openjdk-amd64/bin/javac -classpath third_party/gson/gson-2.2.4.jar:third_party/plexus_utils/plexus-utils-3.0.21.jar:third_party/hazelcast/hazelcast-client-3.6.4.jar:third_party/hazelcast/hazelcast-3.6.4.jar:third_party/.......
...
The system is out of resources.
Consult the following stack trace for details.
java.lang.OutOfMemoryError: GC overhead limit exceeded
	at com.sun.tools.javac.code.Types.freshTypeVariables(Types.java:4120)
	at com.sun.tools.javac.code.Types.capture(Types.java:4068)
	at com.sun.tools.javac.comp.Infer$InferenceContext.cachedCapture(Infer.java:2318)
	at com.sun.tools.javac.comp.Resolve$MethodResultInfo.check(Resolve.java:1014)
	at com.sun.tools.javac.comp.Resolve$4.checkArg(Resolve.java:835)
	at com.sun.tools.javac.comp.Resolve$AbstractMethodCheck.argumentsAcceptable(Resolve.java:735)
	at com.sun.tools.javac.comp.Resolve$4.argumentsAcceptable(Resolve.java:844)
	at com.sun.tools.javac.comp.Infer.instantiateMethod(Infer.java:162)
	at com.sun.tools.javac.comp.Resolve.rawInstantiate(Resolve.java:567)
	at com.sun.tools.javac.comp.Resolve.checkMethod(Resolve.java:604)
	at com.sun.tools.javac.comp.Attr.checkMethod(Attr.java:3829)
	at com.sun.tools.javac.comp.Attr.checkIdInternal(Attr.java:3616)
	at com.sun.tools.javac.comp.Attr.checkMethodIdInternal(Attr.java:3527)
	at com.sun.tools.javac.comp.Attr.checkMethodId(Attr.java:3502)
	at com.sun.tools.javac.comp.Attr.checkId(Attr.java:3489)
	at com.sun.tools.javac.comp.Attr.visitSelect(Attr.java:3371)
	at com.sun.tools.javac.tree.JCTree$JCFieldAccess.accept(JCTree.java:1897)
	at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
	at com.sun.tools.javac.comp.Attr.visitApply(Attr.java:1825)
	at com.sun.tools.javac.tree.JCTree$JCMethodInvocation.accept(JCTree.java:1465)
	at com.sun.tools.javac.comp.Attr.attribTree(Attr.java:576)
	at com.sun.tools.javac.comp.DeferredAttr$2.complete(DeferredAttr.java:285)
	at com.sun.tools.javac.comp.DeferredAttr$DeferredType.check(DeferredAttr.java:246)
	at com.sun.tools.javac.comp.DeferredAttr$DeferredType.check(DeferredAttr.java:233)
	at com.sun.tools.javac.comp.Resolve$MethodResultInfo.check(Resolve.java:1008)
	at com.sun.tools.javac.comp.Resolve$4.checkArg(Resolve.java:835)
	at com.sun.tools.javac.comp.Resolve$AbstractMethodCheck.argumentsAcceptable(Resolve.java:735)
	at com.sun.tools.javac.comp.Resolve$4.argumentsAcceptable(Resolve.java:844)
	at com.sun.tools.javac.comp.Resolve.rawInstantiate(Resolve.java:579)
	at com.sun.tools.javac.comp.Resolve.checkMethod(Resolve.java:604)
	at com.sun.tools.javac.comp.Attr.checkMethod(Attr.java:3829)
	at com.sun.tools.javac.comp.Attr.checkIdInternal(Attr.java:3616)
chmod: cannot access 'output/bazel': No such file or directory
mv: cannot stat 'output/bazel': No such file or directory

Here is my patch code:

CompileError: command 'arm-linux-gnueabihf-gcc' failed with exit status 4

I'm trying to install on a Pi 3B+ and getting an error. I tried to manually install grpcio but it didn't fix the problem. Here is the full error.

CompileError: command 'arm-linux-gnueabihf-gcc' failed with exit status 4



----------------------------------------
  Rolling back uninstall of grpcio
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-KA5EiS/grpcio/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-Icbf3Z-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip-build-KA5EiS/grpcio

ImportError: /lib/arm-linux-gnueabihf/libm.so.6: version `GLIBC_2.23' not found

I downloaded tensorflow-1.5.0-cp27-none-linux_armv7l.whl to install on a Pi 3, but I get an error: ImportError: /lib/arm-linux-gnueabihf/libm.so.6: version `GLIBC_2.23' not found (required by /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so)

The error says it can't find version 'GLIBC_2.23', which means I need to upgrade my GLIBC.

Please help me figure out how to upgrade it.
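
One way to check which glibc a system actually ships before picking a wheel is the standard version query (a generic check, not specific to this project); for reference, Raspbian Jessie ships glibc 2.19 while Stretch ships 2.24:

ldd --version   # the first line reports the glibc version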

Two exception errors when installing tensorflow

This error occurs when I run the "pip3 install tensorflow" command:

Exception:
Traceback (most recent call last):
  File "/usr/share/python-wheels/urllib3-1.19.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 594, in urlopen
    chunked=chunked)
  File "/usr/share/python-wheels/urllib3-1.19.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 391, in _make_request
    six.raise_from(e, None)
  File "", line 2, in raise_from
  File "/usr/share/python-wheels/urllib3-1.19.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 387, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib/python3.5/http/client.py", line 1198, in getresponse
    response.begin()
  File "/usr/lib/python3.5/http/client.py", line 297, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.5/http/client.py", line 266, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 353, in run
    wb.build(autobuilding=True)
  File "/usr/lib/python3/dist-packages/pip/wheel.py", line 749, in build
    self.requirement_set.prepare_files(self.finder)
  File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 380, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 554, in _prepare_file
    require_hashes
  File "/usr/lib/python3/dist-packages/pip/req/req_install.py", line 278, in populate_link
    self.link = finder.find_requirement(self, upgrade)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 465, in find_requirement
    all_candidates = self.find_all_candidates(req.name)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 423, in find_all_candidates
    for page in self._get_pages(url_locations, project_name):
  File "/usr/lib/python3/dist-packages/pip/index.py", line 568, in _get_pages
    page = self._get_page(location)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 683, in _get_page
    return HTMLPage.get_page(link, session=self.session)
  File "/usr/lib/python3/dist-packages/pip/index.py", line 792, in get_page
    "Cache-Control": "max-age=600",
  File "/usr/share/python-wheels/requests-2.12.4-py2.py3-none-any.whl/requests/sessions.py", line 501, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python3/dist-packages/pip/download.py", line 386, in request
    return super(PipSession, self).request(method, url, *args, **kwargs)
  File "/usr/share/python-wheels/requests-2.12.4-py2.py3-none-any.whl/requests/sessions.py", line 488, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/share/python-wheels/requests-2.12.4-py2.py3-none-any.whl/requests/sessions.py", line 609, in send
    r = adapter.send(request, **kwargs)
  File "/usr/share/python-wheels/CacheControl-0.11.7-py2.py3-none-any.whl/cachecontrol/adapter.py", line 47, in send
    resp = super(CacheControlAdapter, self).send(request, **kw)
  File "/usr/share/python-wheels/requests-2.12.4-py2.py3-none-any.whl/requests/adapters.py", line 423, in send
    timeout=timeout
  File "/usr/share/python-wheels/urllib3-1.19.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 643, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/usr/share/python-wheels/urllib3-1.19.1-py2.py3-none-any.whl/urllib3/util/retry.py", line 315, in increment
    total -= 1
TypeError: unsupported operand type(s) for -=: 'Retry' and 'int'

Error Regarding _FusedConv2D on ARMv6/Pi Zero

Hi there. I have successfully installed v1.13.1 on a Pi Zero with the armv6l.whl. I also have Keras installed and ran an MNIST MLP example correctly with the KerasMNIST repo from EN10. But when I tried the CNN version of MNIST, an error regarding _FusedConv2D popped up. A bit of research suggests that the error is related to the architecture. I have cross-checked with a Pi 3 and verified that the repo works on armv7.
According to the comment in the second link, cross-compiling with some extra setup will make it work. I was wondering if it is possible to release a new wheel with this feature?

Tensorboard

I built Tensorflow 1.6.0 for arm (Odroid UX4Q) successfully. However, when I am running:

pip2 install --install-option="--prefix=$prefix" /tmp/tensorflow_pkg/tensorflow-1.6.0-cp27-cp27mu-linux_armv7l.whl --upgrade --ignore-installed

I am getting the following error message:

Could not find a version that satisfies the requirement tensorboard<1.7.0,>=1.6.0 (from tensorflow==1.6.0) (from versions: ) No matching distribution found for tensorboard<1.7.0,>=1.6.0 (from tensorflow==1.6.0)

Which seems reasonable since there is no tensorboard pip package for arm... Is there a workaround for it?

Thanks!
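
A commonly used workaround (not an official recommendation) is to install the wheel without automatic dependency resolution and then add the dependencies that do have ARM builds by hand:

# skip dependency resolution so the missing tensorboard wheel does not abort the install
pip2 install --no-deps /tmp/tensorflow_pkg/tensorflow-1.6.0-cp27-cp27mu-linux_armv7l.whl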

Raspberry Pi ARMv8 - Install fails for version 1.12

This is fantastic support for installing the latest and greatest tensorflow on the Raspberry Pi. I had a problem installing, though.

(venv) pi@raspberrypi:~/tf $ sudo pip3 install tensorflow-1.12.0-cp35-none-linux_armv7l.whl
tensorflow-1.12.0-cp35-none-linux_armv7l.whl is not a supported wheel on this platform.

After a little more exploration I found that the Pi I have is ARMv8, so I am guessing that is what makes it fail.

This is the Raspberry Pi I am working with:

https://www.amazon.com/Raspberry-Pi-RASPBERRYPI3-MODB-1GB-Model-Motherboard/dp/B01CD5VC92/ref=olp_product_details?_encoding=UTF8&me=
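
When a wheel is rejected as "not a supported wheel on this platform", it usually helps to compare the wheel's tags (e.g. cp35, armv7l) against what the interpreter and kernel actually report; a quick, generic check:

python3 --version   # should match the cpXX tag (cp35 means Python 3.5)
uname -m            # armv6l / armv7l / aarch64 should match the wheel's platform tag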

patch.sh corrupt?

When I look at the ./build_tensorflow/patch.sh file it contains what looks like merge conflicts. Is this correct or is the file somehow corrupt from a previous merge?

For example at line 29 I see:

  local CROSSTOOL_VERSION=$($CROSSTOOL_DIR/bin/$CROSSTOOL_NAME-gcc -dumpversion)
  git apply << EOF
diff --git a/BUILD.local_arm_compiler b/BUILD.local_arm_compiler
new file mode 100644
index 000000000..e5d8cc384
+++ b/BUILD.local_arm_compiler
@@ -0,0 +1,81 @@
+package(default_visibility = ['//visibility:public'])
+

The rest of the file contains more merge annotations like this. I have looked at the file history and all previous versions of the file contain this as well.

On Tegra TK1

I want to know whether these binaries will work on the Jetson Tegra TK1 board.

tensorflow uses 2 times more memory on aarch64

I installed tensorflow-1.13.1-cp35-none-linux_aarch64.whl on aws a1.4xlarge instance (ubuntu16) and on firefly board (RK3399).
I downloaded the wheel from https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.13.1/tensorflow-1.13.1-cp35-none-linux_aarch64.whl

I tried to run resnet50 model (1,224,224,3)
memory usage was 1.6-1.7 GB

I also tried to run the same resnet50 model with official TF wheel for Raspberry pi or linux_x86. Memory usage was only 620-680MB

Raspberry pi wheel
https://www.piwheels.org/simple/tensorflow/tensorflow-1.13.1-cp35-none-linux_armv7l.whl

What I noticed is that the official Raspberry Pi wheel uses 650 MB after I load the model and does not use any extra memory to run it.

But the lhelontra wheel uses 700 MB after I load the model, and it allocates another 1 GB after the first run of the model. After the second and subsequent runs the memory usage stays the same - 1.8 GB.

Raspberry pi official wheel

(224, 224, 3) panda.jpg
peak memory usage (bytes on OS X, kilobytes on Linux) 128920
Loading frozen model: resnet50_frozen.pb ....
WARNING:tensorflow:From ./run-tf.py:85: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
<class 'tensorflow.core.framework.graph_pb2.GraphDef'>
peak memory usage (bytes on OS X, kilobytes on Linux) 642516
peak memory usage (bytes on OS X, kilobytes on Linux) 642516
input_tensor_names: ['aimport/input_1:0']
output_tensor_names: {'aimport/fc1000/Softmax:0'}
Tensor("aimport/input_1:0", shape=(?, 224, 224, 3), dtype=float32)
input_shape: (?, 224, 224, 3)
Tensor("aimport/fc1000/Softmax:0", shape=(?, 1000), dtype=float32)
sess.run...
sess.run done
duration 6,654 ms
peak memory usage (bytes on OS X, kilobytes on Linux) 642516
sess.run...
sess.run done
duration 1,117 ms
peak memory usage (bytes on OS X, kilobytes on Linux) 642516
sess.run...
sess.run done
duration 1,103 ms
peak memory usage (bytes on OS X, kilobytes on Linux) 642516
1
(1, 1000)
panda.jpg - 388, 0.9995660185813904, giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
peak memory usage (bytes on OS X, kilobytes on Linux) 642516

lhelontra wheel

(224, 224, 3) panda.jpg
peak memory usage (bytes on OS X, kilobytes on Linux) 179216
Loading frozen model: resnet50_frozen.pb ....
WARNING:tensorflow:From ./run-tf.py:85: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
<class 'tensorflow.core.framework.graph_pb2.GraphDef'>
peak memory usage (bytes on OS X, kilobytes on Linux) 727996
input_tensor_names: ['aimport/input_1:0']
output_tensor_names: {'aimport/fc1000/Softmax:0'}
Tensor("aimport/input_1:0", shape=(?, 224, 224, 3), dtype=float32)
input_shape: (?, 224, 224, 3)
Tensor("aimport/fc1000/Softmax:0", shape=(?, 1000), dtype=float32)
peak memory usage (bytes on OS X, kilobytes on Linux) 727996
sess.run...
sess.run done
duration 4,225 ms
peak memory usage (bytes on OS X, kilobytes on Linux) 1814044
sess.run...
sess.run done
duration 135 ms
peak memory usage (bytes on OS X, kilobytes on Linux) 1814044
sess.run...
sess.run done
duration 139 ms
peak memory usage (bytes on OS X, kilobytes on Linux) 1814044
1
(1, 1000)
panda.jpg - 388, 0.9995660185813904, giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca
peak memory usage (bytes on OS X, kilobytes on Linux) 1814044

Compilation for RPi fails when using Docker on Ubuntu 16.04

I've tried building the tensorflow wheel file for ARM using the Docker image and following the README.md. The setup works just fine and the build starts; however, the build procedure ultimately fails (full error below).

The bazel version used by the Docker image is 0.15. I'm using Ubuntu 16.04 running as a guest in VirtualBox.

ERROR: /root/tensorflow-on-arm/build_tensorflow/sources/tensorflow/tensorflow/BUILD:592:1: Executing genrule //tensorflow:tensorflow_python_api_gen failed (Exit 1): bash failed: error executing command (cd /root/.cache/bazel/_bazel_root/ce71fd092aa8d5457c1f0e68f2d49c52/execroot/org_tensorflow && \ exec env - \ ERROR: /root/tensorflow-on-arm/build_tensorflow/sources/tensorflow/tensorflow/BUILD:592:1: Executing genrule //tensorflow:tensorflow_python_api_gen failed (Exit 1): bash failed: error executing command (cd /root/.cache/bazel/_bazel_root/ce71fd092aa8d5457c1f0e68f2d49c52/execroot/org_tensorflow && \ exec env - \ PATH=/root/tensorflow-on-arm/build_tensorflow/sources//bin/:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \ /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; bazel-out/host/bin/tensorflow/create_tensorflow.python_api --root_init_template=tensorflow/api_template.__init__.py --apidir=bazel-out/host/genfiles/tensorflow --apiname=tensorflow --apiversion=1 --package=tensorflow.python --output_package=tensorflow bazel-out/host/genfiles/tensorflow/__init__.py bazel-out/host/genfiles/tensorflow/app/__init__.py bazel-out/host/genfiles/tensorflow/bitwise/__init__.py bazel-out/host/genfiles/tensorflow/compat/__init__.py bazel-out/host/genfiles/tensorflow/data/__init__.py bazel-out/host/genfiles/tensorflow/debugging/__init__.py bazel-out/host/genfiles/tensorflow/distributions/__init__.py bazel-out/host/genfiles/tensorflow/dtypes/__init__.py bazel-out/host/genfiles/tensorflow/errors/__init__.py bazel-out/host/genfiles/tensorflow/feature_column/__init__.py bazel-out/host/genfiles/tensorflow/gfile/__init__.py bazel-out/host/genfiles/tensorflow/graph_util/__init__.py bazel-out/host/genfiles/tensorflow/image/__init__.py bazel-out/host/genfiles/tensorflow/io/__init__.py bazel-out/host/genfiles/tensorflow/initializers/__init__.py bazel-out/host/genfiles/tensorflow/keras/__init__.py bazel-out/host/genfiles/tensorflow/keras/activations/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/densenet/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/inception_resnet_v2/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/inception_v3/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/mobilenet/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/mobilenet_v2/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/nasnet/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/resnet50/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/vgg16/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/vgg19/__init__.py bazel-out/host/genfiles/tensorflow/keras/applications/xception/__init__.py bazel-out/host/genfiles/tensorflow/keras/backend/__init__.py bazel-out/host/genfiles/tensorflow/keras/callbacks/__init__.py bazel-out/host/genfiles/tensorflow/keras/constraints/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/boston_housing/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/cifar10/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/cifar100/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/fashion_mnist/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/imdb/__init__.py bazel-out/host/genfiles/tensorflow/keras/datasets/mnist/__init__.py 
bazel-out/host/genfiles/tensorflow/keras/datasets/reuters/__init__.py bazel-out/host/genfiles/tensorflow/keras/estimator/__init__.py bazel-out/host/genfiles/tensorflow/keras/initializers/__init__.py bazel-out/host/genfiles/tensorflow/keras/layers/__init__.py bazel-out/host/genfiles/tensorflow/keras/losses/__init__.py bazel-out/host/genfiles/tensorflow/keras/metrics/__init__.py bazel-out/host/genfiles/tensorflow/keras/models/__init__.py bazel-out/host/genfiles/tensorflow/keras/optimizers/__init__.py bazel-out/host/genfiles/tensorflow/keras/preprocessing/__init__.py bazel-out/host/genfiles/tensorflow/keras/preprocessing/image/__init__.py bazel-out/host/genfiles/tensorflow/keras/preprocessing/sequence/__init__.py bazel-out/host/genfiles/tensorflow/keras/preprocessing/text/__init__.py bazel-out/host/genfiles/tensorflow/keras/regularizers/__init__.py bazel-out/host/genfiles/tensorflow/keras/utils/__init__.py bazel-out/host/genfiles/tensorflow/keras/wrappers/__init__.py bazel-out/host/genfiles/tensorflow/keras/wrappers/scikit_learn/__init__.py bazel-out/host/genfiles/tensorflow/layers/__init__.py bazel-out/host/genfiles/tensorflow/linalg/__init__.py bazel-out/host/genfiles/tensorflow/logging/__init__.py bazel-out/host/genfiles/tensorflow/losses/__init__.py bazel-out/host/genfiles/tensorflow/manip/__init__.py bazel-out/host/genfiles/tensorflow/math/__init__.py bazel-out/host/genfiles/tensorflow/metrics/__init__.py bazel-out/host/genfiles/tensorflow/nn/__init__.py bazel-out/host/genfiles/tensorflow/nn/rnn_cell/__init__.py bazel-out/host/genfiles/tensorflow/profiler/__init__.py bazel-out/host/genfiles/tensorflow/python_io/__init__.py bazel-out/host/genfiles/tensorflow/quantization/__init__.py bazel-out/host/genfiles/tensorflow/resource_loader/__init__.py bazel-out/host/genfiles/tensorflow/strings/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/builder/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/constants/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/loader/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/main_op/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/signature_constants/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/signature_def_utils/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/tag_constants/__init__.py bazel-out/host/genfiles/tensorflow/saved_model/utils/__init__.py bazel-out/host/genfiles/tensorflow/sets/__init__.py bazel-out/host/genfiles/tensorflow/sparse/__init__.py bazel-out/host/genfiles/tensorflow/spectral/__init__.py bazel-out/host/genfiles/tensorflow/summary/__init__.py bazel-out/host/genfiles/tensorflow/sysconfig/__init__.py bazel-out/host/genfiles/tensorflow/test/__init__.py bazel-out/host/genfiles/tensorflow/train/__init__.py bazel-out/host/genfiles/tensorflow/train/queue_runner/__init__.py bazel-out/host/genfiles/tensorflow/user_ops/__init__.py') Traceback (most recent call last): File "/root/.cache/bazel/_bazel_root/ce71fd092aa8d5457c1f0e68f2d49c52/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/tools/api/generator/create_python_api.py", line 27, in <module> from tensorflow.python.tools.api.generator import doc_srcs File "/root/.cache/bazel/_bazel_root/ce71fd092aa8d5457c1f0e68f2d49c52/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/__init__.py", line 49, in <module> from 
tensorflow.python import pywrap_tensorflow File "/root/.cache/bazel/_bazel_root/ce71fd092aa8d5457c1f0e68f2d49c52/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/pywrap_tensorflow.py", line 74, in <module> raise ImportError(msg) ImportError: Traceback (most recent call last): File "/root/.cache/bazel/_bazel_root/ce71fd092aa8d5457c1f0e68f2d49c52/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/pywrap_tensorflow.py", line 58, in <module> from tensorflow.python.pywrap_tensorflow_internal import * File "/root/.cache/bazel/_bazel_root/ce71fd092aa8d5457c1f0e68f2d49c52/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module> _pywrap_tensorflow_internal = swig_import_helper() File "/root/.cache/bazel/_bazel_root/ce71fd092aa8d5457c1f0e68f2d49c52/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) ImportError: /root/.cache/bazel/_bazel_root/ce71fd092aa8d5457c1f0e68f2d49c52/execroot/org_tensorflow/bazel-out/host/bin/tensorflow/create_tensorflow.python_api.runfiles/org_tensorflow/tensorflow/python/_pywrap_tensorflow_internal.so: undefined symbol: PyUnicode_InternFromString

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.

Target //tensorflow/tools/pip_package:build_pip_package failed to build
INFO: Elapsed time: 9795.185s, Critical Path: 3274.69s
INFO: 7633 processes: 7633 local.
FAILED: Build did NOT complete successfully

support for newer debians? ( please )

Hi,
This is a great project. I actually thought of doing this and then my friend told me to see if it already exists. So it seems like you saved me a whole bunch of time.

I am using Ubuntu 16.04 (and a Tinker Board, which has a similar architecture to the Pi). It gives me an error about the GCC version that TensorFlow was compiled with. From some research on the web I found that the issue is that the whl was compiled on an older version of Debian.

If you can, it would be very nice to also perform the same procedure on a Raspberry Pi running Ubuntu 16.04 (you can do this with a chroot if needed). Ubuntu 16.04 LTS is by far the most prevalent Linux distro (until 18.04 takes over), so it will hopefully be relevant for a lot of users.

Thanks in advance,
Dan

Illegal Instruction error when load trained model

I tried installing tensorflow 1.4, 1.5, 1.8, and 1.9 on my ARTIK 530 board. When I load a trained model from another PC (trained with tensorflow-GPU):

from keras.models import load_model
import numpy as np
model = load_model('./keras_trained_model.h5')

It always reported:

Using TensorFlow backend.
Illegal instruction

Any ideas?

Kevin

Pre-built debian docker image with all the dependencies installed

Install docker on any 64-bit Linux host OS:

wget -qO- https://get.docker.com/ | sh
sudo usermod -aG docker $USER
sudo systemctl enable docker
sudo systemctl restart docker
# Reboot to make sure the Unix group membership in /etc/groups is configured for new logins

Pull the image:

docker pull daocloud.io/liuqun1986/tensorflow-on-arm

Start a container for cross-building:

mkdir -p /tmp/userconfigs /tmp/output_tmp

docker run -it --name my_container_name \
    -v /tmp/userconfigs:/root/userconfigs \
    -v /tmp/output_tmp:/tmp/ \
    daocloud.io/liuqun1986/tensorflow-on-arm:latest \
    /bin/bash

Inside the container:

cd /root
./build_tensorflow.sh /root/configs/rpi.conf
# or
./build_tensorflow.sh /root/userconfigs/xxx.conf

Original error was: libf77blas.so.3: cannot open shared object file: No such file or directory

Hi @lhelontra, you did great work. I tried your tensorflow-1.5.0-cp35-none-linux_armv7l.whl on a fresh 2018-03-13-raspbian-stretch-lite. The installation works like a charm, but when I import tensorflow there is an error; here is the log:

pi@raspberrypi:~ $ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170124] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/numpy/core/__init__.py", line 16, in <module>
    from . import multiarray
ImportError: libf77blas.so.3: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/__init__.py", line 47, in <module>
    import numpy as np
  File "/usr/local/lib/python3.5/dist-packages/numpy/__init__.py", line 142, in <module>
    from . import add_newdocs
  File "/usr/local/lib/python3.5/dist-packages/numpy/add_newdocs.py", line 13, in <module>
    from numpy.lib import add_newdoc
  File "/usr/local/lib/python3.5/dist-packages/numpy/lib/__init__.py", line 8, in <module>
    from .type_check import *
  File "/usr/local/lib/python3.5/dist-packages/numpy/lib/type_check.py", line 11, in <module>
    import numpy.core.numeric as _nx
  File "/usr/local/lib/python3.5/dist-packages/numpy/core/__init__.py", line 26, in <module>
    raise ImportError(msg)
ImportError:
Importing the multiarray numpy extension module failed.  Most
likely you are trying to import a failed build of numpy.
If you're working with a numpy git repo, try `git clean -xdf` (removes all
files not under version control).  Otherwise reinstall numpy.

Original error was: libf77blas.so.3: cannot open shared object file: No such file or directory

I found the solution after searching Google: for libf77blas.so.3, I have to install ATLAS:

sudo apt-get install libatlas-base-dev

Now there is no error when importing tensorflow.

Thank you.
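
For anyone hitting the same error, a quick sanity check after installing the ATLAS package (a generic verification, nothing project-specific):

python3 -c "import numpy; import tensorflow as tf; print(tf.__version__)"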

Add link to official tensorflow wheels

Hi - thanks for going to the effort of building tensorflow for Raspberry Pi.

The TF team now officially supports the Raspberry Pi and has released wheels to piwheels.org (we automate building all packages on PyPI and provide a repo of ARM wheels for the Raspberry Pi).

See the announcements here:

So I just thought perhaps you could add a note to the README that TF is officially supported now and you can install it with pip install tensorflow.

Thanks again

Dockerfile is failing to build on branch v1.12.0

I am trying to build tensorflow v1.12.0 using the docker container on a VM (Ubuntu 16.04). I was successful multiple times in the past. Recently (today) I deleted the docker image tf-arm:latest that I was using previously and tried to make a clean build of tensorflow for the Raspberry Pi, but unfortunately it fails with the error below. I am not sure if this is temporary server downtime or something that needs to be addressed permanently.

Err:9 http://httpredir.debian.org/debian jessie/main arm64 Packages
  404  Not Found
Ign:9 http://cdn-fastly.deb.debian.org/debian jessie/main arm64 Packages
Ign:16 http://cdn-fastly.deb.debian.org/debian jessie/non-free arm64 Packages
Ign:17 http://cdn-fastly.deb.debian.org/debian jessie/contrib arm64 Packages
Fetched 13.9 MB in 3s (3608 kB/s)
Reading package lists...
E: Failed to fetch http://httpredir.debian.org/debian/dists/jessie/main/binary-arm64/Packages  404  Not Found
E: Some index files failed to download. They have been ignored, or old ones used instead.
The command '/bin/sh -c echo "deb http://httpredir.debian.org/debian/ jessie main contrib non-free" > /etc/apt/sources.list.d/jessie.list     && apt-get update && apt-get install -y libpng12-dev     && rm -f /etc/apt/sources.list.d/jessie.list     && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100

Any suggestions as to what can be done to get a clean build with minimum effort?
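
Since Debian Jessie packages were moved off the main mirrors, a commonly suggested workaround (an assumption about your Dockerfile; adjust to taste) is to point that one apt line at the archive instead; the validity check has to be relaxed because the archived Release files are expired:

echo "deb http://archive.debian.org/debian/ jessie main contrib non-free" > /etc/apt/sources.list.d/jessie.list
apt-get -o Acquire::Check-Valid-Until=false update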

libpng12-dev of Debian Jessie no longer exists in Stretch?

Package libpng12-dev is no longer available for Debian 9 (Stretch). Use libpng-dev instead.

$ apt-get update && apt-get install -y \
        openjdk-8-jdk automake autoconf \
        curl zip unzip libtool swig libpng12-dev zlib1g-dev pkg-config git g++ wget xz-utils \
        python3-numpy python3-dev python3-pip python3-mock
Ign:1 http://cdn-fastly.deb.debian.org/debian stretch InRelease
Get:2 http://cdn-fastly.deb.debian.org/debian stretch-updates InRelease [91.0 kB]
Get:3 http://security.debian.org/debian-security stretch/updates InRelease [94.3 kB]
...
Get:17 http://cdn-fastly.deb.debian.org/debian stretch/main armhf Packages [6927 kB]
Reading package lists...
Building dependency tree...
Reading state information...
Package libpng12-dev is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'libpng12-dev' has no installation candidate
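
On Stretch and newer the same dependency line simply uses libpng-dev, as in the Dependencies section above:

apt-get update && apt-get install -y \
        openjdk-8-jdk automake autoconf \
        curl zip unzip libtool swig libpng-dev zlib1g-dev pkg-config git g++ wget xz-utils \
        python3-numpy python3-dev python3-pip python3-mock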

could you support tensorflow-1.13.1-cp36-none-linux_armv7l.whl?

I am using python3.6 with my applications (especially asyncio) and would therefore like to install tensorflow with python3.6.
With your cp35 wheel it works well on my Raspberry Pi 3. Is there a general problem with python3.6 and tensorflow?
On www.piwheels.org I found https://www.piwheels.org/simple/tensorflow/tensorflow-1.13.1-cp36-none-linux_armv7l.whl but I could not install it with pip.
I got these errors:

Building wheels for collected packages: h5py
Building wheel for h5py (setup.py) ... error
ERROR: Complete output from command /home/pi/tf_env/bin/python3.6 -u -c 'import setuptools, tokenize;file='"'"'/tmp/pip-install-fu5ys8p_/h5py/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-oasnci4r --python-tag cp36:

Runtime Warning in tensorflow with Raspberry Pi 1 Model B

When I want to use tensorflow on my Raspberry Pi 1, I have the following problem:
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
return f(*args, **kwds)

I have Python 3.5.3 and I installed tensorflow (1.11) with the following command:
sudo python3 -m pip install --no-cache-dir tensorflow

Thx for the help!

Improve speed

Hi @lhelontra, can you try to compile the py35 armv7l version with these flags, to optimize for speed?
If you do, can you share the wheel with me so I can test it?

-march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=hard -std=gnu11 -O3
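
For reference, TensorFlow's configure step reads extra optimization flags from the CC_OPT_FLAGS environment variable, so one way to feed the flags above into a build would be the following sketch (whether this repository's config exposes it directly is an assumption; check your .conf file):

export CC_OPT_FLAGS="-march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=hard -std=gnu11 -O3"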

libstdc++.so.6: version `GLIBCXX_3.4.22' not found

Hi,
This is a great project. When I install it on my Pi 3B and import tensorflow, there is an ImportError: /usr/lib/arm-linux-gnueabihf/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by /usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow_internal.so)

Raspberry Pi 3: fails to install

Hi,

I have a Raspberry Pi 3, with Raspbian Jessie.
The latest wheel package from here does not install:

$ wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.9.0/tensorflow-1.9.0-cp35-none-linux_armv7l.whl
$ sudo pip3 install tensorflow-1.9.0-cp35-none-linux_armv7l.whl
tensorflow-1.9.0-cp35-none-linux_armv7l.whl is not a supported wheel on this platform.
Storing debug log for failure in /root/.pip/pip.log
$ uname -a
Linux medal-3 4.4.21-v7+ #911 SMP Thu Sep 15 14:22:38 BST 2016 armv7l GNU/Linux
$ lsb_release -sc
jessie
$ sudo cat /root/.pip/pip.log
------------------------------------------------------------
/usr/bin/pip3 run on Fri Aug 10 22:03:30 2018
tensorflow-1.9.0-cp35-none-linux_armv7l.whl is not a supported wheel on this platform.
Exception information:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 122, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 269, in run
    InstallRequirement.from_line(name, None))
  File "/usr/lib/python3/dist-packages/pip/req.py", line 168, in from_line
    raise UnsupportedWheel("%s is not a supported wheel on this platform." % wheel.filename)
pip.exceptions.UnsupportedWheel: tensorflow-1.9.0-cp35-none-linux_armv7l.whl is not a supported wheel on this platform

Should Bazel version be checked?

Currently no action is taken if Bazel is already installed. Should the installed version be checked against the version in the config file?
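
A minimal sketch of such a check in the build script (using BAZEL_VERSION as the name of the configured version is an assumption):

installed="$(bazel version 2>/dev/null | awk '/Build label/ {print $3}')"
if [ -n "$installed" ] && [ "$installed" != "$BAZEL_VERSION" ]; then
    echo "warning: installed bazel $installed differs from configured $BAZEL_VERSION"
fi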

MD5 checksum for *.whl files

echo https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.9.0/tensorflow-1.9.0-cp{35,27}-none-linux_{armv6l,armv7l,aarch64}.whl | tr ' ' '\n' | tee /tmp/file.list

wget -i /tmp/file.list

sha256sum --tag -b *.whl
# sha1sum  --tag -b *.whl
# md5sum  --tag -b *.whl
[liuqun@localhost ~]$ sha256sum -b --tag *.whl
SHA256 (tensorflow-1.9.0-cp27-none-linux_armv6l.whl) = 27e4129e0fb26832f78f023bc56ead5137fd32612edb934ac76ca71cbf197bde
SHA256 (tensorflow-1.9.0-cp27-none-linux_armv7l.whl) = 7ea82a149c5dce5747b2137f72a722035907d978d66587127ecc6f58a559c064
SHA256 (tensorflow-1.9.0-cp35-none-linux_armv6l.whl) = 53cfc2eff2a29bce2b1b9fc90cb6407e6e0c47b9d9ca9c620117acf6e748cd6c
SHA256 (tensorflow-1.9.0-cp35-none-linux_armv7l.whl) = b2e53ea173811f2e6d35a7e9fe28fa018d23da1d5a222dacfca6dc29678cc8e1


[liuqun@localhost ~]$ md5sum  --tag -b *.whl
MD5 (tensorflow-1.9.0-cp27-none-linux_armv6l.whl) = 70316c510e4faba89dcad8d973ef05cc
MD5 (tensorflow-1.9.0-cp27-none-linux_armv7l.whl) = 39c303dbe22d1ae32306b69f04674f0a
MD5 (tensorflow-1.9.0-cp35-none-linux_armv6l.whl) = b4a282745f79c049d8a137e393efa048
MD5 (tensorflow-1.9.0-cp35-none-linux_armv7l.whl) = 14c49b9e1f2fe41785f65af5b5c028ad

This should work and protect people from broken downloads. See also issue #24.
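
To verify a download against published checksums in one step, the sums can be saved to a file and checked with the standard coreutils workflow (nothing project-specific):

sha256sum -b *.whl > SHA256SUMS   # publish this file alongside the wheels
sha256sum -c SHA256SUMS           # users run this after downloading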

linux deploy

Hello, the aarch64 version can't be installed under Linux Deploy on Android (64-bit CPU).

Can't use this library with older versions of numpy <1.16

There is an issue with numpy 1.16 apparently: numpy/numpy#12837
It looks like it has been resolved in 1.16.1

I am using the aarch64 library with Home Assistant, which somehow forces numpy 1.15.4, and when I attempt to load this library I get ImportError: No module named 'numpy.core._multiarray_umath'

It appears to me that recompiling this with numpy 1.16.1 might fix it? I'm not a Python expert, so I'm not totally sure.
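
A possible workaround on the user side, pending a rebuild, is simply to force a compatible NumPy before importing TensorFlow (pinning it like this is an assumption about what your environment allows):

pip3 install 'numpy>=1.16.1'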

How/where did you set it to use Python 3.5?

Hi dude! (I am Br too haha)

How did you generate a tensorflow wheel for Python 3.5?
I tried to build it here, but the wheel is created with the name tensorflow-1.13.0rc2-cp34-none-linux_armv7l.whl

I couldn't find where you set it. Can you point me to it?

Also, I am not using your repo to build it, since it is not clear to me (looking at the README.md) how to do that.

I am doing the cross-compilation the way the tensorflow website describes.
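
For reference, the build script in this repository selects the interpreter through the TF_PYTHON_VERSION environment variable (see the compilation example earlier in this README), so a Python 3.5 build looks like:

TF_PYTHON_VERSION=3.5 ./build_tensorflow.sh configs/rpi.conf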

Open Source Licensing?

I found this project today, and it was exactly what I was hoping for. Very happy that someone went through the effort to fix the tensorflow build for cross compilation!

I have a couple of issues and potential PRs for this project, but I was wondering if you would be willing to license this under an open license before I contribute. A license similar to the https://github.com/samjabrahams/tensorflow-on-raspberry-pi project, or something equally permissive, would be ideal.

Thanks again!

Exception Error Installing Tensorflow

I've been following EdjeElectronics' tutorial on how to set up TensorFlow on my Raspberry Pi 1B. However, I haven't gotten far, as I get exception errors when issuing the following command to run the installation:

sudo pip3 install /home/pi/tf/tensorflow-1.8.0-cp35-none-linux_armv6l.whl

Any help with this would be greatly appreciated!

Processing ./tensorflow-1.8.0-cp35-none-linux_armv6l.whl
Exception:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 353, in run
    wb.build(autobuilding=True)
  File "/usr/lib/python3/dist-packages/pip/wheel.py", line 749, in build
    self.requirement_set.prepare_files(self.finder)
  File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 380, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 620, in _prepare_file
    session=self.session, hashes=hashes)
  File "/usr/lib/python3/dist-packages/pip/download.py", line 809, in unpack_url
    unpack_file_url(link, location, download_dir, hashes=hashes)
  File "/usr/lib/python3/dist-packages/pip/download.py", line 715, in unpack_file_url
    unpack_file(from_path, location, content_type, link)
  File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 617, in unpack_file
    flatten=not filename.endswith('.whl')
  File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 502, in unzip_file
    zip = zipfile.ZipFile(zipfp, allowZip64=True)
  File "/usr/lib/python3.5/zipfile.py", line 1026, in __init__
    self._RealGetContents()
  File "/usr/lib/python3.5/zipfile.py", line 1094, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

how do you fix this?

Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170124] on linux
Type "copyright", "credits" or "license()" for more information.

=============== RESTART: /home/pi/Documents/camerafeed/Run.py ===============

Warning (from warnings module):
File "/usr/lib/python3.5/importlib/_bootstrap.py", line 222
return f(*args, **kwds)
RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5

Warning (from warnings module):
File "/usr/lib/python3.5/importlib/_bootstrap.py", line 222
return f(*args, **kwds)
RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412

Are you interested in PyTorch?

Would you like to build some whls for PyTorch on ARM? I failed to build it myself due to limited RAM and storage.

build tensorflow 1.9.0 failed on Ubuntu16.04

Hi, some errors occur when I build tensorflow with python3.5 on Ubuntu 16.04.

In file included from bazel-out/armeabi-opt/genfiles/external/local_config_python/python_include/Python.h:8:0,
from bazel-out/armeabi-opt/bin/tensorflow/contrib/lite/toco/python/tensorflow_wrap_toco.cc:171:
bazel-out/armeabi-opt/genfiles/external/local_config_python/python_include/pyconfig.h:9:53: fatal error: aarch64-linux-gnu/python3.5m/pyconfig.h: No such file or directory

#include <aarch64-linux-gnu/python3.5m/pyconfig.h>

compilation terminated.
INFO: Elapsed time: 539.906s, Critical Path: 65.77s
INFO: 3579 processes: 3579 local.
FAILED: Build did NOT complete successfully
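
The missing header lives in the target architecture's Python development package. Following the cross-compilation section of the README, installing it for arm64 on the build host would look roughly like this (assuming a Debian-based host):

dpkg --add-architecture arm64
apt-get update
apt-get install libpython3-all-dev:arm64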

installing tensorflow.whl fails with missing "hdf5.h"

I have built tensorflow-1.12.0-cp27-none-linux_armv7l.whl with beagle_black.conf

Installing it with pip install tensorflow-1.12.0-cp27-none-linux_armv7l.whl on Beaglebone fails:

    building 'h5py.defs' extension
    creating build/temp.linux-armv7l-2.7
    creating build/temp.linux-armv7l-2.7/tmp
    creating build/temp.linux-armv7l-2.7/tmp/pip-install-jngQka
    creating build/temp.linux-armv7l-2.7/tmp/pip-install-jngQka/h5py
    creating build/temp.linux-armv7l-2.7/tmp/pip-install-jngQka/h5py/h5py
    arm-linux-gnueabihf-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fdebug-prefix-map=/build/python2.7-sw1gMG/python2.7-2.7.13=. -fstack-protector-strong -Wformat -Werror=format-security -fPIC -DH5_USE_16_API -I./h5py -I/tmp/pip-install-jngQka/h5py/lzf -I/opt/local/include -I/usr/local/include -I/usr/local/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -c /tmp/pip-install-jngQka/h5py/h5py/defs.c -o build/temp.linux-armv7l-2.7/tmp/pip-install-jngQka/h5py/h5py/defs.o
    In file included from /usr/local/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1821:0,
                     from /usr/local/lib/python2.7/dist-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                     from /usr/local/lib/python2.7/dist-packages/numpy/core/include/numpy/arrayobject.h:4,
                     from /tmp/pip-install-jngQka/h5py/h5py/api_compat.h:26,
                     from /tmp/pip-install-jngQka/h5py/h5py/defs.c:657:
    /usr/local/lib/python2.7/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
     #warning "Using deprecated NumPy API, disable it by " \
      ^~~~~~~
    In file included from /tmp/pip-install-jngQka/h5py/h5py/defs.c:657:0:
    /tmp/pip-install-jngQka/h5py/h5py/api_compat.h:27:18: fatal error: hdf5.h: No such file or directory
     #include "hdf5.h"
                      ^
    compilation terminated.
    error: command 'arm-linux-gnueabihf-gcc' failed with exit status 1

  ----------------------------------------
  Failed building wheel for h5py
  Running setup.py clean for h5py
Failed to build h5py
Installing collected packages: termcolor, h5py, keras-applications, tensorboard, tensorflow
  Running setup.py install for h5py ... error
    Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-jngQka/h5py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-9E7Ll8/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-armv7l-2.7
    creating build/lib.linux-armv7l-2.7/h5py
    copying h5py/ipy_completer.py -> build/lib.linux-armv7l-2.7/h5py
    copying h5py/h5py_warnings.py -> build/lib.linux-armv7l-2.7/h5py
    copying h5py/__init__.py -> build/lib.linux-armv7l-2.7/h5py
    copying h5py/highlevel.py -> build/lib.linux-armv7l-2.7/h5py
    copying h5py/version.py -> build/lib.linux-armv7l-2.7/h5py
    creating build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/base.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/filters.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/dims.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/datatype.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/files.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/attrs.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/selections.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/selections2.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/dataset.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/__init__.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/vds.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/compat.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    copying h5py/_hl/group.py -> build/lib.linux-armv7l-2.7/h5py/_hl
    creating build/lib.linux-armv7l-2.7/h5py/tests
    copying h5py/tests/__init__.py -> build/lib.linux-armv7l-2.7/h5py/tests
    copying h5py/tests/common.py -> build/lib.linux-armv7l-2.7/h5py/tests
    creating build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_base.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_attrs.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_objects.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_h5t.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_h5d_direct_chunk_write.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_selections.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_datatype.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_dataset.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/__init__.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_h5p.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_group.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_attrs_data.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_dimension_scales.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_file.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_slicing.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_file_image.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_h5f.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    copying h5py/tests/old/test_h5.py -> build/lib.linux-armv7l-2.7/h5py/tests/old
    creating build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_deprecation.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_attribute_create.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_datatype.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_dataset_getitem.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_dims_dimensionproxy.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_filters.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/__init__.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_dataset_swmr.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_file.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    copying h5py/tests/hl/test_threads.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl
    creating build/lib.linux-armv7l-2.7/h5py/tests/hl/test_vds
    copying h5py/tests/hl/test_vds/test_lowlevel_vds.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl/test_vds
    copying h5py/tests/hl/test_vds/test_virtual_source.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl/test_vds
    copying h5py/tests/hl/test_vds/__init__.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl/test_vds
    copying h5py/tests/hl/test_vds/test_highlevel_vds.py -> build/lib.linux-armv7l-2.7/h5py/tests/hl/test_vds
    running build_ext
    Autodetection skipped [libhdf5.so: cannot open shared object file: No such file or directory]
    ********************************************************************************
                           Summary of the h5py configuration
    
        Path to HDF5: None
        HDF5 Version: '1.8.4'
         MPI Enabled: False
    Rebuild Required: True
    
    ********************************************************************************

I would expect HDFS (and other extensions such as AWS) to be disabled in the build; I see the relevant config settings at https://github.com/lhelontra/tensorflow-on-arm/blob/master/build_tensorflow/configs/beagle_black.conf#L32, and libtensorflow_cc.so and libtensorflow_framework.so are built OK (not patched mainline 1.12 errors).

I do not understand: if TF is built without the extensions, and it seems it is, why does pip then install h5py, which needs the hdf5.h include file? Any ideas?

Also, it looks strange that pip install compiles dependency files, which is what I'm trying to avoid with cross-compiling, because it takes so long on the BBB... Why does this happen: does pip compile dependencies from source when it can't find binary packages, or what?

I'm not a Python developer, so I beg your pardon for the ignorance.
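
Regarding the h5py part: pip falls back to building h5py from source when it cannot find a matching binary wheel for the target, and that build needs the HDF5 C headers. A commonly suggested workaround on Debian-based targets (independent of how TensorFlow itself was configured) is:

# on the target board, before installing the tensorflow wheel
sudo apt-get install libhdf5-dev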

Issue while compiling for odroid c2

Hey, I'm trying to compile for the Odroid C2 board. Steps that I performed:

  1. Downloaded the dependencies mentioned.
  2. Ran the script - ./build_tensorflow.sh /configs/odroidc2.conf

ERROR FACED:

/home/huzefa/.cache/bazel/_bazel_root/8d1779a6ac504ff644c04935916f3004/external/protobuf_archive/BUILD:70:1: C++ compilation of rule '@protobuf_archive//:protobuf_lite' failed (Exit 1): aarch64-linux-gnu-gcc failed: error executing command
.
.
.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
.
INFO: Elapsed time: 534.583s, Critical Path: 38.71s
INFO: 1572 processes: 1572 local.
.
FAILED: Build did NOT complete successfully

Please help solve this error if you can help debug it. Regards.

ImportError: cannot import name 'cloud'

I installed tensorflow-1.12.0-cp35-none-linux_aarch64.whl. When I run MNIST code, I get an import error as follows:

Traceback (most recent call last):
  File "mnist.py", line 1, in <module>
    from tensorflow.examples.tutorials.mnist import input_data
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/examples/tutorials/mnist/__init__.py", line 21, in <module>
    from tensorflow.examples.tutorials.mnist import input_data
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/examples/tutorials/mnist/input_data.py", line 30, in <module>
    from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/__init__.py", line 38, in <module>
    from tensorflow.contrib import cloud
ImportError: cannot import name 'cloud'

I checked /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/ and there is no cloud folder. Is something missing?
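
As a possible workaround until contrib.cloud is included, the MNIST data can be loaded without going through tensorflow.examples at all, e.g. via the Keras dataset helper that ships with TF 1.12 (a sketch, assuming network access to download the data):

python3 -c "import tensorflow as tf; (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data(); print(x_train.shape)"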
