
android-demo-app's Introduction

PyTorch Logo


PyTorch is a Python package that provides two high-level features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.

Our trunk health (Continuous Integration signals) can be found at hud.pytorch.org.

More About PyTorch

Learn the basics of PyTorch

At a granular level, PyTorch is a library that consists of the following components:

Component Description
torch A Tensor library like NumPy, with strong GPU support
torch.autograd A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch
torch.jit A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code
torch.nn A neural networks library deeply integrated with autograd designed for maximum flexibility
torch.multiprocessing Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training
torch.utils DataLoader and other utility functions for convenience
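As a small illustration of the last row, `torch.utils.data` provides the `DataLoader` used in most training loops. A minimal sketch with random data (the shapes and batch size are illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Ten samples of 3 features each, with binary labels.
xs = torch.randn(10, 3)
ys = torch.randint(0, 2, (10,))

# DataLoader handles batching (and, in real use, shuffling and workers).
loader = DataLoader(TensorDataset(xs, ys), batch_size=4)

for batch_x, batch_y in loader:
    pass  # three batches: 4 + 4 + 2 samples
```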

Usually, PyTorch is used either as:

  • A replacement for NumPy to use the power of GPUs.
  • A deep learning research platform that provides maximum flexibility and speed.

Elaborating Further:

A GPU-Ready Tensor Library

If you use NumPy, then you have used Tensors (a.k.a. ndarray).

Tensor illustration

PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs, such as slicing, indexing, mathematical operations, linear algebra, and reductions. And they are fast!
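A minimal sketch of a few of those routines, with the computation moved to the GPU when one is available (shapes and values are illustrative):

```python
import torch

# A small tensor to exercise the routines mentioned above.
x = torch.arange(12, dtype=torch.float32).reshape(3, 4)

row = x[1]            # indexing
block = x[:2, 1:3]    # slicing
prod = x @ x.T        # linear algebra (matrix multiply)
total = x.sum()       # reduction

# The same code runs on the GPU by moving the tensor there.
device = "cuda" if torch.cuda.is_available() else "cpu"
y = x.to(device) * 2.0
```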

Dynamic Neural Networks: Tape-Based Autograd

PyTorch has a unique way of building neural networks: using and replaying a tape recorder.

Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world. One has to build a neural network and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes from several research papers on this topic, as well as current and past work such as torch-autograd, autograd, Chainer, etc.

While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date. You get the best of speed and flexibility for your crazy research.
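A small sketch of define-by-run autograd, where ordinary Python control flow changes the graph on every call (the toy network here is illustrative):

```python
import torch

def dynamic_net(x, n_steps):
    # The graph is rebuilt on every call, so the number of
    # matrix multiplies can differ from one input to the next.
    w = torch.randn(4, 4, requires_grad=True)
    h = x
    for _ in range(n_steps):   # plain Python control flow, recorded on the fly
        h = torch.tanh(h @ w)
    loss = h.sum()
    loss.backward()            # replay the "tape" in reverse
    return w.grad

g2 = dynamic_net(torch.randn(1, 4), n_steps=2)
g5 = dynamic_net(torch.randn(1, 4), n_steps=5)
```

No recompilation is needed between the two calls, even though the graphs have different depths.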

Dynamic graph

Python First

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally, like you would use NumPy / SciPy / scikit-learn. You can write your new neural network layers in Python itself, using your favorite libraries and packages such as Cython and Numba. Our goal is to not reinvent the wheel where appropriate.
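For example, tensors and NumPy arrays can share memory, so mixing the two is cheap. A minimal sketch:

```python
import numpy as np
import torch

a = np.ones(5, dtype=np.float32)
t = torch.from_numpy(a)   # shares memory with the NumPy array

t += 1                    # in-place update is visible on the NumPy side

b = t.numpy()             # and back again, still zero-copy
```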

Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There isn't an asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. The stack trace points to exactly where your code was defined. We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.

Fast and Lean

PyTorch has minimal framework overhead. We integrate acceleration libraries such as Intel MKL and NVIDIA (cuDNN, NCCL) to maximize speed. At the core, its CPU and GPU Tensor and neural network backends are mature and have been tested for years.

Hence, PyTorch is quite fast — whether you run small or large neural networks.

The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives. We've written custom memory allocators for the GPU to make sure that your deep learning models are maximally memory efficient. This enables you to train bigger deep learning models than before.

Extensions Without Pain

Writing new neural network modules, or interfacing with PyTorch's Tensor API, was designed to be straightforward, with minimal abstractions.

You can write new neural network layers in Python using the torch API or your favorite NumPy-based libraries such as SciPy.

If you want to write your layers in C/C++, we provide a convenient extension API that is efficient, with minimal boilerplate. No wrapper code needs to be written. You can see a tutorial here and an example here.
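The Python path can be as small as the following sketch (the `Affine` layer is a made-up example; autograd derives the backward pass automatically):

```python
import torch
import torch.nn as nn

class Affine(nn.Module):
    """A toy layer computing y = x @ W + b with the plain torch API."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_features, out_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        return x @ self.weight + self.bias

layer = Affine(8, 3)
out = layer(torch.randn(2, 8))
out.sum().backward()   # gradients for weight and bias, no extra code
```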

Installation

Binaries

Commands to install binaries via Conda or pip wheels are on our website: https://pytorch.org/get-started/locally/
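Once installed, a quick sanity check from Python (a minimal sketch; the printed version and CUDA availability will vary by build):

```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a usable GPU build is present

x = torch.rand(5, 3)              # a random tensor, as in the get-started docs
print(x)
```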

NVIDIA Jetson Platforms

Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided here, and the L4T container is published here.

They require JetPack 4.2 or later, and are maintained by @dusty-nv and @ptrblck.

From Source

Prerequisites

If you are installing from source, you will need:

  • Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
  • A compiler that fully supports C++17, such as clang or gcc (gcc 9.4.0 or newer is required)

We highly recommend installing within an Anaconda environment. You will get a high-quality BLAS library (MKL) and controlled dependency versions regardless of your Linux distro.

NVIDIA CUDA Support

If you want to compile with CUDA support, select a supported version of CUDA from our support matrix, then install the following:

Note: You can refer to the cuDNN Support Matrix for the cuDNN versions compatible with the various supported CUDA versions, CUDA drivers, and NVIDIA hardware.

If you want to disable CUDA support, export the environment variable USE_CUDA=0. Other potentially useful environment variables may be found in setup.py.

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are available here.

AMD ROCm Support

If you want to compile with ROCm support, install AMD ROCm 4.0 or above. Note that ROCm is currently supported only for Linux systems.

If you want to disable ROCm support, export the environment variable USE_ROCM=0. Other potentially useful environment variables may be found in setup.py.

Intel GPU Support

If you want to compile with Intel GPU support, follow these instructions.

If you want to disable Intel GPU support, export the environment variable USE_XPU=0. Other potentially useful environment variables may be found in setup.py.

Install Dependencies

Common

conda install cmake ninja
# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section below
pip install -r requirements.txt

On Linux

conda install intel::mkl-static intel::mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda121  # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo

# (optional) If using torch.compile with inductor/triton, install the matching version of triton
# Run from the pytorch directory after cloning
make triton

On MacOS

# Add this package on intel x86 processor machines only
conda install intel::mkl-static intel::mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv

On Windows

conda install intel::mkl-static intel::mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39

Get the PyTorch Source

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive

Install PyTorch

On Linux

If you would like to compile PyTorch with new C++ ABI enabled, then first run this command:

export _GLIBCXX_USE_CXX11_ABI=1

If you're compiling for AMD ROCm then first run this command:

# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py

Install PyTorch

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py develop

Aside: If you are using Anaconda, you may experience an error caused by the linker:

build/temp.linux-x86_64-3.7/torch/csrc/stub.o: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
error: command 'g++' failed with exit status 1

This is caused by ld from the Conda environment shadowing the system ld. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.

On macOS

python3 setup.py develop

On Windows

Choose the correct Visual Studio version.

PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise, Professional, or Community Editions. You can also install the build tools from https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools do not come with Visual Studio Code by default.

If you want to build legacy Python code, please refer to Building on legacy code and CUDA.

CPU-only builds

In this mode PyTorch computations will run on your CPU, not your GPU

conda activate
python setup.py develop

Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking CMAKE_INCLUDE_PATH and LIB. The instruction here is an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.

CUDA based build

In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching

NVTX is needed to build PyTorch with CUDA. NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To install it onto an already installed CUDA, run the CUDA installation again and check the corresponding checkbox. Make sure that CUDA with Nsight Compute is installed after Visual Studio.

Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If ninja.exe is detected in PATH, Ninja will be used as the default generator; otherwise, VS 2017 / 2019 will be used.
If Ninja is selected as the generator, the latest MSVC is selected as the underlying toolchain.

Additional libraries such as Magma, oneDNN (a.k.a. MKL-DNN or DNNL), and sccache are often needed. Please refer to the installation-helper to install them.

You can refer to the build_pytorch.bat script for other environment variable configurations.

cmd

:: Set the environment variables after you have downloaded and unzipped the mkl package,
:: else CMake would throw an error as `Could NOT find OpenMP`.
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe

python setup.py develop
Adjust Build Options (Optional)

You can optionally adjust the configuration of CMake variables (without building first) as follows. For example, the pre-detected directories for cuDNN or BLAS can be adjusted with such a step.

On Linux

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build  # or cmake-gui build

On macOS

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build  # or cmake-gui build

Docker Image

Using pre-built images

You can also pull a pre-built Docker image from Docker Hub and run it with Docker v19.03+:

docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest

Please note that PyTorch uses shared memory to share data between processes, so if torch.multiprocessing is used (e.g. for multithreaded data loaders), the default shared-memory segment size that the container runs with is not enough; you should increase the shared memory size with either the --ipc=host or --shm-size command line option to nvidia-docker run.

Building the image yourself

NOTE: Must be built with a docker version > 18.06

The Dockerfile is supplied to build images with CUDA 11.1 support and cuDNN v8. You can pass PYTHON_VERSION=x.y make variable to specify which Python version is to be used by Miniconda, or leave it unset to use the default.

make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch

You can also pass the CMAKE_VARS="..." environment variable to specify additional CMake variables to be passed to CMake during the build. See setup.py for the list of available variables.

make -f docker.Makefile

Building the Documentation

To build documentation in various formats, you will need Sphinx and the readthedocs theme.

cd docs/
pip install -r requirements.txt

You can then build the documentation by running make <format> from the docs/ folder. Run make to get a list of all available output formats.

If you get a katex error, run npm install katex. If it persists, try npm install -g katex.

Note: if you installed nodejs with a different package manager (e.g., conda) then npm will probably install a version of katex that is not compatible with your version of nodejs and doc builds will fail. A combination of versions that is known to work is [email protected] and [email protected]. To install the latter with npm you can run npm install -g [email protected]

Previous Versions

Installation instructions and binaries for previous PyTorch versions may be found on our website.

Getting Started

Three pointers to get you started:

Resources

Communication

Releases and Contributing

Typically, PyTorch has three minor releases a year. Please let us know if you encounter a bug by filing an issue.

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.

To learn more about making a contribution to PyTorch, please see our Contribution page. For more information about PyTorch releases, see the Release page.

The Team

PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.

PyTorch is currently maintained by Soumith Chintala, Gregory Chanan, Dmytro Dzhulgakov, Edward Yang, and Nikita Shulga with major contributions coming from hundreds of talented individuals in various forms and means. A non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary Devito.

Note: This project is unrelated to hughperkins/pytorch with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.

License

PyTorch has a BSD-style license, as found in the LICENSE file.

android-demo-app's People

Contributors

ivankobzarev, jeffxtang, nairbv


android-demo-app's Issues

Custom model doesn't work

Hi! I used a pre-trained ResNet-18 model and ran it on my dataset; after that I saved it as described in this tutorial (https://heartbeat.fritz.ai/pytorch-mobile-image-classification-on-android-5c0cfb774c5b). I have torch version 1.3.0, which is correct. My program also works correctly with the model from the tutorial, but when I run my custom one, I get this error message:

E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.example.carclassificator, PID: 13187
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.carclassificator/com.example.carclassificator.CarImageActivity}: com.facebook.jni.CppException: false CHECK FAILED at ../c10/core/Backend.h (tensorTypeIdToBackend at ../c10/core/Backend.h:106) (no backtrace available)
    at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3150)
    at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3260)
    at android.app.ActivityThread.access$1000(ActivityThread.java:218)
    at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1734)
    at android.os.Handler.dispatchMessage(Handler.java:102)
    at android.os.Looper.loop(Looper.java:145)
    at android.app.ActivityThread.main(ActivityThread.java:6934)
    at java.lang.reflect.Method.invoke(Native Method)
    at java.lang.reflect.Method.invoke(Method.java:372)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1404)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1199)
Caused by: com.facebook.jni.CppException: false CHECK FAILED at ../c10/core/Backend.h (tensorTypeIdToBackend at ../c10/core/Backend.h:106) (no backtrace available)
    at org.pytorch.Module$NativePeer.initHybrid(Native Method)
    at org.pytorch.Module$NativePeer.<init>(Module.java:70)
    at org.pytorch.Module.<init>(Module.java:25)
    at org.pytorch.Module.load(Module.java:21)
    at com.example.carclassificator.Classifier.<init>(Classifier.java:18)
    at com.example.carclassificator.CarImageActivity.onCreate(CarImageActivity.java:47)
    at android.app.Activity.performCreate(Activity.java:6609)
    at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1134)
    at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3103)
    ... 10 more

Can you help me, please?

PytorchDemoApp at 1.4_nightly

Hi,

I use Android Studio 3.5.2 and encounter the issue below.

ERROR: Unable to resolve dependency for ':app@debug/compileClasspath': Failed to transform artifact 'pytorch_android.aar (org.pytorch:pytorch_android:1.4.0-SNAPSHOT:20191111.132026-35)' to match attributes {artifactType=jar}.

Legacy model format is not supported on mobile.

Hi, I'm trying to load a custom model on a mobile device and get this error:

Caused by: com.facebook.jni.CppException: Legacy model format is not supported on mobile. (deserialize at /var/lib/jenkins/workspace/torch/csrc/jit/import.cpp:201)
(no backtrace available)

Can anybody tell me what this means? What is the legacy model format?
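This error typically appears when the .pt file was produced by plain torch.save (a pickle) or by an old serialization format rather than by TorchScript, which PyTorch Mobile requires. A minimal sketch of the expected export path (the tiny nn.Sequential model here is a stand-in for your own module):

```python
import torch
import torch.nn as nn

# Stand-in model; replace with your own nn.Module.
model = nn.Sequential(nn.Linear(4, 2), nn.ReLU())
model.eval()

# Trace with a representative example input, then save the
# TorchScript archive that Module.load() on Android can read.
example = torch.rand(1, 4)
scripted = torch.jit.trace(model, example)
scripted.save("model.pt")

# Sanity check: the saved archive round-trips on the desktop side.
loaded = torch.jit.load("model.pt")
```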

When I load my model, I get an error

public void test(View view) {
        Module module = null;
        Log.i("test", "test: load pt fail!");
        try {
            // creating bitmap from packaged into app android asset 'image.jpg',
            // app/src/main/assets/image.jpg
            // loading serialized torchscript module from packaged into app android asset model.pt,
            // app/src/model/assets/model.pt
            String str = assetFilePath(this, "urban.pt");
            module = Module.load(str);
        } catch (Exception e) {
            Log.e("PytorchHelloWorld", "Error reading assets", e);
            finish();
        }
        Log.i("test", "test: load pt success!");
        Toast.makeText(MainActivity.this,"测试按钮3",Toast.LENGTH_SHORT).show();
    }

When the program runs

module = Module.load(str);

an exception is thrown:

2019-12-06 16:07:47.811 6636-6636/com.example.activitytest E/PytorchHelloWorld: Error reading assets
    com.facebook.jni.CppException: [enforce fail at inline_container.cc:137] . PytorchStreamReader failed reading zip archive: failed finding central directory
    (no backtrace available)
        at org.pytorch.Module$NativePeer.initHybrid(Native Method)
        at org.pytorch.Module$NativePeer.<init>(Module.java:70)
        at org.pytorch.Module.<init>(Module.java:25)
        at org.pytorch.Module.load(Module.java:21)
        at com.example.activitytest.MainActivity.test(MainActivity.java:50)
        at java.lang.reflect.Method.invoke(Native Method)
        at androidx.appcompat.app.AppCompatViewInflater$DeclaredOnClickListener.onClick(AppCompatViewInflater.java:397)
        at android.view.View.performClick(View.java:6669)
        at android.view.View.performClickInternal(View.java:6638)
        at android.view.View.access$3100(View.java:789)
        at android.view.View$PerformClick.run(View.java:26145)
        at android.os.Handler.handleCallback(Handler.java:873)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:193)
        at android.app.ActivityThread.main(ActivityThread.java:6863)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:537)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)

I have no idea about this issue.

Could you help me?

About the 20200103.095904-96.aar

Hi,

I use Android Studio 3.5.2 and encounter the issue below.

ERROR: Unable to resolve dependency for ':app@debug/compileClasspath': Failed to transform artifact 'pytorch_android.aar (org.pytorch:pytorch_android:1.4.0-SNAPSHOT::20200103.095904-96)

Failed to load custom LSTM PyTorch model in Android

As I am trying to load a custom PyTorch LSTM model in Android Studio using torch.load(), it throws the error mentioned below.
Would anyone mind helping me out with this? What is the right way to load a custom model using the torch.load() function?

Process: org.pytorch.helloworld, PID: 31234
java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.helloworld/org.pytorch.helloworld.MainActivity}: com.facebook.jni.CppException: open file failed, file path: torchmoji.pt (FileAdapter at ../caffe2/serialize/file_adapter.cc:11)
(no backtrace available)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2659)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2724)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1473)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6123)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:867)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:757)
Caused by: com.facebook.jni.CppException: open file failed, file path: torchmoji.pt (FileAdapter at ../caffe2/serialize/file_adapter.cc:11)
(no backtrace available)
at org.pytorch.NativePeer.initHybrid(Native Method)
at org.pytorch.NativePeer.(NativePeer.java:18)
at org.pytorch.Module.load(Module.java:23)
at org.pytorch.helloworld.MainActivity.onCreate(MainActivity.java:94)
at android.app.Activity.performCreate(Activity.java:6672)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1140)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2612)
... 9 more

How does HelloWorldApp run?

~/Downloads/android-demo-app/HelloWorldApp$ ./gradlew installDebug

FAILURE: Build failed with an exception.

  • What went wrong:
    Task 'installDebug' not found in root project 'HelloWorldApp'.

  • Try:
    Run gradlew tasks to get a list of available tasks. Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

  • Get more help at https://help.gradle.org

BUILD FAILED in 0s

Android Studio error

  • What went wrong:
    A problem occurred configuring root project 'PyTorchDemoApp'.

Could not resolve all artifacts for configuration ':classpath'.
Could not download guava.jar (com.google.guava:guava:27.0.1-jre)
> Could not get resource 'https://jcenter.bintray.com/com/google/guava/guava/27.0.1-jre/guava-27.0.1-jre.jar'.
> Could not GET 'https://d29vzk4ow07wi7.cloudfront.net/e1c814fd04492a27c38e0317eabeaa1b3e950ec8010239e400fe90ad6c9107b4?response-content-disposition=attachment%3Bfilename%3D%22guava-27.0.1-jre.jar%22&Policy=eyJTdGF0ZW1lbnQiOiBbeyJSZXNvdXJjZSI6Imh0dHAqOi8vZDI5dnprNG93MDd3aTcuY2xvdWRmcm9udC5uZXQvZTFjODE0ZmQwNDQ5MmEyN2MzOGUwMzE3ZWFiZWFhMWIzZTk1MGVjODAxMDIzOWU0MDBmZTkwYWQ2YzkxMDdiND9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0JmaWxlbmFtZSUzRCUyMmd1YXZhLTI3LjAuMS1qcmUuamFyJTIyIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNTcxMDI1NzgzfSwiSXBBZGRyZXNzIjp7IkFXUzpTb3VyY2VJcCI6IjAuMC4wLjAvMCJ9fX1dfQ__&Signature=EHYRkIOE9b6FTEOUxmoECRNEqBb0yEv49UZyjwwQhCM~Qp9Oh4LDSPQpOlt7QqzyzWEtXoAOfCvqMi7ishga9KTSuxVnPTL97x4bcwdCpXaKafO4ItobvV5qTHgq-2OQ9hNa2E-Bwgyild6XYKMLbkGTOS~OGw6-IPB0I-galaLp8PuyucOlqOryk42zkA1B3OjD90vkPmTNX3bcHHn1WTgWtRSjkVKYzJjuHMWS9nH9iRl1mGOMHpJiGkNtclPijzs5fJASbcAFMgVHrePbFPfRkt7PHzeYcjv15XVED861e1S0sZCfylzUuVEhK~Xsmt-gVr~Tfl30Auu-SmvjuA__&Key-Pair-Id=APKAIFKFWOMXM2UMTSFA'.
> Remote host closed connection during handshake
Could not download kotlin-reflect.jar (org.jetbrains.kotlin:kotlin-reflect:1.3.41)
> Could not get resource 'https://jcenter.bintray.com/org/jetbrains/kotlin/kotlin-reflect/1.3.41/kotlin-reflect-1.3.41.jar'.
> Could not GET 'https://d29vzk4ow07wi7.cloudfront.net/01d469878c6853a607baaadf869c7474b971abe6dd2cb74f244bea0ffb453c76?response-content-disposition=attachment%3Bfilename%3D%22kotlin-reflect-1.3.41.jar%22&Policy=eyJTdGF0ZW1lbnQiOiBbeyJSZXNvdXJjZSI6Imh0dHAqOi8vZDI5dnprNG93MDd3aTcuY2xvdWRmcm9udC5uZXQvMDFkNDY5ODc4YzY4NTNhNjA3YmFhYWRmODY5Yzc0NzRiOTcxYWJlNmRkMmNiNzRmMjQ0YmVhMGZmYjQ1M2M3Nj9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0JmaWxlbmFtZSUzRCUyMmtvdGxpbi1yZWZsZWN0LTEuMy40MS5qYXIlMjIiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE1NzEwMjYwMDV9LCJJcEFkZHJlc3MiOnsiQVdTOlNvdXJjZUlwIjoiMC4wLjAuMC8wIn19fV19&Signature=pCzHEnqNcs~JgV~Lvjh7Jeo8x0anu6j~NVTjQMY4pyeucxMztAp0yk49rAAmyB5seSOOlh2zBOvkd0m-Y03zMepQAf6eZuP4puRg7yzdcrYtFHkxCOBpNAKF7zQaBsPV5wRP2rtjU9EgN7BeAQBfVD~OPXurG0MmOMvgBe3Zid0yqWftqsW~QtfyMf~fLhFoka1C6FEI9jkYm5KHZH0thgP~~QroRGXnEk0gztEWwSsgyRiwMo~mQyNlNGeRvAmKY~nbXQUskFbii0lWF9hXbl-HudNe7S7Sxh6~BmFIOX9GkMFYtsIwNbn99nbDblVCz9kUZ2pFaGfuoORvRKhQyQ__&Key-Pair-Id=APKAIFKFWOMXM2UMTSFA'.
> Remote host closed connection during handshake
Could not download gson.jar (com.google.code.gson:gson:2.8.5)
> Could not get resource 'https://jcenter.bintray.com/com/google/code/gson/gson/2.8.5/gson-2.8.5.jar'.
> Could not GET 'https://d29vzk4ow07wi7.cloudfront.net/233a0149fc365c9f6edbd683cfe266b19bdc773be98eabdaf6b3c924b48e7d81?response-content-disposition=attachment%3Bfilename%3D%22gson-2.8.5.jar%22&Policy=eyJTdGF0ZW1lbnQiOiBbeyJSZXNvdXJjZSI6Imh0dHAqOi8vZDI5dnprNG93MDd3aTcuY2xvdWRmcm9udC5uZXQvMjMzYTAxNDlmYzM2NWM5ZjZlZGJkNjgzY2ZlMjY2YjE5YmRjNzczYmU5OGVhYmRhZjZiM2M5MjRiNDhlN2Q4MT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0JmaWxlbmFtZSUzRCUyMmdzb24tMi44LjUuamFyJTIyIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNTcxMDI2MjY0fSwiSXBBZGRyZXNzIjp7IkFXUzpTb3VyY2VJcCI6IjAuMC4wLjAvMCJ9fX1dfQ__&Signature=qXpeTJuzG3V-wDnfMv6-eo1tPnyFai1BQvjYlCMAvsNtFwsM-Yzmd-biMwLamXtOmXQIksKEjRj7IG0GJLhgz0HqknxUx7-N~UOckqlfjnc6JR-WdfrdW8jIgQBud-24SVV9a~zlovQV60iAex88zGf~juZmVo1GfTw77~PpeWFfrKBLfr4ZjvWim~qbpLHY1Ta6Dlqel4BK9GDpfFV4sdfQk2qznMFcWTNXkiCSN2N73mdKkw0HlEoTJkKHz6bjuFywDLbu-lKvMjR5g0qKfi5hUmqi30y-MKafZWWAK9-nzc-JSF1IGXPlhhaKg48ja0sAcllwL7kA-zi5FCeG~g__&Key-Pair-Id=APKAIFKFWOMXM2UMTSFA'.
> Remote host closed connection during handshake
Could not download kotlin-stdlib.jar (org.jetbrains.kotlin:kotlin-stdlib:1.3.41)
> Could not get resource 'https://jcenter.bintray.com/org/jetbrains/kotlin/kotlin-stdlib/1.3.41/kotlin-stdlib-1.3.41.jar'.
> Could not GET 'https://d29vzk4ow07wi7.cloudfront.net/6ea3d0921b26919b286f05cbdb906266666a36f9a7c096197114f7495708ffbc?response-content-disposition=attachment%3Bfilename%3D%22kotlin-stdlib-1.3.41.jar%22&Policy=eyJTdGF0ZW1lbnQiOiBbeyJSZXNvdXJjZSI6Imh0dHAqOi8vZDI5dnprNG93MDd3aTcuY2xvdWRmcm9udC5uZXQvNmVhM2QwOTIxYjI2OTE5YjI4NmYwNWNiZGI5MDYyNjY2NjZhMzZmOWE3YzA5NjE5NzExNGY3NDk1NzA4ZmZiYz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPWF0dGFjaG1lbnQlM0JmaWxlbmFtZSUzRCUyMmtvdGxpbi1zdGRsaWItMS4zLjQxLmphciUyMiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTU3MTAyNjA3OX0sIklwQWRkcmVzcyI6eyJBV1M6U291cmNlSXAiOiIwLjAuMC4wLzAifX19XX0_&Signature=awbpB4ceHoK4aUaLTGO3zo3Oy3Lc3QwMUdnPHLzYeSSFkLPgJJ5Qxx~sqZSTMzuXgul9f0SK-LJxROkTv-GpM~QZsRT-7pFEohVRK~XU2zaageDNj5GcEIP~yL880DbUqGxDn-fHZYb0nkB8noQS9OnwXBchO4fN7Ql~sDc-2QkZ6I0ohkfqmg-PCCZHiPMZPnSdovvL6605y6XosgzMHOjrl~orSx0ggB2~2DFg4Ah9w2HmmB3Nm7GhTpdtP4QEsUtnDjnwCCAfPR3wLQW~sai~--zSwG0UTdwIgE68LuBBz6dpEFw7jywRKJdYQ38rJjj8Xs4grQlOuH68cnE5Yw__&Key-Pair-Id=APKAIFKFWOMXM2UMTSFA'.
> Remote host closed connection during handshake

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

Questions when running the ImageClassification example

When I run the fully quantized MobileNet model, the current CPU platform is MTK8163. I find a very strange phenomenon: when I limit the CPU to 2 cores and run image classification, getting the result costs about 1.3 s, but if I don't limit the number of CPU cores (4 cores), running the same example costs less than 0.1 s. Has PyTorch Mobile been specially optimized for 4 cores?
We feel confused because at present we plan to migrate our PSE network to the PyTorch Mobile framework. Considering power consumption, the current situation is that only two cores can be enabled. I hope to get your enthusiastic answer.

Problem with YOLO .pt model

I have a problem running a YOLO model implemented in PyTorch. I used a few repositories, e.g.:
https://github.com/eriklindernoren/PyTorch-YOLOv3
https://github.com/ayooshkathuria/pytorch-yolo-v3

To save the YOLO model as a .pt file, I added the following lines to detect.py (in both projects) after "model.eval()":
model.eval()
example = get_test_input() #return common image (size 416x416 which is equal to input size)
traced_script_module = torch.jit.trace(model, example)
traced_script_module.save("./yolov3_model.pt")

The model saved correctly, with the warnings below:
C:(...)\models.py:204: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if grid_size != self.grid_size:
C:(...)\models.py:208: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
pred_boxes = FloatTensor(prediction[..., :4].shape)

When I copy this saved model into assets and try to run the HelloWorld/Demo app, I get this error:

E/AndroidRuntime: FATAL EXCEPTION: main
Process: org.pytorch.helloworld, PID: 20315
java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.helloworld/org.pytorch.helloworld.MainActivity}: com.facebook.jni.CppException: empty_strided not implemented for TensorTypeSet(VariableTensorId, CUDATensorId) (empty_strided at aten/src/ATen/Functions.h:3771)
(no backtrace available)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2521)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2595)
at android.app.ActivityThread.access$800(ActivityThread.java:178)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1470)
at android.os.Handler.dispatchMessage(Handler.java:111)
at android.os.Looper.loop(Looper.java:194)
at android.app.ActivityThread.main(ActivityThread.java:5631)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:959)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:754)
Caused by: com.facebook.jni.CppException: empty_strided not implemented for TensorTypeSet(VariableTensorId, CUDATensorId) (empty_strided at aten/src/ATen/Functions.h:3771)
(no backtrace available)
at org.pytorch.NativePeer.initHybrid(Native Method)
at org.pytorch.NativePeer.(NativePeer.java:20)
at org.pytorch.Module.load(Module.java:23)
at org.pytorch.helloworld.MainActivity.onCreate(MainActivity.java:39)
at android.app.Activity.performCreate(Activity.java:6092)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1112)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2468)
... 10 more

I should add that I tried saving this model both with and without CUDA (using set CUDA_VISIBLE_DEVICES=-1).

Moreover, I changed the dependencies in the project (in other cases this resolved the problem):
implementation 'org.pytorch:pytorch_android:1.4.0-SNAPSHOT'
implementation 'org.pytorch:pytorch_android_torchvision:1.4.0-SNAPSHOT'

I also tried other custom models (torchvision models work well; I tried a few) and got similar results. Can anyone say what is wrong, or how to save the model so it works correctly in this demo app?

Getting started with PyTorch Mobile

I tried to build the PyTorch demo app for the first time. I am new to gradlew and such, having only used CMake and Make for C/C++ builds in the past. I tried the following steps, but installDebug is not found; ./gradlew tasks definitely doesn't list any installDebug task.

Is the documentation old, or am I missing a step or two below? I do have the Android SDK and NDK installed, as far as I can tell.

$ git clone https://github.com/pytorch/android-demo-app.git
$ cd android-demo-app
$ cd HelloWorldApp
$ python trace_model.py
$ ./gradlew installDebug
FAILURE: Build failed with an exception.

* What went wrong:
Task 'installDebug' not found in root project 'HelloWorldApp'.

* Try:
Run gradlew tasks to get a list of available tasks. Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 0s
$ ./gradlew tasks

> Task :tasks

------------------------------------------------------------
Tasks runnable from root project
------------------------------------------------------------

Android tasks
-------------
sourceSets - Prints out all the source sets defined in this project.

Build tasks
-----------
assemble - Assembles all variants of all applications and secondary packages.
assembleAndroidTest - Assembles all the Test applications.
build - Assembles and tests this project.
buildDependents - Assembles and tests this project and all projects that depend on it.
buildNeeded - Assembles and tests this project and all projects it depends on.
clean - Deletes the build directory.
cleanBuildCache - Deletes the build cache directory.

Build Setup tasks
-----------------
init - Initializes a new Gradle build.
wrapper - Generates Gradle wrapper files.

Help tasks
----------
buildEnvironment - Displays all buildscript dependencies declared in root project 'HelloWorldApp'.
components - Displays the components produced by root project 'HelloWorldApp'. [incubating]
dependencies - Displays all dependencies declared in root project 'HelloWorldApp'.
dependencyInsight - Displays the insight into a specific dependency in root project 'HelloWorldApp'.
dependentComponents - Displays the dependent components of components in root project 'HelloWorldApp'. [incubating]
help - Displays a help message.
model - Displays the configuration model of root project 'HelloWorldApp'. [incubating]
projects - Displays the sub-projects of root project 'HelloWorldApp'.
properties - Displays the properties of root project 'HelloWorldApp'.
tasks - Displays the tasks runnable from root project 'HelloWorldApp' (some of the displayed tasks may belong to subprojects).

Install tasks
-------------
uninstallAll - Uninstall all applications.

Verification tasks
------------------
check - Runs all checks.
connectedCheck - Runs all device checks on currently connected devices.
deviceCheck - Runs all device checks using Device Providers and Test Servers.

To see all tasks and more detail, run gradlew tasks --all

To see more detail about a task, run gradlew help --task <task>

BUILD SUCCESSFUL in 0s
1 actionable task: 1 executed

HelloWorld app immediately crashes on launch

With a clone of the repo as of Aug 14, 2020, no modifications, the HelloWorld app crashes at launch trying to read the model.pt asset:

/AndroidRuntime: FATAL EXCEPTION: main
    Process: org.pytorch.helloworld, PID: 6473
    java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.helloworld/org.pytorch.helloworld.MainActivity}: com.facebook.jni.CppException: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at ../caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at ../caffe2/serialize/inline_container.cc:132)
    (no backtrace available)
        at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2913)
        at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3048)
        at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:78)
        at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:108)
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:68)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1808)
        at android.os.Handler.dispatchMessage(Handler.java:106)
        at android.os.Looper.loop(Looper.java:193)
        at android.app.ActivityThread.main(ActivityThread.java:6669)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)
     Caused by: com.facebook.jni.CppException: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at ../caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at ../caffe2/serialize/inline_container.cc:132)

The crash is consistent even if I recreate the model with the trace_model script using a couple of different versions of torch and torchvision:

With torch==1.6.0 and torchvision==0.7.0: run python trace_model.py, clean, and launch; the app crashes with the same error.

With torch==1.4.0 and torchvision==0.5.0: run python trace_model.py, clean, and launch; the app crashes with the same error.

It may be worth noting that I have multiple versions of the Android NDK installed and specify ndkVersion "21.1.6352462".

Custom model cannot be loaded on Android using the HelloWorld example, but loads successfully with C++?

I converted a model (MORAN - https://github.com/Canjie-Luo/MORAN_v2) with TorchScript, using pytorch=1.3.1 and torchvision=0.4.2.
When I tried to load it with C++, it worked. But when loading the model on Android using the HelloWorld example, there was an error:
Unable to start activity ComponentInfo{org.pytorch.helloworld/org.pytorch.helloworld.MainActivity}: com.facebook.jni.CppException: false CHECK FAILED at ../c10/core/Backend.h (tensorTypeIdToBackend at ../c10/core/Backend.h:106)

The line of code used to load the model: module = Module.load(assetFilePath(this, model_name));

NMS in android

Hi!

Have a question - how to implement Non Maximum Suppression in Android application?

As I understand it, one option is to compile this C++ NMS code:
https://github.com/pytorch/vision/blob/master/torchvision/csrc/cpu/nms_cpu.cpp
But in that case I would also need to add the torch dependencies to the compilation somehow...

Maybe there is a better way?

  • An existing Android/Java NMS implementation? (I was unable to find one after an hour of googling.)
  • Or maybe implement it in PyTorch and then script it with torch.jit.script?

Would be grateful for help :)
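
For the second option, NMS is simple enough to write with basic ops. Below is a plain-Python sketch of the standard greedy algorithm, stdlib-only so it can be ported to Java or rewritten with torch tensor ops and compiled with torch.jit.script; the (x1, y1, x2, y2) box format and the 0.5 threshold are assumptions, not this repo's code:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest-scoring remaining box wins
        keep.append(best)
        # Drop every remaining box that overlaps the winner too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

A scripted torch version of the same loop would replace the lists with tensors, after which torch.jit.script can ship it inside the .pt file so no Java-side NMS is needed.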

Is there a better way to process tensors than looping?

As of right now I can't find a way to process tensors as efficiently as NumPy can. I'm deploying a DeepLab model in an Android app, and the output tensor has the shape [1x21x400x400]. With NumPy I would just do np.argmax(out, axis=1), but on Android I have to loop through the entire tensor, which is painfully slow.
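
One common workaround is to avoid per-element JNI calls from Java: pull the output into a flat float[] once (e.g. via Tensor.getDataAsFloatArray()) and compute the channel-axis argmax in a single pass over that buffer; better still, fold the argmax into the TorchScript model itself (returning out.argmax(1)) so it runs in native code. A plain-Python sketch of the single-pass loop, written so it ports line-for-line to Java:

```python
def argmax_chw(flat, c, h, w):
    """Per-pixel argmax over the channel axis of a flattened [1, C, H, W] buffer.

    flat[k*h*w + y*w + x] holds the score of class k at pixel (y, x).
    Returns a flat [H*W] list of winning class indices.
    """
    plane = h * w
    out = [0] * plane
    for p in range(plane):            # one pass per pixel
        best_k, best_v = 0, flat[p]   # class 0 is the initial winner
        for k in range(1, c):
            v = flat[k * plane + p]
            if v > best_v:
                best_k, best_v = k, v
        out[p] = best_k
    return out
```

This touches each element exactly once instead of repeatedly indexing the tensor through JNI, which is usually where the slowdown comes from.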

UnsupportedNodeError: GeneratorExp aren't supported:

I'm getting this error while trying to trace a PyTorch model.
return
UnsupportedNodeError: GeneratorExp aren't supported:
at /usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py:105:31
any_param = next(self.parameters()).data
if not any_param.is_cuda or not torch.backends.cudnn.is_acceptable(any_param):
return

    # If any parameters alias, we fall back to the slower, copying code path. This is
    # a sufficient check, because overlapping parameter buffers that don't completely
    # alias would break the assumptions of the uniqueness check in
    # Module.named_parameters().
    all_weights = self._flat_weights
    unique_data_ptrs = set(p.data_ptr() for p in all_weights)
                           ~ <--- HERE
    if len(unique_data_ptrs) != len(all_weights):
        return

    with torch.cuda.device_of(any_param):
        import torch.backends.cudnn.rnn as rnn

        # NB: This is a temporary hack while we still don't have Tensor
        # bindings for ATen functions
        with torch.no_grad():

Legacy model format is not supported on mobile.

Error:

Caused by: com.facebook.jni.CppException: Legacy model format is not supported on mobile. (deserialize at /var/lib/jenkins/workspace/torch/csrc/jit/import.cpp:201)

This is a custom MobileNetV2 model, trained with PyTorch 1.4.0 and converted to a .pt file.

build.gradle

    implementation 'org.pytorch:pytorch_android:1.4.0-SNAPSHOT'
    implementation 'org.pytorch:pytorch_android_torchvision:1.4.0-SNAPSHOT'

But the strange thing is that another FaceNet model I used before works on both 1.3.0 and 1.4.0.

Could someone help me please? thanks :)

sendAppFreezeEvent error when forwarding input on a custom model

Hi,

I converted my custom model following the same procedure as the PyTorch tutorial, and there seems to be no problem loading the model on Android.

Then I feed the input (exactly the same image that is included in the assets folder of HelloWorldApp) to the loaded model using model.forward(IValue.from(inputTensor)).toTensor(), as in MainActivity.java of HelloWorldApp.

The only line I changed from the original MainActivity.java of HelloWorldApp is the one that loads the model from the assets folder, so that it loads my custom model.

An error occurred; the details are below:
2019-10-25 13:20:53.412 3461-3532/org.pytorch.helloworld E/ZrHungImpl: sendAppFreezeEvent failed!

Using Log.d, I checked that there was no problem just before feeding the input to the model.
I also searched for sendAppFreezeEvent, but there was not enough information.

Are there any suggestions?
Thanks,

HelloWorldApp: Module.load of a pre-trained model causes the app to freeze and its data size to increase

Hi,

In HelloWorldApp, when I try to load my pre-trained model, the app freezes, and when I look at the app data in Settings, it keeps increasing, up to 7 GB. It seems some operation during the load method keeps increasing the app data size.

When I put the same model in PyTorchDemo, it works fine.

What could be the cause of this?

Thanks

Worse performance on mobile when using custom model

I have a custom model that is a variation of YOLOv3. To test the results, I asserted that the input tensor on the device is the same as the one I load on the computer. The output (which has detections and classifications) gives near-identical object and class confidences. However, the locations (x, y, w, h) are slightly off. Is this expected behaviour? Do you know of anything in particular I should investigate in my model or in the trace of my model?

Force stopping app.

Downloaded the assets and tried running on an Android device.
Log:
java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.helloworld/org.pytorch.helloworld.MainActivity}: com.facebook.jni.CppException: false CHECK FAILED at ../torch/csrc/jit/import.cpp (deserialize at ../torch/csrc/jit/import.cpp:178) (no backtrace available) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2946) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3081) at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:78) at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:108) at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:68) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1831) at android.os.Handler.dispatchMessage(Handler.java:106) at android.os.Looper.loop(Looper.java:201) at android.app.ActivityThread.main(ActivityThread.java:6806) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:547) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:873) Caused by: com.facebook.jni.CppException: false CHECK FAILED at ../torch/csrc/jit/import.cpp (deserialize at ../torch/csrc/jit/import.cpp:178) (no backtrace available) at org.pytorch.Module$NativePeer.initHybrid(Native Method) at org.pytorch.Module$NativePeer.<init>(Module.java:70) at org.pytorch.Module.<init>(Module.java:25) at org.pytorch.Module.load(Module.java:21) at org.pytorch.helloworld.MainActivity.onCreate(MainActivity.java:39) at android.app.Activity.performCreate(Activity.java:7224) at android.app.Activity.performCreate(Activity.java:7213) at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1272) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2926) ... 11 more

How to create a custom model for the PyTorchDemoApp? Thanks

Hi, I want to learn how to apply a PyTorch model on the Android platform, and this android-demo-app is very useful to me.
The PyTorchDemoApp has already been deployed on my Android phone, and it runs successfully.
But I want to know how to create a custom model with my own image data.
When I copy the model.pt from HelloWorldApp, the PyTorchDemoApp crashes and tells me "Sorry, there is an error".
Can anyone tell me how to create a custom model?
Thanks very much.

Questions related to score after predictions

From each prediction, we get back a score for each result. I know that the number closest to 0 is the best-predicted result.

My questions are

  1. How is the score calculated internally?
  2. Why is the score negative?
  3. How can I get a percentage for each result based on the score?

Thanks.
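
On questions 2 and 3: scores near zero being best suggests the model's head ends in log-softmax, whose outputs are log-probabilities and therefore always at most 0; more generally, a softmax converts raw scores into probabilities you can report as percentages. A stdlib-only sketch (the exact output head of the demo model is an assumption here):

```python
import math

def softmax(scores):
    """Convert raw scores (logits or log-probabilities) into probabilities.

    Subtracting the max first is the standard numerical-stability trick;
    it changes nothing mathematically, since softmax is shift-invariant.
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Example: three raw scores -> probabilities -> percentages.
probs = softmax([-0.5, -2.0, -4.0])
percentages = [100.0 * p for p in probs]
```

The ranking of the results is unchanged by this conversion; it only rescales the scores into [0, 1] so they sum to 1.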

How can I pass more than one input to forward? (Android Studio, Java)

My model needs three input tensors, but I still can't solve this problem. It seems that IValue.from only accepts one tensor. What can I do to send three inputs to my model?
code:
final Tensor outputTensor = module.forward(IValue.from(inputTensor1,inputTensor2)).toTensor();

Error using other models in the HelloWorld demo, such as DenseNet and MobileNet

Hi, I used other torchvision models; only ResNet works normally.

W/ResourceType: No package identifier when getting name for resource number 0x00000000
D/ViewContentFactory: initViewContentFetcherClass took 13ms
I/ContentCatcher: ViewContentFetcher : ViewContentFetcher
D/ViewContentFactory: createInterceptor took 14ms
I/ContentCatcher: Interceptor : Catcher list invalid for [email protected]@215660984
Interceptor : Get featureInfo from config pick_mode
I/zygote64: Rejecting re-init on previously-failed class java.lang.Class<androidx.core.view.ViewCompat$2>: java.lang.NoClassDefFoundError: Failed resolution of: Landroid/view/View$OnUnhandledKeyEventListener;
at void androidx.core.view.ViewCompat.setBackground(android.view.View, android.graphics.drawable.Drawable) (ViewCompat.java:2559)
at void androidx.appcompat.widget.ActionBarContainer.(android.content.Context, android.util.AttributeSet) (ActionBarContainer.java:63)
at java.lang.Object java.lang.reflect.Constructor.newInstance0(java.lang.Object[]) (Constructor.java:-2)
at java.lang.Object java.lang.reflect.Constructor.newInstance(java.lang.Object[]) (Constructor.java:334)
at android.view.View android.view.LayoutInflater.createView(java.lang.String, java.lang.String, android.util.AttributeSet) (LayoutInflater.java:651)
at android.view.View android.view.LayoutInflater.createViewFromTag(android.view.View, java.lang.String, android.content.Context, android.util.AttributeSet, boolean) (LayoutInflater.java:794)
at android.view.View android.view.LayoutInflater.createViewFromTag(android.view.View, java.lang.String, android.content.Context, android.util.AttributeSet) (LayoutInflater.java:734)
at void android.view.LayoutInflater.rInflate(org.xmlpull.v1.XmlPullParser, android.view.View, android.content.Context, android.util.AttributeSet, boolean) (LayoutInflater.java:867)
at void android.view.LayoutInflater.rInflateChildren(org.xmlpull.v1.XmlPullParser, android.view.View, android.util.AttributeSet, boolean) (LayoutInflater.java:828)
at android.view.View android.view.LayoutInflater.inflate(org.xmlpull.v1.XmlPullParser, android.view.ViewGroup, boolean) (LayoutInflater.java:519)
at android.view.View android.view.LayoutInflater.inflate(int, android.view.ViewGroup, boolean) (LayoutInflater.java:427)
at android.view.View android.view.LayoutInflater.inflate(int, android.view.ViewGroup) (LayoutInflater.java:374)
at android.view.ViewGroup androidx.appcompat.app.AppCompatDelegateImpl.createSubDecor() (AppCompatDelegateImpl.java:749)
at void androidx.appcompat.app.AppCompatDelegateImpl.ensureSubDecor() (AppCompatDelegateImpl.java:659)
at void androidx.appcompat.app.AppCompatDelegateImpl.setContentView(int) (AppCompatDelegateImpl.java:552)
at void androidx.appcompat.app.AppCompatActivity.setContentView(int) (AppCompatActivity.java:161)
at void org.pytorch.helloworld.MainActivity.onCreate(android.os.Bundle) (MainActivity.java:29)
at void android.app.Activity.performCreate(android.os.Bundle, android.os.PersistableBundle) (Activity.java:7098)
at void android.app.Activity.performCreate(android.os.Bundle) (Activity.java:7089)
I/zygote64: at void android.app.Instrumentation.callActivityOnCreate(android.app.Activity, android.os.Bundle) (Instrumentation.java:1215)
at android.app.Activity android.app.ActivityThread.performLaunchActivity(android.app.ActivityThread$ActivityClientRecord, android.content.Intent) (ActivityThread.java:2770)
at void android.app.ActivityThread.handleLaunchActivity(android.app.ActivityThread$ActivityClientRecord, android.content.Intent, java.lang.String) (ActivityThread.java:2895)
at void android.app.ActivityThread.-wrap11(android.app.ActivityThread, android.app.ActivityThread$ActivityClientRecord, android.content.Intent, java.lang.String) (ActivityThread.java:-1)
at void android.app.ActivityThread$H.handleMessage(android.os.Message) (ActivityThread.java:1616)
at void android.os.Handler.dispatchMessage(android.os.Message) (Handler.java:106)
at void android.os.Looper.loop() (Looper.java:173)
at void android.app.ActivityThread.main(java.lang.String[]) (ActivityThread.java:6653)
at java.lang.Object java.lang.reflect.Method.invoke(java.lang.Object, java.lang.Object[]) (Method.java:-2)
at void com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run() (RuntimeInit.java:547)
at void com.android.internal.os.ZygoteInit.main(java.lang.String[]) (ZygoteInit.java:821)
Caused by: java.lang.ClassNotFoundException: Didn't find class "android.view.View$OnUnhandledKeyEventListener" on path: DexPathList[[zip file "/data/app/org.pytorch.helloworld-Wmr9dqSpdIXoOCTGBBfpNw==/base.apk"],

thanks!

Neural network (model.pt) internal parameter type error

I am trying to replace the neural network model
with my own model. My model works well in PyTorch, and I have converted it to TorchScript.

The following is the specific error:

`E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.abc.APP, PID: 27480
java.lang.RuntimeException: Failure delivering result ResultInfo{who=null, request=1, result=-1, data=Intent { act=inline-data (has extras) }} to activity {com.abc.APP/com.abc.APP.MainActivity}: java.lang.RuntimeException: Expected object of scalar type Int but got scalar type Float for argument #2 'mat2' in call to _th_mm
The above operation failed in interpreter.
Traceback (most recent call last):
File "C:\Users\aaa\AppData\Local\conda\conda\envs\pytorch\lib\site-packages\torch\nn\functional.py", line 1372
ret = torch.addmm(bias, input, weight.t())
else:
output = input.matmul(weight.t())
~~~~~~~~~~~~ <--- HERE
if bias is not None:
output += bias
Serialized File "code/torch/torch/nn/functional/___torch_mangle_948.py", line 13
ret = torch.addmm(bias0, input, torch.t(weight), beta=1, alpha=1)
else:
output = torch.matmul(input, torch.t(weight))
~~~~~~~~~~~~ <--- HERE
if torch.isnot(bias, None):
bias1 = unchecked_cast(Tensor, bias)

    at android.app.ActivityThread.deliverResults(ActivityThread.java:4423)
    at android.app.ActivityThread.handleSendResult(ActivityThread.java:4465)
    at android.app.servertransaction.ActivityResultItem.execute(ActivityResultItem.java:49)
    at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:108)
    at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:68)
    at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1831)
    at android.os.Handler.dispatchMessage(Handler.java:106)
    at android.os.Looper.loop(Looper.java:201)
    at android.app.ActivityThread.main(ActivityThread.java:6810)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:547)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:873)
 Caused by: java.lang.RuntimeException: Expected object of scalar type Int but got scalar type Float for argument #2 'mat2' in call to _th_mm
The above operation failed in interpreter.
Traceback (most recent call last):
  File "C:\Users\aaa\AppData\Local\conda\conda\envs\pytorch\lib\site-packages\torch\nn\functional.py", line 1372
        ret = torch.addmm(bias, input, weight.t())
    else:
        output = input.matmul(weight.t())
                 ~~~~~~~~~~~~ <--- HERE
        if bias is not None:
            output += bias
Serialized   File "code/__torch__/torch/nn/functional/___torch_mangle_948.py", line 13
    ret = torch.addmm(bias0, input, torch.t(weight), beta=1, alpha=1)
  else:
    output = torch.matmul(input, torch.t(weight))
             ~~~~~~~~~~~~ <--- HERE
    if torch.__isnot__(bias, None):
      bias1 = unchecked_cast(Tensor, bias)

    at org.pytorch.NativePeer.forward(Native Method)
    at org.pytorch.Module.forward(Module.java:37)
    at com.abc.APP.Classifier.predict(Classifier.java:60)
    at com.abc.APP.MainActivity.onActivityResult(MainActivity.java:59)
    at android.app.Activity.dispatchActivityResult(Activity.java:7590)
    at android.app.ActivityThread.deliverResults(ActivityThread.java:4416)
    	... 11 more`

Could you help me?

Are half precision models supported?

Hello, I successfully ran models on Android following these demos. I want to use model.half() and trace tensors in half precision to accelerate inference, and tracing models in half precision seems OK. However, I cannot find toHalf() in the Tensor Java class.

Is half precision supported now?

PyTorchDemoApp cannot be compiled with latest CameraX version

My app uses a newer version of CameraX, and it looks like some classes such as ImageAnalysisConfig no longer exist, some constructors were made private, etc. I'm going to need to do some work on AbstractCameraXActivity.setupCameraX() to make it work.

It would be awesome if you guys updated the demo app to work with the latest CameraX library.

How to add built AAR libraries to a project

Hi,

I've run into an issue. The PyTorch website has an intro on how to build and deploy pytorch-mobile from source (https://pytorch.org/mobile/android/#building-pytorch-android-from-source), but the Gradle part doesn't work for me.

I successfully built the AAR files, edited HelloWorldApp/app/build.gradle as the intro says, and added these AAR files to HelloWorldApp/app/libs/.

Then I ran ./gradlew installDebug --stacktrace:

> Task :app:javaPreCompileDebug FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':app:javaPreCompileDebug'.
> Could not resolve all files for configuration ':app:debugCompileClasspath'.
   > Failed to transform artifact 'pytorch_android-release.aar (:pytorch_android-release:)' to match attributes {artifactType=android-classes, org.gradle.usage=java-api}.
      > Execution failed for JetifyTransform: /root/android-demo-app/HelloWorldApp/app/libs/pytorch_android-release.aar.
         > Java heap space

* Try:
Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:javaPreCompileDebug'.
        at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:38)
        at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:73)
        at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
        at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:49)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:102)
        at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36)
        at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:49)
        at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:43)
        at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)
        at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)
        at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:336)
        at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:322)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:134)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:129)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:202)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:193)
        at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:129)
        at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
        at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
        at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
Caused by: org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration$ArtifactResolveException: Could not resolve all files for configuration ':app:debugCompileClasspath'.
        at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.rethrowFailure(DefaultConfiguration.java:1195)
        at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.access$2100(DefaultConfiguration.java:138)
        at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationFileCollection.getFiles(DefaultConfiguration.java:1170)
        at org.gradle.api.internal.file.AbstractFileCollection.iterator(AbstractFileCollection.java:72)
        at org.gradle.internal.snapshot.impl.DefaultFileSystemSnapshotter$FileCollectionLeafVisitorImpl.visitCollection(DefaultFileSystemSnapshotter.java:240)
        at org.gradle.api.internal.file.AbstractFileCollection.visitLeafCollections(AbstractFileCollection.java:233)
        at org.gradle.api.internal.file.CompositeFileCollection.visitLeafCollections(CompositeFileCollection.java:205)
        at org.gradle.internal.snapshot.impl.DefaultFileSystemSnapshotter.snapshot(DefaultFileSystemSnapshotter.java:126)
        at org.gradle.internal.fingerprint.impl.AbstractFileCollectionFingerprinter.fingerprint(AbstractFileCollectionFingerprinter.java:48)
        at org.gradle.api.internal.tasks.execution.DefaultTaskFingerprinter.fingerprintTaskFiles(DefaultTaskFingerprinter.java:46)
        at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.createExecutionState(ResolveBeforeExecutionStateTaskExecuter.java:93)
        at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionStateTaskExecuter.execute(ResolveBeforeExecutionStateTaskExecuter.java:73)
        at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:62)
        at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:108)
        at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67)
        at org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46)
        at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:94)
        at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
        at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:95)
        at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
        at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
        at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
        ... 24 more
Caused by: org.gradle.api.internal.artifacts.transform.TransformException: Failed to transform artifact 'pytorch_android-release.aar (:pytorch_android-release:)' to match attributes {artifactType=android-classes, org.gradle.usage=java-api}.
        at org.gradle.api.internal.artifacts.transform.TransformingArtifactVisitor.lambda$visitArtifact$1(TransformingArtifactVisitor.java:61)
        at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:191)
        at org.gradle.api.internal.artifacts.transform.TransformingArtifactVisitor.visitArtifact(TransformingArtifactVisitor.java:50)
        at org.gradle.api.internal.artifacts.ivyservice.resolveengine.artifact.ArtifactBackedResolvedVariant$SingleArtifactSet.visit(ArtifactBackedResolvedVariant.java:112)
        at org.gradle.api.internal.artifacts.transform.TransformCompletion.visit(TransformCompletion.java:42)
        at org.gradle.api.internal.artifacts.ivyservice.resolveengine.artifact.CompositeResolvedArtifactSet$CompositeResult.visit(CompositeResolvedArtifactSet.java:83)
        at org.gradle.api.internal.artifacts.ivyservice.resolveengine.artifact.ParallelResolveArtifactSet$VisitingSet.visit(ParallelResolveArtifactSet.java:64)
        at org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration.visitArtifacts(DefaultLenientConfiguration.java:256)
        at org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration.access$500(DefaultLenientConfiguration.java:69)
        at org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration$2.run(DefaultLenientConfiguration.java:231)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:402)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:394)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:92)
        at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
        at org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration.visitArtifactsWithBuildOperation(DefaultLenientConfiguration.java:228)
        at org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration.access$200(DefaultLenientConfiguration.java:69)
        at org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration$1.visitArtifacts(DefaultLenientConfiguration.java:133)
        at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationFileCollection.getFiles(DefaultConfiguration.java:1167)
        ... 43 more
Caused by: org.gradle.api.internal.artifacts.transform.TransformException: Execution failed for JetifyTransform: /root/android-demo-app/HelloWorldApp/app/libs/pytorch_android-release.aar.
        at org.gradle.api.internal.artifacts.transform.DefaultTransformerInvoker.lambda$invoke$1(DefaultTransformerInvoker.java:172)
        at org.gradle.internal.Try$Failure.mapFailure(Try.java:182)
        at org.gradle.api.internal.artifacts.transform.DefaultTransformerInvoker.lambda$invoke$2(DefaultTransformerInvoker.java:172)
        at org.gradle.api.internal.artifacts.transform.DefaultTransformerInvoker.fireTransformListeners(DefaultTransformerInvoker.java:219)
        at org.gradle.api.internal.artifacts.transform.DefaultTransformerInvoker.lambda$invoke$3(DefaultTransformerInvoker.java:117)
        at org.gradle.api.internal.artifacts.transform.ImmutableTransformationWorkspaceProvider.lambda$withWorkspace$0(ImmutableTransformationWorkspaceProvider.java:81)
        at org.gradle.cache.internal.LockOnDemandCrossProcessCacheAccess.withFileLock(LockOnDemandCrossProcessCacheAccess.java:90)
        at org.gradle.cache.internal.DefaultCacheAccess.withFileLock(DefaultCacheAccess.java:194)
        at org.gradle.cache.internal.DefaultPersistentDirectoryStore.withFileLock(DefaultPersistentDirectoryStore.java:170)
        at org.gradle.cache.internal.DefaultCacheFactory$ReferenceTrackingCache.withFileLock(DefaultCacheFactory.java:194)
        at org.gradle.api.internal.artifacts.transform.ImmutableTransformationWorkspaceProvider.withWorkspace(ImmutableTransformationWorkspaceProvider.java:76)
        at org.gradle.api.internal.artifacts.transform.AbstractCachingTransformationWorkspaceProvider.lambda$withWorkspace$0(AbstractCachingTransformationWorkspaceProvider.java:54)
        at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4717)
        at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3444)
        at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2193)
        at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2152)
        at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2042)
        at com.google.common.cache.LocalCache.get(LocalCache.java:3850)
        at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4712)
        at org.gradle.api.internal.artifacts.transform.AbstractCachingTransformationWorkspaceProvider.withWorkspace(AbstractCachingTransformationWorkspaceProvider.java:53)
        at org.gradle.api.internal.artifacts.transform.DefaultTransformerInvoker.invoke(DefaultTransformerInvoker.java:116)
        at org.gradle.api.internal.artifacts.transform.TransformationStep.lambda$transform$0(TransformationStep.java:104)
        at org.gradle.internal.Try$Success.flatMap(Try.java:102)
        at org.gradle.api.internal.artifacts.transform.TransformationStep.transform(TransformationStep.java:101)
        at org.gradle.api.internal.artifacts.transform.TransformationNode$InitialTransformationNode$1.transform(TransformationNode.java:159)
        at org.gradle.api.internal.artifacts.transform.TransformationNode$ArtifactTransformationStepBuildOperation.call(TransformationNode.java:229)
        at org.gradle.api.internal.artifacts.transform.TransformationNode$ArtifactTransformationStepBuildOperation.call(TransformationNode.java:212)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
        at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:102)
        at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36)
        at org.gradle.api.internal.artifacts.transform.TransformationNode$InitialTransformationNode.execute(TransformationNode.java:145)
        at org.gradle.api.internal.artifacts.transform.TransformationNodeExecutor.execute(TransformationNodeExecutor.java:37)
        ... 12 more
Caused by: java.lang.OutOfMemoryError: Java heap space
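
The final cause in this chain is the Gradle daemon running out of Java heap while Jetifier transforms the large pytorch_android-release.aar. Assuming a standard Gradle setup, a common workaround is to give the daemon more memory in the project's gradle.properties (the sizes below are illustrative, not prescriptive):

```properties
# gradle.properties (heap sizes are illustrative; size to your machine)
org.gradle.jvmargs=-Xmx4g -XX:MaxMetaspaceSize=512m
```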

Module.load is giving error "CppException: false CHECK FAILED at aten/src/ATen/Functions.h "

Need help: Module.load throws "CppException: false CHECK FAILED at aten/src/ATen/Functions.h". What is the reason? I can even reproduce this issue with a simple convolution layer.

2019-12-30 08:51:20.487 28171-28328/org.pytorch.demo E/PyTorchDemo: Error during image analysis
com.facebook.jni.CppException: false CHECK FAILED at aten/src/ATen/Functions.h (empty at aten/src/ATen/Functions.h:3535)
(no backtrace available)
    at org.pytorch.Module$NativePeer.initHybrid(Native Method)
    at org.pytorch.Module$NativePeer.<init>(Module.java:70)
    at org.pytorch.Module.<init>(Module.java:25)
    at org.pytorch.Module.load(Module.java:21)
    at org.pytorch.demo.vision.ImageClassificationActivity.analyzeImage(ImageClassificationActivity.java:166)
    at org.pytorch.demo.vision.ImageClassificationActivity.analyzeImage(ImageClassificationActivity.java:31)
    at org.pytorch.demo.vision.AbstractCameraXActivity.lambda$setupCameraX$2$AbstractCameraXActivity(AbstractCameraXActivity.java:90)
    at org.pytorch.demo.vision.-$$Lambda$AbstractCameraXActivity$t0OjLr-l_M0-_0_dUqVE4yqEYnE.analyze(Unknown Source:2)
    at androidx.camera.core.ImageAnalysisAbstractAnalyzer.analyzeImage(ImageAnalysisAbstractAnalyzer.java:57)
    at androidx.camera.core.ImageAnalysisNonBlockingAnalyzer$1.run(ImageAnalysisNonBlockingAnalyzer.java:135)
    at android.os.Handler.handleCallback(Handler.java:907)
    at android.os.Handler.dispatchMessage(Handler.java:105)
    at android.os.Looper.loop(Looper.java:216)
    at android.os.HandlerThread.run(HandlerThread.java:65)

The compiled android module is attached below.

android_torch_test.zip

Script to reproduce this issue:

```python
import torch
import torch.nn as nn
from torchsummary import summary

class MyBlock(nn.Module):
    def __init__(self, ninput, noutput):
        super().__init__()
        self.conv = nn.Conv2d(ninput, noutput - ninput, (3, 3), stride=2, padding=1, bias=True)

    def forward(self, input):
        return self.conv(input)

model = MyBlock(3, 16).cuda()
model.eval()

print("-----MODEL---------")
print(model)

example_input = torch.rand(1, 3, 512, 1024).cuda()
print("Input Shape = ", example_input.shape)
print("-----MODEL SUMMARY---------")
print(summary(model, example_input.shape[1:]))

traced_script_module = torch.jit.trace(model, example_input)
traced_script_module.save("android_torch_test.pt")
```
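
Note that the script above traces the model on CUDA; a TorchScript file saved from GPU tensors cannot be loaded by the CPU-only Android runtime, which is a common cause of this kind of load-time CHECK failure. A minimal sketch of tracing on the CPU instead (the module and shapes here are illustrative, not the author's model):

```python
import torch
import torch.nn as nn

# Illustrative module; substitute your own model here.
model = nn.Conv2d(3, 13, (3, 3), stride=2, padding=1).eval()

# Keep both the model and the example input on the CPU before tracing,
# so the saved TorchScript file contains no CUDA-specific tensors.
model = model.cpu()
example_input = torch.rand(1, 3, 512, 1024)

traced = torch.jit.trace(model, example_input)
traced.save("android_torch_test.pt")

# Sanity check: reload the saved module and run it on the CPU.
reloaded = torch.jit.load("android_torch_test.pt")
out = reloaded(example_input)
print(out.shape)
```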

PyTorch version: (screenshot in the original issue)

I tried renaming "android_torch_test.pt" in several ways, but Module.load still fails with a model load error.
However, any of the models already present in the assets folder load fine.

app crashed with my customized segment model

I ran the PyTorchDemoApp with my customized segmentation model and the app crashed with the following error. Thanks for any suggestions.

2019-10-30 16:54:00.469 13290-13342/org.pytorch.demo E/PyTorchDemo: Error during image analysis
java.lang.IllegalStateException: Expected IValue type 2, actual type 7
at org.pytorch.IValue.preconditionType(IValue.java:307)
at org.pytorch.IValue.toTensor(IValue.java:240)
at org.pytorch.demo.vision.ImageClassificationActivity.analyzeImage(ImageClassificationActivity.java:181)
at org.pytorch.demo.vision.ImageClassificationActivity.analyzeImage(ImageClassificationActivity.java:31)
at org.pytorch.demo.vision.AbstractCameraXActivity.lambda$setupCameraX$2$AbstractCameraXActivity(AbstractCameraXActivity.java:90)
at org.pytorch.demo.vision.-$$Lambda$AbstractCameraXActivity$t0OjLr-l_M0-_0_dUqVE4yqEYnE.analyze(lambda)
at androidx.camera.core.ImageAnalysisAbstractAnalyzer.analyzeImage(ImageAnalysisAbstractAnalyzer.java:57)
at androidx.camera.core.ImageAnalysisNonBlockingAnalyzer$1.run(ImageAnalysisNonBlockingAnalyzer.java:135)
at android.os.Handler.handleCallback(Handler.java:751)
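
The IValue type mismatch above usually means the model's forward() returns something other than a single Tensor (for instance a tuple), while the demo's analyzeImage calls toTensor() on the result. A hedged sketch of how to spot this at export time (the model below is an illustrative stand-in, not the reporter's segmentation model):

```python
import torch
import torch.nn as nn

class SegModel(nn.Module):
    """Illustrative model whose forward returns a tuple, not a single Tensor."""
    def forward(self, x):
        logits = x * 2
        aux = x + 1
        return logits, aux

traced = torch.jit.trace(SegModel().eval(), torch.rand(1, 3, 4, 4))
out = traced(torch.rand(1, 3, 4, 4))

# If this prints a tuple, the Java side must call output.toTuple()
# (then toTensor() on each element) rather than output.toTensor().
print(type(out))
```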

How to create a new nlp model?

Thanks for the project.
The example runs successfully on Android.
However, I want to create my own model for other NLP tasks.
Can you show me how to create the NLP model, or share the source used to create model-reddit16-f140225004_2.pt1?

false CHECK FAILED at ../aten/src/ATen/core/function_schema_inl.h

I've converted a custom PyTorch model and am trying to run it on Android, but it raises an error when calling module.forward():
Caused by: com.facebook.jni.CppException: false CHECK FAILED at ../aten/src/ATen/core/function_schema_inl.h (checkAndNormalizeInputs at ../aten/src/ATen/core/function_schema_inl.h:270)
(no backtrace available)
What does this error mean, and how can I work around it?
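
The checkAndNormalizeInputs failure is raised when the arguments passed to forward() do not match the traced module's schema (wrong argument count or wrong types). The mismatch can be reproduced and checked on the desktop before deploying; the module below is illustrative:

```python
import torch
import torch.nn as nn

# Illustrative module traced with a single input tensor.
model = nn.Linear(4, 2).eval()
traced = torch.jit.trace(model, torch.rand(1, 4))

# Matching the argument layout used at trace time succeeds.
out = traced(torch.rand(1, 4))

# A mismatched call (here: an extra argument) fails the schema check,
# the same class of error the Android runtime reports.
try:
    traced(torch.rand(1, 4), torch.rand(1, 4))
    schema_ok = True
except (RuntimeError, TypeError):
    schema_ok = False
print(out.shape, schema_ok)
```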

org.pytorch:pytorch_android:1.4.0-SNAPSHOT crash

Last week my app ran fine with pytorch_android:1.4.0-SNAPSHOT, but after I rebuilt the app it started hitting a native crash.

#00 pc 00000000001b4df8 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #01 pc 0000000000fa0e3c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #02 pc 0000000000f946cc /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #03 pc 0000000000f5fdfc /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #04 pc 0000000000f6969c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #05 pc 0000000000f9ce84 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #06 pc 0000000000f9cc30 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #07 pc 0000000000f93718 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #08 pc 0000000000f5fc40 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #09 pc 0000000000f86f1c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #10 pc 0000000000f5de90 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #11 pc 0000000000f57834 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #12 pc 0000000000f54114 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #13 pc 0000000000f533d4 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG 
: #14 pc 0000000000f52e6c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #15 pc 0000000001025ba4 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #16 pc 0000000000f4f360 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.442 14942 14942 F DEBUG : #17 pc 0000000000d9fbd0 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #18 pc 0000000000d9b34c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #19 pc 0000000000d97f60 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #20 pc 0000000000da9c30 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #21 pc 0000000000fbf1b0 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #22 pc 0000000000d9f220 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #23 pc 0000000000d9b34c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #24 pc 0000000000d97f60 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #25 pc 0000000000da9c30 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #26 pc 0000000000fbf1b0 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #27 pc 0000000000d9f220 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F 
DEBUG : #28 pc 0000000000d9b34c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #29 pc 0000000000d97f60 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #30 pc 0000000000d98488 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #31 pc 0000000000dae8e0 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #32 pc 0000000000d806c8 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #33 pc 0000000000d7abcc /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #34 pc 0000000000d79be8 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #35 pc 0000000000d79778 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #36 pc 0000000000dacb9c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #37 pc 0000000000dae580 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #38 pc 0000000000dad9c0 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #39 pc 0000000000dae27c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #40 pc 0000000000dae3d8 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 01-04 10:14:40.443 14942 14942 F DEBUG : #41 pc 000000000011593c /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so 
(pytorch_jni::PytorchJni::PytorchJni(facebook::jni::alias_ref<_jstring*>)+104) 01-04 10:14:40.443 14942 14942 F DEBUG : #42 pc 0000000000115704 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so (_ZN8facebook3jni11HybridClassIN11pytorch_jni10PytorchJniENS0_6detail15BaseHybridClassEE15makeCxxInstanceIJRNS0_9alias_refIP8_jstringEEEEENS0_16basic_strong_refINS4_10HybridDataENS0_23LocalReferenceAllocatorEEEDpOT_+64) 01-04 10:14:40.443 14942 14942 F DEBUG : #43 pc 0000000000114298 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so (pytorch_jni::PytorchJni::initHybrid(facebook::jni::alias_ref<_jclass*>, facebook::jni::alias_ref<_jstring*>)+44) 01-04 10:14:40.443 14942 14942 F DEBUG : #44 pc 00000000001163cc /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/lib/arm64/libpytorch_jni.so (_ZN8facebook3jni6detail15FunctionWrapperIPFNS0_16basic_strong_refIPNS1_8JTypeForINS1_10HybridDataENS0_7JObjectEvE11_javaobjectENS0_23LocalReferenceAllocatorEEENS0_9alias_refIP7_jclassEENSC_IP8_jstringEEEXadL_ZN11pytorch_jni10PytorchJni10initHybridESF_SI_EESE_SB_JSI_EE4callEP7_JNIEnvP8_jobjectSH_+68) 01-04 10:14:40.443 14942 14942 F DEBUG : #45 pc 0000000000020ddc /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/oat/arm64/base.odex (offset 0x1e000) (org.pytorch.NativePeer.initHybrid+172) 01-04 10:14:40.443 14942 14942 F DEBUG : #46 pc 000000000055564c /system/lib64/libart.so (art_quick_invoke_static_stub+604) 01-04 10:14:40.444 14942 14942 F DEBUG : #47 pc 00000000000cf714 /system/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+232) 01-04 10:14:40.444 14942 14942 F DEBUG : #48 pc 000000000027f2e0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+344) 01-04 10:14:40.444 14942 14942 F DEBUG : #49 pc 00000000002792e8 /system/lib64/libart.so (bool 
art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+968) 01-04 10:14:40.444 14942 14942 F DEBUG : #50 pc 0000000000525e5c /system/lib64/libart.so (MterpInvokeStatic+204) 01-04 10:14:40.444 14942 14942 F DEBUG : #51 pc 0000000000547a94 /system/lib64/libart.so (ExecuteMterpImpl+14612) 01-04 10:14:40.444 14942 14942 F DEBUG : #52 pc 00000000003b6050 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/oat/arm64/base.vdex (org.pytorch.NativePeer.<init>+6) 01-04 10:14:40.444 14942 14942 F DEBUG : #53 pc 0000000000252fec /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488) 01-04 10:14:40.444 14942 14942 F DEBUG : #54 pc 0000000000258ae0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216) 01-04 10:14:40.444 14942 14942 F DEBUG : #55 pc 00000000002792cc /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940) 01-04 10:14:40.444 14942 14942 F DEBUG : #56 pc 0000000000525c98 /system/lib64/libart.so (MterpInvokeDirect+296) 01-04 10:14:40.444 14942 14942 F DEBUG : #57 pc 0000000000547a14 /system/lib64/libart.so (ExecuteMterpImpl+14484) 01-04 10:14:40.444 14942 14942 F DEBUG : #58 pc 00000000003b6008 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/oat/arm64/base.vdex (org.pytorch.Module.load+36) 01-04 10:14:40.444 14942 14942 F DEBUG : #59 pc 0000000000252fec /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488) 01-04 10:14:40.444 14942 14942 F DEBUG : #60 pc 0000000000258ae0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, 
art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216) 01-04 10:14:40.444 14942 14942 F DEBUG : #61 pc 00000000002792cc /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940) 01-04 10:14:40.444 14942 14942 F DEBUG : #62 pc 0000000000525e5c /system/lib64/libart.so (MterpInvokeStatic+204) 01-04 10:14:40.444 14942 14942 F DEBUG : #63 pc 0000000000547a94 /system/lib64/libart.so (ExecuteMterpImpl+14612) 01-04 10:14:40.444 14942 14942 F DEBUG : #64 pc 00000000001117e0 /data/app/com.tct.hermes-gJL2ify1IrxkQxcOJCg9kg==/oat/arm64/base.vdex (com.tct.hermes.ai.pytorch.PytorchTrainingService.onCreate+48) 01-04 10:14:40.444 14942 14942 F DEBUG : #65 pc 0000000000252fec /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488) 01-04 10:14:40.444 14942 14942 F DEBUG : #66 pc 0000000000258ae0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216) 01-04 10:14:40.444 14942 14942 F DEBUG : #67 pc 00000000002792cc /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940) 01-04 10:14:40.444 14942 14942 F DEBUG : #68 pc 0000000000524958 /system/lib64/libart.so (MterpInvokeVirtual+588) 01-04 10:14:40.444 14942 14942 F DEBUG : #69 pc 0000000000547914 /system/lib64/libart.so (ExecuteMterpImpl+14228) 01-04 10:14:40.444 14942 14942 F DEBUG : #70 pc 00000000003976f8 /system/framework/boot-framework.vdex (android.app.ActivityThread.handleCreateService+206) 01-04 10:14:40.444 14942 14942 F DEBUG : #71 pc 0000000000252fec /system/lib64/libart.so 
(_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488) 01-04 10:14:40.444 14942 14942 F DEBUG : #72 pc 0000000000258ae0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216) 01-04 10:14:40.444 14942 14942 F DEBUG : #73 pc 00000000002792cc /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940) 01-04 10:14:40.444 14942 14942 F DEBUG : #74 pc 0000000000525c98 /system/lib64/libart.so (MterpInvokeDirect+296) 01-04 10:14:40.445 14942 14942 F DEBUG : #75 pc 0000000000547a14 /system/lib64/libart.so (ExecuteMterpImpl+14484) 01-04 10:14:40.445 14942 14942 F DEBUG : #76 pc 00000000004cc0d0 /system/framework/boot-framework.vdex (android.app.ActivityThread.access$1300) 01-04 10:14:40.445 14942 14942 F DEBUG : #77 pc 0000000000252fec /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488) 01-04 10:14:40.445 14942 14942 F DEBUG : #78 pc 0000000000258ae0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216) 01-04 10:14:40.445 14942 14942 F DEBUG : #79 pc 00000000002792cc /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940) 01-04 10:14:40.445 14942 14942 F DEBUG : #80 pc 0000000000525e5c /system/lib64/libart.so (MterpInvokeStatic+204) 01-04 10:14:40.445 14942 14942 F DEBUG : #81 pc 0000000000547a94 /system/lib64/libart.so (ExecuteMterpImpl+14612) 01-04 10:14:40.445 14942 14942 F DEBUG : #82 pc 00000000003930ae /system/framework/boot-framework.vdex 
(android.app.ActivityThread$H.handleMessage+1382)
01-04 10:14:40.445 14942 14942 F DEBUG :
    #83 pc 0000000000252fec /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488)
    #84 pc 0000000000258ae0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216)
    #85 pc 00000000002792cc /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940)
    #86 pc 0000000000524958 /system/lib64/libart.so (MterpInvokeVirtual+588)
    #87 pc 0000000000547914 /system/lib64/libart.so (ExecuteMterpImpl+14228)
    #88 pc 0000000000b028a6 /system/framework/boot-framework.vdex (android.os.Handler.dispatchMessage+42)
    #89 pc 0000000000252fec /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488)
    #90 pc 0000000000258ae0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216)
    #91 pc 00000000002792cc /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940)
    #92 pc 0000000000524958 /system/lib64/libart.so (MterpInvokeVirtual+588)
    #93 pc 0000000000547914 /system/lib64/libart.so (ExecuteMterpImpl+14228)
    #94 pc 0000000000b0998c /system/framework/boot-framework.vdex (android.os.Looper.loop+404)
    #95 pc 0000000000252fec /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488)
    #96 pc 0000000000258ae0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToInterpreterBridge(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*, art::JValue*)+216)
    #97 pc 00000000002792cc /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+940)
    #98 pc 0000000000525e5c /system/lib64/libart.so (MterpInvokeStatic+204)
    #99 pc 0000000000547a94 /system/lib64/libart.so (ExecuteMterpImpl+14612)
    #100 pc 000000000039961a /system/framework/boot-framework.vdex (android.app.ActivityThread.main+214)
    #101 pc 0000000000252fec /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488)
    #102 pc 00000000005151ec /system/lib64/libart.so (artQuickToInterpreterBridge+1020)
    #103 pc 000000000055e4fc /system/lib64/libart.so (art_quick_to_interpreter_bridge+92)
    #104 pc 000000000055564c /system/lib64/libart.so (art_quick_invoke_static_stub+604)
    #105 pc 00000000000cf714 /system/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+232)
    #106 pc 000000000045ca48 /system/lib64/libart.so (art::(anonymous namespace)::InvokeWithArgArray(art::ScopedObjectAccessAlreadyRunnable const&, art::ArtMethod*, art::(anonymous namespace)::ArgArray*, art::JValue*, char const*)+104)
    #107 pc 000000000045e49c /system/lib64/libart.so (art::InvokeMethod(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jobject*, _jobject*, unsigned long)+1440)
    #108 pc 00000000003ee3dc /system/lib64/libart.so (art::Method_invoke(_JNIEnv*, _jobject*, _jobject*, _jobjectArray*)+52)
    #109 pc 000000000011e6d4 /system/framework/arm64/boot-core-oj.oat (offset 0x114000) (java.lang.Class.getDeclaredMethodInternal [DEDUPED]+180)
    #110 pc 0000000000555388 /system/lib64/libart.so (art_quick_invoke_stub+584)
    #111 pc 00000000000cf6f4 /system/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+200)
    #112 pc 000000000027f2e0 /system/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+344)
    #113 pc 00000000002792e8 /system/lib64/libart.so (bool art::interpreter::DoCall<false, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+968)
    #114 pc 0000000000524958 /system/lib64/libart.so (MterpInvokeVirtual+588)
    #115 pc 0000000000547914 /system/lib64/libart.so (ExecuteMterpImpl+14228)
    #116 pc 0000000000c2c0d6 /system/framework/boot-framework.vdex (com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run+22)
    #117 pc 0000000000252fec /system/lib64/libart.so (_ZN3art11interpreterL7ExecuteEPNS_6ThreadERKNS_20CodeItemDataAccessorERNS_11ShadowFrameENS_6JValueEb.llvm.3126947107+488)
    #118 pc 00000000005151ec /system/lib64/libart.so (artQuickToInterpreterBridge+1020)
    #119 pc 000000000055e4fc /system/lib64/libart.so (art_quick_to_interpreter_bridge+92)
    #120 pc 0000000000bf66e0 /system/framework/arm64/boot-framework.oat (offset 0x3d1000) (com.android.internal.os.ZygoteInit.main+3088)
    #121 pc 000000000055564c /system/lib64/libart.so (art_quick_invoke_static_stub+604)
    #122 pc 00000000000cf714 /system/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+232)
    #123 pc 000000000045ca48 /system/lib64/libart.so (art::(anonymous namespace)::InvokeWithArgArray(art::ScopedObjectAccessAlreadyRunnable const&, art::ArtMethod*, art::(anonymous namespace)::ArgArray*, art::JValue*, char const*)+104)
    #124 pc 000000000045c6a8 /system/lib64/libart.so (art::InvokeWithVarArgs(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jmethodID*, std::__va_list)+424)
    #125 pc 0000000000361b78 /system/lib64/libart.so (art::JNI::CallStaticVoidMethodV(_JNIEnv*, _jclass*, _jmethodID*, std::__va_list)+652)
    #126 pc 00000000000b1da4 /system/lib64/libandroid_runtime.so (_JNIEnv::CallStaticVoidMethod(_jclass*, _jmethodID*, ...)+116)
    #127 pc 00000000000b47c8 /system/lib64/libandroid_runtime.so (android::AndroidRuntime::start(char const*, android::Vector<android::String8> const&, bool)+752)
    #128 pc 000000000000251c /system/bin/app_process64 (main+2000)
    #129 pc 00000000000c861c /system/lib64/libc.so (__libc_init+88)

Failed to Load Custom Model in Android

Hello,

I'm trying to deploy a custom model built in Python to Android using PyTorch Mobile. I used torch.jit.trace to trace and then save the model. The saved model works correctly when I load it in both C++ and Python. However, when using pytorch_android, the following error occurs while loading the module (inside the Module.load(modelFileAbsolutePath) function). The only thing different from the PyTorchDemoApp is the model file path, and I've confirmed that the path is correct and the file exists. I'm using PyTorch 1.3.0 in both Python and Android.

Any suggestions on why the model loading would fail? Would this be related to this issue? Any suggestions on how to get more debugging information? Thank you!
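For reference, this is a minimal sketch of the export path described above (TinyNet and model.pt are placeholder names, not the actual model), run with the same PyTorch version that the app's pytorch_android dependency targets:

```python
import torch
import torch.nn as nn

# Placeholder model; any traceable nn.Module is exported the same way.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = x.mean(dim=(2, 3))  # global average pool
        return self.fc(x)

model = TinyNet().eval()  # trace in eval mode

# Trace with a dummy input of the shape the app will feed in.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Save the TorchScript module; this file is what Module.load() opens on Android.
traced.save("model.pt")

# Sanity check: reloading should reproduce the traced outputs.
reloaded = torch.jit.load("model.pt")
assert torch.allclose(traced(example), reloaded(example))
```

In my case the reloaded model verifies fine on desktop, so the failure only shows up inside pytorch_android.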

2019-10-15 21:25:36.585 24959-25880/org.pytorch.demo E/PyTorchDemo: Error during image analysis
    com.facebook.jni.CppException: false CHECK FAILED at ../c10/core/Backend.h (tensorTypeIdToBackend at ../c10/core/Backend.h:106)
    (no backtrace available)
        at org.pytorch.Module$NativePeer.initHybrid(Native Method)
        at org.pytorch.Module$NativePeer.<init>(Module.java:70)
        at org.pytorch.Module.<init>(Module.java:25)
        at org.pytorch.Module.load(Module.java:21)
        at org.pytorch.demo.vision.ImageClassificationActivity.analyzeImage(ImageClassificationActivity.java:168)
        at org.pytorch.demo.vision.ImageClassificationActivity.analyzeImage(ImageClassificationActivity.java:31)
        at org.pytorch.demo.vision.AbstractCameraXActivity.lambda$setupCameraX$2$AbstractCameraXActivity(AbstractCameraXActivity.java:90)
        at org.pytorch.demo.vision.-$$Lambda$AbstractCameraXActivity$t0OjLr-l_M0-_0_dUqVE4yqEYnE.analyze(lambda)
        at androidx.camera.core.ImageAnalysisAbstractAnalyzer.analyzeImage(ImageAnalysisAbstractAnalyzer.java:57)
        at androidx.camera.core.ImageAnalysisNonBlockingAnalyzer$1.run(ImageAnalysisNonBlockingAnalyzer.java:135)
        at android.os.Handler.handleCallback(Handler.java:751)
        at android.os.Handler.dispatchMessage(Handler.java:95)
        at android.os.Looper.loop(Looper.java:154)
        at android.os.HandlerThread.run(HandlerThread.java:61)

Running a PyTorch model on an Android phone

I've traced a PyTorch model. When I run the model on Android, I get this error in my Android Studio logcat.
Can anyone help me with this? What does the error mean?

Caused by: java.lang.RuntimeException: NNPACK SpatialConvolution_updateOutput failed
The above operation failed in interpreter, with the following stack trace:
at code/torch.py:364:26
_98 = torch.tanh(_94)
_99 = torch.sigmoid(_95)
hx4 = torch.add(torch.mul(_97, hx1), torch.mul(_96, _98), alpha=1)
input15 = torch.mul(_99, torch.tanh(hx4))
query = torch.dropout(input15, 0.10000000000000001, False)
_100 = [torch.unsqueeze(_84, 1), torch.unsqueeze(_86, 1)]
input16 = torch.cat(_100, 1)
input17 = torch.unsqueeze(query, 1)
processed_query = torch.matmul(input17, torch.t(weight5))
processed_attention = torch._convolution(input16, _35, None, [1], [15], [1], False, [0], 1, False, False, True)
~~~~~~~~~~~~~~~~~~ <--- HERE
input18 = torch.transpose(processed_attention, 1, 2)
processed_attention_weights = torch.matmul(input18, torch.t(weight8))
_101 = torch.add(processed_query, processed_attention_weights, alpha=1)
_102 = torch.add(_101, processed_memory, alpha=1)
input19 = torch.tanh(_102)
energies = torch.matmul(input19, torch.t(weight7))
input20 = torch.squeeze(energies, -1)
attention_weights = torch.softmax(input20, 1, None)
_103 = torch.unsqueeze(attention_weights, 1)

Multiple outputs

This line in the example expects a single vector as network output:

final float[] scores = outputTensor.getDataAsFloatArray();

Is there a function that accepts multiple outputs from the model?
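If the model itself returns a tuple, tracing preserves all outputs; a minimal Python-side sketch (TwoHead is a hypothetical two-headed model, not from the demo app):

```python
import torch
import torch.nn as nn

# Hypothetical model whose forward returns a tuple of tensors.
class TwoHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(4, 8)
        self.head_a = nn.Linear(8, 3)
        self.head_b = nn.Linear(8, 5)

    def forward(self, x):
        h = torch.relu(self.backbone(x))
        return self.head_a(h), self.head_b(h)  # two outputs

model = TwoHead().eval()
example = torch.rand(1, 4)
traced = torch.jit.trace(model, example)  # tuple outputs are preserved

scores_a, scores_b = traced(example)  # shapes (1, 3) and (1, 5)
```

On the Android side, Module.forward(...) returns an IValue rather than a bare Tensor; as far as I can tell from the pytorch_android API, a tuple-returning model can be unpacked with IValue.toTuple(), then each element converted with toTensor().getDataAsFloatArray().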

Hello World App performance

Hi

I noticed that the PyTorch Demo App runs almost 5x faster than the Hello World App on an old Motorola G4.
For resnet18:

  • Hello World App --> 2650 ms (PyTorch 1.5) / 2750 ms (PyTorch 1.4).
  • PyTorch Demo App --> 700 ms (PyTorch 1.5) / 740 ms (PyTorch 1.4).

I saw that the wolf image resolution is 400x400, while the camera feed is 224x224. Is this the reason for the slowdown? Is there another bottleneck? I was using the Hello World App as a reference, but I'm no longer sure whether I should use the PyTorch Demo App instead.

Finally, I was comparing performance between the official TFLite app and PyTorch Mobile, and I had ruled out PyTorch Mobile because of its performance in the Hello World App. However, retesting with the PyTorch Demo App, I get the same speed as TFLite with much less effort! I had a bad experience deploying a model with TFLite. So you could note this performance drop of the Hello World App in the official mobile docs to avoid user confusion, especially for people who want to benchmark PyTorch. In my case, I almost discarded PyTorch Mobile because of this misunderstanding.

EDIT: for those looking for performance numbers, I tested the MobileNet model from torchvision and got 530 ms, vs. 460 ms for the TFLite float model in the TFLite example app.
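To check how much of the gap input resolution alone can explain, one can time the same forward pass at both sizes; a rough desktop sketch (the stand-in network is illustrative, and only the ratio between the two timings is meaningful, not the absolute numbers):

```python
import time
import torch
import torch.nn as nn

# Stand-in convolutional stack; swap in a real model (e.g. resnet18) for a real test.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1000),
).eval()

def bench(size, iters=10):
    """Average seconds per forward pass at a square input of the given size."""
    x = torch.rand(1, 3, size, size)
    with torch.no_grad():
        net(x)  # warm-up
        start = time.perf_counter()
        for _ in range(iters):
            net(x)
    return (time.perf_counter() - start) / iters

t224, t400 = bench(224), bench(400)
print(f"224x224: {t224 * 1000:.1f} ms, 400x400: {t400 * 1000:.1f} ms")
```

A 400x400 input has about 3.2x the pixels of a 224x224 one, so convolution cost grows roughly in proportion, which would account for a large part of the Hello World vs. Demo App difference.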
