
onnxruntime-iot-edge's Introduction

page_type: sample
languages: python
products: azure-machine-learning-service, azure-iot-edge, azure-storage

ONNX Runtime with Azure IoT Edge for acceleration of AI on the edge

This tutorial is a reference implementation for executing ONNX models across different device platforms using the ONNX Runtime inference engine. ONNX Runtime is an open source inference engine for ONNX models. ONNX Runtime Execution Providers (EPs) enable the execution of any ONNX model through a single set of inference APIs that provide access to the best hardware acceleration available.

In simple terms, developers no longer need to worry about the nuances of hardware-specific custom libraries to accelerate their machine learning models. This tutorial demonstrates that by enabling the same code to run on different hardware platforms, using their respective AI acceleration libraries for optimized execution of the ONNX model.
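
To make that concrete, here is a minimal sketch (the model path is hypothetical, and it assumes an onnxruntime build recent enough to accept the providers argument): the same session code runs unchanged on every platform, with the runtime picking the best execution provider that is actually available.

    import onnxruntime as rt

    # Ask this onnxruntime build which accelerators it was compiled with.
    available = rt.get_available_providers()

    # Keep only the EPs present in this build, in order of preference,
    # so the same script runs on Jetson, OpenVINO hardware, or plain CPU.
    preferred = [p for p in ("TensorrtExecutionProvider",
                             "OpenVINOExecutionProvider",
                             "CUDAExecutionProvider",
                             "CPUExecutionProvider") if p in available]

    session = rt.InferenceSession("model.onnx", providers=preferred)  # hypothetical path
    input_name = session.get_inputs()[0].name
    # outputs = session.run(None, {input_name: input_tensor})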

ONNX Runtime on NVIDIA Jetson Platform is the tutorial example for deploying pre-trained ONNX models on the NVIDIA Jetson Nano using Azure IoT Edge.

ONNX Runtime with Intel OpenVINO is the tutorial example for deploying pre-trained ONNX models with ONNX Runtime, using the OpenVINO SDK for acceleration of the model.

Using ONNX Runtime with Azure Machine Learning is the example that uses the Azure Machine Learning service to deploy the model to an IoT Edge device.

Contribution

This project was created with active contributions from Abhinav Ayalur, Angela Martin, Kaden Dippe, Kelly Lin, Lindsey Cleary, and Priscilla Lui.

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.


onnxruntime-iot-edge's Issues

Using customvision onnx on Nvidia (TensorRT/Deepstream)

I am experimenting with getting a Custom Vision model (exported as ONNX) to run on an NVIDIA device using the DeepStream SDK (TensorRT engine for accelerating ONNX).

  1. I was able to follow the steps in this repository to train a model using CustomVision.AI and run it on an NVIDIA device.
    This works great for an object-detection type of model.

  2. When I use a classification model trained with Custom Vision and exported as ONNX, I get several warnings during conversion:
    [W1]:Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    [W2]:Calling isShapeTensor before the entire network is constructed may result in an inaccurate result.

  3. Even though the TensorRT model gets generated, it's not working and I get incorrect results (possibly due to the weights being cast down).

So I wanted to check: has Microsoft tried this setup? Any information on how Custom Vision uses ONNX internally?
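
For anyone debugging the first warning, here is a minimal sketch using the onnx Python package (the file name is hypothetical) that lists which initializers are stored as INT64 and would therefore be cast down by TensorRT:

    import onnx

    model = onnx.load("customvision.onnx")  # hypothetical exported model

    # TensorRT has no native INT64, so these tensors are the ones the
    # converter will attempt to cast down to INT32.
    for init in model.graph.initializer:
        if init.data_type == onnx.TensorProto.INT64:
            print("INT64 initializer:", init.name, list(init.dims))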

CustomVision onnxruntime openvino

I have the following use case:

  1. Use customvision.ai to train a model.
  2. Export the model as ONNX. Note that customvision.ai supports ONNX export for Windows.
  3. Convert the ONNX model to OpenVINO, OR use OpenVINO's ONNX runtime.

Is this possible? Would customvision.ai's ONNX model be able to run on a Ubuntu OpenVINO installation? Or do I have to use something like Azure ML to write/train an ONNX-compatible model?
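
For step 3, using OpenVINO's ONNX Runtime EP on Ubuntu looks roughly like the sketch below, assuming an onnxruntime build that includes the OpenVINO execution provider (e.g. the onnxruntime-openvino package) and a hypothetical model file:

    import onnxruntime as rt

    # Should list "OpenVINOExecutionProvider" if the build includes it.
    print(rt.get_available_providers())

    session = rt.InferenceSession(
        "customvision.onnx",  # hypothetical model exported from customvision.ai
        providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    )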

Confusion in local blob storage steps

Hi,
I am referring to the text in https://github.com/Azure-Samples/onnxruntime-iot-edge/blob/master/README-ONNXRUNTIME-arm64.md#cloud-storage . I am following the steps, but I am confused about which name and connection string to use where. There seem to be two sets of connections (cloud blob + local blob). These are the 4 steps where I'm getting confused. Please help clarify.

  • Change the cloudStorageConnectionString variable to your cloud storage connection string where it has "". You can find the connection string in the portal, in your storage account under the Access Keys tab.

  • Change the variable LOCAL_STORAGE_ACCOUNT_NAME to the container you created in your storage account during phase one (i.e. storagetestlocal).

  • Change the variable LOCAL_STORAGE_ACCOUNT_KEY to your generated local storage account key. You can use this generator here.

  • In the InferenceModule directory, in main.py, adjust the variable block_blob_service to hold the connection string for the local blob storage account. You can find information about configuring connection strings here, or just replace the given < > with what is required; a rough sketch follows this list.
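
As a rough sketch of how the four values fit together, assuming the legacy azure-storage SDK that exposes BlockBlobService and the default local blob module name azureblobstorageoniotedge on port 11002 (both assumptions; match them to your deployment):

    from azure.storage.blob import BlockBlobService  # azure-storage-blob <= 2.1

    LOCAL_STORAGE_ACCOUNT_NAME = "storagetestlocal"       # local account from phase one
    LOCAL_STORAGE_ACCOUNT_KEY = "<generated-base64-key>"  # from the key generator

    # Assumed connection-string shape for the local "Azure Blob Storage on
    # IoT Edge" module: the endpoint host is the module name, port 11002.
    local_connection_string = (
        "DefaultEndpointsProtocol=http;"
        f"BlobEndpoint=http://azureblobstorageoniotedge:11002/{LOCAL_STORAGE_ACCOUNT_NAME};"
        f"AccountName={LOCAL_STORAGE_ACCOUNT_NAME};"
        f"AccountKey={LOCAL_STORAGE_ACCOUNT_KEY};"
    )

    block_blob_service = BlockBlobService(connection_string=local_connection_string)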

Unable to get any object detected

I am standing in front of the camera and expecting 1 object to be detected, but no object is detected.

Here are some fundamental design suggestions to make this sample better:

  • Please add display support; without it, it is hard or impossible to debug issues like the one above.
  • Stop sending messages to IoT Hub when no object is detected.
  • Send the results as a JSON object rather than a string, to allow easy checks (see the sketch after this list).
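
On the third point, a minimal sketch of a structured payload (the send_to_hub helper is hypothetical, standing in for the sample's IoT Hub manager):

    import json

    def build_result_message(camera_id, timestamp, detections):
        """Package results as JSON so consumers can check fields, not parse strings."""
        return json.dumps({
            "camera": camera_id,
            "timestamp": timestamp,
            "num_objects": len(detections),
            "detections": detections,
        })

    detections = []  # e.g. [{"label": "person", "confidence": 0.92}]
    if detections:  # skip the send entirely when nothing was detected
        payload = build_result_message("cam1", "05:45:02", detections)
        # send_to_hub(payload)  # hypothetical wrapper around the IoT Hub client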

Logs from the inference module:

PROCESSED 1 IN 0.18634247779846191 s
cam1 Results @05:45:02
172.18.0.4 - - [11/Sep/2019 12:45:02] "POST / HTTP/1.1" 200 -
Confirmation[0] received for message with result = OK
INFERENCE TIME (PURE ONNXRUNTIME) 47.68824577331543 ms
POST PROCESSING TIME 116.80459976196289 ms
TOTAL INFERENCE TIME 164.92486000061035 ms
NUMBER OBJECTS DETECTED: 0
PROCESSED 1 IN 0.18532228469848633 s
cam1 Results @05:45:02
172.18.0.4 - - [11/Sep/2019 12:45:02] "POST / HTTP/1.1" 200 -
Confirmation[0] received for message with result = OK
INFERENCE TIME (PURE ONNXRUNTIME) 46.94080352783203 ms
POST PROCESSING TIME 118.45207214355469 ms
TOTAL INFERENCE TIME 165.87448120117188 ms

Messages are sent to the cloud every second even when no object is detected:

[IoTHubMonitor] [5:45:58 AM] Message received from [a_jetson_Nano/PostProcessingModule]:
"cam1 Results @05:45:58 "
[IoTHubMonitor] [5:45:58 AM] Message received from [a_jetson_Nano/PostProcessingModule]:
"cam1 Results @05:45:58 "

Fails with openvino backend

This example didn't work for OpenVINO on amd64.

Problems:

  1. I had to modify the IoT Edge network: the inferencemodule is on the host network (see deployment-amd64.template.json):

"createOptions": {
"HostConfig":{
"PortBindings": {},
"Binds":["/tmp/.X11-unix:/tmp/.X11-unix","/dev:/dev"],
"NetworkMode":"host",
"IpcMode":"host",
"previleged":true
},
"NetworkingConfig":{
"EndpointsConfig":{
"host":{}
}
},
The capture module couldn't address the inference module by name.

  2. After removing NetworkingConfig I could proceed further, but OpenVINO failed:

2020-06-29 21:01:48.414589245 [W:onnxruntime:, graph.cc:814 Graph] Initializer convolution8_W appears in graph inputs and will not be treated as constant value/weight. This may fail some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
2020-06-29 21:01:48.414607717 [W:onnxruntime:, graph.cc:814 Graph] Initializer convolution8_B appears in graph inputs and will not be treated as constant value/weight. This may fail some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
loaded after 1.7521090507507324 s
CLOUD STORAGE STATUS: True
trying to make IOT Hub manager
INITIALIZED AFTER 1.0028696060180664 s

  * Serving Flask app "main" (lazy loading)
  * Environment: production
    WARNING: This is a development server. Do not use it in a production deployment.
    Use a production WSGI server instead.
  * Debug mode: off
  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
    [WARN] 2020-06-29T21:01:51z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.preview.training' not recognized by nGraph
    [WARN] 2020-06-29T21:01:51z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.nchwc' not recognized by nGraph
    [WARN] 2020-06-29T21:01:51z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft' not recognized by nGraph
    [WARN] 2020-06-29T21:01:51z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.ml' not recognized by nGraph
    [WARN] 2020-06-29T21:01:51z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.mlfeaturizers' not recognized by nGraph
    [WARN] 2020-06-29T21:01:51z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.training' not recognized by nGraph
    E: [ncAPI] [ 528983] [python] ncDeviceOpen:1003 Failed to find booted device after boot
    2020-06-29 21:02:08.991818030 [E:onnxruntime:, sequential_executor.cc:281 Execute] Non-zero status code returned while running OpenVINO-EP-subgraph_1 node. Name:'OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0' Status Message: /code/onnxruntime/onnxruntime/core/providers/openvino/backends/basic_backend.cc:41 onnxruntime::openvino_ep::BasicBackend::BasicBackend(const onnx::ModelProto&, onnxruntime::openvino_ep::GlobalContext&, const onnxruntime::openvino_ep::SubGraphContext&) [OpenVINO-EP] Exception while Loading Network for graph: OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0Can not init Myriad device: NC_ERROR
  3. The OpenVINO backend loads the model and triggers a reboot, but then can't find the device again inside the container? I mapped /dev on the host and the container:

    "Devices": [
      {
        "PathOnHost": "/dev",
        "PathInContainer": "/dev",
        "CgroupPermissions": "rwm"
      }
    ]

    This did remove the error: E: [ncAPI] [ 528983] [python] ncDeviceOpen:1003 Failed to find booted device after boot

but it still cannot init the device:

2020-06-30 18:52:10.059441680 [W:onnxruntime:, graph.cc:814 Graph] Initializer convolution8_W appears in graph inputs and will not be treated as constant value/weight. This may fail some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
2020-06-30 18:52:10.059456585 [W:onnxruntime:, graph.cc:814 Graph] Initializer convolution8_B appears in graph inputs and will not be treated as constant value/weight. This may fail some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
loaded after 1.5334908962249756 s
CLOUD STORAGE STATUS: True
trying to make IOT Hub manager
INITIALIZED AFTER 1.0015590190887451 s

  * Serving Flask app "main" (lazy loading)
  * Environment: production
    WARNING: This is a development server. Do not use it in a production deployment.
    Use a production WSGI server instead.
  * Debug mode: off
  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
    [WARN] 2020-06-30T18:52:13z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.preview.training' not recognized by nGraph
    [WARN] 2020-06-30T18:52:13z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.nchwc' not recognized by nGraph
    [WARN] 2020-06-30T18:52:13z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft' not recognized by nGraph
    [WARN] 2020-06-30T18:52:13z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.ml' not recognized by nGraph
    [WARN] 2020-06-30T18:52:13z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'com.microsoft.mlfeaturizers' not recognized by nGraph
    [WARN] 2020-06-30T18:52:13z src/ngraph/frontend/onnx_import/ops_bridge.cpp 190 Domain 'ai.onnx.training' not recognized by nGraph
    2020-06-30 18:52:18.611212446 [E:onnxruntime:, sequential_executor.cc:281 Execute] Non-zero status code returned while running OpenVINO-EP-subgraph_1 node. Name:'OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0' Status Message: /code/onnxruntime/onnxruntime/core/providers/openvino/backends/basic_backend.cc:41 onnxruntime::openvino_ep::BasicBackend::BasicBackend(const onnx::ModelProto&, onnxruntime::openvino_ep::GlobalContext&, const onnxruntime::openvino_ep::SubGraphContext&) [OpenVINO-EP] Exception while Loading Network for graph: OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0Can not init Myriad device: NC_ERROR

EXCEPTION: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running OpenVINO-EP-subgraph_1 node. Name:'OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0' Status Message: /code/onnxruntime/onnxruntime/core/providers/openvino/backends/basic_backend.cc:41 onnxruntime::openvino_ep::BasicBackend::BasicBackend(const onnx::ModelProto&, onnxruntime::openvino_ep::GlobalContext&, const onnxruntime::openvino_ep::SubGraphContext&) [OpenVINO-EP] Exception while Loading Network for graph: OpenVINOExecutionProvider_OpenVINO-EP-subgraph_1_0Can not init Myriad device: NC_ERROR

Any help, please?
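
Separately from the Myriad NC_ERROR, the repeated "Initializer ... appears in graph inputs" warnings can be addressed as the log suggests, by moving constant initializers out of the graph inputs. A minimal sketch with the onnx package, similar in spirit to the remove_initializer_from_input.py tool the log points to (file names are hypothetical):

    import onnx

    model = onnx.load("model.onnx")  # hypothetical input path

    # Any graph input that also appears as an initializer is really a constant
    # weight; removing it from graph.input re-enables optimizations such as
    # constant folding.
    initializer_names = {init.name for init in model.graph.initializer}
    kept_inputs = [inp for inp in model.graph.input if inp.name not in initializer_names]

    del model.graph.input[:]
    model.graph.input.extend(kept_inputs)

    onnx.save(model, "model-cleaned.onnx")  # hypothetical output path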

maskrcnn

If I download the Mask R-CNN model from the model zoo, can it run on ONNX Runtime with OpenVINO 2019 R3? If not, what do I need to do? Thanks.

Failed on Jetson Xavier inside inferencemodule with onnxruntime error: "Cannot load onnxruntime.capi..."

gubert@guy-xavier:~$ sudo iotedge logs inferencemodule:

/usr/local/lib/python3.6/dist-packages/onnxruntime/capi/_pybind_state.py:13: UserWarning: Cannot load onnxruntime.capi. Error: 'libnvinfer.so.5: cannot open shared object file: No such file or directory'
  warnings.warn("Cannot load onnxruntime.capi. Error: '{0}'".format(str(e)))
Traceback (most recent call last):
  File "./main.py", line 12, in <module>
    import onnxruntime as rt
  File "/usr/local/lib/python3.6/dist-packages/onnxruntime/__init__.py", line 21, in <module>
    from onnxruntime.capi._pybind_state import RunOptions, SessionOptions, set_default_logger_severity, get_device, NodeArg, ModelMetadata
ImportError: cannot import name 'RunOptions'
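
The import fails because the dynamic loader cannot find the TensorRT runtime library (libnvinfer.so.5), not because of onnxruntime's Python code. A quick diagnostic sketch, assuming a Linux container and the usual TensorRT/CUDA library names:

    import ctypes.util

    # "NOT FOUND" means the loader cannot see the library on its search
    # path (ldconfig cache / LD_LIBRARY_PATH inside the container).
    for lib in ("nvinfer", "nvinfer_plugin", "nvonnxparser", "cudart"):
        print(lib, "->", ctypes.util.find_library(lib) or "NOT FOUND")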

(ONNX Runtime with OpenVINO) CameraCaptureModule is not able to establish connection with inferencemodule.

Hi, I am not able to establish a connection between the CameraCaptureModule and the inferencemodule.

CameraCaptureModule logs:

TIME TO PROCESS ALL FRAMES FOR 1 CAMERAS: 0.0001678466796875 ms
EXCEPTION: HTTPConnectionPool(host='inferencemodule', port=5000): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f30eda00ed0>: Failed to establish a new connection: [Errno 111] Connection refused'))
2019-11-12 21:26:19
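
One mitigation on the capture side is to retry the POST with a backoff while the inference module's Flask server finishes starting; a minimal sketch (the URL, retry count, and backoff are illustrative):

    import time

    import requests

    def post_with_retry(url, data, retries=5, backoff_s=2.0):
        """Retry the POST while the target container is still starting up."""
        for attempt in range(retries):
            try:
                return requests.post(url, data=data, timeout=5)
            except requests.exceptions.ConnectionError:
                time.sleep(backoff_s * (attempt + 1))
        raise RuntimeError(f"inference module unreachable after {retries} attempts")

    # response = post_with_retry("http://inferencemodule:5000/", frame_bytes)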
