
barracuda-release's Introduction


Unity Barracuda

Unity Barracuda is a lightweight cross-platform Neural Networks inference library for Unity.
Barracuda can run Neural Networks both on GPU and CPU. For details, see Supported Platforms.

Note: The Barracuda package has been replaced by the Sentis package, which is in a closed beta phase. Refer to the Sentis documentation for more information. You can sign up for the closed beta.

Currently Barracuda is production-ready for use with machine learning (ML) agents and a number of other network architectures. Use of Barracuda in other scenarios should be considered to be at the preview development stage.

Installation

The Installing Barracuda guide covers how to install Barracuda, both locally and remotely.

Reporting issues

If you encounter issues running Barracuda in your Unity project, please report them on our GitHub repo.

barracuda-release's People

Contributors

alexribard, mantasp

barracuda-release's Issues

About data preprocessing for input tensor

I came from opencvforunity. Is there any way to do data preprocessing in barracuda like this:
dnn.blobFromImage(Mat image, double scalefactor, Size size, Scalar mean, bool swapRB, bool crop)
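As far as I can tell, Barracuda doesn't ship a direct equivalent of blobFromImage, but the transform it applies is simple enough to reproduce before filling a tensor. A minimal pure-Python sketch of the per-pixel arithmetic (the function name and tuple layout are illustrative, not part of any API):

```python
# Hypothetical sketch of what cv2.dnn.blobFromImage does per pixel, so the
# same preprocessing can be reproduced before filling a Barracuda tensor:
# out = (pixel - mean) * scalefactor, with an optional R<->B channel swap.

def preprocess_pixel(rgb, scalefactor, mean, swap_rb):
    """rgb: (r, g, b) values; mean: per-channel means; returns floats."""
    r, g, b = rgb
    if swap_rb:
        r, b = b, r  # emulate the swapRB flag
    return ((r - mean[0]) * scalefactor,
            (g - mean[1]) * scalefactor,
            (b - mean[2]) * scalefactor)

print(preprocess_pixel((255, 0, 0), 1 / 255.0, (0.0, 0.0, 0.0), swap_rb=True))
```

Resizing/cropping (the size and crop arguments) would still need to be done on the texture side before the tensor is created.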

Problem with function Unsqueeze?

Recently I got an error when parsing an ONNX model with an Unsqueeze layer. After checking Barracuda's source code, I'm curious whether the code for Unsqueeze is correct.

The function of Unsqueeze is defined as:

    public ONNXTensor Unsqueeze(int[] axes)
    {
        var newShape = m_Shape.ToList();
        foreach (var axis in axes)
        {
            // axis in [-rank,rank-1]
            var axisInRange = axis >= 0 ? axis : 4 + axis;
            newShape.Insert(axis, 1);
        }
        return Reshape(newShape.ToArray());
    } 
  1. Should it pass axisInRange into newShape.Insert?

  2. It calls Reshape, and Reshape is defined as:

     public ONNXTensor Reshape(long[] onnxShape)
     {
         var symbolicShape = ONNXLayout.ConvertSymbolicShapeToBarracuda(onnxShape, "?");
         var reshapedData = m_Data.Reshape(symbolicShape);
         for (var i = 0; i < onnxShape.Length; ++i)
         {
             if (onnxShape[i] < 0)
                 onnxShape[i] = reshapedData.shape[i];
             Debug.Assert(onnxShape[i] == reshapedData.shape[i]);
         }
         return new ONNXTensor(reshapedData, onnxShape);
     }
    

which calls Reshape(TensorShape) in Barracuda/Core/Tensor.cs, and in Reshape, it checks if the shape of tensor equals to the new shape:

public Tensor Reshape(TensorShape newShape)
{
    Assert.AreEqual(shape.length, newShape.length);
    return ShallowCopy(newShape, $"reshape of {name}");
}

However, for an unsqueeze operation the new shape does not equal the old shape. Hence, it always produces an error.
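For what it's worth, point 1 looks right: ONNX Unsqueeze semantics normalize a negative axis against the output rank and then insert at that normalized index, so passing the raw axis into Insert would misplace the new dimension. A small pure-Python sketch of the expected behaviour (this is illustrative, not the Barracuda source; `unsqueeze_shape` is a hypothetical helper covering the common cases):

```python
def unsqueeze_shape(shape, axes):
    """Insert size-1 dims at the given axes (ONNX Unsqueeze semantics).
    A negative axis is normalized against the output rank, and the
    normalized value (not the raw axis) is what gets used for the insert."""
    new_shape = list(shape)
    for axis in sorted(axes):
        rank = len(new_shape) + 1          # rank of the output shape
        axis_in_range = axis if axis >= 0 else rank + axis
        new_shape.insert(axis_in_range, 1)
    return new_shape

print(unsqueeze_shape([3, 4], [0]))    # [1, 3, 4]
print(unsqueeze_shape([3, 4], [-1]))   # [3, 4, 1]
```

Note the normalization uses the actual rank rather than a hardcoded 4, which is the other thing the quoted code appears to get wrong (`4 + axis`).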

Creating a tensor from a texture maps pixel colour values into the range 0 to 1.

The model I'm using expects pixel values between 0 and 255. When I use new Tensor() to create a tensor from a texture, all the values are mapped to between 0 and 1.

I think it needs to be signposted better in the docs that the values inside tensors will be between 0 and 1, or -1 and 1; I'm not sure which, because I just went back to the docs and couldn't find it.

At the moment I'm creating a tensor from scratch with values between 0 and 255 and this works fine.

The model I'm using for reference:
https://github.com/onnx/models/tree/master/vision/style_transfer/fast_neural_style
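If the tensor really does arrive with values in [0, 1], rescaling for a model trained on raw byte values is a single multiply before execution. A trivial sketch of the rescale (`denormalize` is a hypothetical helper, not a Barracuda API):

```python
def denormalize(values, scale=255.0):
    """Map normalized [0, 1] pixel values back to the [0, 255] range
    a model trained on raw byte values expects."""
    return [v * scale for v in values]

print(denormalize([0.0, 0.5, 1.0]))  # [0.0, 127.5, 255.0]
```

In Unity the same rescale could presumably be applied to the tensor's float data (or done in a shader) before calling Execute, avoiding the from-scratch tensor construction.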

error CS0433: The type 'ValueTuple<T1, T2>' exists in both 'System.ValueTuple' and 'mscorlib'

I'm upgrading a custom Unity project from ML-Agents 0.4b to the latest 1.0 and I'm getting 12 errors like this:

Library\PackageCache\[email protected]\Editor\Window\TimelineWindow_Breadcrumbs.cs(91,47): error CS0433: The type 'ValueTuple<T1, T2>' exists in both 'System.ValueTuple, Version=4.0.1.1, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' and 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'

Library\PackageCache\[email protected]\Barracuda\Runtime\Core\Backends\BarracudaCompute.cs(586,25): error CS0433: The type 'ValueTuple<T1, T2, T3>' exists in both 'System.ValueTuple, Version=4.0.1.1, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' and 'mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'

...

I'm a junior self-taught dev and I'm not confident about what I have to do. I found some links about it, but I don't know how to implement the fix:

dotnet/roslyn#21156

On this link there is a reference to this pull request: dotnet/roslyn@850799c

but I have no idea what I should change to fix it.

I would also like to know what I should study to be able to deal with this kind of problem. Any recommendations?

Launch error when editing manifest.json as instructed

The installation instructions indicate that Barracuda must be installed either via the Unity package manager or by editing the manifest.json file.
The Unity ML Agents SDK project requires Unity 2017.4.35f1, which doesn't appear to have a package manager menu option.
Editing the manifest.json file to read:

{
        "dependencies": {
          "com.unity.barracuda" : "https://github.com/Unity-Technologies/barracuda-release.git"
        }
}

Results in Unity giving the error:

Failed to resolve packages: Manifest /UnityMLTest/UnityPackageManager/manifest.json has invalid dependencies:
  Error for package 'com.unity.barracuda':
    Version is invalid.  Expected a pattern like 'x.x.x[-prerelease]', got 'https://github.com/Unity-Technologies/barracuda-release.git' instead..  Please see the Editor.log file for more information.
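For what it's worth, that error is consistent with the 2017.4-era Package Manager only accepting registry version strings, not git URLs. A manifest of this shape should at least parse (the version number below is purely illustrative; check which Barracuda versions are actually published for your Unity version):

```json
{
        "dependencies": {
          "com.unity.barracuda": "1.0.4"
        }
}
```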

ArgumentException: Off-axis dimensions must match

Hello, I have an ONNX model trained in PyTorch. I can't share the model, as the rights to it don't belong to me. It takes an image as input and outputs an image. I have tried various ways to initialize the tensor, but none of them seem to work.

The image shape in the inspector is as such, image shape: (1,384,384,3)

        var model = ModelLoader.Load(modelSource);
        var worker = BarracudaWorkerFactory.CreateWorker(BarracudaWorkerFactory.Type.ComputePrecompiled, model);
        Dictionary<string, Tensor> inputs = new Dictionary<string, Tensor>();

        Tensor tensor = new Tensor();
        Texture2D texture = new Texture2D(384, 384, TextureFormat.RGBAFloat, false);
        tensor = new Tensor(texture);

        float[] floatarr = new float[384 * 384 * 3];

        Tensor tensor2 = new Tensor(1, 384, 384, 3, floatarr);
        inputs["image"] = tensor2;
        worker.Execute(inputs);

        var inputNames = model.inputs;   // query model inputs
        var outputNames = model.outputs; // query model outputs

ArgumentException: Off-axis dimensions must match
Barracuda.TensorExtensions.Concat (Barracuda.TensorShape[] shapes, System.Int32 axis) (at :0)
Barracuda.ModelAnalyzer.ListTemporaryTensorShapes (Barracuda.Model model, System.Collections.Generic.IDictionary`2[TKey,TValue] inputShapes, System.Collections.Generic.IDictionary`2[System.String,Barracuda.TensorShape]& shapesByName) (at :0)
Barracuda.ModelAnalyzer.ListTemporaryTensorShapes (Barracuda.Model model, System.Collections.Generic.IDictionary`2[TKey,TValue] inputShapes) (at <a57b068757aa4cec9131100114e8a39b>:0)
Barracuda.ModelAnalyzer.FindLargestNecessaryTensorShape (Barracuda.Model model, System.Collections.Generic.IDictionary`2[TKey,TValue] inputShapes) (at :0)
Barracuda.GenericVarsWithPreallocation.PrepareStorage (Barracuda.Model model, Barracuda.IOps ops, System.Collections.Generic.IDictionary`2[TKey,TValue] inputShapes) (at <a57b068757aa4cec9131100114e8a39b>:0)
Barracuda.GenericWorker.PrepareForInput (System.Collections.Generic.IDictionary`2[TKey,TValue] inputShapes) (at :0)
Barracuda.GenericWorker.AddInput (System.String name, Barracuda.Tensor x) (at :0)
Barracuda.GenericWorker.Execute (System.Collections.Generic.IDictionary`2[TKey,TValue] inputs) (at :0)
BarrcudaScript.Start () (at Assets/BarrcudaScript.cs:26)

Any assistance would be greatly appreciated. Thank you.

Onnx to Barracuda Error (YOLO V3 Model)

I recently trained a yolov3-tiny one-class model to import into Barracuda for detection. I was able to export the model to an ONNX file. I tried two ways to import the model into Unity.

1. Changing format to Barracuda with onnx_to_barracuda
When I run: python onnx_to_barracuda.py last.onnx last.nn --verbose True
I get the following error:

     File "barracuda-release\Tools\onnx_to_barracuda.py", line 336, in get_tensor_data
          floats = struct.unpack('<'+str(int(elems))+'f', tensor.raw_data)
struct.error: unpack requires a buffer of 20 bytes

The error happens in a Reshape layer.
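The message suggests the script expected 5 float32 values (20 bytes) but tensor.raw_data held something else, for example int64 data at 8 bytes per element. A small stdlib sketch reproducing the mismatch (illustrative only; the buffers here stand in for raw_data):

```python
import struct

# The converter does roughly struct.unpack('<' + str(elems) + 'f', raw_data),
# which requires exactly elems * 4 bytes. If the tensor was stored as int64
# (8 bytes per element), the buffer is twice as long and unpack raises
# struct.error.
elems = 5
float_buf = struct.pack('<5f', *range(5))   # 20 bytes: unpack succeeds
int64_buf = struct.pack('<5q', *range(5))   # 40 bytes: unpack fails

print(struct.unpack('<' + str(elems) + 'f', float_buf))
try:
    struct.unpack('<' + str(elems) + 'f', int64_buf)
except struct.error as e:
    print(e)
```

So the error is usually a data-type mismatch between what the exporter wrote into the tensor and what the conversion script assumes, rather than a problem with the Reshape shape itself.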

2. Use ONNX import feature of Barracuda
When I export the file into Unity I get the following error:

Unknown type ConstantOfShape encountered while parsing layer 53.
OnnxImportException: Unexpected error while parsing layer 54 of type Reshape.
Only tensors of rank 4 or less are supported, but got rank 5

Does Barracuda support YOLO V3? Is there something I need to specify during training?
SSD models are not supported in Barracuda yet, so I decided to use YOLO, which I don't have much experience with, so any guidance would be appreciated.

Now, I can get all the errors to disappear if I check Treat Errors as Warnings on the model, but I am pretty sure that is why my app fails to run.

Support 'dilation' operation?

I am getting unsupported attribute dilations on a layer of type Conv when importing an ONNX model.

Edit

I changed the layer, and the problem is dilation = 2.
What could be the problem?
If I set this value, the importer reports the attribute as unsupported and falls back to the default of 1.

thank you :)
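For context on why this matters: a dilated kernel samples its taps with gaps, so its effective spatial extent grows, and an importer that silently resets dilations to 1 computes a genuinely different convolution. The arithmetic is effective = k + (k - 1) * (d - 1):

```python
def effective_kernel_size(k, dilation):
    """Spatial extent covered by a k-tap kernel with the given dilation."""
    return k + (k - 1) * (dilation - 1)

print(effective_kernel_size(3, 1))  # 3
print(effective_kernel_size(3, 2))  # 5
```

So a 3x3 kernel with dilation 2 covers a 5x5 region; forcing dilation back to 1 shrinks that to 3x3, which would explain wrong outputs rather than just a cosmetic warning.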

Object detection from ONNX Model Zoo not working

Hi, I want to realize an object detection in Unity. For this I tested all object detectors in the ONNX Model Zoo. But only 2 out of 9 could be imported by the ONNXModelImporter without error messages (DUC and Tiny Yolo V2). So what I actually wanted to know is:

  1. Is Barracuda meant to be used for object detection?

  2. And if so, can it be optimized for mobile to achieve inference times similar to what the same model would achieve as an optimized .tflite version?

  3. Where can I read up on the restrictions that determine whether my model can run in Barracuda? I assume the listed error messages are not just the result of additional unsupported operations.

Here are the summarized error messages of the failed imports:

Asset import failed, "Assets/Model/yolov2_coco.onnx" > OnnxImportException: Unexpected error while parsing layer 204 of type Reshape. Only tensors of rank 4 or less are supported, but got rank 6

Asset import failed, "Assets/Model/yolov2.onnx" > OnnxImportException: Unknown type ImageScaler encountered while parsing layer image2.

Asset import failed, "Assets/Model/yolov3.onnx" > OnnxLayerImportException: Tensor data type Bool is not supported.

Asset import failed, "Assets/Model/mask_rcnn_R_50_FPN_1x.onnx" > OnnxImportException: Unexpected error while parsing layer 400 of type Gather. Index was outside the bounds of the array.

Asset import failed, "Assets/Model/yolov3-tiny.onnx" > OnnxLayerImportException: Tensor data type Bool is not supported.

Asset import failed, "Assets/Model/faster_rcnn_R_50_FPN_1x.onnx" > OnnxImportException: Unexpected error while parsing layer 400 of type Gather. Index was outside the bounds of the array.

Asset import failed, "Assets/Model/ssd.onnx" > OnnxImportException: Unexpected error while parsing layer Unsqueeze_344 of type Unsqueeze. Only tensors of rank 4 or less are supported, but got rank 5

Thank You in advance!

Why does this Barracuda error suddenly pop up and stop training? (Unity ML-Agents)

I have been working on this project for a long time. I want to train a team of agents to simulate a game of basketball (kind of like the soccer example, but more complicated, because I have taught the agents to shoot and pass separately with different neural networks). I didn't have any problems training the agent to shoot perfectly and to pass as well, but when I switched both of the neural nets to inference mode, acting as actions in the bigger neural network, things started to fail. Everything works well, but after training for about a million steps I suddenly get a NullReferenceException from Unity.Barracuda and then a lot of IndexOutOfRangeExceptions (around 20 per frame; image attached). Why is this error popping up and stopping my training process? Image of the error: https://imgur.com/a/SW2bGp6

Support for LSTM in keras_to_barracuda.py

Hello Barracuda Team,

I am curious if using the keras LSTM layer is supported in the keras_to_barracuda.py script or even the tensorflow_to_barracuda.py script?

I would like to use an LSTM in the Barracuda inference engine and am not sure of the best way to do this. Is it possible to just convert a keras LSTM model to onnx and use that in barracuda?

Edit: After converting a simple keras model with only an LSTM and a Dense layer to onnx via keras2onnx, and importing the .onnx file into Unity, I get a 'KeyNotFoundException: the key was not found in the dictionary.' Is this because the LSTM model is not supported within Unity?
I am on Unity version 2019.3.6f1.

If I create a simple model without the LSTM layer and convert it to ONNX via keras2onnx, importing the .onnx file into Unity does not produce the error I get with the LSTM model.

I really appreciate all of your help on this issue!

Thanks,
Austin

Android (Samsung) issues: GLSL Link error / Unable to link compute shader

Hi everyone,

I have a yolov2 model that works in the editor with releases 0.40 and 0.60; however, on device it throws these linker errors. I've pulled these out of adb logcat:

02-24 19:43:56.438 21717 21857 E Unity : -------- GLSL link error: Max number of total work group invocations exceeded.
02-24 19:43:56.438 21717 21857 E Unity : ERROR: Unable to link compute shader: Dense.Dense_T16x16_R4x4
02-24 19:43:56.694 21717 21857 E Unity : -------- GLSL link error: Max number of total work group invocations exceeded.
02-24 19:43:56.694 21717 21857 E Unity : ERROR: Unable to link compute shader: Conv.Conv2DKernelKxK_T16x16_R4x4
02-24 19:43:56.775 21717 21857 E Unity : -------- GLSL link error: Max number of total work group invocations exceeded.
02-24 19:43:56.775 21717 21857 E Unity : ERROR: Unable to link compute shader: Conv.Conv2DKernelKxK_StrictC16K64_T16x16_R4x4
02-24 19:43:56.829 21717 21857 E Unity : -------- GLSL link error: Max number of total work group invocations exceeded.
02-24 19:43:56.829 21717 21857 E Unity : ERROR: Unable to link compute shader: Conv.Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
02-24 19:43:57.145 21717 21857 E Unity : -------- GLSL link error: Max number of total work group invocations exceeded.
02-24 19:43:57.145 21717 21857 E Unity : ERROR: Unable to link compute shader: Conv.Conv2DTrans_KernelCached_K5x5_T16x16

Barracuda runs very slow on Android devices

Hi, I've built a recognition app with Barracuda; in the editor it works well, but if I build it as an Android app it has many problems. I've already submitted another issue for Android builds: #21

But if I run it on CPU or another phone to avoid opengl error it is very slow.
I'm using that network: https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/tiny_yolov2

On a Huawei Mate 10 it runs on GPU without errors, but it takes about 11 seconds to finish inference. In the profiler the most time-consuming call is Barracuda.DownloadDataFromGPU, which takes hundreds up to thousands of milliseconds to finish.

But on CPU it takes a few minutes to finish one inference! I think this isn't correct behaviour.
Do you have any idea why inference times on mobiles are so high?

Get Max value from a tensor?

How do I get the max value from the Tensor result, which has shape (1,1,1,10)?

Tensor result = m_Worker.Fetch();
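A (1,1,1,10) result is just ten floats, so max and argmax reduce to a scan over the flattened data. A sketch of the reduction (pure Python for clarity; the `scores` values and `max_and_argmax` helper are illustrative):

```python
def max_and_argmax(values):
    """Return (max value, index of max) over a flat result tensor."""
    best_i = max(range(len(values)), key=lambda i: values[i])
    return values[best_i], best_i

scores = [0.1, 0.05, 0.7, 0.02, 0.03, 0.01, 0.04, 0.02, 0.02, 0.01]
print(max_and_argmax(scores))  # (0.7, 2)
```

In C# the equivalent is a simple loop over the ten elements of the fetched tensor's float data, tracking the running maximum and its index.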

Kernel Dense_T8x8_R8x8 is missing when evaluating model

I am trying out the new release 0.3 of Barracuda and got the error "Kernel Dense_T8x8_R8x8 is missing" when trying to run a standard CNN model (Relu, Conv2D, MaxPool, Softmax) that was converted to ONNX through the provided Docker scripts.

I looked at the source and this issue could be solved by uncommenting line 6 here:

//#pragma kernel Dense_T8x8_R8x8 DENSE=1 BLOCK_SIZE=8

I now found another workaround, which is to convert from a Tensorflow model (pb file) instead of the ONNX model, but maybe this is not supposed to be commented out?

Barracuda OPENGL native errors on Android

Hi! I've encountered some issues while developing a mobile detection app.
On my device Xiaomi Redmi Note 4 in logs appear two errors:

  • When NN inference starts, there is
    `-------- GLSL link error: Warning: barrier used in non-uniform control flowWarning: barrier used in non-uniform control flow

UnityEngine.Resources:Load(String, Type)
UnityEngine.Resources:Load(String) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Resources/Resources.bindings.cs:46)
Barracuda.ComputeShaderSingleton:LoadIf(Boolean, String) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\ComputeShaderSingleton.cs:38)
Barracuda.ComputeShaderSingleton:LoadIf(Boolean, String, List`1) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\ComputeShaderSingleton.cs:45)
Barracuda.ComputeShaderSingleton:.ctor() (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\ComputeShaderSingleton.cs:24)
Barracuda.ComputeShaderSingleton:.cctor() (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\ComputeShaderSingleton.cs:11)
Barracuda.BarracudaBackendsFactory:CreateOps(Type, ITensorAllocator, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\BarracudaBackendsFactory.cs:50)
Barracuda.BarracudaBackendsFactory:CreateWorker(Type, Model, String[], String[], Boolean, Type) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\BarracudaBackendsFactory.cs:99)
Barracuda.WorkerFactory:CreateWorker(Type, Model, String[], String[], Boolean, Type) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:331)
Barracuda.WorkerFactory:CreateWorker(Model, String[], String[], Device, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:345)
Barracuda.WorkerFactory:CreateWorker(Model, String[], Device, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:426)
Barracuda.WorkerFactory:CreateWorker(Model, Device, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:414)
Barracuda.WorkerFactory:CreateWorker(Model, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:403)
NNHandler:.ctor(NNModel) (at C:\Users\Kuuupa\YOLO\Assets\Scripts\NN\NNHandler.cs:13)
NNImageHandler:Start() (at C:\Users\Kuuupa\YOLO\Assets\Scripts\NNImageHandler.cs:33)

[./Runtime/GfxDevice/opengles/ApiGLES.cpp line 687]
(Filename: /Users/builduser/buildslave/unity/build/Runtime/Export/Resources/Resources.bindings.cs Line: 46)`

and

ERROR: Unable to link compute shader: Dense.Dense_T16x16_R4x4
UnityEngine.Resources:Load(String, Type)
UnityEngine.Resources:Load(String) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Resources/Resources.bindings.cs:46)
Barracuda.ComputeShaderSingleton:LoadIf(Boolean, String) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\ComputeShaderSingleton.cs:38)
Barracuda.ComputeShaderSingleton:LoadIf(Boolean, String, List`1) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\ComputeShaderSingleton.cs:45)
Barracuda.ComputeShaderSingleton:.ctor() (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\ComputeShaderSingleton.cs:24)
Barracuda.ComputeShaderSingleton:.cctor() (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\ComputeShaderSingleton.cs:11)
Barracuda.BarracudaBackendsFactory:CreateOps(Type, ITensorAllocator, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\BarracudaBackendsFactory.cs:50)
Barracuda.BarracudaBackendsFactory:CreateWorker(Type, Model, String[], String[], Boolean, Type) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\BarracudaBackendsFactory.cs:99)
Barracuda.WorkerFactory:CreateWorker(Type, Model, String[], String[], Boolean, Type) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:331)
Barracuda.WorkerFactory:CreateWorker(Model, String[], String[], Device, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:345)
Barracuda.WorkerFactory:CreateWorker(Model, String[], Device, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:426)
Barracuda.WorkerFactory:CreateWorker(Model, Device, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:414)
Barracuda.WorkerFactory:CreateWorker(Model, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Barracuda.cs:403)
NNHandler:.ctor(NNModel) (at C:\Users\Kuuupa\YOLO\Assets\Scripts\NN\NNHandler.cs:13)
NNImageHandler:Start() (at C:\Users\Kuuupa\YOLO\Assets\Scripts\NNImageHandler.cs:33)

[./Runtime/GfxDevice/opengles/GfxDeviceGLES.cpp line 2571]
(Filename: /Users/builduser/buildslave/unity/build/Runtime/Export/Resources/Resources.bindings.cs Line: 46)
`
one after another.

And at the end of inference there is
`allocation 0x0xc0000001 already registered @ ./Runtime/GfxDevice/opengles/DataBuffersGLES.cpp:l234 size 4096; now calling from ./Runtime/GfxDevice/opengles/DataBuffersGLES.cpp:l234 size 84500?
UnityEngine.ComputeBuffer:InitBuffer(Int32, Int32, ComputeBufferType, ComputeBufferMode)
UnityEngine.ComputeBuffer:.ctor(Int32, Int32, ComputeBufferType, ComputeBufferMode, Int32) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Shaders/ComputeShader.bindings.cs:90)
UnityEngine.ComputeBuffer:.ctor(Int32, Int32) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Shaders/ComputeShader.bindings.cs:65)
Barracuda.ComputeTensorData:.ctor(TensorShape, String, Boolean) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\BarracudaReferenceCompute.cs:37)
Barracuda.ReferenceComputeOps:Pin(Tensor) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\BarracudaReferenceCompute.cs:550)
Barracuda.ComputeOps:Conv2D(Tensor, Tensor, Tensor, Int32[], Int32[]) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\BarracudaCompute.cs:654)
Barracuda.PrecompiledComputeOps:Conv2D(Tensor, Tensor, Tensor, Int32[], Int32[]) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\BarracudaPrecompiledCompute.cs:235)
Barracuda.VerboseOps:Barracuda.IOps.Conv2D(Tensor, Tensor, Tensor, Int32[], Int32[]) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\VerboseOps.cs:40)
Barracuda.d__27:MoveNext() (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\GenericWorker.cs:204)
Barracuda.GenericWorker:Execute() (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\GenericWorker.cs:117)
Barracuda.GenericWorker:Execute(Tensor) (at C:\Users\Kuuupa\YOLO\Library\PackageCache\[email protected]\Barracuda\Core\Backends\GenericWorker.cs:111)
NNImageHandler:Update() (at C:\Users\Kuuupa\YOLO\Assets\Scripts\NNImageHandler.cs:43)

[./Runtime/Allocator/MemoryManager.cpp line 1645]
(Filename: /Users/builduser/buildslave/unity/build/Runtime/Export/Shaders/ComputeShader.bindings.cs Line: 90)
`
But these 3 errors didn't occur on another tested device that is Huawei Mate 10.

Another error occurs if I try to run WebCamTexture together with the Barracuda worker on my Xiaomi:
`
OPENGL NATIVE PLUG-IN ERROR: GL_INVALID_ENUM: enum argument out of range
UnityEngine.WebCamTexture:INTERNAL_CALL_Play(WebCamTexture)
UnityEngine.WebCamTexture:Play() (at /Users/builduser/buildslave/unity/build/artifacts/Android/modules/Audio/UnityEngineWebCamTextureBindings.gen.cs:63)
NNImageHandler:Start() (at C:\Users\Kuuupa\YOLO\Assets\Scripts\NNImageHandler.cs:30)

[./Runtime/GfxDevice/opengles/GfxDeviceGLES.cpp line 348]
(Filename: /Users/builduser/buildslave/unity/build/artifacts/Android/modules/Audio/UnityEngineWebCamTextureBindings.gen.cs Line: 63)

`
I have no idea if these errors have something in common, so I post them as one issue.

They also didn't occur on my laptop while testing.
I use the YOLOv2 Tiny network, downloaded already in ONNX format.

Different result between barracuda and OpenCV.DNN

Hi, I have two .pb files: the network in barra_part.pb is a part of the one in barra_all.pb, and I converted both to .nn using tensorflow_to_barracuda.py.

I tested the .nn files in Unity using Barracuda (testbarrann.cs in IncorrectBarra.zip),
and tested the .pb files in Python using OpenCV.dnn (testbarrann.py in IncorrectBarra.zip).

barra_part.pb and barra_part.nn output the same result, but barra_all.pb and barra_all.nn output different results.
I got different outputs where they should be the same, even though barra_all.pb has only a few more layers than barra_part.pb.

Have you guys had any problems like this?

My Barracuda version is 0.5.
Here are my test files: IncorrectBarra.zip

Is cyclegan supported in Barracuda

I have been trying to import a CycleGAN model into Barracuda. I am not familiar with neural networks, so I am struggling a little. I tried to convert different pretrained CycleGAN models into .nn models, but it usually ends in errors such as missing layers (I am not sure if this affects the usability of the model; I had one convert successfully to .nn with missing layers, but it failed when run on Android. At the same time, I am not quite sure I am writing the C# script for executing the model correctly), or "-1 not supported by lambda"...
Thus, there are two problems for me now.

  1. How to convert a CycleGAN model to Barracuda or ONNX.
  2. How to use the model correctly. I used a single texture as a single tensor to put into the worker engine, and then used the output as a texture. I am not very familiar with Tensor, so I don't know if this is correct. However, I am getting the error: KeyNotFoundException: The given key was not present in the dictionary.

Unsupported attribute axis of type Softmax

I have a very simple CNN model trained in Keras, converted to onnx. I have verified that the Keras to onnx conversion worked fine and the outputs are the same. But when I load it into Unity, I get all sorts of cryptic warnings like "Unsupported attribute axis, node dense of type Softmax" or "Unexpected error while parsing layer... of type Transpose." And the output values are totally different.

I thought Barracuda supported Softmax? Why am I getting these warnings, and why is the inference incorrect?

These are the layers in my model.

network = models.Sequential()
network.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(1, image_height, image_width)))
network.add(layers.MaxPooling2D((2, 2)))
network.add(layers.Conv2D(64, (3, 3), activation='relu'))
network.add(layers.MaxPooling2D((2, 2)))
network.add(layers.Conv2D(64, (3, 3), activation='relu'))
network.add(layers.MaxPooling2D((2, 2)))
network.add(layers.Conv2D(64, (3, 3), activation='relu'))
network.add(layers.MaxPooling2D((2, 2)))
network.add(layers.Conv2D(64, (3, 3), activation='relu'))
network.add(layers.MaxPooling2D((2, 2)))
network.add(layers.Flatten())
network.add(layers.Dense(516, kernel_regularizer=regularizers.l2(0.001), activation='relu'))
network.add(layers.Dense(num_classes, activation='softmax'))

Thanks!

UnboundLocalError: local variable 'data' referenced before assignment

Colab Research Google Notebook

tensorflow-1.15.0
keras-2.2.5
numpy-1.17.4
onnx-1.6.0
h5py-2.8.0
barracuda scripts ( 0.3.2 )

Saved, then loaded h5 files and got this message

/usr/local/lib/python3.6/dist-packages/keras/engine/saving.py:310: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
  warnings.warn('No training configuration found in save file: '

Converted h5 to pb via : https://stackoverflow.com/a/45466355/2496170

WARNING:tensorflow:From <ipython-input-42-102ff4f8144d>:32: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.convert_variables_to_constants`
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
INFO:tensorflow:Froze 16 variables.
INFO:tensorflow:Converted 16 variables to const ops.

ERROR

when running this line
!python './drive/My Drive/unity.barracuda.0.3.2/Tools/tensorflow_to_barracuda.py' ./models/encoder_model.pb ./export/encoder_model.nn

Converting ./models/encoder_model.pb to ./export/encoder_model.nn
WARNING:tensorflow:From /content/drive/My Drive/unity.barracuda.0.3.2/Tools/tensorflow_to_barracuda.py:1328: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

Traceback (most recent call last):
  File "./drive/My Drive/unity.barracuda.0.3.2/Tools/tensorflow_to_barracuda.py", line 21, in <module>
    tf2bc.convert(args.source_file, args.target_file, args.trim_unused_by_output, args)
  File "/content/drive/My Drive/unity.barracuda.0.3.2/Tools/tensorflow_to_barracuda.py", line 1373, in convert
    lambda tensor: get_tensor_data(tensor))
  File "/content/drive/My Drive/unity.barracuda.0.3.2/Tools/barracuda.py", line 221, in setup_constants
    data = np.reshape(get_tensor_data_lambda(tensor), shape).astype(np.float32))]
  File "/content/drive/My Drive/unity.barracuda.0.3.2/Tools/tensorflow_to_barracuda.py", line 1373, in <lambda>
    lambda tensor: get_tensor_data(tensor))
  File "/content/drive/My Drive/unity.barracuda.0.3.2/Tools/tensorflow_to_barracuda.py", line 658, in get_tensor_data
    return np.array(data).reshape(dims)
UnboundLocalError: local variable 'data' referenced before assignment

Edit: I printed the data type of the tensor that caused the issue; it's a DT_INT64.

Edit 2:
For my current use case I made a quick modification to the tensorflow_to_barracuda.py (gist) file to return None and skip adding the item to the array of tensors. Running the same script again, I now see this message:

[x] UNSUPPORTED: data type DT_INT64
GLOBALS: 'dense_1/kernel', 'dense_1/bias', 'dense_2/kernel', 'dense_2/bias', 'SGD/iterations', 'SGD/lr', 'SGD/momentum', 'SGD/decay', 'training/SGD/Variable', 'training/SGD/Variable_1', 'training/SGD/Variable_2', 'training/SGD/Variable_3', 'dense_2_1/kernel', 'dense_2_1/bias'
IN: 'input_1_1': [-1, 1, 1, 3] => 'dense_1_1/BiasAdd'
OUT: 'dense_1_1/Relu'
DONE: wrote encoder_model.nn file.
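The workaround described in Edit 2 boils down to filtering constants by dtype. A minimal pure-Python sketch of that idea (the record format and dtype set here are illustrative stand-ins, not the converter's actual internals):

```python
# Sketch of the DT_INT64-skipping workaround described above. The dtype
# names and the tensor record format are illustrative, not the actual
# fields used by tensorflow_to_barracuda.py.

SUPPORTED_DTYPES = {"DT_FLOAT"}

def filter_constants(tensors):
    """Keep only tensors whose dtype the converter can serialize;
    warn about and skip the rest instead of crashing."""
    kept, skipped = [], []
    for t in tensors:
        if t["dtype"] in SUPPORTED_DTYPES:
            kept.append(t)
        else:
            skipped.append(t["name"])
            print(f"[x] UNSUPPORTED: data type {t['dtype']} "
                  f"(skipping '{t['name']}')")
    return kept, skipped

tensors = [
    {"name": "dense_1/kernel", "dtype": "DT_FLOAT"},
    {"name": "SGD/iterations", "dtype": "DT_INT64"},  # optimizer state
]
kept, skipped = filter_constants(tensors)
```

Optimizer state like `SGD/iterations` is only needed for training, which is why dropping it still yields a usable inference model.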

Then I imported the generated file into Unity. On the initial import I was able to see the file asset in the inspector and view all the tensors from there, but once I created a script, moved the .nn file into the StreamingAssets folder, and loaded it from a MonoBehaviour script, it became a generic white-icon file. I don't see any errors or warnings, though. During play mode, Unity now outputs all the tensor information to the console.

OnnxImportException: Unexpected error while parsing layer 2452 of type Unsqueeze. Only tensors of rank 4 or less are supported, but got rank 5

Stack Trace

Json: { "input": [ "2435" ], "output": [ "2452" ], "name": "Unsqueeze_173", "opType": "Unsqueeze", "attribute": [ { "name": "axes", "ints": [ "0" ], "type": "INTS" } ] }
at Unity.Barracuda.ONNXLayout.AxisPermutationsForMappingONNXLayoutToBarracuda (System.Int32 onnxRank, System.String onnxLayout) [0x00028] in C:\Users\Shreyas\AIMage\12345\Library\PackageCache\[email protected]\Barracuda\Editor\ONNXLayout.cs:44
at Unity.Barracuda.ONNXLayout.PermuteToBarracuda (System.Int64[] shape, System.String onnxLayout) [0x00005] in C:\Users\Shreyas\AIMage\12345\Library\PackageCache\[email protected]\Barracuda\Editor\ONNXLayout.cs:158
at Unity.Barracuda.ONNXLayout.ConvertSymbolicShapeToBarracuda (System.Int64[] onnxShape, System.String onnxLayout) [0x00001] in C:\Users\Shreyas\AIMage\12345\Library\PackageCache\[email protected]\Barracuda\Editor\ONNXLayout.cs:223
at Unity.Barracuda.ONNXTensor.Reshape (System.Int64[] onnxShape) [0x00001] in C:\Users\Shreyas\AIMage\12345\Library\PackageCache\[email protected]\Barracuda\Editor\ONNXTensor.cs:184
at Unity.Barracuda.ONNXTensor.Unsqueeze (System.Int32[] axes) [0x0003a] in C:\Users\Shreyas\AIMage\12345\Library\PackageCache\[email protected]\Barracuda\Editor\ONNXTensor.cs:237
at Unity.Barracuda.ONNXModelImporter.<.ctor>b__14_4 (Unity.Barracuda.ModelBuilder net, Unity.Barracuda.ONNXNodeWrapper node) [0x0000c] in C:\Users\Shreyas\AIMage\12345\Library\PackageCache\[email protected]\Barracuda\Editor\ONNXModelImporter.cs:177
at Unity.Barracuda.ONNXModelImporter.ConvertOnnxModel (Onnx.ModelProto onnxModel) [0x00367] in C:\Users\Shreyas\AIMage\12345\Library\PackageCache\[email protected]\Barracuda\Editor\ONNXModelImporter.cs:1043

Unity.Barracuda.ONNXModelImporter.Err (Unity.Barracuda.Model model, System.String layerName, System.String message, System.String extendedMessage, System.String debugMessage) (at Library/PackageCache/[email protected]/Barracuda/Editor/ONNXModelImporter.cs:1358)
Unity.Barracuda.ONNXModelImporter.ConvertOnnxModel (Onnx.ModelProto onnxModel) (at Library/PackageCache/[email protected]/Barracuda/Editor/ONNXModelImporter.cs:1052)
Unity.Barracuda.ONNXModelImporter.OnImportAsset (UnityEditor.Experimental.AssetImporters.AssetImportContext ctx) (at Library/PackageCache/[email protected]/Barracuda/Editor/ONNXModelImporter.cs:958)
UnityEditor.Experimental.AssetImporters.ScriptedImporter.GenerateAssetData (UnityEditor.Experimental.AssetImporters.AssetImportContext ctx) (at :0)
UnityEditorInternal.InternalEditorUtility:ProjectWindowDrag(HierarchyProperty, Boolean)
UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr)

Is it because unsqueeze is not supported in barracuda?
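The error itself can be reproduced with plain shape arithmetic: ONNX Unsqueeze inserts a singleton axis, which pushes an already rank-4 tensor past the 4-D limit the importer enforces at this version. A pure-Python sketch (the helper name is illustrative):

```python
# Pure-Python sketch of ONNX Unsqueeze shape semantics and the rank-4
# limit the importer error message refers to.

MAX_RANK = 4  # the limit mentioned in the Barracuda error above

def unsqueeze_shape(shape, axes):
    """Insert a size-1 dim at each axis, like ONNX Unsqueeze."""
    out = list(shape)
    for axis in sorted(axes):
        out.insert(axis, 1)
    return out

shape = [2, 3, 4, 5]                       # already rank 4
new_shape = unsqueeze_shape(shape, [0])    # axes=[0], as in the node's JSON
print(new_shape)                           # [1, 2, 3, 4, 5] -> rank 5

if len(new_shape) > MAX_RANK:
    print(f"rank {len(new_shape)} > {MAX_RANK}: importer would reject this")
```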

Inference works in Unity+Windows development but not on Android

Hello Unity Team,

I'm trying to run inference on a convolutional neural net in ONNX format. It works like a charm in the Windows + Unity Player development environment, but when I try to run inference on Android it fails. I have already set up the Vulkan API in the player settings. The following are the logs I get with the verbose flag:

Compute Info graphicsDeviceType=Vulkan
Compute Info graphicsMemorySize=3577
Compute Info graphicsShaderLevel=45
Compute Info graphicsDeviceName=Adreno (TM) 506
Compute Info supportsCompute=True
Compute Info graphicsDeviceVendor=Qualcomm
Compute Info maxComputeWorkGroupSize=256
Compute Info supportsComputeSharedMemory=True
Compute Info supportsDense32x32=True
Compute Info supportsDense64x64=True

Unity   : !(1, 512, 512, 3)
Unity   : !(1, 256, 256, 128)
Unity   : !(1, 256, 256, 128)
Unity   : Conv2D_RegisterBlock4x2
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : Kernel ScaleBias_Flat dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : ScaleBias_CNyx2
Unity   : Kernel Relu_Flat dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : Relu_FlatStrict
Unity   : Conv2DKernelKxK_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Conv2DWinograd_2x2_3x3
Unity   : Kernel ScaleBias_Flat dispatch arguments out of range (any [131072,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : ScaleBias_CNyx2
Unity   : Kernel Relu_FlatStrict dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : Kernel Relu_Flat dispatch arguments out of range (any [131072,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : Relu_Loop
Unity   : Conv2DWinograd_2x2_3x3
Unity   : Kernel ScaleBias_Flat dispatch arguments out of range (any [131072,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : ScaleBias_CNyx2
Unity   : Kernel Relu_FlatStrict dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : Kernel Relu_Flat dispatch arguments out of range (any [131072,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : Relu_Loop
Unity   : MaxPool2D
Unity   : BroadcastAdd
Unity   : Conv2DKernelKxK_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : Kernel ScaleBias_Flat dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : ScaleBias_CNyx2
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : Kernel ScaleBias_Flat dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : ScaleBias_CNyx2
Unity   : Kernel Relu_Flat dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : Kernel ScaleBias_Flat dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : ScaleBias_CNyx2
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : Kernel ScaleBias_Flat dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : ScaleBias_CNyx2
Unity   : Kernel Relu_Flat dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : Kernel ScaleBias_Flat dispatch arguments out of range (any [65536,1,1] > 65535), skipping..
Unity   : (Filename: ./Runtime/Export/Debug/Debug.bindings.h Line: 35)
Unity   :
Unity   : ScaleBias_CNyx2
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : MaxPool2D
Unity   : BroadcastAdd
Unity   : Conv2DKernelKxK_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Conv2DKernelKxK_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : MaxPool2D
Unity   : BroadcastAdd
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Conv2DKernelKxK_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : BroadcastAdd
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : Conv2DKernel1x1_StrictC16K64_T16x16_R4x4
Unity   : ScaleBias_Flat
Unity   : Relu_FlatStrict
Unity   : Conv2DWinograd_2x2_3x3
Unity   : ScaleBias_Flat
Unity   : BroadcastAdd
Unity   : BroadcastMul
Unity   : Conv2D
Unity   : Conv2D
Unity   : Conv2DKernelKxK_T16x16_R4x4
Unity   : Conv2DKernelKxK_T16x16_R4x4
Unity   : Conv2D
Unity   : Conv2D
Unity   : Conv2DKernelKxK_T16x16_R4x4
Unity   : Conv2DKernelKxK_T16x16_R4x4
Unity   : Conv2D
Unity   : Conv2D
Unity   : Conv2DKernelKxK_T16x16_R4x4
Unity   : Conv2DKernelKxK_T16x16_R4x4
Unity   : Conv2D
Unity   : Conv2D
Unity   : Conv2DKernelKxK_T16x16_R4x4
Unity   : Conv2DKernelKxK_T16x16_R4x4
Unity   : Conv2D
Unity   : Conv2D
Unity   : Copy
Unity   : Copy
Unity   : Conv2D
Unity   : Layer: Load 348
Unity   : Layer: Load 356
Unity   : Layer: Load 373
Unity   : Layer: Load 381
Unity   : Layer: Load 389
Unity   : Layer: Load 406
Unity   : Layer: Load 414
Unity   : Layer: Load 422
Unity   : Layer: Load 433
Unity   : Layer: Load 441
Unity   : Layer: Load 449
Unity   : Layer: Conv2D 322
Unity   : (1, 512, 512, 3) # (3, 3, 3, 32) + (32)
Unity   : After Conv2D -9.306686 -24.25711 -30.93287 50.54438 21.51115 10.03584 5.341391 35.10511 6.052817 -24.48709 22.76584 0.7012411 1.920963 4.458261 23.61639 5.454086 -18.4272 -47.6225 33.3117 -13.76155 3.82831 -13.99453 -10.48075 -7.344406 -26.0222 -17.80285 -21.04449 -1.871986 -8.830113 1.657878 4.88501 5.768596 ...
Unity   : Layer: ScaleBias 323
Unity   : (1, 256, 256, 32) * (32) + (32)
Unity   : After ScaleBias -0.3133516 -3.826512 -3.939916 0.5396392 1.946343 1.501908 0.4224707 5.659012 0.9401181 -3.003928 6.657176 0.1721411 -0.1631552 1.172062 3.329777 1.351002 -3.448009 -3.914448 3.48201 -0.6203777 0.6722068 -1.323542 -0.4821055 -1.354061 -4.132155 -3.805963 -2.196498 -0.253558 -1.176509 0.2907692 0.6607981 -0.01640578 ...
Unity   : Layer: Activation.Relu 324
Unity   : (1, 256, 256, 32) ()
Unity   : After Relu 0 0 0 0.5396392 1.946343 1.501908 0.4224707 5.659012 0.9401181 0 6.657176 0.1721411 0 1.172062 3.329777 1.351002 0 0 3.48201 0 0.6722068 0 0 0 0 0 0 0 0 0.2907692 0.6607981 0 ...
Unity   : Layer: Conv2D 325
Unity   : (1, 256, 256, 32) # (3, 3, 32, 64) + (64)
Adreno-GSL: <gsl_ldd_control:548>: ioctl fd 53 code 0x400c0907 (IOCTL_KGSL_DEVICE_WAITTIMESTAMP_CTXTID) failed: errno 35 Resource deadlock would occur
Adreno-GSL: <log_gpu_snapshot:457>: panel.gpuSnapshotPath is not set.not generating user snapshot

Any ideas? The same warnings about out-of-range kernel dispatches show up on Windows, but there it works. I'm using the latest [0.7.1] - 2020-05-12 release.
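The skipped kernels line up with the graphics-API limit of 65,535 thread groups per dispatch dimension. One generic workaround (a sketch of the idea only, not Barracuda's actual code, and the GPU-side kernels would need matching index math) is to fold an oversized flat dispatch into two dimensions:

```python
# Sketch: folding a flat compute dispatch that exceeds the per-dimension
# thread-group limit (65,535 on most graphics APIs) into a 2-D dispatch.

import math

MAX_GROUPS = 65535

def fold_dispatch(groups_x):
    """Return (x, y) group counts with x <= MAX_GROUPS and x*y >= groups_x."""
    if groups_x <= MAX_GROUPS:
        return groups_x, 1
    y = math.ceil(groups_x / MAX_GROUPS)
    x = math.ceil(groups_x / y)
    return x, y

# The two failing sizes from the log above both fit once folded:
for n in (65536, 131072):
    x, y = fold_dispatch(n)
    assert x <= MAX_GROUPS and x * y >= n
    print(n, "->", (x, y))
```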

Possibility to Add another Operators in ONNX

Hi, I am really hoping you could help me by adding new operators from ONNX. I need ReduceLogSumExp and ReduceSumSquare for my ONNX model.

I need to load the ONNX model in Unity, but Barracuda currently does not support those operators.
Would it be possible to add these operators to Barracuda?

Here I attached my model maybe it will help to implement these new operators.

https://drive.google.com/file/d/1utXlhL2W1q62EYI7vOYiYnmrsu1Mncy-/view?usp=sharing

Thank you
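For reference, the semantics of the two requested ONNX reduce ops, written as a pure-Python sketch over a flat list of floats (the real ops also take axes/keepdims attributes):

```python
# Reference semantics of the two requested ONNX reduce ops, for a flat
# list of floats.

import math

def reduce_sum_square(xs):
    # ONNX ReduceSumSquare: sum of squared elements
    return sum(x * x for x in xs)

def reduce_log_sum_exp(xs):
    # ONNX ReduceLogSumExp: log(sum(exp(x))), shifted by max(x) for
    # numerical stability so large inputs don't overflow exp()
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

print(reduce_sum_square([1.0, 2.0, 3.0]))   # 14.0
print(reduce_log_sum_exp([0.0, 0.0]))       # log(2)
```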

tf2.x module 'tensorflow' has no attribute 'GraphDef'

This error occurs when a .pb file is converted to .nn. Maybe in TF2 it is called tf.compat.v1.GraphDef? When I used the new name, another error occurred:
image

It's exciting to be able to run neural networks in Unity, and I hope the conversion bugs can be fixed. I also hope you add Conv2DTranspose (deconvolution) and upsampling layers, which are common as well.
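For reference, the output-size arithmetic a converter would need for the two requested layers, as a hedged sketch (the formulas follow the common conv-transpose convention; output_padding is omitted for simplicity):

```python
# Shape arithmetic for the requested layers, as a converter rule would
# need it. Formulas follow the common conv-transpose convention.

def conv_transpose_out(size, kernel, stride=1, pad=0):
    # inverse of the forward conv formula:
    # out = (in - 1) * stride - 2 * pad + kernel
    return (size - 1) * stride - 2 * pad + kernel

def upsample_out(size, scale=2):
    # nearest-neighbour upsampling just multiplies the spatial size
    return size * scale

print(conv_transpose_out(16, kernel=4, stride=2, pad=1))  # 32
print(upsample_out(16))                                   # 32
```

Note the common k=4, s=2, p=1 configuration exactly doubles the spatial size, matching a 2x upsample.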

Trying to debug Tensors

I am attempting to debug some Tensors from a pose model integration following dahburj's repo.

To debug this I decided to try and visualise the tensors using the Barracuda utils.

Tensor heatmap = localWorker.Fetch("heatmap");

Debug.Log("Heatmap: " + heatmap.width + ":" + heatmap.height);

RenderTexture heatmapTexture = new RenderTexture(heatmap.width, heatmap.height, 32, RenderTextureFormat.ARGB32);

heatmapTexture.Create();

BarracudaTextureUtils.TensorToRenderTexture(heatmap, heatmapTexture);
When I run this I get the following exceptions:

RenderTexture.Create failed: format unsupported for random writes - RGBA8 sRGB (4).
UnityEngine.RenderTexture:Create()
Barracuda.ReferenceComputeOps:TensorToRenderTexture(Tensor, RenderTexture, Int32, Int32, Single, Single)
Barracuda.BarracudaTextureUtils:TensorToRenderTexture(Tensor, RenderTexture, Int32, Int32, Single, Single)
BarracudaPoseEstimation:GeneratePose(Texture2D, Int64, RawImage) (at Assets/Scripts/PoseEstimation/BarracudaPoseEstimation.cs:49)

The Create that is failing is not the one in my code but the one called by BarracudaTextureUtils

I also get the following exception (I suspect because of the previous exception):
ArgumentException: Kernel TensorToTexture is missing
Barracuda.ComputeFunc..ctor (UnityEngine.ComputeShader[] cs, System.String kn) (at <a57b068757aa4cec9131100114e8a39b>:0)
Barracuda.ComputeFunc..ctor (UnityEngine.ComputeShader cs, System.String kn) (at <a57b068757aa4cec9131100114e8a39b>:0)
Barracuda.ReferenceComputeOps.TensorToRenderTexture (Barracuda.Tensor X, UnityEngine.RenderTexture target, System.Int32 batch, System.Int32 fromChannel, System.Single scale, System.Single bias) (at <a57b068757aa4cec9131100114e8a39b>:0)

For what it's worth I tried each of the BarracudaTextureUtils calls and both raised the same exceptions.

I am running Unity 2019.2.12f1 on a MacBook Pro 2017, with Barracuda v0.3.2. The build target is set to Android, but the exceptions are raised while running in the editor.

Any advice or pointers greatly appreciated.

Thanks
Adrian

Can I include unsupported operations in tensorflow_to_barracuda.py? Or does the Unity inference engine have limitations?

Hi all,

In ml-agents, my agent learns best when using 3D convolutions.
I got those successfully implemented there, and they work as expected.

However, when trying to export the TensorFlow graph to .nn, I get the IGNORED: Conv3D unknown layer error from the ml-agents tensorflow_to_barracuda.py script.

I've looked into tensorflow_to_barracuda.py, and it seems I can add some Conv3D functionality. However, before going down that rabbit hole, I was wondering: is this even possible by only changing this script and barracuda.py? Will this be a big endeavor? Or won't the Unity inference engine support 3D convolution operations anyway?

If all works out, I can share the scripts or contribute if anyone is interested?

Many thanks
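Independent of whether the inference engine supports the op, a converter entry for Conv3D would at minimum need the output-shape rule. A sketch under TensorFlow's padding conventions ('SAME'/'VALID', per-dimension strides):

```python
# Output-shape rule a hypothetical Conv3D converter entry would need
# (TensorFlow conventions: 'SAME' and 'VALID' padding).

import math

def conv3d_out_dim(size, kernel, stride, padding):
    if padding == "SAME":
        return math.ceil(size / stride)
    if padding == "VALID":
        return math.ceil((size - kernel + 1) / stride)
    raise ValueError(padding)

def conv3d_out_shape(dhw, kernel, strides, padding):
    # dhw, kernel, strides are (depth, height, width) triples
    return tuple(conv3d_out_dim(s, k, st, padding)
                 for s, k, st in zip(dhw, kernel, strides))

print(conv3d_out_shape((8, 32, 32), (3, 3, 3), (1, 2, 2), "SAME"))   # (8, 16, 16)
print(conv3d_out_shape((8, 32, 32), (3, 3, 3), (1, 1, 1), "VALID"))  # (6, 30, 30)
```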

Only tensors of rank 4 or less are supported, but got rank 5

Hi, I'm fairly new to all of this, so hopefully I can explain this in a way that will make sense.

I'm using PyTorch 1.0.0 and managed to perform transfer learning on a pretrained ResNet18 model. It works great in the Python code I've written, but now I'm trying to bring it into Unity. I used PyTorch's built-in ONNX exporter (https://pytorch.org/docs/stable/onnx.html?highlight=onnx#module-torch.onnx) to create an onnx file of the model. I know this exporter does work, because when I tried it on a very simple model I created from scratch it imported fine.

When I bring the pretrained model into Unity, I get this:
OnnxImportException: Unexpected error while parsing layer 194 of type Unsqueeze.
Only tensors of rank 4 or less are supported, but got rank 5

This must be because the pretrained ResNet18 model uses rank 5 tensors, though I'm not even sure of that. I can't seem to find any information about that in the PyTorch docs. I'm not sure if there's any way to convert them to rank 4 in the python code so they'll work with Barracuda. It's possible I'm asking in the wrong place, and I should be going to PyTorch's Issue queue, but I'm hoping someone here will have more experience with this kind of thing.

Maybe I should try one of the other pretrained models that comes with PyTorch? Does anyone know if there is a good one out there that already uses rank 4 tensors? Am I even on the right planet?
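One thing worth checking before switching models: whether the offending rank-5 shape merely carries singleton dims (PyTorch's exporter sometimes adds them) that could be squeezed away to meet the rank-4 limit. A pure-Python sketch of that check; whether squeezing is actually safe depends on the model's semantics:

```python
# Sketch: checking whether a rank-5 ONNX shape can be reduced to the
# rank <= 4 Barracuda accepts at this version, by dropping size-1 dims.

def squeeze_to_rank(shape, target_rank=4):
    """Drop size-1 dims (from the right) until rank fits; None if impossible."""
    out = list(shape)
    for i in range(len(out) - 1, -1, -1):
        if len(out) <= target_rank:
            break
        if out[i] == 1:
            out.pop(i)
    return out if len(out) <= target_rank else None

print(squeeze_to_rank([1, 1, 512, 7, 7]))   # [1, 512, 7, 7]
print(squeeze_to_rank([2, 3, 512, 7, 7]))   # None: no singleton to drop
```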

Trouble running in multiple threads

When I run Execute() on the main thread, it works OK, like this:

public NNModel palmmodelSource;
private Model palmmodel;
private IWorker palmiworker;
Task task;
// Start is called before the first frame update
void Start()
{
    palmmodel = ModelLoader.Load(palmmodelSource);
    palmiworker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, palmmodel);
    Tensor input_texture2d = new Tensor(1, 256, 256, 3);
    Debug.Log("start ex");
    palmiworker.Execute(input_texture2d);
    var t1 = palmiworker.Fetch();
    Debug.Log("end ex");

    input_texture2d.Dispose();
}

And it outputs "start ex" and "end ex". But when I run it in a Task, it does not work:

public NNModel palmmodelSource;
private Model palmmodel;
private IWorker palmiworker;
Task task;
// Start is called before the first frame update
void Start()
{
    palmmodel = ModelLoader.Load(palmmodelSource);
    palmiworker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, palmmodel);
    Tensor input_texture2d = new Tensor(1, 256, 256, 3);
    task = Task.Run(() =>
    {
        Debug.Log("start ex");
        palmiworker.Execute(input_texture2d);
        var t1 = palmiworker.Fetch();
        Debug.Log("end ex");
    });
    input_texture2d.Dispose();
}

It only outputs "start ex" and never "end ex". It seems Execute() blocks inside the task and doesn't run.
But I checked that task.IsCompleted is true in Update(). What's more, when I use task.Wait() in Update(), Unity throws an exception:

UnityException: SupportsComputeShaders can only be called from the main thread.
Constructors and field initializers will be executed from the loading thread when loading a scene.
Don't use this function in the constructor or field initializers, instead move initialization code to the Awake or Start function.

So, what is happening here, and could you please give an example of running Execute() on another thread?
My Barracuda version is 0.4.

Getting output is 50x slower than running inference

Calling worker.Execute is very fast (about 0.02 seconds), but getting the data from the outputs using Tensor.AsFloats(), Tensor.data, Tensor.Download, or anything similar is very slow (more than 1 second). The number of floats it returns is 2560.

Platform: Android (with Vulkan)
Model: probably not the cause of the issue, it's an implementation of AlignedReID, essentially it's just a wrapper for MobileNet. I've uploaded it here just in case

void Start()
{
    model = ModelLoader.Load(modelFile);
    worker = WorkerFactory.CreateWorker(model, WorkerFactory.Device.GPU);
}

// Run the model and return the array of floats it outputs.
public float[] GetFeatures(Texture2D img)
{
    float start = Time.realtimeSinceStartup;
    var inputs = new Dictionary<string, Tensor>();
    var inputTensor = new Tensor(img);
    inputs.Add(model.inputs.First().name, inputTensor);

    worker.Execute(inputs);

    output = worker.PeekOutput();
    float[] features = output.AsFloats();
    Debug.Log($"Time taken: {Time.realtimeSinceStartup - start}");
    return features;
}

Is there anything that can affect the speed of getting the output data? The only time it runs fast is when I call Tensor.ToRenderTexture, so it's probably the transfer from GPU to CPU that is causing the problem.

Tensorflow to Barracuda (.pb to .nn) conversion Error.

Hi, I am trying to convert my .pb file to .nn. I followed the guidelines in the documentation, using the correct layers and the suggested method for saving the model to a .pb file.

However when I try to convert it using

python tensorflow_to_barracuda.py train.pb saved_model.nn

I get the error:

Instructions for updating:
non-resource variables are not supported in the long term
Converting train.pb to saved_model.nn
Traceback (most recent call last):
  File "tensorflow_to_barracuda.py", line 28, in <module>
    tf2bc.convert(args.source_file, args.target_file, args.trim_unused_by_output, args)
  File "D:\OneDrive\Documents\CyberSens\envs\ml-agents\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1549, in convert
    i_model.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message

I've attached my notebook below for reference.
Model.zip

Unclear how to access outputs/intermediate outputs correctly

I have tried following the guide to access intermediate outputs with "additionalOutputs", but I keep getting just the layer name and inputs printed instead of the actual output values.

I want to access the values printed after "After Softmax" when verbose is true, but haven't found a way to do so. The final model output is "Identity", which I'm not sure how to access either (it should be an output array of size 2, predicting dog or cat).

Thanks in advance!


Problem converting Tensorflow model

Hi,

I have a complex TensorFlow model that can't be converted to a Barracuda model using the script you provide; it says:

"
Traceback (most recent call last):
File "tensorflow_to_barracuda.py", line 28, in
tf2bc.convert(args.source_file, args.target_file, args.trim_unused_by_output, args)
File "/home/sestini/Scrivania/Barracuda/tensorflow_to_barracuda.py", line 1549, in convert
i_model.ParseFromString(f.read())
google.protobuf.message.DecodeError: Error parsing message
"

Until now I was able to use these models with TensorFlowSharp, but it does not seem to support TensorFlow 2.0.

Is there any simple way to use TensorFlow.NET (https://github.com/SciSharp/TensorFlow.NET) in Unity?

Thanks!

Does Barracuda support quantized models?

When I put my ONNX file into the project, I get:
OnnxLayerImportException: Tensor data type Uint8 is not supported.
Asset import failed, "Assets/Model/quantized_model.onnx" > OnnxLayerImportException: Tensor data type Uint8 is not supported.

Support for tensor data types other than float, int32 and int64?

Hi there.
We're doing some very interesting work with trained ResNet and MobileNet models, which we're successfully running across various devices in Unity using the Barracuda runtime via the built-in .onnx support. The vast majority of our weights are floats, and we're interested in using smaller data types, mainly for size reasons. I'm not intimately familiar with how mobile compute shader support might allow or hamper implementation of other data types, but we've converted one of our models from float to int8 with promising results outside of Barracuda, which of course won't import the model. Any chance of support for int8 and/or other small data types on the horizon?

Andrew

Inaccurate prediction results on Apple A10X devices

Hi, we are having a strange issue where, specifically on an iPhone 7 and iPad Pro (2017), we are getting wildly inaccurate image predictions. These devices use the A10 Fusion and A10X SoCs.

We tested the same project on an iPhone 5, 6, 7, 8, X, an iPad Pro 2017/2019, as well as on plenty of Android devices, and only observed the issue on 2 separate iPhone 7's running iOS 12 and iOS 13 (both have since been updated to iOS 13.3 with no effect). Using Unity 2019.2.18f1, tried multiple different versions and build machines.

Our project downloads a Barracuda model (converted from TensorFlow 1.14) trained to recognize daisies, roses, and sunflowers. Pressing the rose button runs a prediction on a preloaded rose image. On other devices, it guesses roses with 99% confidence; on the iPhone 7 and iPad Pro 2017, however, it almost always guesses daisies. I get a full prediction and no errors, but it's just inaccurate.

Here is a link to my project:
https://github.com/coach-ml/coach-flowers-demo

And I've included my model and a snippet to run it:
https://coach-public.s3.amazonaws.com/unity.bytes

// This normalizes the inputs
private Tensor ToTensor(Texture2D tex)
{
    var pic = tex.GetPixels32();
    int INPUT_SIZE = 224;
    int IMAGE_MEAN = 0;
    float IMAGE_STD = 255;

    float[] floatValues = new float[(INPUT_SIZE * INPUT_SIZE) * 3];

    for (int i = 0; i < pic.Length; i++)
    {
        var color = pic[i];
        floatValues[i * 3] = (color.r - IMAGE_MEAN) / IMAGE_STD;
        floatValues[i * 3 + 1] = (color.g - IMAGE_MEAN) / IMAGE_STD;
        floatValues[i * 3 + 2] = (color.b - IMAGE_MEAN) / IMAGE_STD;
    }

    var shape = new TensorShape(1, INPUT_SIZE, INPUT_SIZE, 3);
    return new Tensor(shape, floatValues);
}
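For what it's worth, the normalization above can be transcribed to Python to compare values offline against the training pipeline. This is a sketch (`to_floats` is a hypothetical helper); also note that Unity's GetPixels32 returns rows starting from the bottom of the texture, which is worth ruling out as a source of mismatch:

```python
# Python transcription of the C# ToTensor normalization above, for
# checking values offline. `pixels` is a flat list of RGBA tuples.

IMAGE_MEAN = 0
IMAGE_STD = 255.0

def to_floats(pixels):
    out = []
    for (r, g, b, a) in pixels:  # alpha is dropped, as in the C# code
        out.append((r - IMAGE_MEAN) / IMAGE_STD)
        out.append((g - IMAGE_MEAN) / IMAGE_STD)
        out.append((b - IMAGE_MEAN) / IMAGE_STD)
    return out

print(to_floats([(255, 0, 128, 255)]))  # [1.0, 0.0, ~0.502]
```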

var myTexture = Resources.Load<Texture2D>("Materials/rose");
var scaledTexture = ... // I have another function for scaling the texture to 224x224
var imageTensor = ToTensor(scaledTexture);

var modelPath = Path.Combine(path, "unity.bytes");
Model model = ModelLoader.LoadFromStreamingAssets(modelPath);

var Worker = WorkerFactory.CreateComputeWorker(model);
Worker.PrepareForInput(new Dictionary<string, TensorShape>()
{
    { "input", new TensorShape(1, 255, 255, 3) }
});

Worker.Execute(imageTensor);
var output = Worker.Fetch("output");

Are you guys experiencing any accuracy issues on A10X devices? I'm wondering if this has something to do with the graphics API.

Thanks,
Loren

Tensorflow to Barracuda (FusedBatchNormV3)

I have a yolov3tiny model in a frozen graph (.pb) file. When I run the tensorflow_to_barracuda.py script I get the following error.

IGNORED: FusedBatchNormV3 unknown layer
Traceback (most recent call last):
  File "tensorflow_to_barracuda.py", line 26, in <module>
    tf2bc.convert(args.source_file, args.target_file, args.trim_unused_by_output, args)
  File "C:\Users\sudes\Documents\barracuda-release\Tools\tensorflow_to_barracuda.py", line 1347, in convert
    process_model(i_model, args)
  File "C:\Users\sudes\Documents\barracuda-release\Tools\tensorflow_to_barracuda.py", line 1209, in process_model
    process_layer(node, o_context, args)
  File "C:\Users\sudes\Documents\barracuda-release\Tools\tensorflow_to_barracuda.py", line 1084, in process_layer
    assert(all_elements_equal(input_ranks))
AssertionError

Is this because FusedBatchNormV3 is not supported? If it is not, when will it be?
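For anyone patching the converter in the meantime: at inference time, fused batch norm reduces to a per-channel scale and bias, which is presumably how a converter rule would map it onto Barracuda's ScaleBias layer (visible in the logs elsewhere in this thread). A scalar sketch of the math:

```python
# Inference-time batch norm, the math a FusedBatchNorm(V3) converter rule
# would fold into a scale-and-bias layer:
#   y = gamma * (x - mean) / sqrt(var + eps) + beta

import math

def batch_norm(x, gamma, beta, mean, var, eps=1e-3):
    scale = gamma / math.sqrt(var + eps)
    bias = beta - mean * scale
    return x * scale + bias  # equivalent ScaleBias form

y = batch_norm(2.0, gamma=1.0, beta=0.5, mean=1.0, var=4.0, eps=0.0)
print(y)  # 0.5 + (2 - 1) / 2 = 1.0
```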

the speed of reading an onnx file gets slower when updating from 0.4 to 0.6

Hi,
First of all, we really appreciate what you've done.
Thanks to your product, we could build our results:
https://github.com/digital-standard/ThreeDPoseUnityBarracuda

And we have a question this time.
We are using version 0.4 of Barracuda to read an ONNX file and move a 3D avatar in Unity, and it works well.
But it gets slower after updating to version 0.6.

Do you know the reason for this, or a solution?
It would be very kind of you to tell me.

Thank you for your help.
Hiro

How to read / write data?

Got the following model:
image

Logging the input and output shapes in the console, I can see:
image

But the default log info is a bit different (there is no negative size):
image

I used this sample code to create an autoencoder in Keras:
https://github.com/jg-fisher/autoencoder/blob/master/ffae.py

Here is my attempt so far; all I can see are unchanging zero values in the output:

var inputs = new Dictionary<string, Tensor>();

float[][] jagged_input_data = new float[][]
{
    new float[] { 1f },
    new float[] { 0f },
    new float[] { 1f }
};

var input_1_1 = new Tensor(-1, 1, 1, 3, jagged_input_data);
inputs["input_1_1"] = input_1_1;

worker.Execute(inputs);
        
Tensor output = worker.Fetch("dense_1_1/Relu");
float[] data = new float[1 * 1 * 3];

output.data.Upload(data, 0, 3 );

Debug.Log(string.Join(",", data) );

Am I doing something wrong with the dimensions?
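Two things in the snippet look suspect: the batch dimension of -1 (a Tensor constructor needs a concrete batch size, here 1), and reading results via data.Upload, which, as the name suggests, pushes data to the tensor; readback in this thread's other snippets is done with Tensor.AsFloats() or data.Download. For sanity-checking what a (1, 1, 1, 3) tensor should contain, here is the NHWC flat-index arithmetic as a pure-Python sketch:

```python
# NHWC flat indexing, for sanity-checking what a Tensor(1, 1, 1, 3)
# layout should contain. The helper is illustrative, not a Barracuda API.

def nhwc_index(n, h, w, c, shape):
    N, H, W, C = shape
    return ((n * H + h) * W + w) * C + c

shape = (1, 1, 1, 3)
flat = [1.0, 0.0, 1.0]  # the three channel values of the single "pixel"
for c in range(3):
    assert flat[nhwc_index(0, 0, 0, c, shape)] == [1.0, 0.0, 1.0][c]
print("layout ok")
```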

GenericWorker NullReferenceException line 172

NullReferenceException: Object reference not set to an instance of an object
Barracuda.GenericWorker+d__28.MoveNext () (at Assets/Coach-ML/Barracuda/Core/Backends/GenericWorker.cs:172)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at <94c5f4c38cdc42d2b006f8badef04394>:0)

Barracuda stops VideoPlayer playing in Android.

I am working on a project that uses Barracuda as the inference engine in Unity. The input is a RenderTexture taken from a VideoPlayer. On Windows, everything works well, but on Android I found the following issue in Unity 2019.3.1 and 2019.3.5:
While the VideoPlayer is playing and Barracuda is disabled, the VideoPlayer works fine. While Barracuda is running inference, the VideoPlayer stops: VideoPlayer.isPlaying is still true, but the RenderTexture no longer updates.
This issue occurs only on Android.
Any help would be appreciated.

Support for Conv1D

Hi!

We were wondering if the latest version supports 1D convolutions. We couldn't find information about this type of layer in the docs. Conv1D layers are very useful for sequence / temporal data, e.g. audio, 3D character animations, text sequences, etc.

Platform support: UWP / Microsoft Hololens 2

When I try to compile for HoloLens 2 I get this error:
Library\PackageCache\[email protected]\Barracuda\Runtime\Core\Tensor.cs(858,57): error CS0433: The type 'ICloneable' exists in both 'Microsoft.Windows.MixedReality.DotNetWinRT, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' and 'netstandard, Version=2.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51'
I assume UWP / HoloLens 2 is not supported yet. If that is so, are there any plans for the future?

Edit: I was able to get it running by removing some third-party code and disabling Burst.

LSTMCell's zero_state causes tensorflow_to_barracuda error

Describe the bug
I trained a TensorFlow model myself, and now I want to convert it into a Barracuda model so it can run in Unity. However, I ran into what seems to be a bug: an error when executing G:\Unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py line 476.

To Reproduce
I have tried to reproduce it in the example environments, but the errors thrown are not the same. So I post the key snippet of my model here:

    with tf.variable_scope(self.scope):
        ...
        self._core_input = tf.reshape(tf.concat([self._stat_enc, self._entt_ebddings], axis=-1), [-1, 1, 128])
        ml_lstm = tf.nn.rnn_cell.BasicLSTMCell(self._core_hid_size)
        _init_state = ml_lstm.zero_state(self.batch_size, dtype=tf.float32) 
        # ml_lstm.get_initial_state(inputs=self._core_input, dtype=tf.float32)    # this way leads to the same error
        self._core_outputs, self._core_states = tf.nn.dynamic_rnn(ml_lstm, inputs=self._core_input, initial_state=_init_state, time_major=False)

When I tried to convert the model to a Barracuda model, it threw this error:

Traceback (most recent call last):
  File ".\transform_mymdl2barracuda.py", line 19, in <module>
    main()
  File ".\transform_mymdl2barracuda.py", line 15, in main
    agt.export_model()
  File "G:\Unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\KZ_model.py", line 223, in export_model
    tf2bc.convert(frozen_graph_def_path, self.model_path + ".nn", verbose=True)
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1553, in convert
    i_model, args
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1381, in process_model
    nodes, var_tensors, const_tensors, o_context
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 476, in <lambda>
    int(by_name(tensors, "/axis").data[0]), context.layer_ranks[inputs[0]]
IndexError: list index out of range

Later I tried to reproduce it in the given example environments, modifying the function create_recurrent_encoder() in ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\models.py as follows:

    ...
    with tf.variable_scope(name):
        rnn_cell = tf.contrib.rnn.BasicLSTMCell(half_point)
        # lstm_vector_in = tf.contrib.rnn.LSTMStateTuple(
        #     memory_in[:, :half_point], memory_in[:, half_point:]
        # )
        _init_state = rnn_cell.get_initial_state(inputs=lstm_input_state, dtype=tf.float32)
        recurrent_output, lstm_state_out = tf.nn.dynamic_rnn(
            rnn_cell, lstm_input_state, initial_state=_init_state
        )

It got this error:

Traceback (most recent call last):
  File "D:\ProgramData\Anaconda3\Scripts\mlagents-learn-script.py", line 11, in <module>
    load_entry_point('mlagents', 'console_scripts', 'mlagents-learn')()
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\learn.py", line 408, in main
    run_training(0, run_seed, options, Queue())
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\learn.py", line 253, in run_training
    tc.start_learning(env)
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\trainer_controller.py", line 226, in start_learning
    self._export_graph()
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\trainer_controller.py", line 130, in _export_graph
    self.trainers[brain_name].export_model()
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\trainer.py", line 152, in export_model
    self.policy.export_model()
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\tf_policy.py", line 230, in export_model
    tf2bc.convert(frozen_graph_def_path, self.model_path + ".nn")
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1553, in convert
    i_model, args
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1381, in process_model
    nodes, var_tensors, const_tensors, o_context
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 558, in <lambda>
    nodes, inputs, tensors, context, find_type="Reshape"
  File "g:\unity_projects\ml-agents\ml-agents-0.11.0\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 948, in basic_lstm
    assert len(inputs) == 2
AssertionError

Console logs / stack traces
For my own conversion, I enabled verbose output in the convert function in tensorflow_to_barracuda.py and found:

...
PATTERN: agtbrain/concat ~~ ConcatV2 <- ['agtbrain/stat_enc/dense/Tanh', 'agtbrain/entt_enc/dense_1/Tanh'] + ['agtbrain/concat/axis']
         ['ConcatV2']
'agtbrain/concat' Concat Vars:['agtbrain/stat_enc/dense/Tanh', 'agtbrain/entt_enc/dense_1/Tanh'] Const:[]
PATTERN: agtbrain/Reshape ~~ Reshape <- ['agtbrain/concat'] + ['agtbrain/Reshape/shape']
         ['Reshape']
'agtbrain/Reshape' Reshape Vars:['agtbrain/concat'] Const:[]
PATTERN: agtbrain/BasicLSTMCellZeroState/concat ~~ ConcatV2 <- [] + ['agtbrain/BasicLSTMCellZeroState/Const', 'agtbrain/BasicLSTMCellZeroState/Const_1', 'agtbrain/BasicLSTMCellZeroState/concat/axis']
         ['ConcatV2']

It seems that var_tensors at tensorflow_to_barracuda.py line 1381 should be:

['agtbrain/BasicLSTMCellZeroState/Const', 'agtbrain/BasicLSTMCellZeroState/Const_1']

but these tensors were listed in const_tensors instead.
How can this be solved? If you need more information, I'll post it here ASAP.

Environment:

  • Windows10
  • ML-Agents v0.11.0
  • Tensorflow 1.15.0

Dispatching models across two graphics cards

I have two models to run in parallel and two graphics cards, but I can't choose to dispatch one model to each graphics card to avoid performance issues.

Using this piece of code does not solve the problem:

var enum_l = worker_l.ExecuteAsync(intensor_l);
var enum_r = worker_r.ExecuteAsync(intensor_r);
while (enum_l.MoveNext() || enum_r.MoveNext()) { }

TensorFlow model to Barracuda or ONNX

I would like to know if it is possible to get a Teachable Machine model into Unity and use it with Barracuda. I have tried to convert a TensorFlow model with tensorflow_to_barracuda.py, which gives this error: google.protobuf.message.DecodeError: Error parsing message

I have also tried to convert the model with tf2onnx; the conversion succeeds, but the imported model shows warnings.

Also, when I import any model of my own, the camera only responds about once per second.

- Unity version 2020.1
- Barracuda versions tried: 0.4.0 and 0.6.3

I hope Teachable Machine models can be supported, since they are fast and easy to use for the project I am working on.
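As a point of reference, converting a TensorFlow SavedModel with the tf2onnx tool typically looks like the sketch below; the model directory, output filename, and opset number are placeholders to adapt to your own export:

```shell
# Convert a TensorFlow SavedModel to ONNX (pip install tf2onnx).
# The paths and opset number below are placeholders.
python -m tf2onnx.convert \
  --saved-model ./saved_model_dir \
  --output model.onnx \
  --opset 9
```

If the SavedModel converts but the import still shows warnings, those warnings usually name the specific unsupported or approximated operators, which is the first thing worth checking.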
