
ThreeDPoseUnityBarracuda

This project is no longer maintained. The code is old, so errors may occur; please treat it accordingly. The code for ThreeDPoseTracker has been moved to a private repository, so the code published on GitHub is not maintained, and we do not answer questions about it.


Unity sample of 3D pose estimation using Barracuda

Outline

ThreeDPoseUnityBarracuda is a sample project that loads an ONNX model with Barracuda and performs 3D pose estimation in Unity. Its accuracy is better than that of the previous model.
*Be aware that the target must be a single person. It does not work with multiple targets.

This sample makes the avatar "Unity-chan" mirror the movements of the person in a video in real time by estimating their 3D joint positions.

preview_daring.gif
preview_capture_v2.gif

Created with Unity 2019.3.13f1.
We use Barracuda 1.0.0 to load the ONNX model.
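
For reference, loading the converted model with the Barracuda 1.0 API looks roughly like the following minimal sketch (field and class names are illustrative; the project's own loading code lives in VNectBarracudaRunner.cs):

    using Unity.Barracuda;
    using UnityEngine;

    public class ModelLoadSketch : MonoBehaviour
    {
        // Assign the NNModel asset (converted from the .onnx file) in the Inspector.
        public NNModel modelAsset;

        private Model _model;
        private IWorker _worker;

        void Start()
        {
            // Parse the serialized model and create a GPU compute worker for it.
            _model = ModelLoader.Load(modelAsset);
            _worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, _model);
        }

        void OnDestroy()
        {
            // Workers hold GPU resources and must be disposed explicitly.
            _worker?.Dispose();
        }
    }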

Performance Report

GPU

GeForce RTX2070 SUPER ⇒ About 30 FPS
GeForce GTX1070 ⇒ About 20 FPS
※Without a GPU, it will generally not run at a usable speed.

Install and Tutorial

Download and put files

  1. Put the folders named "Assets" and "Packages" into your Unity project.
    Project settings are now included in this repository, so simply download/clone it to your local PC.

  2. Download the ONNX model from our home page via the following URL:
    https://digital-standard.com/threedpose/models/Resnet34_3inputs_448x448_20200609.onnx

Settings in Unity Inspector

  1. Open the project in the Unity Editor and put the onnx file into /Assets/Scripts/Model/. The onnx file is automatically converted into Barracuda's NNModel type in the process.

  2. Open "SampleScene" in "Scene" folder.
    If dialog shows up, please choose "Don't save".

  3. Set the model
    Drag the NNModel you placed in Assets/Scripts/Model/ onto "NN Model" of the GameObject named "BarracudaRunner" in the Inspector view. unity_inspector.PNG

  4. Start Debug
    You can now watch real-time motion capture by entering Play mode. unity_wiper_too_big.PNG

    Note that it takes about 15 seconds to load the model, while the video has already started playing.
    ※How long the model takes to load depends on your machine. unity_wiper_no_model.PNG

    You can avoid this problem by keeping the video paused until the model has loaded completely.
    Set the playback speed of the Video Player to 0 to wait for the model to load,
    unity_debug_video_playback_speed.PNG

    and set the value back to 1 to resume the video after the model has loaded (a scripted version of this trick is sketched after these steps).

  5. Arrange the size
    Sometimes the avatar extends outside the box, as in the screenshot above.
    In that case, adjust the "Video Background Scale" value of "MainTexture".
    The range is 0.1 – 1 and the default value is 1.
    Here, set it to 0.8.
    unity_arrange_size.PNG

  6. Start Debug again
    As you can see, the avatar now fits the box. unity_wiper_size_suit.PNG
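
As mentioned in step 4, the pause-until-loaded trick can also be scripted. A minimal sketch, assuming a flag such as modelLoaded that your own model-loading code sets once Barracuda has finished (the flag and class names are illustrative, not part of this project):

    using UnityEngine;
    using UnityEngine.Video;

    public class PauseUntilModelLoaded : MonoBehaviour
    {
        public VideoPlayer videoPlayer;
        public bool modelLoaded;   // set this from your model-loading code

        void Start()
        {
            // Freeze the video while the model is still loading.
            videoPlayer.playbackSpeed = 0f;
        }

        void Update()
        {
            if (modelLoaded && videoPlayer.playbackSpeed == 0f)
            {
                // Resume normal playback once the model is ready.
                videoPlayer.playbackSpeed = 1f;
            }
        }
    }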

※Other Options

・Choose Video
You can change the target video.
Put your video in Assets/Video/, then drag the file onto "Video Clip" of the "Video Player".
unity_chooseVideo.PNG

・Choose Avatar
There are two avatars in this scene, and you can switch between them easily in the Inspector view.
First, activate the GameObject named "Tait" and deactivate "unitychan".
Then drag the GameObject onto "V Nect Model" of "BarracudaRunner".
unity_set_anoter_avater_to_obj.PNG

*To determine the direction of the avatar's face, a GameObject that acts as a nose has been added to each avatar.
 So if you want to use your own avatar, please add such a nose object, referring to the code.
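
A rough illustration of the idea, as a minimal sketch with hypothetical neck/nose transforms (the project's actual joint wiring lives in its VNectModel code):

    using UnityEngine;

    public class FaceDirectionSketch : MonoBehaviour
    {
        public Transform neck;
        public Transform nose;   // the extra GameObject placed in front of the face

        void LateUpdate()
        {
            // The neck-to-nose vector tells us which way the face is pointing.
            Vector3 forward = (nose.position - neck.position).normalized;
            // Orient this (head) transform along it; the up vector is illustrative.
            transform.rotation = Quaternion.LookRotation(forward, Vector3.up);
        }
    }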

・Use Web Camera
By checking "Use Web Cam", you can switch the input images to a web camera.
unity_use_web_cam.PNG

・Skip On Drop
If "Skip On Drop" in Video Player checked, VideoPlayer is allowed to skip frames to catch up with current time.

How to make a good estimate?

how_to_make_good_estimate.png

The frame displayed in the upper-left corner (InputTexture) is the input image to the trained model. Make sure the whole body fits inside this frame; the pose cannot be estimated correctly if the limbs stick out past its edges. Since the program assumes the whole body is always inside the frame, the error increases when it is not. Also, keep the background as simple as possible, and pants work better than skirts.

Info

・Record
If you want to record the motion, the following package may be suitable:
https://github.com/zizai-inc/EasyMotionRecorder

License

Non-commercial use

・Please use it freely for hobbies and research.
When redistributing, a credit to Digital-Standard Co., Ltd. would be appreciated.

・The videos "Action_with_wiper.mp4" (original video: https://www.youtube.com/watch?v=C9VtSRiEM7s) and "onegai_darling.mp4" (original video: https://www.youtube.com/watch?v=tmsK8985dyk) included with this code are not copyright-free, so you may not use these files elsewhere without permission.

Commercial use

・Commercial use is not permitted; this project is for non-commercial use only.

Unitychan

We follow the Unity-Chan License Terms.
https://unity-chan.com/contents/license_en/
Light_Frame.png

Contributors

h-nakae, yukihiko

Issues

ThreeDPoseUnityBarracuda setting

If you use ThreeDPoseUnityBarracuda, the avatar is displayed in the center, so it overlaps the person in the video.
How can I make the avatar appear on the right side of the person, as in ThreeDPoseTracker?
Thank you.

Question about the Neural network model

Hey there,
Thank you for this great project.
I have a question about the ONNX model used in the project. Can you explain how the model works (what its outputs are)? And did you train it yourself, or is it a pre-trained model like BlazePose or OpenPose?

Thanks.

Questions about the pretrained onnx model and 3 inputs

Dear Author,
Thank you for your open-source code.
I have been studying pose estimation and Unity C# with your code as a practitioner,
and I have a couple of questions about the pretrained model named "Resnet34_3inputs_448x448....onnx".
To inspect the model's inputs/outputs and architecture, I loaded the onnx file into the Netron app. It displays 3 inputs, and your VNectBarracudaRunner.cs also uses 3 inputs. My question is: what does your code do with the 3 inputs?
And what kind of 3-input data did you train the model with?

Best regards
Foowally

Missing UnityChan License Archive

https://unity-chan.com/contents/license_en/

3.4 In the event the User redistributes the Digital Asset Data of Company’s Characters distributed by the Company based on the use permitted under Article 3.1, it shall distribute by enclosing a set of separately defined license related files in addition to the indication of the license logo or license sign provided in Article 3.3.

https://unity-chan.com/contents/license_jp/

(Japanese original of Article 3.4 above:) In the event the user redistributes the digital asset data of the Company's characters distributed by the Company based on the use permitted under Paragraph 1, it shall be distributed together with the separately defined set of license-related files, in addition to the display of the license logo or license notice provided in Paragraph 3.

https://unity-chan.com/download/releaseNote.php?id=UnityChanLicenseDoc

android build error

Hi,
I can't build this project for the Android platform.

CommandInvokationFailure: Gradle build failed.
C:/Program Files/Unity/Hub/Editor/2019.3.13f1/Editor/Data/PlaybackEngines/AndroidPlayer\OpenJDK\bin\java.exe -classpath "C:\Program Files\Unity\Hub\Editor\2019.3.13f1\Editor\Data\PlaybackEngines\AndroidPlayer\Tools\gradle\lib\gradle-launcher-5.1.1.jar" org.gradle.launcher.GradleMain "-Dorg.gradle.jvmargs=-Xmx4096m" "assembleRelease"

stderr[

FAILURE: Build failed with an exception.

  • What went wrong:
    A problem occurred configuring project ':launcher'.

Could not resolve all artifacts for configuration ':launcher:classpath'.
Could not resolve com.android.tools.build:gradle:3.4.0.
Required by:
project :launcher
> Could not resolve com.android.tools.build:gradle:3.4.0.
> Could not get resource 'https://dl.google.com/dl/android/maven2/com/android/tools/build/gradle/3.4.0/gradle-3.4.0.pom'.
> Could not GET 'https://dl.google.com/dl/android/maven2/com/android/tools/build/gradle/3.4.0/gradle-3.4.0.pom'.
> Connect to dl.google.com:443 [dl.google.com/127.0.1.2] failed: Connection refused: connect
> Could not resolve com.android.tools.build:gradle:3.4.0.
> Could not get resource 'https://jcenter.bintray.com/com/android/tools/build/gradle/3.4.0/gradle-3.4.0.pom'.
> Could not GET 'https://jcenter.bintray.com/com/android/tools/build/gradle/3.4.0/gradle-3.4.0.pom'.
> Connect to jcenter.bintray.com:443 [jcenter.bintray.com/127.0.1.3] failed: Connection refused: connect

  • Try:
    Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

  • Get more help at https://help.gradle.org

BUILD FAILED in 20s
]
stdout[

]
exit code: 1
UnityEditor.Android.Command.WaitForProgramToRun (UnityEditor.Utils.Program p, UnityEditor.Android.Command+WaitingForProcessToExit waitingForProcessToExit, System.String errorMsg) (at <3167064085404657b0d6c498207da025>:0)
UnityEditor.Android.Command.Run (System.Diagnostics.ProcessStartInfo psi, UnityEditor.Android.Command+WaitingForProcessToExit waitingForProcessToExit, System.String errorMsg) (at <3167064085404657b0d6c498207da025>:0)
UnityEditor.Android.Command.Run (System.String command, System.String args, System.String workingdir, UnityEditor.Android.Command+WaitingForProcessToExit waitingForProcessToExit, System.String errorMsg) (at <3167064085404657b0d6c498207da025>:0)
UnityEditor.Android.AndroidJavaTools.RunJava (System.String args, System.String workingdir, System.Action`1[T] progress, System.String error) (at <3167064085404657b0d6c498207da025>:0)
UnityEditor.Android.GradleWrapper.Run (UnityEditor.Android.AndroidJavaTools javaTools, System.String workingdir, System.String task, System.Action`1[T] progress) (at <3167064085404657b0d6c498207da025>:0)
Rethrow as GradleInvokationException: Gradle build failed
UnityEditor.Android.GradleWrapper.Run (UnityEditor.Android.AndroidJavaTools javaTools, System.String workingdir, System.String task, System.Action`1[T] progress) (at <3167064085404657b0d6c498207da025>:0)
UnityEditor.Android.PostProcessor.Tasks.BuildGradleProject.Execute (UnityEditor.Android.PostProcessor.PostProcessorContext context) (at <3167064085404657b

Looking for a model for smaller-resolution video, or a way to train my own onnx

I'm really impressed with the quality, but performance is a bit low, around 15 FPS at 448x448 pixels.
I noticed the model stops working at other resolutions; the onnx filename contains 448x448, so that is probably why! :)
But at 224x224 it would run at a full 60 FPS, so I would like to try a resolution like that.

This was by far the easiest project to use (and the highest-performing so far among projects that actually work), so I would really like to use it in the project I'm working on.

Unable to reproduce sample output

Machine : Macbook Pro (with Intel graphics card)
OS : Mac OS X (10.14.6)
Unity : 2019.2.12.f1

I followed the steps provided in the README, but only the video plays on the screen.
There is no change in Unitychan.
Can you clarify what 'Start Debug' means?

Can't import or drag video into "Video Clip", and errors occur when running the game with "Use Web Cam"

Hi, thank you for your contributions to this project.
My setup is: Ubuntu 16.04 LTS + Unity 2019.3.7f1 + GPU (Nvidia GeForce RTX 2060).
I ran into the following problems while running the project according to the suggestions in the README.

  1. I can't import the video samples from Assets/Video into "Video Player/Video Clip" in the Inspector, and can't even drag the video samples onto the "Video Player/Video Clip" box.
  2. Errors then occur while running the project, saying "ArgumentException: Kernel TextureToTensor and TextureToTensor_NHWC are both missing" (starting from: Unity.Barracuda.ComputeFunc..ctor (UnityEngine.ComputeShader[] cs, System.String kn) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/BarracudaReferenceCompute.cs:1710)).

I would really appreciate any help.

Some questions about training process

Hi, thank you for your amazing work!
I have some questions about the training process.
As far as I know, you use 4 types of loss: 2D, 2D offset, 3D, and 3D offset loss. As I understand it, the x and y of the predicted 3D output are in pixels, the same as the 2D output, right? Why do you need to separate the 2D output and the 3D output?
And how did you create the offset ground truth? Thank you again for your time.

Having the same issue

Hi everyone, I am very new to Unity and don't know much about using the application, so please be patient with me. I am receiving the same error:

AssertionException: Assertion failure. Values are not equal.
Expected: 3 == 4
UnityEngine.Assertions.Assert.Fail (System.String message, System.String userMessage) (at :0)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message, System.Collections.Generic.IEqualityComparer`1[T] comparer) (at :0)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message) (at :0)
UnityEngine.Assertions.Assert.AreEqual (System.Int32 expected, System.Int32 actual) (at :0)
Unity.Barracuda.PrecompiledComputeOps.Conv2D (Unity.Barracuda.Tensor X, Unity.Barracuda.Tensor K, Unity.Barracuda.Tensor B, System.Int32[] stride, System.Int32[] pad, Unity.Barracuda.Layer+FusedActivation fusedActivation) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/BarracudaPrecompiledCompute.cs:625)
Unity.Barracuda.StatsOps.Unity.Barracuda.IOps.Conv2D (Unity.Barracuda.Tensor X, Unity.Barracuda.Tensor K, Unity.Barracuda.Tensor B, System.Int32[] stride, System.Int32[] pad, Unity.Barracuda.Layer+FusedActivation fusedActivation) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/StatsOps.cs:69)
Unity.Barracuda.GenericWorker+d__30.MoveNext () (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:214)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at :0)
UnityEngine.MonoBehaviour:StartCoroutine(IEnumerator)
VNectBarracudaRunner:UpdateVNectModel() (at Assets/ThreeDPoseUnityBarracuda-master/ThreeDPoseUnityBarracuda-master/Assets/Scripts/VNectBarracudaRunner.cs:242)
VNectBarracudaRunner:Update() (at Assets/ThreeDPoseUnityBarracuda-master/ThreeDPoseUnityBarracuda-master/Assets/Scripts/VNectBarracudaRunner.cs:173)

I'm using an Nvidia GeForce GTX 660 on Windows 10 and installed the current latest versions of Unity and Barracuda. Can someone tell me how I can fix this issue? The avatar model is not moving, although the video plays. Please also tell me what I should change and where to find it, as I don't know where to look. Thanks a lot in advance.

missing script on MainCamera

Awesome project. Could you please point out which script is attached to the MainCamera besides CameraMover?

This also happens in your other ThreeDPose projects. I'm using Windows 10.

What preprocessing has been done to the video

Hello, I want to separate the 3D pose estimation (the onnx model) from the animation generation, but my results are strange. I estimated the 3D pose parameters with onnxruntime (Python), but when I generate the animation, the whole model is inverted. I checked the processing after parameter generation and found no problem, so I guess the issue lies in the image preprocessing. Could you tell me how you preprocessed the images?

# Here is the code where I feed the inputs to the model:

    def infer(self, image_data):
        for t in range(3):
            # input shape (1, 3, 448, 448)
            # image shape (448, 448, 3)
            if image_data[t].ndim == 3:
                image_data[t] = image_data[t].transpose([2, 1, 0])
                image_data[t] = np.expand_dims(image_data[t], axis=0)
                image_data[t] = image_data[t] / 255
            image_data[t] = image_data[t].astype('float32')
        outputs = [output.name for output in self.session.get_outputs()]
        return self.session.run(outputs, {"input.1": image_data[0], "input.4": image_data[1], "input.7": image_data[2]})
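
For comparison, on the Unity side the three inputs are fed as textures under the same input names; a minimal sketch based on the runner code quoted later on this page (Barracuda's texture-to-tensor path yields pixel values normalized to [0, 1]):

    using System.Collections.Generic;
    using Unity.Barracuda;
    using UnityEngine;

    public static class InputSketch
    {
        // The ResNet34 model takes three 448x448 frames, keyed by these names.
        public static Dictionary<string, Tensor> Build(Texture t0, Texture t1, Texture t2)
        {
            return new Dictionary<string, Tensor>
            {
                { "input.1", new Tensor(t0, 3) },
                { "input.4", new Tensor(t1, 3) },
                { "input.7", new Tensor(t2, 3) },
            };
        }
    }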

wave

In the image you can see that the head is inverted and her back is facing the camera.

If you see this, could you reply? Thank you!

Reference character for custom avatar

Hey Hironori

I would like to use my own avatar. I was considering using Unity-chan or Tait as the reference character, but they seem to have a lot of parts. Do you have recommendations on how to proceed?

Thanks,
Agatha

How can i get/download motion data from one video?

Hello,
I'd like to download the motion data, but I don't know how.
I'll use only one video for motion capture, so I just want to save the motion somewhere.

I tried to google it, but there are no useful blogs or articles. Please help me.

+) Or is there an API for using Barracuda?

Mediapipe alternative to Barracuda for 3D pose tracking?

First of all, thank you for this great project @yukihiko @h-nakae.
The Barracuda initialization in this project takes a long time and is really slow on lower-end devices.
To solve this, I would like to use Mediapipe for full-body pose detection (or maybe the holistic model), since it initializes faster and its FPS is higher.
Can you give me some insight into how you animated the Unity-chan 3D model from the pose you get out of the Barracuda model?
Code samples from your repository would be highly appreciated.

Thanks

Does ThreeDPoseUnityBarracuda support Unity 2019.4.1f1?

Hello, I cloned your repository, upgraded it to Unity 2019.4.1f1, and followed your tutorial. However, Unitychan won't move and the following error log appears.
error

This is my computer info
OS: Mac OS X (10.14.6)
Graphic: Intel UHD Graphics 630 1536 MB
Unity: 2019.4.1f1

Have you encountered this problem before?
thank you

How to get location of joints in world coordinate

Hi @yukihiko @h-nakae @digista-tanaka, thanks for this awesome project!

I checked the project and found that the predicted joint locations are in pixel coordinates. Can you please tell me how I can convert them from pixel coordinates to world coordinates? I tried the standard transform chain (pixel coordinates -> camera coordinates -> world coordinates) and get correct x and y values, but the z value is wrong, so I guess I have missed something. Could you please advise on how to proceed? Thanks in advance.
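
For reference, a minimal sketch of the standard pixel-to-world conversion in Unity, assuming you already have a metric depth value for each joint (estimating that depth is the hard part, and is where the z discrepancy described above would show up):

    using UnityEngine;

    public static class PixelToWorldSketch
    {
        // Convert a pixel coordinate plus a depth (distance from the camera
        // along its view axis, in world units) into a world-space position.
        public static Vector3 ToWorld(Camera cam, float pixelX, float pixelY, float depth)
        {
            return cam.ScreenToWorldPoint(new Vector3(pixelX, pixelY, depth));
        }
    }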

The avatar does not move with the video

I downloaded the source code and the NNModel from https://digital-standard.com/threedpose/models/Resnet34_3inputs_448x448_20200609.onnx.
I copied it into Assets/Scripts/Model and dragged it onto "NN Model" in the BarracudaRunner Inspector.
After starting Debug, the avatar does not move with the video.
This is the error; my environment is macOS Catalina 10.15.7.

AssertionException: Assertion failure. Values are not equal.
Expected: 3 == 4
UnityEngine.Assertions.Assert.Fail (System.String message, System.String userMessage) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Assertions/Assert/AssertBase.cs:29)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message, System.Collections.Generic.IEqualityComparer`1[T] comparer) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Assertions/Assert/AssertGeneric.cs:31)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Assertions/Assert/AssertGeneric.cs:19)
UnityEngine.Assertions.Assert.AreEqual (System.Int32 expected, System.Int32 actual) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Assertions/Assert/AssertPrimitiveTypes.cs:176)
Unity.Barracuda.PrecompiledComputeOps.Conv2D (Unity.Barracuda.Tensor X, Unity.Barracuda.Tensor K, Unity.Barracuda.Tensor B, System.Int32[] stride, System.Int32[] pad, Unity.Barracuda.Layer+FusedActivation fusedActivation) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/BarracudaPrecompiledCompute.cs:625)
Unity.Barracuda.StatsOps.Unity.Barracuda.IOps.Conv2D (Unity.Barracuda.Tensor X, Unity.Barracuda.Tensor K, Unity.Barracuda.Tensor B, System.Int32[] stride, System.Int32[] pad, Unity.Barracuda.Layer+FusedActivation fusedActivation) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/StatsOps.cs:69)
Unity.Barracuda.GenericWorker+d__29.MoveNext () (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:211)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at /Users/builduser/buildslave/unity/build/Runtime/Export/Scripting/Coroutines.cs:17)
UnityEngine.MonoBehaviour:StartCoroutine(IEnumerator)
VNectBarracudaRunner:UpdateVNectModel() (at Assets/Scripts/VNectBarracudaRunner.cs:242)
VNectBarracudaRunner:Update() (at Assets/Scripts/VNectBarracudaRunner.cs:173)

Avatar not animated on MacBook Air

Hi @yukihiko ,

Hope you are all well !

I gave your script a try and applied the fix for using 3 channels, but the avatar is not animated.

Do you have any solution for us? That would be incredible.

Thanks for any insights or input on this issue.

Cheers,
Luc Michalski

Store Mocap data Instead of Using it in Real Time

Hello. I have been trying to store the mocap data (bone coordinates, for example) in a vector or a file instead of using it to animate the model in real time. To do that, instead of playing a video with the Play() command and rendering it with a camera whose target texture feeds the image used as the base texture for the BarracudaRunner script, I am using a "for" loop to step through the frames of the video, extract their textures, and render them with the same camera that updates the MainTexture used by the BarracudaRunner. It is basically the same concept the code uses now, but instead of playing the video (which analyzes roughly one frame per update cycle), I am trying to run through all the frames in a single update cycle.

My approach renders the MainTexture with the same properties as the original code. However, when the BarracudaRunner code tries to use the image in the ExecuteModelAsync() coroutine, I get one of these errors for each frame of the video:

KeyNotFoundException: The given key was not present in the dictionary.
System.Collections.Generic.Dictionary`2[TKey,TValue].get_Item (TKey key) (at <437ba245d8404784b9fbab9b439ac908>:0)
Unity.Barracuda.GenericVars.PeekOutput (System.String name) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:1004)
Unity.Barracuda.GenericVarsWithReuse.PeekOutput (System.String name) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:1106)
Unity.Barracuda.GenericVars.GatherInputs (Unity.Barracuda.Layer forLayer) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:951)
Unity.Barracuda.GenericWorker+d__29.MoveNext () (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:176)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at <480508088aee40cab70818ff164a29d5>:0)

Can anyone figure out what could be causing the problem?


Below is a copy of the code I am using to run through the frames and extract the information.
Note: I only run this code after the model is ready.


using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Video;
using UnityEngine.UI;

public class ExtractsPartitions : MonoBehaviour
{

public VideoPlayer videoPlayer;
public VideoCapture videoCapture;
public VNectBarracudaRunner barracudaRunner;
public VNectModel referenceAvatar;

private RenderTexture videoTexture;


public bool extractFrames = true;

public bool framesToExtract= true;
public bool framesExtracting = false;
public bool framesExtracted = false;

public long startingFrame = 0;
public long finalFrame = 0;


Camera camera;


void Update()
{
    if (extractFrames)
    {
        if (framesToExtract && !videoPlayer.isPrepared)
        {
            InitMainTexture();
            PrepareFrames();

            framesToExtract = false;
        }

        if (videoPlayer.isPrepared)
        {                
            framesExtracting = true;

            ExtractPartitions();

            framesExtracting = false;
            framesExtracted = true;
        }

        if (framesExtracted)
        {   
            extractFrames = false;
        }
    }        
}



private void PrepareFrames()
{

    videoPlayer.renderMode = VideoRenderMode.APIOnly;
    videoPlayer.Prepare();
    videoPlayer.sendFrameReadyEvents = true;        
}    



private void ExtractPartitions()
{       

    videoPlayer.Pause();

    long framecount = (long)videoPlayer.frameCount;

    if (startingFrame < 0 || startingFrame > framecount || startingFrame > finalFrame)
    {
        startingFrame = 0;
    }

    if (finalFrame <= 0 || finalFrame > framecount || finalFrame < startingFrame)
    {
        finalFrame = framecount;
    }

    long numberOfFrames = finalFrame - startingFrame;

    for (long realFrame = 0; realFrame < numberOfFrames; realFrame++)
    {
        videoPlayer.frame = realFrame + startingFrame;
        
        videoTexture = (RenderTexture)videoPlayer.texture;     

        videoCapture.GetComponent<Renderer>().material.mainTexture = videoTexture;

        camera.Render();

        barracudaRunner.UpdateVNectModel();

        referenceAvatar.PoseUpdate();      
		
    }

}

void InitMainTexture()
{

    videoTexture = new RenderTexture((int)videoPlayer.clip.width, (int)videoPlayer.clip.height, 24);

    videoPlayer.targetTexture = videoTexture;

    var sd = videoCapture.VideoScreen.GetComponent<RectTransform>();
    sd.sizeDelta = new Vector2(videoCapture.videoScreenWidth, (int)(videoCapture.videoScreenWidth * videoPlayer.clip.height / videoPlayer.clip.width));
    videoCapture.VideoScreen.texture = videoTexture;

    var aspect = (float)videoTexture.width / videoTexture.height;

    videoCapture.VideoBackground.transform.localScale = new Vector3(aspect, 1, 1) * videoCapture.VideoBackgroundScale;
    videoCapture.VideoBackground.GetComponent<Renderer>().material.mainTexture = videoTexture;


    GameObject go = new GameObject("MainTextureCamera", typeof(Camera));

    go.transform.parent = videoCapture.transform;
    go.transform.localScale = new Vector3(-1.0f, -1.0f, 1.0f);
    go.transform.localPosition = new Vector3(0.0f, 0.0f, -2.0f);
    go.transform.localEulerAngles = Vector3.zero;
    go.layer = videoCapture._layer;

    camera = go.GetComponent<Camera>();
    camera.orthographic = true;
    camera.orthographicSize = 0.5f;
    camera.depth = -5;
    camera.depthTextureMode = 0;
    camera.clearFlags = CameraClearFlags.Color;
    camera.backgroundColor = Color.black;
    camera.cullingMask = videoCapture._layer;
    camera.useOcclusionCulling = false;
    camera.nearClipPlane = 1.0f;
    camera.farClipPlane = 5.0f;
    camera.allowMSAA = false;
    camera.allowHDR = false;

    videoCapture.MainTexture = new RenderTexture(videoCapture.bgWidth, videoCapture.bgHeight, 0, RenderTextureFormat.RGB565, RenderTextureReadWrite.sRGB)
    {

        useMipMap = false,
        autoGenerateMips = false,
        wrapMode = TextureWrapMode.Clamp,
        filterMode = FilterMode.Point,

        graphicsFormat = UnityEngine.Experimental.Rendering.GraphicsFormat.B5G6R5_UNormPack16,

    };


    camera.targetTexture = videoCapture.MainTexture;
}

}

AssertionException: Assertion failure. Values are not equal. Expected: 3 == 4

Hello, thanks for providing the community with such awesome work.
After following every step in the documentation and trying to run the project, I get the following error:

AssertionException: Assertion failure. Values are not equal.
Expected: 3 == 4
UnityEngine.Assertions.Assert.Fail (System.String message, System.String userMessage) (at <480508088aee40cab70818ff164a29d5>:0)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message, System.Collections.Generic.IEqualityComparer`1[T] comparer) (at <480508088aee40cab70818ff164a29d5>:0)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message) (at <480508088aee40cab70818ff164a29d5>:0)
UnityEngine.Assertions.Assert.AreEqual (System.Int32 expected, System.Int32 actual) (at <480508088aee40cab70818ff164a29d5>:0)
Unity.Barracuda.PrecompiledComputeOps.Conv2D (Unity.Barracuda.Tensor X, Unity.Barracuda.Tensor K, Unity.Barracuda.Tensor B, System.Int32[] stride, System.Int32[] pad, Unity.Barracuda.Layer+FusedActivation fusedActivation) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/BarracudaPrecompiledCompute.cs:625)
Unity.Barracuda.StatsOps.Unity.Barracuda.IOps.Conv2D (Unity.Barracuda.Tensor X, Unity.Barracuda.Tensor K, Unity.Barracuda.Tensor B, System.Int32[] stride, System.Int32[] pad, Unity.Barracuda.Layer+FusedActivation fusedActivation) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/StatsOps.cs:69)
Unity.Barracuda.GenericWorker+d__29.MoveNext () (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:211)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at <480508088aee40cab70818ff164a29d5>:0)
UnityEngine.MonoBehaviour:StartCoroutine(IEnumerator)
VNectBarracudaRunner:UpdateVNectModel() (at Assets/Scripts/VNectBarracudaRunner.cs:242)
VNectBarracudaRunner:Update() (at Assets/Scripts/VNectBarracudaRunner.cs:173)

More information about the NN structure?

ThreeDPoseUnityBarracuda is a sample project that loads an ONNX model with Barracuda and performs 3D pose estimation in Unity. Its accuracy is better than that of the previous model. *Be aware that the target should be only one person; it does not work with multiple targets.

May I know the NN structure you are using and how you trained it? On which dataset?
Could you report the performance of the method?

Thank you so much!

MobileNetV2 working!!!!!!!!!!!

Rewrite VNectBarracudaRunner.cs with the code below, download the MobileNetV2 model from
https://github.com/digital-standard/ThreeDPoseTracker/tree/master/Assets/Model
and swap it in for the original Resnet model, and it works.

using UnityEngine;
using UnityEngine.UI;
using System.Collections;
using System.Collections.Generic;
using Unity.Barracuda;

/// <summary>
/// Define Joint points
/// </summary>
public class VNectBarracudaRunner : MonoBehaviour
{
    /// <summary>
    /// Neural network model
    /// </summary>
    public NNModel NNModel;

    public WorkerFactory.Type WorkerType = WorkerFactory.Type.Auto;
    public bool Verbose = true;

    public VNectModel VNectModel;

    public VideoCapture videoCapture;

    private Model _model;
    private IWorker _worker;

    /// <summary>
    /// Coordinates of joint points
    /// </summary>
    private VNectModel.JointPoint[] jointPoints;
    
    /// <summary>
    /// Number of joint points
    /// </summary>
    private const int JointNum = 24;

    /// <summary>
    /// input image size
    /// </summary>
    public int InputImageSize;

    /// <summary>
    /// input image size (half)
    /// </summary>
    private float InputImageSizeHalf;

    /// <summary>
    /// column number of heatmap
    /// </summary>
    public int HeatMapCol;
    private float InputImageSizeF;

    /// <summary>
    /// Column number of heatmap in 2D image
    /// </summary>
    private int HeatMapCol_Squared;
    
    /// <summary>
    /// Column number of heatmap in 3D model
    /// </summary>
    private int HeatMapCol_Cube;
    private float ImageScale;

    /// <summary>
    /// Buffer memory has 2D heat map
    /// </summary>
    private float[] heatMap2D;

    /// <summary>
    /// Buffer memory has offset 2D
    /// </summary>
    private float[] offset2D;
    
    /// <summary>
    /// Buffer memory has 3D heat map
    /// </summary>
    private float[] heatMap3D;
    
    /// <summary>
    /// Buffer memory hash 3D offset
    /// </summary>
    private float[] offset3D;
    private float unit;
    
    /// <summary>
    /// Number of joints in 2D image
    /// </summary>
    private int JointNum_Squared = JointNum * 2;
    
    /// <summary>
    /// Number of joints in 3D model
    /// </summary>
    private int JointNum_Cube = JointNum * 3;

    /// <summary>
    /// HeatMapCol * JointNum
    /// </summary>
    private int HeatMapCol_JointNum;

    /// <summary>
    /// HeatMapCol * JointNum_Squared
    /// </summary>
    private int CubeOffsetLinear;

    /// <summary>
    /// HeatMapCol * JointNum_Cube
    /// </summary>
    private int CubeOffsetSquared;

    /// <summary>
    /// For Kalman filter parameter Q
    /// </summary>
    public float KalmanParamQ;

    /// <summary>
    /// For Kalman filter parameter R
    /// </summary>
    public float KalmanParamR;

    /// <summary>
    /// Lock to update VNectModel
    /// </summary>
    private bool Lock = true;

    /// <summary>
    /// Use low pass filter flag
    /// </summary>
    public bool UseLowPassFilter;

    /// <summary>
    /// For low pass filter
    /// </summary>
    public float LowPassParam;

    public Text Msg;
    public float WaitTimeModelLoad = 10f;
    private float Countdown = 0;
    public Texture2D InitImg;

    private delegate void UpdateVNectModelDelegate();
    private UpdateVNectModelDelegate UpdateVNectModel;

    private void Start()
    {

        InputImageSize = 224;
        HeatMapCol = 14;
        UpdateVNectModel = new UpdateVNectModelDelegate(UpdateVNect);
        _model = ModelLoader.Load(NNModel, Verbose);

        // Initialize 
        HeatMapCol_Squared = HeatMapCol * HeatMapCol;
        HeatMapCol_Cube = HeatMapCol * HeatMapCol * HeatMapCol;
        HeatMapCol_JointNum = HeatMapCol * JointNum;
        CubeOffsetLinear = HeatMapCol * JointNum_Cube;
        CubeOffsetSquared = HeatMapCol_Squared * JointNum_Cube;

        heatMap2D = new float[JointNum * HeatMapCol_Squared];
        offset2D = new float[JointNum * HeatMapCol_Squared * 2];
        heatMap3D = new float[JointNum * HeatMapCol_Cube];
        offset3D = new float[JointNum * HeatMapCol_Cube * 3];
        unit = 1f / (float)HeatMapCol;
        InputImageSizeF = InputImageSize;
        InputImageSizeHalf = InputImageSizeF / 2f;
        ImageScale = InputImageSize / (float)HeatMapCol;// 224f / (float)InputImageSize;

        // Disable sleep
        Screen.sleepTimeout = SleepTimeout.NeverSleep;

        // Init model
        _worker = WorkerFactory.CreateWorker(WorkerType, _model, Verbose);

        StartCoroutine("WaitLoad");

    }

    private void Update()
    {
        if (!Lock)
        {
            UpdateVNectModel();
        }
    }

    private IEnumerator WaitLoad()
    {
        inputs[inputName_1] = new Tensor(InitImg);
        inputs[inputName_2] = new Tensor(InitImg);
        inputs[inputName_3] = new Tensor(InitImg);

        // Create input and Execute model
        yield return _worker.StartManualSchedule(inputs);

        // Get outputs
        for (var i = 2; i < _model.outputs.Count; i++)
        {
            b_outputs[i] = _worker.PeekOutput(_model.outputs[i]);
        }

        // Get data from outputs
        offset3D = b_outputs[2].data.Download(b_outputs[2].shape);
        heatMap3D = b_outputs[3].data.Download(b_outputs[3].shape);

        // Release outputs
        for (var i = 2; i < b_outputs.Length; i++)
        {
            b_outputs[i].Dispose();
        }

        // Init VNect model
        jointPoints = VNectModel.Init();

        PredictPose();

        yield return new WaitForSeconds(WaitTimeModelLoad);

        // Init VideoCapture
        videoCapture.Init(InputImageSize, InputImageSize);
        Lock = false;
        Msg.gameObject.SetActive(false);
    }

    private const string inputName_1 = "input.1";
    private const string inputName_2 = "input.4";
    private const string inputName_3 = "input.7";


    Tensor input = new Tensor();
    Dictionary<string, Tensor> inputs = new Dictionary<string, Tensor>() { { inputName_1, null }, { inputName_2, null }, { inputName_3, null }, };
    Tensor[] b_outputs = new Tensor[4];

    private IEnumerator ExecuteModelAsync()
    {
        // Create input and Execute model
        yield return _worker.StartManualSchedule(inputs);

        // Get outputs
        for (var i = 2; i < _model.outputs.Count; i++)
        {
            b_outputs[i] = _worker.PeekOutput(_model.outputs[i]);
        }

        // Get data from outputs
        offset3D = b_outputs[2].data.Download(b_outputs[2].shape);
        heatMap3D = b_outputs[3].data.Download(b_outputs[3].shape);
        
        // Release outputs
        for (var i = 2; i < b_outputs.Length; i++)
        {
            b_outputs[i].Dispose();
        }

        PredictPose();
    }

    /// <summary>
    /// Predict positions of each of joints based on network
    /// </summary>
    private void PredictPose()
    {
        for (var j = 0; j < JointNum; j++)
        {
            var maxXIndex = 0;
            var maxYIndex = 0;
            var maxZIndex = 0;
            jointPoints[j].score3D = 0.0f;
            var jj = j * HeatMapCol;
            for (var z = 0; z < HeatMapCol; z++)
            {
                var zz = jj + z;
                for (var y = 0; y < HeatMapCol; y++)
                {
                    var yy = y * HeatMapCol_Squared * JointNum + zz;
                    for (var x = 0; x < HeatMapCol; x++)
                    {
                        float v = heatMap3D[yy + x * HeatMapCol_JointNum];
                        if (v > jointPoints[j].score3D)
                        {
                            jointPoints[j].score3D = v;
                            maxXIndex = x;
                            maxYIndex = y;
                            maxZIndex = z;
                        }
                    }
                }
            }
           
            jointPoints[j].Now3D.x = (offset3D[maxYIndex * CubeOffsetSquared + maxXIndex * CubeOffsetLinear + j * HeatMapCol + maxZIndex] + 0.5f + (float)maxXIndex) * ImageScale - InputImageSizeHalf;
            jointPoints[j].Now3D.y = InputImageSizeHalf - (offset3D[maxYIndex * CubeOffsetSquared + maxXIndex * CubeOffsetLinear + (j + JointNum) * HeatMapCol + maxZIndex] + 0.5f + (float)maxYIndex) * ImageScale;
            jointPoints[j].Now3D.z = (offset3D[maxYIndex * CubeOffsetSquared + maxXIndex * CubeOffsetLinear + (j + JointNum_Squared) * HeatMapCol + maxZIndex] + 0.5f + (float)(maxZIndex - 14)) * ImageScale;
        }

        // Calculate hip location
        var lc = (jointPoints[PositionIndex.rThighBend.Int()].Now3D + jointPoints[PositionIndex.lThighBend.Int()].Now3D) / 2f;
        jointPoints[PositionIndex.hip.Int()].Now3D = (jointPoints[PositionIndex.abdomenUpper.Int()].Now3D + lc) / 2f;

        // Calculate neck location
        jointPoints[PositionIndex.neck.Int()].Now3D = (jointPoints[PositionIndex.rShldrBend.Int()].Now3D + jointPoints[PositionIndex.lShldrBend.Int()].Now3D) / 2f;

        // Calculate head location
        var cEar = (jointPoints[PositionIndex.rEar.Int()].Now3D + jointPoints[PositionIndex.lEar.Int()].Now3D) / 2f;
        var hv = cEar - jointPoints[PositionIndex.neck.Int()].Now3D;
        var nhv = Vector3.Normalize(hv);
        var nv = jointPoints[PositionIndex.Nose.Int()].Now3D - jointPoints[PositionIndex.neck.Int()].Now3D;
        jointPoints[PositionIndex.head.Int()].Now3D = jointPoints[PositionIndex.neck.Int()].Now3D + nhv * Vector3.Dot(nhv, nv);

        // Calculate spine location
        jointPoints[PositionIndex.spine.Int()].Now3D = jointPoints[PositionIndex.abdomenUpper.Int()].Now3D;

        // Kalman filter
        foreach (var jp in jointPoints)
        {
            KalmanUpdate(jp);
        }

        // Low pass filter
        if (UseLowPassFilter)
        {
            foreach (var jp in jointPoints)
            {
                jp.PrevPos3D[0] = jp.Pos3D;
                for (var i = 1; i < jp.PrevPos3D.Length; i++)
                {
                    jp.PrevPos3D[i] = jp.PrevPos3D[i] * LowPassParam + jp.PrevPos3D[i - 1] * (1f - LowPassParam);
                }
                jp.Pos3D = jp.PrevPos3D[jp.PrevPos3D.Length - 1];
            }
        }
    }

    /// <summary>
    /// Kalman filter
    /// </summary>
    /// <param name="measurement">joint points</param>
    void KalmanUpdate(VNectModel.JointPoint measurement)
    {
        measurementUpdate(measurement);
        measurement.Pos3D.x = measurement.X.x + (measurement.Now3D.x - measurement.X.x) * measurement.K.x;
        measurement.Pos3D.y = measurement.X.y + (measurement.Now3D.y - measurement.X.y) * measurement.K.y;
        measurement.Pos3D.z = measurement.X.z + (measurement.Now3D.z - measurement.X.z) * measurement.K.z;
        measurement.X = measurement.Pos3D;
    }

	void measurementUpdate(VNectModel.JointPoint measurement)
    {
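        // Per-axis scalar Kalman update used below:
        //   gain:       K = (P + Q) / (P + Q + R)
        //   covariance: P' = R * (P + Q) / (R + P + Q)
        // where Q is the process-noise parameter and R the measurement-noise parameter.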
        measurement.K.x = (measurement.P.x + KalmanParamQ) / (measurement.P.x + KalmanParamQ + KalmanParamR);
        measurement.K.y = (measurement.P.y + KalmanParamQ) / (measurement.P.y + KalmanParamQ + KalmanParamR);
        measurement.K.z = (measurement.P.z + KalmanParamQ) / (measurement.P.z + KalmanParamQ + KalmanParamR);
        measurement.P.x = KalmanParamR * (measurement.P.x + KalmanParamQ) / (KalmanParamR + measurement.P.x + KalmanParamQ);
        measurement.P.y = KalmanParamR * (measurement.P.y + KalmanParamQ) / (KalmanParamR + measurement.P.y + KalmanParamQ);
        measurement.P.z = KalmanParamR * (measurement.P.z + KalmanParamQ) / (KalmanParamR + measurement.P.z + KalmanParamQ);
    }

    private void UpdateVNect()
    {
        ExecuteModel();
        PredictPose();
    }

    private void ExecuteModel()
    {

        // Create input and Execute model
        input = new Tensor(videoCapture.MainTexture, 3);
        _worker.Execute(input);
        input.Dispose();

        // Get outputs
        for (var i = 2; i < _model.outputs.Count; i++)
        {
            b_outputs[i] = _worker.PeekOutput(_model.outputs[i]);
        }

        // Get data from outputs
        //heatMap2D = b_outputs[0].data.Download(b_outputs[0].shape);
        //offset2D = b_outputs[1].data.Download(b_outputs[1].shape);
        offset3D = b_outputs[2].data.Download(b_outputs[2].shape);
        heatMap3D = b_outputs[3].data.Download(b_outputs[3].shape);
    }


}

I wrote the code quickly, so there are probably some unnecessary parts. Please forgive me ⩌ ̫ ⩌

Work with multiple camera angles

Hello,

I was wondering if there is a way to extend this to use multiple camera view angles? I would like this feature because ground movements aren't picked up very well. Or is there a better way to improve detection of ground movements?

Thanks!

AssertionException: Assertion failure. Values are not equal. Expected: 3 == 4

Not sure if this is still maintained, but this is the error I'm getting. Both Unity and Barracuda are up to date, but I still haven't found a fix.
System:
MacBook Pro (16-inch, 2019)
2.6 GHz 6-Core Intel Core i7
32 GB 2667 MHz DDR4
AMD Radeon Pro 5500M 8 GB
Unity 2019.4.12f1

AssertionException: Assertion failure. Values are not equal.
Expected: 3 == 4
UnityEngine.Assertions.Assert.Fail (System.String message, System.String userMessage) (at /Users/bokken/buildslave/unity/build/Runtime/Export/Assertions/Assert/AssertBase.cs:29)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message, System.Collections.Generic.IEqualityComparer`1[T] comparer) (at /Users/bokken/buildslave/unity/build/Runtime/Export/Assertions/Assert/AssertGeneric.cs:31)
UnityEngine.Assertions.Assert.AreEqual[T] (T expected, T actual, System.String message) (at /Users/bokken/buildslave/unity/build/Runtime/Export/Assertions/Assert/AssertGeneric.cs:19)
UnityEngine.Assertions.Assert.AreEqual (System.Int32 expected, System.Int32 actual) (at /Users/bokken/buildslave/unity/build/Runtime/Export/Assertions/Assert/AssertPrimitiveTypes.cs:176)
Unity.Barracuda.PrecompiledComputeOps.Conv2D (Unity.Barracuda.Tensor X, Unity.Barracuda.Tensor K, Unity.Barracuda.Tensor B, System.Int32[] stride, System.Int32[] pad, Unity.Barracuda.Layer+FusedActivation fusedActivation) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/BarracudaPrecompiledCompute.cs:625)
Unity.Barracuda.StatsOps.Unity.Barracuda.IOps.Conv2D (Unity.Barracuda.Tensor X, Unity.Barracuda.Tensor K, Unity.Barracuda.Tensor B, System.Int32[] stride, System.Int32[] pad, Unity.Barracuda.Layer+FusedActivation fusedActivation) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/StatsOps.cs:69)
Unity.Barracuda.GenericWorker+d__29.MoveNext () (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:211)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at /Users/bokken/buildslave/unity/build/Runtime/Export/Scripting/Coroutines.cs:17)
UnityEngine.MonoBehaviour:StartCoroutine(IEnumerator)
VNectBarracudaRunner:UpdateVNectModel() (at Assets/Scripts/VNectBarracudaRunner.cs:242)
VNectBarracudaRunner:Update() (at Assets/Scripts/VNectBarracudaRunner.cs:173)

image

Working almost well, but lagging

Hi!
It's a great project and it works fine, except that it freezes roughly every 2 seconds.
Everything else works well.

I have tested just playing a video without pose detection in another scene, and there is no lag.
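
One common cause of periodic hitches is running the whole network synchronously in a single frame. Barracuda can spread inference across frames with StartManualSchedule, as the ExecuteModelAsync coroutine in this project's runner does; a minimal sketch (field setup omitted; the output name is illustrative):

    using System.Collections;
    using System.Collections.Generic;
    using Unity.Barracuda;
    using UnityEngine;

    public class AsyncInferenceSketch : MonoBehaviour
    {
        private IWorker _worker;                      // created via WorkerFactory elsewhere
        private Dictionary<string, Tensor> _inputs;   // filled with the model's input tensors

        private IEnumerator ExecuteAsync()
        {
            // Yielding the manual schedule lets Barracuda execute a few layers
            // per frame instead of stalling one frame with the whole network.
            yield return _worker.StartManualSchedule(_inputs);

            // Read an output once scheduling has finished.
            Tensor heatMap3D = _worker.PeekOutput("heatmap3d");
            // ... extract joint data from heatMap3D here ...
        }
    }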

IndexOutOfRangeException: Invalid kernelIndex (131) passed, must be non-negative less than 155.

I used the .onnx from the download. When I run Play mode, this error happens:

IndexOutOfRangeException: Invalid kernelIndex (131) passed, must be non-negative less than 155.
Unity.Barracuda.ComputeFunc..ctor (UnityEngine.ComputeShader[] cs, System.String kn) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/BarracudaReferenceCompute.cs:1780)
Unity.Barracuda.ReferenceComputeOps.TextureToTensorData (Unity.Barracuda.TextureAsTensorData texData, System.String name) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/BarracudaReferenceCompute.cs:722)
Unity.Barracuda.ReferenceComputeOps.Pin (Unity.Barracuda.Tensor X) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/BarracudaReferenceCompute.cs:687)
Unity.Barracuda.ReferenceComputeOps.Prepare (Unity.Barracuda.Tensor X) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/BarracudaReferenceCompute.cs:1676)
Unity.Barracuda.StatsOps.Unity.Barracuda.IOps.Prepare (Unity.Barracuda.Tensor X) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/StatsOps.cs:525)
Unity.Barracuda.GenericWorker.SetInput (System.String name, Unity.Barracuda.Tensor x) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:90)
Unity.Barracuda.GenericWorker.StartManualSchedule (System.Collections.Generic.IDictionary`2[TKey,TValue] inputs) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:128)
VNectBarracudaRunner+<WaitLoad>d__37.MoveNext () (at Assets/Scripts/VNectBarracudaRunner.cs:184)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at <95ca2d9cf774493facdfcac0685ed8ab>:0)
UnityEngine.MonoBehaviour:StartCoroutine(String)
VNectBarracudaRunner:Start() (at Assets/Scripts/VNectBarracudaRunner.cs:165)
