Comments (10)

olokobayusuf commented on May 24, 2024

@joseph-o3h, what happens when you add the following line?

var modelData = ... // your MLModelData (e.g. fetched from NatML Hub)
modelData.computeTarget = MLModelData.ComputeTarget.CPUOnly; // <-- add this
var model = new MLEdgeModel(modelData);

I suspect the time is being spent creating an NNAPI representation of the model for execution.

joseph-o3h commented on May 24, 2024

@joseph-o3h, what happens when you add the following line?

I cannot use computeTarget right now, as it was added in 1.0.18, but we are still on 1.0.13 because of #35.

olokobayusuf commented on May 24, 2024

@joseph-o3h, we're working on #35. We're batching a few things into the next update. Regarding this issue, we've changed the API to make model creation asynchronous:

// Fetch the model data
var modelData = await MLModelData.FromHub(tag);
// Create the model
var model = await MLEdgeModel.Create(modelData); // <-- this is offloaded to a native background worker

I'll update this thread once we have an ETA.
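A minimal usage sketch of the async API above from a Unity script; the NatML namespace, class name, and model tag are illustrative assumptions, not confirmed by this thread:

using UnityEngine;
using NatML; // assumption: package namespace for MLModelData and MLEdgeModel

public class ModelLoader : MonoBehaviour {

    [SerializeField] string tag = "@natml/mobilenet-v2"; // illustrative model tag

    MLEdgeModel model;

    async void Start () {
        // Fetch the model data from NatML Hub
        var modelData = await MLModelData.FromHub(tag);
        // Create the model; the heavy work runs on a native background worker,
        // so awaiting here does not block the main thread
        model = await MLEdgeModel.Create(modelData);
        Debug.Log($"Created model: {tag}");
    }
}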

joseph-o3h commented on May 24, 2024

But even with asynchronous model creation it will take the same amount of time, won't it? It is just not going to block the main thread.

olokobayusuf commented on May 24, 2024

But even with asynchronous model creation it will take the same amount of time, won't it? It is just not going to block the main thread.

That's correct, though the time taken should be on the order of a few frames.

joseph-o3h commented on May 24, 2024

That's correct, though the time taken should be on the order of a few frames.

Is this when inference is run on the CPU? That might improve the model load/compile time but could also impact runtime performance.

NNAPI supports caching of compiled models (https://developer.android.com/ndk/reference/group/neural-networks#aneuralnetworkscompilation_setcaching). Is it possible to use this in NatML (if it is not being used already)?

olokobayusuf commented on May 24, 2024

Hey @joseph-o3h, Happy New Year! I've got inline responses below:

Is this when inference is run on the CPU? That might improve the model load/compile time but could also impact runtime performance.

That's correct!

NNAPI supports caching of compiled models (https://developer.android.com/ndk/reference/group/neural-networks#aneuralnetworkscompilation_setcaching). Is it possible to use this in NatML (if it is not being used already)?

NatML doesn't use NNAPI caching, unfortunately. Adding support for IR caching is on the mid-to-long-term roadmap.

olokobayusuf commented on May 24, 2024

We've had an engineering slowdown over the holidays, but we're picking back up now. ETA on the update with async model creation should be sometime next week.

olokobayusuf commented on May 24, 2024

Okay, a minor follow-up on the caching question: we're likely going to add support for this in the near term, but on iOS, macOS, and Windows first.

olokobayusuf commented on May 24, 2024

Hey @joseph-o3h, we've updated model creation to be async in the NatML 1.1 update. For device-specific delays in creating the model, the culprit is likely building the NNAPI representation, so you can either keep the model on the CPU (not advised) or hide the delay, since the process is now async. I'm closing this issue; feel free to open a new issue if you run into something similar.
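A minimal sketch of the "hide the delay" option using the async API described earlier in this thread; the loading-indicator field, class name, tag, and NatML namespace are illustrative assumptions rather than anything confirmed here:

using UnityEngine;
using NatML; // assumption: package namespace for MLModelData and MLEdgeModel

public class PredictorBootstrap : MonoBehaviour {

    [SerializeField] string tag = "@natml/mobilenet-v2";  // illustrative model tag
    [SerializeField] GameObject loadingIndicator;         // illustrative loading UI

    MLEdgeModel model;

    async void Start () {
        // Keep a loading indicator visible while the NNAPI representation is built
        loadingIndicator.SetActive(true);
        var modelData = await MLModelData.FromHub(tag);
        model = await MLEdgeModel.Create(modelData); // offloaded to a native background worker
        loadingIndicator.SetActive(false);
        // The model is now ready to drive predictions
    }
}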
