
unityvision-ios's Introduction

UnityVision-iOS

This native plugin lets Unity take advantage of Core ML and the Vision framework on iOS. It can work with or without Unity's ARKit plugin. When ARKit is used, image analysis is performed on ARKit's CoreVideo pixel buffer; otherwise, the plugin accepts native pointers to Unity textures.
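On the native side, that analysis step boils down to a small amount of Vision code. The sketch below is illustrative rather than the plugin's actual source; it assumes the Xcode-generated InceptionV3 model class is present in the project.

```swift
import Vision
import CoreML
import CoreVideo

// Illustrative sketch: classify a CoreVideo pixel buffer with Vision.
// `InceptionV3` is the class Xcode generates when the .mlmodel is added
// to the build target; this is an assumption, not the plugin's source.
func classify(buffer: CVPixelBuffer) {
    guard let model = try? VNCoreMLModel(for: InceptionV3().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Top label and its confidence, e.g. forwarded back to Unity.
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cvPixelBuffer: buffer, options: [:])
    try? handler.perform([request])
}
```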

Currently supported features:

  • Image classification

  • Rectangle detection

Installation

Requirements:

The plugin was tested with Unity 2018.1.0f2. It should also work with Unity 2017, but this has not been confirmed. Core ML requires iOS 11.0 or later.

Follow the steps below to integrate the plugin into your Unity project:

  1. Copy the contents of UnityVision-iOS/Assets/Scripts/Possible to YourProject/Assets/Scripts/Possible.
  2. Set the following values in player settings:
    • Scripting backend: IL2CPP
    • Target minimum iOS Version: 11.0
    • Architecture: ARM64

Usage guide

For information on how to use the plugin, study the example scenes located in UnityVision-iOS/Assets/Examples. Please note that it is not possible to test the plugin's functionality by running the scenes in the editor. To see the plugin in action, build and deploy one of the example scenes to a device.

For image classification, the plugin uses the InceptionV3 machine learning model, provided in this repository inside the MLModel folder. Add this model to the Xcode project generated by Unity after building by dragging it into the project navigator. Make sure the model is added to the Unity-iPhone build target.

The InceptionV3 model is quite large. If you wish to use a different model, perhaps a smaller one, you can, as long as it is an image classification model. In that case you'll need to modify VisionNative.swift, located under UnityVision-iOS/Assets/Plugins/iOS/Vision/Native. The only change required is at line 49 of the source file: change the VNCoreMLModel initialization parameter from InceptionV3().model to your model. Again, this is only needed if you want to switch the underlying machine learning model; the plugin works with InceptionV3 out of the box.
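For example, assuming your replacement model's Xcode-generated Swift class is named MyClassifier (a hypothetical name), the edit at line 49 would look roughly like this:

```swift
// Before (plugin default):
// let model = try? VNCoreMLModel(for: InceptionV3().model)

// After (hypothetical replacement model class `MyClassifier`,
// generated by Xcode from your .mlmodel file):
let model = try? VNCoreMLModel(for: MyClassifier().model)
```

Remember that the replacement .mlmodel file must also be dragged into the Xcode project and added to the Unity-iPhone build target, just like InceptionV3.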

Troubleshooting

If you get linker errors for ARKit when building the Xcode project, it means that the particular version of Unity you are using did not include ARKit.framework in the generated project's linked binaries. Go to Build Phases / Link Binary With Libraries and add ARKit.framework.

unityvision-ios's People

Contributors

adamhegedues, charstiles, tamaslosonczi


unityvision-ios's Issues

2 different swift compiler errors

I'm getting
VisionNative.swift:147:17: Cannot find 'UnitySendMessage' in scope
and:
VisionNative.swift:53:35: Cannot find 'Inceptionv3' in scope

I tried adding Inceptionv3 by dropping it into the project and by adding it in different locations under "Build Phases" for the Unity-iPhone target.
But the documentation is a bit ambiguous about where exactly you should add the file (Copy Bundle Resources? Copy Files? Compile Sources? etc.)

Rectangle detection is not accurate on iOS, please help

Please help: I am working on a project where we need to detect a rectangular object, and this plugin does that. However, in portrait mode on an iPad the width of the rectangle fits properly but the height doesn't; similarly, in landscape mode the height fits properly but not the width. Please find attached the images; a fix would be really helpful.
Landscape
Portrait

Performance control

Hello,

This is a very loose / broad question -

From a high level - I was wondering if there are any ways of increasing performance?

Thanks in advance!

Oliver

Can't make a build, getting this error

/Users/apple/Documents/UNITY BARCODE ML/UnityVision-iOS/Build/first/Libraries/Plugins/iOS/Vision/Native/VisionNative.swift:100:50: Use of undeclared type 'CoreImage'

Plugin crashes randomly if ARKitExample is used together with ARKitFaceTrackingConfiguration (Partially solved)

Hi, thanks for this nice plugin.

I just tried to use your ARKitExample code with an ARKitFaceTrackingConfiguration. It basically seems to work, and I initially get the right image classification from the front camera; however, after a short while the app crashes with an EXC_BAD_ACCESS exception.

More precisely, in:

int _vision_evaluateWithBuffer(CVPixelBufferRef buffer) {

    // In case of invalid buffer ref
    if (!buffer) return 0;
    
    // Forward message to the swift api
    return [[VisionNative shared] evaluateWithBuffer: buffer] ? 1 : 0;
}

the line return [[VisionNative shared] evaluateWithBuffer: buffer] ? 1 : 0;

produces this error:

Thread 1: EXC_BAD_ACCESS (code=1, address=0x94a2994c0)

Any suggestions?

Thanks a lot in advance!

Update:

Additional testing shows that it's not the ARKitFaceTrackingConfiguration, but another plugin running in the background performing voice recognition. This was quite surprising, as the two don't seem to have anything to do with each other. I will contact the developer of the voice recognition plugin.

Foundation?

Hi, did you also test with AR Foundation?

Freezes up intermittently on only one iPhone

Hello,
The classification app works on many phones, but on one iPhone XS it freezes up. I was wondering if anyone has ideas why.

So here are the details:
iOS: 12.2
The behavior is that it works for a few seconds, then freezes for 1-2 seconds, then works again, and this freezing continues.

I turned on Zombie Objects and get many occurrences of this error in the device console whenever it freezes:
2019-03-20 12:58:34.740735-0400 snobBog3[7837:2621978] Execution of the command buffer was aborted due to an error during execution. Discarded (victim of GPU error/recovery) (IOAF code 5)
I noticed I also get this warning beforehand:
Tiled GPU perf. warning: RenderTexture color surface (1080x1920) was not cleared/discarded.

The problem is definitely somewhere in the classification code, because all the AR functionality works when the vision script is disabled.

I have tried (that I can remember):

  • updating the Swift version to the newest version
  • getting rid of all unnecessary shaders
  • lowering maxClassificationResults (trying not to overload the GPU)
  • updating the iPhone's OS / closing all other apps / reinstalling, etc.

I tested it on a different iPhone XS and it works great.

I have never developed an app before that deals with Swift or Core ML, so anything that may seem obvious to you could save me weeks of headaches! Is there something I should be looking at that would tell me why it works on one iPhone XS but not another?

Thanks for reading; I anticipate and appreciate a response.
