A C++ SDK for 3D computer vision, paired with Cocoa frameworks for TrueDepth-based 3D scanning and meshing, plus ML landmarking models and analysis for feet and ears
I recently started testing scanning with the iPhone 14, and the results are much worse than on previous iPhones. When you scan 360°, the object usually starts to shake and move; I understand this is due to loss of tracking. Does anyone have an idea how to solve this?
I want to adjust the scanning system's reconstruction interval.
I mean I want to reconstruct a frame every 5 seconds, but currently it happens every ~0.033 seconds (once per frame at 30 fps).
Does anyone know how to configure the reconstruction interval?
I am simply trying to build the framework, and I ran into this error.
Here are the steps I took:
1- I cloned the repo (and verified that Git LFS was installed).
2- Ran this command in Terminal: ./install-dependencies.sh
3- Opened StandardCyborgSDK.xcworkspace.
4- Changed the signing team.
5- Clicked Build.
Was I supposed to take some other steps? I apologize in advance if the answer seems obvious!
First of all, thank you for the Open Source Project, it is a really great and innovative library!
Unfortunately I have problems loading the zip (.obj export) into Blender. I have the feeling that the texture is not mapped correctly to the surface.
I only use the Example project and only run scene.mesh?.writeToOBJZip(atPath: documentsURL.appendingPathComponent("objzip").path) after creating the scene.
Loaded into Blender and MeshLab, it looks like:
Am I doing something wrong?
Thanks in advance!
I am wondering whether this repository is up to date. I mean:
- Was the framework binary in the [StandardCyborgCocoa] repository built from [StandardCyborgSDK]?
When I use the framework binary from the [StandardCyborgCocoa] repository in my projects, all goes well. But when I use a framework binary that I archived and built myself from [StandardCyborgSDK], my application commonly crashes at the mesh-reconstruction phase.
I tried to build the VisualTestiOS scheme and got these errors in Xcode:
unable to read document: ../StandardCyborgSDK/StandardCyborgFusion/Public/SCEarTrackingModel.mlmodel
unable to read document: ../StandardCyborgSDK/StandardCyborgFusion/Public/SCFootTrackingModel.mlmodel
in SCFootTracking : ../StandardCyborgSDK/StandardCyborgFusion/Public/SCFootTracking.m:16:9 'SCFootTrackingModel.h' file not found
in SCEarTracking : ../StandardCyborgSDK/StandardCyborgFusion/Public/SCEarTracking.m:16:9 'SCEarTrackingModel.h' file not found
I ran every step described in the README to set up the project:
./install-dependencies.sh
After inspecting the file list, SCFootTrackingModel.h and SCEarTrackingModel.h don't exist in the project on GitHub, and Xcode can't read SCEarTrackingModel.mlmodel and SCFootTrackingModel.mlmodel. It shows the error: There was a problem decoding this Core ML document. validator error: unable to deserialize object.
As I understand it, the current SDK lacks the files or models needed for 3D point detection on a 3D model. Can you please clarify how points can be placed on a 3D model using only 2D coordinates?