Comments (3)
Yes. I think it's a good way to significantly reduce resource consumption, since we would limit the per-frame callback work to Filament transformations (rotation + scale) without touching the Renderable/Material parts.
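To make the idea concrete, here is a minimal sketch: keep one entity alive and rebuild only its rotation + scale matrix each frame, in the column-major `float[16]` layout that Filament's `TransformManager.setTransform()` consumes. The class and method names below are illustrative, not actual Sceneform API:

```java
// Sketch: per-frame update limited to a rotation + uniform scale,
// reusing one entity instead of recreating renderables/materials.
// The column-major 4x4 layout matches what Filament's
// TransformManager.setTransform(instance, float[16]) expects.
final class TransformSketch {

    /** Column-major 4x4 = rotation about Y (radians) combined with a uniform scale. */
    static float[] rotationYScale(float angleRad, float scale) {
        float c = (float) Math.cos(angleRad);
        float s = (float) Math.sin(angleRad);
        return new float[] {
             c * scale, 0f,    -s * scale, 0f,   // column 0
             0f,        scale,  0f,        0f,   // column 1
             s * scale, 0f,     c * scale, 0f,   // column 2
             0f,        0f,     0f,        1f    // column 3 (no translation)
        };
    }
}
```

Each frame callback would then only recompute this matrix and hand it to the transform component, leaving geometry and materials untouched.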
And I'm pretty sure the Filament team would not argue against this approach.
To be completely honest, I have already implemented it fully, since I needed to target the center DepthPoint and InstantPlacement within a personal app.
But the truth is that even though I love helping out and contributing improvements/fixes here, I really need money for personal reasons (a closed company with debts still to pay), so I prefer to keep the code private until I publish the app.
Sponsoring could have helped, but I only received $5, which was canceled 5 minutes later :-)
Once I publish my app or get any sponsoring, I will push it. And as you also said, ARCore did awesome work on DepthPoint and InstantPlacement, and I can confirm that it rocks with Sceneform!!!
That said, I do notice some overhead, maybe because I need very precise results (the orientation of the center point is detected to the exact degree).
One or two more screen recordings, just because I love it:
screen-20210714-224407_Trim.mp4
screen-20210714-224649_2_Trim_Trim.mp4
from sceneform-android.
This is an interesting question indeed, and I hope this discussion/issue can be the starting point for opinions/experiments from multiple users.
First, I can confirm that we definitely must have a look at memory/GPU consumption, performance, and resource usage/release, since the open-sourced 1.16.0 replaced a lot of things that (I think) weren't really evaluated with optimization in mind.
Secondly, the number of ARCore-compatible devices keeps increasing, and less powerful devices are now possible Sceneform targets. The main issue for us with ARCore comes from the fact that everything in the SDK is opaque, and when working on it I really feel that everything is designed to keep us from inspecting anything inside it: obfuscated code, no javadoc or sources.jar, closed access to the NDK. (If someone from the ARCore development team reads this, please give us access to the docs directly from Android Studio. We feel like we're back in the nineties, accessing the javadoc only from the website.)
And finally, the Filament team has made great efforts on rendering optimization since the beginning, which we should honor by using it the right way.
So, that being said, what should we do to reach as many devices as possible (or restrict usage on underpowered devices) while increasing Sceneform's capabilities?
Since it's quite difficult to run AR tests on a device farm, we need everyone's feedback here, and we should probably write a quick note on how to inspect performance from Android Studio, as well as on the on-device debug info for GPU usage (from what I know, some info is only accessible on the device itself, so that the measurements aren't skewed by USB debugging).
We should watch the instruction count, CPU cycles, and cycles per instruction to get consistent results.
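As a possible starting point for such a note, here is a minimal, Android-free sketch of a rolling frame-time tracker; on a real device it would be fed the nanosecond timestamps that `Choreographer.FrameCallback#doFrame()` delivers, and the 16.6 ms budget assumes a 60 Hz display:

```java
import java.util.ArrayDeque;

// Minimal rolling frame-time tracker. Feed it frame timestamps in
// nanoseconds (e.g. from Choreographer.FrameCallback#doFrame); it
// reports the average frame time and how many frames in the sliding
// window exceeded a 60 Hz budget (~16.6 ms).
final class FrameStats {
    private static final long BUDGET_NS = 16_666_667L;
    private final ArrayDeque<Long> durations = new ArrayDeque<>();
    private final int window;
    private long lastFrameNs = -1L;

    FrameStats(int window) { this.window = window; }

    /** Record one frame timestamp; keeps at most `window` durations. */
    void onFrame(long frameTimeNs) {
        if (lastFrameNs >= 0) {
            durations.addLast(frameTimeNs - lastFrameNs);
            if (durations.size() > window) durations.removeFirst();
        }
        lastFrameNs = frameTimeNs;
    }

    /** Average frame duration over the window, in milliseconds. */
    double averageMs() {
        if (durations.isEmpty()) return 0.0;
        long sum = 0;
        for (long d : durations) sum += d;
        return sum / (durations.size() * 1_000_000.0);
    }

    /** Number of frames in the window that missed the 60 Hz budget. */
    long jankyFrames() {
        long janky = 0;
        for (long d : durations) if (d > BUDGET_NS) janky++;
        return janky;
    }
}
```

For on-device numbers without adding code, `adb shell dumpsys gfxinfo <package>` prints frame-timing stats (for UI rendered through HWUI), and Android's simpleperf tool can report instruction counts and CPU cycles.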
@RGregat Can you provide your inspection steps?
I'm currently updating a quite popular app (more than 500k users, mostly teenagers) with an AR addition, which could bring great feedback, but I don't know yet which logs I should add.
I personally do most of my tests on a Pixel 4a and develop using the emulator, since it's quite annoying to walk away from my desk to find a plane in the "real world" with a USB cable connected to the computer (I should definitely use wireless adb).
I rarely encounter out-of-memory errors, and I usually gauge the CPU usage by the phone's temperature :-)
As a first step, I'm currently investigating the plane-detection rendering part, which is very resource-hungry since new renderables/materials (the gray dots) are added/modified on every frame callback.
Even though they are quite light meshes and are reused if the user comes back to an already detected plane, they are developed in a very data-oriented way and don't yet fully take advantage of the Filament ECS (entity-component system).
My first thought was simply to remove the plane visualizer in favor of a centered ring that just follows the orientation of the currently detected AR plane (like in some ARCore OpenGL samples). Since we would only have to apply a rotation to the same ring object, this would have a very big impact on performance.
The question here is: from a user's POV, what do you think of having an oriented ring visualizer instead of the dotted plane?
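Whatever shape the visualizer ends up taking, the per-frame work reduces to one rotation: the shortest-arc quaternion taking the visualizer's local up axis onto the detected plane's normal. A dependency-free sketch of that math (no actual Sceneform/Filament types, names are illustrative):

```java
// Shortest-arc rotation taking the ring's local up axis (0, 1, 0)
// onto a detected plane's unit normal (nx, ny, nz). Returns the
// quaternion as {x, y, z, w}. Uses q = (cross(up, n), 1 + dot(up, n))
// normalized; for up = (0,1,0): cross = (nz, 0, -nx), dot = ny.
final class RingOrientation {
    static float[] upToNormal(float nx, float ny, float nz) {
        float w = 1f + ny;                        // 1 + dot(up, n)
        if (w < 1e-6f) {                          // normal points straight down:
            return new float[] {1f, 0f, 0f, 0f};  // 180 degrees about X
        }
        float x = nz, z = -nx;                    // cross(up, n), y term is 0
        float len = (float) Math.sqrt(x * x + z * z + w * w);
        return new float[] {x / len, 0f, z / len, w / len};
    }
}
```

Applying this quaternion (plus the anchor's translation) each frame is the whole update; no renderable or material ever needs to be touched.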
Ha, I have no idea how I could miss the last part of your answer :O. I like the idea with the ring, and as you found out, I already tried something like this in the past, but without any good results. But hey, that was 2 years ago now, and I've hopefully gotten better at programming :P
I mean, even an oriented rectangle might be enough to tell the user: hey man, you are placing your anchor on a grid.
Related Issues (20)
- [Critical] [Bug] Memory leaks [SOLUTION INCLUDED] [FIXED] HOT 16
- CAMERA_UNAVAILABLE on Samsung A23: black screen. HOT 4
- Migrating from 1.5 to this manteined version: java.lang.NoSuchMethodError HOT 2
- Change color model 3d HOT 2
- the object is too big not fit to face how can i fix it HOT 3
- incompatible Fragment type HOT 3
- The video in augmented-images sample is mirrored HOT 4
- How do I use sceneform1.16.0 to size the model HOT 1
- app crash when recreate AR activity HOT 2
- Using sceneform sdk in a dynamic module results in crash
- Avatar Hand rotation not working properly
- Background image coming from on Avatar HOT 5
- Issue with Face Model HOT 5
- Issue With Skybox HOT 1
- Issue with centered node HOT 2
- Shadow position is not correct, shadow appears at the middle of the object. HOT 2
- VideoNode Size Problem HOT 3
- Unauthorized Access Error for Sceneform Dependency Version 1.22.0 HOT 2
- What is Engine and how to use it? HOT 1
- How To Change Texture Images HOT 2