Comments (32)
Hi there! Not sure if it's possible to add a vote, but I need depth as well so that I can create training data for stereo pairs.
Is there a branch where this work is underway? I would love to help with any work that has been started.
Thanks!
from com.unity.perception.
@JonathanHUnity Any update on this issue? Any upcoming release has this enhancement added?
Hi all,
Completely understand the importance of a depth sensor. We're in the process of adding this to our roadmap. Drop me an email at [email protected] if you'd like to know more details around the timeline, the feature set we're planning, etc.
Hey all! The Perception team just released 🎉 Perception 1.0 🎉, a major update to the toolset! We now have depth support built in that you could try out today!
There are a bunch of cool features to check out such as ray tracing, depth, a new output format SOLO, normals, and more! Here is the full list of changes: Perception 1.0 Changelog
Hi @jxw-tmp, depth output is a highly requested feature. We are looking into it this week, and depending on the amount of work it appears to be, we will prioritize it accordingly.
Hi @frederikvaneecke1997 and @FedericoVasile1, we have a depth labeler in the works that can generate 32-bit depth images in EXR format, where each pixel contains the actual distance in Unity units (usually meters) from the camera to the object in the scene.
If this seems of interest, feel free to drop an email to @shounakmitra ([email protected]) and he can take it from there!
Hi @FedericoVasile1, yes I think it will be possible. Inside the depth labeler, I had a DepthTextureReadback() function, which is invoked when the channel's output texture is read back during frame capture. It has two parameters: the first is the frameCount of the captured frame, and the second is the pixel data from the channel's depth output texture that was read back from the GPU. You can access the depth data from the second parameter.
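To make the shape of that callback concrete, here is a minimal sketch of a handler matching the two-parameter description above (frame count plus per-pixel distances). The class, method, and the plain float[] buffer are hypothetical stand-ins, not the actual Unity Perception types, so the idea stands alone outside the engine:

```csharp
using System;

// Hypothetical stand-in for the readback callback shape described above:
// frameCount identifies the captured frame, depthPixels holds the
// per-pixel camera-to-surface distance in Unity units (usually meters).
public static class DepthReadbackExample
{
    public static float NearestDistance(int frameCount, float[] depthPixels)
    {
        if (depthPixels == null || depthPixels.Length == 0)
            throw new ArgumentException("empty depth buffer");

        // Scan the frame for the closest surface; a real handler might
        // instead copy the buffer into its own data structure here.
        float nearest = float.MaxValue;
        foreach (var d in depthPixels)
            if (d < nearest) nearest = d;
        return nearest;
    }
}
```

In a real project the second parameter would arrive as whatever buffer type the package hands back from the GPU readback, but the per-pixel interpretation (distance in scene units) is the same.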
Dear @FedericoVasile1,
Sorry for the miscommunication. The new enterprise offering is only now being rolled out, and I'm not sure there has been a formal announcement about it yet. I also apologize for the earlier miscommunication about our roadmap. In any case, I would definitely suggest contacting Phoebe, and let me know if you have any further questions.
Hi,
I am a student and want to use RGB-D to generate synthetic data. I think the student license gives me the same access as a professional license. However, it seems the RGB-D sensor is an enterprise-only feature. Is this true? Thanks in advance.
Ok. Very cool. We have re-architected the way that perception works internally with the 1.0 release. Prior to it, generation and serialization of the data was a tightly coupled process. As a part of the SOLO work, we re-architected the simulation state and dataset capture classes to use a new endpoint based architecture. So now the data is generated and passed to an endpoint, which by default is the solo endpoint which serializes the data to disk.
I have made internal projects where I created a new endpoint that received the data, and streamed directly to a python node. This way you can completely bypass writing any data to disk.
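The endpoint API itself is Unity-specific, but the transport side of "streamed directly to a python node" can be sketched independently. A common choice is length-prefixed framing, so the receiving socket reader knows where each serialized frame ends; everything below is an illustration of that framing idea, not the Perception endpoint API:

```csharp
using System;

// Length-prefixed framing: prepend each serialized frame with its
// 4-byte little-endian length so a stream reader (e.g. a Python node
// on the other end of a socket) can split the byte stream back into
// individual frames.
public static class FrameCodec
{
    public static byte[] Frame(byte[] payload)
    {
        var framed = new byte[4 + payload.Length];
        // BitConverter emits little-endian bytes on common platforms;
        // a robust implementation would pin the byte order explicitly.
        BitConverter.GetBytes(payload.Length).CopyTo(framed, 0);
        payload.CopyTo(framed, 4);
        return framed;
    }
}
```

The Python side would then read four bytes, interpret them as the payload length, and read exactly that many bytes per frame.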
I ran out of time before the release, but I wanted to write a couple of examples of how to write alternative endpoints, for instance a COCO writer, and perhaps an example of an endpoint that streams the data.
Thanks a lot. Good to hear that.
I need it as well; no response so far.
Sorry for the delayed response. We are at the point where we would like to build this into the package, but we don't currently have it on our immediate roadmap.
Any updates on when depth data can be expected? Providing depth data should become a priority, in my opinion; it's part of so many computer vision applications.
Hi all,
any updates on this? Thank you.
Thank you for the update, Ruiyu. Cool, this is exactly what I am looking for.
I hope to see this feature in your package soon 😄
@RuiyuZ I have one question regarding the depth labeler.
Would it be possible to access the actual depth frame in a script instead of saving it to disk?
I mean, in my case I would need to access the current depth frame (which is a Texture2D or something similar, I guess) in the Unity Update() function. Will that be possible?
Thank you.
Hi RuiyuZ,
I tried to follow your advice and email Shounak, but I got this response:
I'd like to know what the roadmap looks like and when we can expect depth images. I'd also like advice on how I can proceed in the meantime - I need depth now!
I've managed to produce depth images with a replacement shader, but it interfered with the pixel-based keypoint labelling - occluded points were labelled as being in state 1.
Kind regards,
Steve
Hi Steve,
First, let me apologize that you were not able to get in contact with Shounak; he is no longer part of the CV team. But hopefully I can help.
Although depth images are not supported in the free version of the Perception package, nor on its roadmap, we now have a paid enterprise offering that supports depth labeling. For more information, please see the feature comparison chart here. This should get you going with depth data, along with many other labelers, immediately.
If you are interested in learning more about our enterprise offering, please contact Phoebe He ([email protected]).
I hope this information helps,
Steve
Dear @StevenBorkman,
where did you advertise this paid enterprise offering? Is there a communication channel that I am missing?
I asked for information about the depth labeler some weeks ago, both here and via email with a member of your team, and in both cases I was told that it is on the roadmap, unfortunately without any mention of this paid enterprise offering.
Anyway, I will contact Phoebe for more information about this.
Thank you
There is an example by Immersive Limit that calculates depth from "_ProjectionParams.y" and "_ProjectionParams.z". The shader code line is here.
Here is the approach:
depth01 = B pixel value (from the RGB PNG) / (1 + near camera clip / far camera clip) + near camera clip / far camera clip
The accuracy of the depth varies.
There is another example by Unity Technologies Japan, although I have not checked the method.
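For reference, that decode step can be written out directly. This is a transcription of the formula exactly as given in the comment above (taking B as the blue channel normalized to [0, 1]); I have not verified it against the shader itself, so treat it as a sketch:

```csharp
using System;

public static class DepthDecode
{
    // depth01 = B / (1 + near/far) + near/far, as stated in the
    // comment above. blue01 is the PNG blue channel normalized to
    // [0, 1]; nearClip and farClip are the camera clip planes
    // (_ProjectionParams.y and _ProjectionParams.z in Unity shaders).
    public static double Decode(double blue01, double nearClip, double farClip)
    {
        double ratio = nearClip / farClip;
        return blue01 / (1.0 + ratio) + ratio;
    }
}
```

With a near clip of 0.3 and a far clip of 1000, a blue value of 0 maps to near/far = 0.0003, consistent with the near plane being the smallest encodable depth.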
Hi,
I am a student and want to use RGB-D to generate synthetic data. I think the student license gives me the same access as a professional license. However, it seems the RGB-D sensor is an enterprise-only feature. Is this true? Thanks in advance.
I also have the same problem
Hi,
I am a student and want to use RGB-D to generate synthetic data. I think the student license gives me the same access as a professional license. However, it seems the RGB-D sensor is an enterprise-only feature. Is this true? Thanks in advance.
I also have the same problem.
Maybe @StevenBorkman could shed some light on this question?
Hi @FedericoVasile1, yes I think it will be possible. Inside the depth labeler, I had a DepthTextureReadback() function, which is invoked when the channel's output texture is read back during frame capture. It has two parameters: the first is the frameCount of the captured frame, and the second is the pixel data from the channel's depth output texture that was read back from the GPU. You can access the depth data from the second parameter.
Hi there, I'm also trying to do this. Is there an example of accessing the texture through a script?
@eugeneteoh, have you tried the latest Perception 1.0? We now have a depth labeler which produces a depth image each frame.
@eugeneteoh, have you tried the latest Perception 1.0? We now have a depth labeler which produces a depth image each frame.
Yes I'm using 1.0. Trying to access the images through C# script (PerceptionCamera) instead of saving to disk.
Ok, interesting. Can you give me a brief explanation of what you are trying to accomplish? I might be able to make some suggestions based on that.
I'm trying to send images (depth, normals, etc.) from Perception to a standardised Observation class (along with sensors from other packages/modalities), then pass it into an RL environment. Basically, all I want to do is read and store the images into my custom Observation class without going through the file system. I imagine there should be a way to do that from PerceptionCamera and somewhere in TextureReadback.
Thanks! I'm guessing it's related to this? It seems like I would have to create a C# node to be able to consume the data, which is overkill for my use case. I believe what I need is even simpler than that: just reading real-time captured images (Render Textures) into a variable in C#.
Ok, I figured it out. What I needed was just:
var outputTexture = perceptionCamera.GetChannel<DepthChannel>().outputTexture;
Great news!