Comments (15)

ondys commented on July 17, 2024

> I am not sure position correlates with normals in general, except in the simplest of cases.

What I meant was that our predictor computed geometric smooth normals using the underlying positions + connectivity data, taking into consideration crease angles for sharp edges.
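
For illustration, here is a minimal sketch of that kind of geometric normal computation (face normals averaged per corner, skipping faces whose dihedral angle to the corner's own face exceeds a crease threshold). The mesh layout and all names are illustrative assumptions, not Draco's internal data structures:

```cpp
// Minimal sketch: per-corner smooth normals with a crease-angle cutoff.
// The flat index buffer and all names are illustrative assumptions.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 Cross(Vec3 a, Vec3 b) {
  return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Normalize(Vec3 v) {
  const float len = std::sqrt(Dot(v, v));
  return len > 0 ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// positions: one entry per vertex; indices: 3 entries per triangle.
// Returns one normal per corner (3 * triangle count). Faces whose dihedral
// angle to the corner's own face exceeds crease_angle_rad do not contribute,
// which keeps sharp edges sharp.
std::vector<Vec3> CornerNormals(const std::vector<Vec3> &positions,
                                const std::vector<int> &indices,
                                float crease_angle_rad) {
  const int num_faces = static_cast<int>(indices.size()) / 3;
  std::vector<Vec3> face_normals(num_faces);
  std::vector<std::vector<int>> faces_at_vertex(positions.size());
  for (int f = 0; f < num_faces; ++f) {
    const Vec3 a = positions[indices[3 * f]];
    const Vec3 b = positions[indices[3 * f + 1]];
    const Vec3 c = positions[indices[3 * f + 2]];
    face_normals[f] = Normalize(Cross(Sub(b, a), Sub(c, a)));
    for (int k = 0; k < 3; ++k) faces_at_vertex[indices[3 * f + k]].push_back(f);
  }
  const float cos_crease = std::cos(crease_angle_rad);
  std::vector<Vec3> corner_normals(indices.size());
  for (int f = 0; f < num_faces; ++f) {
    for (int k = 0; k < 3; ++k) {
      Vec3 sum{0, 0, 0};
      for (int g : faces_at_vertex[indices[3 * f + k]]) {
        // Only average across faces that meet this one at a smooth angle.
        if (Dot(face_normals[f], face_normals[g]) >= cos_crease) {
          sum.x += face_normals[g].x;
          sum.y += face_normals[g].y;
          sum.z += face_normals[g].z;
        }
      }
      corner_normals[3 * f + k] = Normalize(sum);
    }
  }
  return corner_normals;
}
```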

> In professionally created models for games, nearly no one actually creates normals by hand, though, and this is an important point. They specify the normals indirectly by first assuming the object is smooth and then specifying exceptions to that smooth surface, either by tagging edges as "creases" (true, false, or a scalar) or by specifying smoothing groups.

That's a good point, and Draco can already store per-face smoothing groups. Tagging edges would be somewhat more difficult as we currently don't have support for per-edge attributes, but that's something that can be added.

> I am very curious. What paper reference do you have for that?

I'm not sure if there is a paper about octahedral coordinates for normals, but they have been used in many games lately. There is one paper about using octahedral coordinates for environment maps here and a website that shows how they can be used for normal vectors is here.
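
For reference, a minimal sketch of the octahedral mapping itself: a unit normal is projected onto the octahedron |x|+|y|+|z| = 1 and unfolded into a (u, v) pair in [-1, 1]^2 that quantizes well. The function names are illustrative, not Draco's API:

```cpp
// Minimal sketch of octahedral normal encoding/decoding. The lower
// hemisphere is folded outward across the diagonals of the square.
#include <cmath>
#include <utility>

static float SignNotZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

std::pair<float, float> OctEncode(float x, float y, float z) {
  const float inv_l1 = 1.0f / (std::fabs(x) + std::fabs(y) + std::fabs(z));
  float u = x * inv_l1;
  float v = y * inv_l1;
  if (z < 0.0f) {  // fold the lower hemisphere
    const float old_u = u;
    u = (1.0f - std::fabs(v)) * SignNotZero(old_u);
    v = (1.0f - std::fabs(old_u)) * SignNotZero(v);
  }
  return {u, v};
}

void OctDecode(float u, float v, float *x, float *y, float *z) {
  *x = u;
  *y = v;
  *z = 1.0f - std::fabs(u) - std::fabs(v);
  if (*z < 0.0f) {  // undo the fold
    const float old_x = *x;
    *x = (1.0f - std::fabs(*y)) * SignNotZero(old_x);
    *y = (1.0f - std::fabs(old_x)) * SignNotZero(*y);
  }
  const float len = std::sqrt(*x * *x + *y * *y + *z * *z);
  *x /= len; *y /= len; *z /= len;
}
```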

bubnikv commented on July 17, 2024

https://diglib.eg.org/bitstream/handle/10.2312/vmv20171266/111-118.pdf
page 118

jbrettle commented on July 17, 2024

Hey Ben. Thanks for the feedback and the requests. We'll run some tests that we can share and provide some comparisons!

bhouston commented on July 17, 2024

It would be cool to have bpv (bits per vertex) measures. Here is a really good comparative paper on 3D mesh compression that gives expected metrics in Table 1: http://liris.cnrs.fr/glavoue/travaux/revue/CSUR2015.pdf Given that your library uses Edgebreaker, we should expect an average of 2.1 bpv with triangle-strip-organized output; while there are lower-bpv compression schemes, they do not have the triangle-strip output that Edgebreaker has -- and this is why I believe you chose the Edgebreaker approach.
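
For scale, 2.1 bpv would mean the connectivity of a 1-million-vertex mesh compresses to roughly 2.1 Mbit, i.e. about 260 KB.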

We are very interested in adopting this tool, btw, if it is near optimal. I very much like that it focuses on just a single mesh rather than a hierarchical format that includes a scene graph, materials, etc.; that makes it a good building block.

hccampos commented on July 17, 2024

I'd like to second what @bhouston just said. At work we are loading huge single meshes of buildings and sometimes even cities and draco looks very suitable for us, without all the cruft of scene graphs, materials, etc. It will be very interesting to see how it compares with other existing formats.

bhouston commented on July 17, 2024

@hccampos You can also use the mesh encoder that glTF uses independently of glTF: https://github.com/KhronosGroup/glTF/wiki/Open-3D-Graphics-Compression

bhouston commented on July 17, 2024

@hccampos The Open3DGC library also has the benefit of being really small compared to the current Emscripten port of Draco.

ondys commented on July 17, 2024

@hccampos We still need to do more comprehensive testing, but the preliminary results we have indicate that the current version of Draco in general offers either slightly better or equivalent compression to O3DGC under the same quality settings for meshes with positions only. For meshes with texture coordinates and normals, we have observed about 1.1-1.2X compression gain.

Draco has in general been significantly faster (about 2-3X faster encoding and 1.5-3X faster decoding, C++ only). We have not compared the JavaScript decoding performance yet, but we would expect the performance gain there to be the same or even better compared to the C++ implementation.

The size of the Draco JavaScript decoder is indeed bigger, and as stated in this issue, we plan to make it smaller.

I don't currently have any comparison with OpenCTM, but we did some measurements quite a long time ago, and OpenCTM provided in general significantly worse compression (compared to both Draco and O3DGC), with about the same decoding performance as Draco but much slower encoding.

bhouston commented on July 17, 2024

> OpenCTM provided in general significantly worse compression

OpenCTM has different compression modes, and some are not so good. The best, I believe, is named M2 -- the one that uses LZMA on top of quantization -- which I suspect should be similar in performance to Draco.

> For meshes with texture coordinates and normals, we have observed about 1.1-1.2X compression gain.

My understanding is that the state of the art in normal compression is to use normals derived from the surface based on its connectivity structure (e.g. smooth normals), also incorporate hard edges or creases, and then have a correction factor on top of that to support arbitrary normals. This can result in almost no data being required for normals. OpenCTM does most of this (derived normals from connectivity + correction factors) but skips the hard-edge/crease technique. Does Draco do this? My reading of the code suggested it may not.
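
As a rough sketch of that correction-factor idea (predict a smooth normal from the decoded geometry, then store only the quantized difference to the actual normal); the names and the quantization step are assumptions rather than OpenCTM's or Draco's actual code:

```cpp
// Rough sketch of "predicted normal + correction": only the (usually tiny)
// difference between the stored normal and the geometrically derived normal
// is quantized and kept. Names and the quantization step are assumptions.
#include <array>
#include <cmath>
#include <cstdint>

using Vec3 = std::array<float, 3>;

// Encoder side: residual = actual - predicted, quantized to integers.
std::array<int16_t, 3> EncodeNormalCorrection(const Vec3 &actual,
                                               const Vec3 &predicted,
                                               float step = 1.0f / 127.0f) {
  std::array<int16_t, 3> q;
  for (int i = 0; i < 3; ++i)
    q[i] = static_cast<int16_t>(std::lround((actual[i] - predicted[i]) / step));
  return q;  // near-zero on smooth surfaces => cheap to entropy-code
}

// Decoder side: rebuild the normal from the same prediction plus the residual.
Vec3 DecodeNormalCorrection(const Vec3 &predicted,
                            const std::array<int16_t, 3> &q,
                            float step = 1.0f / 127.0f) {
  Vec3 n;
  for (int i = 0; i < 3; ++i) n[i] = predicted[i] + q[i] * step;
  const float len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
  for (int i = 0; i < 3; ++i) n[i] /= (len > 0 ? len : 1.0f);
  return n;
}
```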

bhouston commented on July 17, 2024

I just checked the OpenCTM code and the good codec is MG2. Here is the smooth-normal computation code, along with the correction factors, that I believe is partly responsible for MG2's compression-ratio performance:

https://github.com/Danny02/OpenCTM/blob/master/lib/compressMG2.c#L421
https://github.com/Danny02/OpenCTM/blob/master/lib/compressMG2.c#L487

bhouston commented on July 17, 2024

By default OpenCTM uses MG2 (the good one) and compression level 1 (the best compression ratio is at level 9, but level 5 probably achieves most of the benefit). I suspect the slower decompression may be related to the normal computation, but I haven't studied it in detail.

ondys commented on July 17, 2024

@bhouston The measurements we did in the past were indeed using the MG2 compression. While we don't have any official comparison numbers right now, you can check some benchmarks of OpenCTM vs. O3DGC done for glTF here. The difference was rather big, especially for scanned models, and the same was true for Draco as far as I remember.

As for normals, we did some experiments with predictors based on geometric normals (normals defined by the positions of vertices), but the results were mixed. As expected, the technique worked well for models where the normals are strongly correlated with the geometry, but in practice we found that people usually used normals only when they were significantly different from the geometric normals, in which case this predictor performed much worse than other available options. In the end we decided not to include this predictor in the public release, but we may add it at a later date. What we have for normals is a specialized encoder that transforms them into octahedral coordinates, where we can encode them using efficient transforms.

On the other hand, we do use positions as an input to predictors for texture coordinates, which usually works better than other options (only for -cl 7 or higher).
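
A rough sketch of such position-based texture-coordinate prediction, predicting the third UV of a triangle from the 3D positions and two already-decoded UVs; the function and the one-bit orientation flag are illustrative and not necessarily Draco's exact scheme:

```cpp
// Sketch: predict the UV of vertex C of triangle (A, B, C) from the 3D
// positions of A, B, C and the already-decoded UVs of A and B.
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Vec2 = std::array<double, 2>;

static Vec3 Sub(const Vec3 &a, const Vec3 &b) {
  return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}
static double Dot(const Vec3 &a, const Vec3 &b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Returns the predicted UV; `flip` is the one-bit orientation the encoder
// would transmit, because the perpendicular direction in UV space is ambiguous.
Vec2 PredictUV(const Vec3 &pa, const Vec3 &pb, const Vec3 &pc,
               const Vec2 &ua, const Vec2 &ub, bool flip) {
  const Vec3 ab = Sub(pb, pa);
  const Vec3 ac = Sub(pc, pa);
  const double ab_len2 = Dot(ab, ab);
  if (ab_len2 == 0.0) return ua;  // degenerate edge: fall back to A's UV
  // Decompose C into a component along AB (parameter t) and a perpendicular
  // distance d, both measured relative to |AB| in 3D.
  const double t = Dot(ac, ab) / ab_len2;
  const Vec3 perp = {ac[0] - t * ab[0], ac[1] - t * ab[1], ac[2] - t * ab[2]};
  const double d = std::sqrt(Dot(perp, perp) / ab_len2);
  // Reproduce the same decomposition in UV space.
  const Vec2 uv_ab = {ub[0] - ua[0], ub[1] - ua[1]};
  const Vec2 uv_perp = {-uv_ab[1], uv_ab[0]};  // 90-degree rotation of AB in UV
  const double s = flip ? -1.0 : 1.0;
  return {ua[0] + t * uv_ab[0] + s * d * uv_perp[0],
          ua[1] + t * uv_ab[1] + s * d * uv_perp[1]};
}
```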

bhouston commented on July 17, 2024

> As for normals, we did some experiments with predictors based on geometric normals (normals defined by the positions of vertices), but the results were mixed. As expected, the technique worked well for models where the normals are strongly correlated with the geometry, but in practice we found that people usually used normals only when they were significantly different from the geometric normals, in which case this predictor performed much worse than other available options. In the end we decided not to include this predictor in the public release, but we may add it at a later date. What we have for normals is a specialized encoder that transforms them into octahedral coordinates, where we can encode them using efficient transforms.

Interesting, I've never used predictors before in such a fashion -- neat idea. I understand what you mean and I can see their usefulness in reducing entropy.

I am not sure position correlates with normals in general, except in the simplest of cases.

In professionally created models for games, nearly no one actually creates normals by hand, though, and this is an important point. They specify the normals indirectly by first assuming the object is smooth and then specifying exceptions to that smooth surface, either by tagging edges as "creases" (true, false, or a scalar) or by specifying smoothing groups. These are very simple measures that can be fed into a geometric vertex-normal computation, which then generates the complex normals of the resulting object. Creases and smoothing groups are very low-entropy data compared to the derived normals, and they correspond to real-world design features of 3D objects. I am sure they are immensely better than any method that compresses the raw normals, unless the compression method is one that derives the underlying creases or smoothing groups.
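
As a tiny illustrative addendum to the crease-angle sketch earlier in the thread: with smoothing groups the per-edge angle test is simply replaced by a group-mask test. The names here are assumptions, not an existing Draco feature:

```cpp
// Sketch of the smoothing-group variant: a face contributes to a corner's
// averaged normal only when it shares a smoothing-group bit with the
// corner's own face. Types/names are illustrative only.
#include <cstdint>
#include <vector>

// face_groups: one 32-bit smoothing-group mask per face, as in 3DS Max / FBX.
// For corner (face f, vertex v), average the normals of all faces g incident
// to v with FacesSmoothAcross(face_groups, f, g); a zero mask means "faceted".
bool FacesSmoothAcross(const std::vector<uint32_t> &face_groups, int f, int g) {
  return (face_groups[f] & face_groups[g]) != 0u;
}
```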

When it comes to predictors, I know that the angle across an edge between adjacent faces is a predictor of that edge being a crease or a smoothing-group boundary. And creases and smoothing groups are very often completely predictive of normals in the context of professionally created models.

FBX, the main transfer format for games, stores edge-crease and smoothing-group data rather than normals in most cases, both for compactness and because that is the low-entropy data artists want to work with.

Deriving creases and smoothing groups from scanned objects (whose position data is already slightly erroneous and whose triangles are poorly structured) is likely one of those hard problems that I do not think anyone should invest too much time in.

My use case (professionally created models) is fairly different than yours.

> What we have for normals is a specialized encoder that transforms them into octahedral coordinates, where we can encode them using efficient transforms.

I am very curious. What paper reference do you have for that?

I do think that scanned objects are much harder to compress than the professionally created CAD or polygon models that are common in engineering and video gaming. Your approaches are reasonable for that use case.

bhouston commented on July 17, 2024

Great answers.

> Tagging edges would be somewhat more difficult as we currently don't have support for per-edge attributes, but that's something that can be added.

Just enumerate the edges based on the first time each is encountered while traversing the facets in order. Creases are stored in FBX as scalars between 0 and 1, but they could be heavily quantized to just a few bits each (2 may be sufficient for most cases?) and then compressed using any method; they would add up to almost nothing while providing massive flexibility.
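
A minimal sketch of that scheme (edges numbered by first encounter during an in-order face traversal, plus 2-bit quantization of a crease weight); the containers and names are assumptions, not an actual Draco feature:

```cpp
// Sketch: enumerate edges by first encounter while walking the faces in
// order, then quantize a per-edge crease weight in [0, 1] to 2 bits.
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Assigns a stable id to each undirected edge, in the order edges are first
// seen while traversing the triangle index buffer front to back.
std::map<std::pair<int, int>, int> EnumerateEdges(const std::vector<int> &indices) {
  std::map<std::pair<int, int>, int> edge_ids;
  for (size_t f = 0; f + 2 < indices.size(); f += 3) {
    for (int k = 0; k < 3; ++k) {
      int a = indices[f + k], b = indices[f + (k + 1) % 3];
      if (a > b) std::swap(a, b);  // undirected edge key
      // emplace() only inserts on the first encounter, so the id is stable.
      edge_ids.emplace(std::make_pair(a, b), static_cast<int>(edge_ids.size()));
    }
  }
  return edge_ids;
}

// 2-bit quantization of a crease weight in [0, 1]; two bits per edge add up
// to almost nothing once entropy-coded, since most edges are simply "smooth".
uint8_t QuantizeCrease(float w) {
  const int q = static_cast<int>(w * 3.0f + 0.5f);
  return static_cast<uint8_t>(q < 0 ? 0 : (q > 3 ? 3 : q));
}
```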

Only 3DS Max creates smoothing groups. All other tools (Maya, Softimage, Blender, etc.) use crease weights.

r-lyeh commented on July 17, 2024

Sorry to bring this up but... where are the comparisons after all? I can't find them in either the README or the homepage.
