
Comments (11)

ondys commented on August 15, 2024

Do you mean that k-th position and k-th normal should correspond to n-th position and n-th normal in the output (where k may be different from n)? I.e., the order of decoded positions should correspond to the order of decoded normals.

If so, then this should certainly be possible. For example, if your input is an .obj file and you intend to use our obj decoder, then the order should be preserved, but as I said in the other post, both position and normal indices need to be specified on the faces, e.g.:

f 2//2 3//3 4//4
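For reference, a minimal .obj fragment of that shape declares positions and normals separately and then joins them on the face line (the coordinate values here are made up):

```
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
v 0.0 0.0 1.0
vn 0.0 0.0 1.0
vn 0.0 0.0 1.0
vn 0.0 0.0 1.0
vn 0.0 0.0 1.0
f 2//2 3//3 4//4
```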

If you would like to preserve the order of positions / normals, e.g., the k-th position in the input should be the k-th position in the output, then it should still be possible, but you would have to use a less efficient compression (try -cl 1).
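With the bundled command-line encoder that might look like this (the file names here are placeholders):

```
draco_encoder -i scan.obj -o scan.drc -cl 1
```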

If you have a specific example where it does not work, please let us know and we will see what we can do about it.

from draco.

yvoronov-artec commented on August 15, 2024

Yes, you're right that k may be different from n, though the k-th position and k-th normal should correspond to the n-th position and n-th normal in the output. And I understand that faces may be used to connect points and normals.

However, my question is a bit different - it is about the correspondence between points and normals regardless of faces. That is, I am looking for such correspondence even when faces look like this:

f 2 3 4
f 5 7 8

or contain some mix of normal indices, and so on.


bhouston commented on August 15, 2024

Sometimes there are more normals than positions. Thus, in the general case, n >= p, where n is the number of normals and p is the number of positions, for things like OBJs and other general triangle and polygon formats. The same is true of UVs: u >= p, where u is the number of UVs and p is the number of positions.


ondys commented on August 15, 2024

@yvoronov-artec To do something like that, you would really have to either use our API or write your own custom obj_decoder. In the .OBJ file format, positions, normals and texture coordinates are simply not directly related to each other until they are assigned to a face. E.g., the first vertex position may not correspond to the first normal vector, and I don't think Draco should make such an assumption either (it may work for some models such as yours, but it's not general enough).

Another option may be to use the PLY file format, where positions and normals usually correspond to each other.
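For illustration, an ASCII PLY file of that kind stores each normal on the same row as its position, so the k-th normal belongs to the k-th vertex by construction (the values here are made up):

```
ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
property float nx
property float ny
property float nz
element face 1
property list uchar int vertex_indices
end_header
0.0 0.0 0.0 0.0 0.0 1.0
1.0 0.0 0.0 0.0 0.0 1.0
0.0 1.0 0.0 0.0 0.0 1.0
3 0 1 2
```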


yvoronov-artec commented on August 15, 2024

Actually, I already have an API-based encoder for our internal surface structure.

Generally, our internal structure resembles the OBJ format. I have an array of points, an array of normals (of exactly the same size as the points array) and an array of texture coords. Then I have an array of triangles, where every triangle can be textured (has a set of tex coords) or not.

So what I do is fill out a draco::Mesh object the same way it is done in obj_decoder.cc. Something like this:

    draco::Mesh mesh;
    draco::PointAttribute *vertexAttr = nullptr;
    draco::PointAttribute *normalAttr = nullptr;

    // Position attribute: one entry per point of the surface.
    GeometryAttribute posGeomAttr;
    posGeomAttr.Init( GeometryAttribute::POSITION, nullptr, 3, DT_FLOAT32, false, 3*sizeof(float), 0 );
    int posAttrId = mesh.AddAttribute( posGeomAttr, false, numPoints );
    if( posAttrId >= 0 ){
        vertexAttr = mesh.attribute( posAttrId );
        for( AttributeValueIndex::ValueType i = 0; i < numPoints; i++ ){
            vertexAttr->SetAttributeValue( AttributeValueIndex( i ), surface.points()[i] );
        }
    }

    // Normal attribute: in our data numNormals == numPoints.
    GeometryAttribute normGeomAttr;
    normGeomAttr.Init( GeometryAttribute::NORMAL, nullptr, 3, DT_FLOAT32, false, 3*sizeof(float), 0 );
    int normAttrId = mesh.AddAttribute( normGeomAttr, false, numNormals );
    if( normAttrId >= 0 ){
        normalAttr = mesh.attribute( normAttrId );
        for( AttributeValueIndex::ValueType i = 0; i < numNormals; i++ ){
            normalAttr->SetAttributeValue( AttributeValueIndex( i ), surface.pointsNormals()[i] );
        }
    }

    [... some texture coords stuff ... ]

And later:

for( AttributeValueIndex::ValueType i = 0; i < numFaces; i++ ){
    const std::array<int, 3>& triangle = surface.getTriangles()[i];

    Mesh::Face face;
    for( int corner = 0; corner < 3; corner++ ){
        int cornerGlobal = 3 * i + corner;
        face[corner] = cornerGlobal;

        const PointIndex cornerId( cornerGlobal );

        if( vertexAttr ){
            vertexAttr->SetPointMapEntry( cornerId, AttributeValueIndex( triangle[corner] ) );
        }
        if( normalAttr ){
            normalAttr->SetPointMapEntry( cornerId, AttributeValueIndex( <some other value> ) );
        }

        [... some texture coords stuff ... ]
    }

    mesh.SetFace( FaceIndex( i ), face );
}

Then I compress this mesh object, send a file to a destination and decompress it back there.

As one can see, I have an assumption for my surface object that the k-th point corresponds to the k-th point normal. And I would like to have the same correspondence after decompression of the Draco file (k may not be equal to n, as mentioned earlier).

Can you please help me achieve this goal? There is a method Mesh::SetAttributeElementType(int att_id, MeshAttributeElementType et) - could it be useful for this?

Thanks!


ondys commented on August 15, 2024

Is there any reason why the point map entry is set with a different mapping for the positions and for the normals? You mentioned that in your data the k-th vertex position corresponds to the k-th normal, so I would think that the mapping should be set like this:

        if( vertexAttr ){
            vertexAttr->SetPointMapEntry( cornerId, AttributeValueIndex( triangle[corner] ) );
        }
        if( normalAttr ){
            normalAttr->SetPointMapEntry( cornerId, AttributeValueIndex( triangle[corner] ) );
        }

assuming triangle[corner] is the index of the vertex position/normal on a given corner.

If the mesh is set up like this, it should preserve the relative ordering between positions and normals.

The only exception that I can think of would be if you deduplicate attribute values (which we do in the obj loader, but you should not do it in your code; look for the call out_point_cloud_->DeduplicateAttributeValues(); in obj_decoder.cc).


yvoronov-artec commented on August 15, 2024

Is there any reason why the point map entry is set with a different mapping for the positions and for the normals?

This is the exact point of my question. I clearly understand that I can preserve the correspondence between points and normals through the face setup. Obviously, if I do this, I don't even need any sequence preservation - I can reorder the normals programmatically based on the face info.

So yes, I do have a reason to set the point map entry with a different mapping for the positions and for the normals. And my question is about the possibility of preserving the correspondence between them in these particular circumstances.

Thanks!


ondys commented on August 15, 2024

I have to admit that I'm still not quite sure what problem you are trying to solve, but let's discuss three different scenarios:

  1. As you stated earlier, you assume that the k-th vertex position corresponds to the k-th normal. This would imply that the k-th entry (k-th AttributeValueIndex) in your PointAttribute for positions corresponds to the k-th entry in the PointAttribute for normals. For this scenario, SetPointMapEntry() should be initialized the way I stated in the previous post. (I assume you can't do this, but I still don't understand why not.)

  2. If the k-th position entry does not correspond to the k-th normal entry (i.e., SetPointMapEntry() is different for positions and normals), then the correspondence is still preserved through the point id in the decoded mesh (i.e., the encoded value will still be mapped to the same PointIndex, and the corresponding values can be retrieved through the PointAttribute::mapped_index() method). This is the standard way the attribute values are processed after decoding.

  3. If the assumptions from scenario 1 are correct (the k-th position corresponds to the k-th normal), but for whatever reason you don't want to preserve this correspondence (i.e., you want to map the k-th position to point i and the k-th normal to point j), and you still want to get the correspondence after decoding, then you would have to do two things:

    • use the sequential encoding method, e.g., by setting the encoding option SetEncodingMethod(options, MESH_SEQUENTIAL_ENCODING); (the sequential method is much worse than our regular edgebreaker encoding, but it preserves the order of points, e.g., the i-th point on the encoder side will be the i-th point on the decoder side)

    • the client app would still have to know that the i-th point's position corresponds to the j-th point's normal. That's not something that the decoder can currently preserve.

We have plans to add support for preserving the order of attribute values to the sequential encoder, but that's not something that is supported right now (also, doing that will often result in even worse compression, because if we preserve the order of attribute values, then we need to encode the point-to-attribute mapping, which can add significant overhead).


yvoronov-artec commented on August 15, 2024

The reason I am asking for seemingly strange things is that our meshes are not always perfect. We receive them raw from the scanning process, so they may be partially filled with normals, partially textured, and sometimes even partially triangulated. So I'd like to preserve as many details as possible, even when they are not nice and shiny.

In my mind, SetPointMapEntry() does not define the correspondence between a point and a normal. According to its definition, this method defines the correspondence between a face corner and an attribute value (position, normal, etc.). Keeping in mind that a face corner is not the same as a point (vertex), we need to map a face corner to a point (vertex) and to a normal separately.

This is why (in my mind) a direct correspondence between the k-th vertex position and the k-th normal by no means implies anything about faces and/or face corners, that is, about the SetPointMapEntry() method. In our data, such direct correspondence between the k-th vertex position and the k-th normal is established by using the same index k for vertices and normals. Since the Draco conversion does not preserve the sequence of vertices and normals inside its attributes, my question arises.

===
I went through the source code and reread your comments one more time, and now I understand the key reason for our misunderstanding. It is the word 'point': by it I mean 'vertex', while you mean 'face corner'. Sorry for that. Let me rephrase my question in the following way:

Is it possible to map an AttributeValueIndex of one Attribute to an AttributeValueIndex of another Attribute with no use of any PointIndex?

According to what you've posted, the answer is 'impossible'.

===
So I have another idea. I'd like to init the GeometryAttribute for POSITION and NORMAL with 4 floats instead of 3, filling three of them with the position/normal components (as usual) and the fourth float with the index k. That way I will have the original index value (k) after decompression for all vertices and normals, and I will be able to easily restore their original correspondence. Will this work?

Thanks!


ondys commented on August 15, 2024

I went through the source code and reread your comments one more time, and now I understand the key reason for our misunderstanding. It is the word 'point': by it I mean 'vertex', while you mean 'face corner'. Sorry for that. Let me rephrase my question in the following way:

This is not exactly accurate. In Draco, a point is essentially a collection of attribute values across all attributes. For example, in the case of positions, normals and texture coordinates, one point would represent a collection of one position, one normal and one texture coordinate. This is what the SetPointMapEntry() method is used for, as it specifies which attribute values are mapped to a given point.

Now, for meshes, a single corner is mapped to a certain point (PointIndex), but in many cases, all corners that are attached to a single vertex are going to be mapped to the same PointIndex (this is always true if all attributes are defined per-vertex, because in that case the attribute values are the same for all corners attached to a single vertex). The only case where different points would be attached to a single vertex is when the vertex lies on an attribute seam (such as a texture seam or a normal crease). In this case, the attribute values on different corners of the vertex differ, and therefore they need to be represented by different PointIndex values.

So I have another idea. I'd like to init the GeometryAttribute for POSITION and NORMAL with 4 floats instead of 3, filling three of them with the position/normal components (as usual) and the fourth float with the index k. That way I will have the original index value (k) after decompression for all vertices and normals, and I will be able to easily restore their original correspondence. Will this work?

Well, theoretically yes, but only if you don't use quantization, in which case the attribute values would not really be compressed at all (at least in our current implementation).


yvoronov-artec commented on August 15, 2024

Thank you for the explanations.

