
cgltf's People

Contributors

abwood, bcomb, black-cat, bobbyanguelov, bowald, chemecse, dmuir-g, eugen2891, evansds, hkva, jkuhlmann, jmousseau, kuranes, lxlasso, mikejsavage, pezcode, pixeljetstream, prideout, raysan5, rgriege, romainguy, sanghoon, schrottfresser, sergof, shawnhatori, stephomi, sultim-t, wsw0108, zeitt, zeux


cgltf's Issues

Validation errors are very generic

I have a tentative commit that addresses this issue, just thought I'd post an issue before posting the PR.

In Filament we use cgltf_validate in debug builds, and it's great for triaging glTF problems that lie in the asset itself rather than in Filament.

It would be nice if cgltf_validate printed a unique message for each type of failure but I don't think we should bloat the cgltf library with lots of strings that need to be maintained.

The solution I have adds a little macro that lets you optionally trigger an assert when a validation failure occurs. If the macro is defined to assert(false), then you'll see a line number that tells you what specific condition triggered the failure.
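A minimal sketch of what such a hook could look like (CGLTF_VALIDATE_FAIL_HOOK, DEMO_CHECK and demo_validate_range are illustrative names, not cgltf's actual API):

```c
#include <stddef.h>

/* Hypothetical hook invoked on every failed validation check. Defining it to
   assert(0) in a debug build turns the failing source line into the
   diagnostic; the default keeps release builds silent. */
#ifndef CGLTF_VALIDATE_FAIL_HOOK
#define CGLTF_VALIDATE_FAIL_HOOK() ((void)0)
#endif

/* Illustrative result codes standing in for cgltf_result values. */
enum { DEMO_OK = 0, DEMO_DATA_TOO_SHORT = 1 };

#define DEMO_CHECK(cond, result) \
    do { if (!(cond)) { CGLTF_VALIDATE_FAIL_HOOK(); return result; } } while (0)

/* Stand-in for a cgltf_validate-style range check on an accessor. */
static int demo_validate_range(size_t offset, size_t count, size_t buffer_size)
{
    DEMO_CHECK(offset <= buffer_size, DEMO_DATA_TOO_SHORT);
    DEMO_CHECK(count <= buffer_size - offset, DEMO_DATA_TOO_SHORT);
    return DEMO_OK;
}
```

With the hook left empty this costs nothing in release builds; defining it to assert(0) makes the assert's line number identify the exact failed condition without adding any strings to the library.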

Vendor extensions

Hope it hasn't been asked before, I didn't find anything when searching. We're about to launch the next version of Capture with cgltf and our own CAPTURE_model extension which has a very narrow target group. What's the lay of the land in terms of vendor extensions - manage them in our own branches, bring all into cgltf or introduce a structured way of hooking into cgltf to read / write them?

Errors opening files > 2GB in size

On 64-bit Windows, processing a file larger than 2 GB with gltfpack first fails to open the file and then fails to validate it.

	if (file_size == 0)
	{
		fseek(file, 0, SEEK_END);

		long length = ftell(file);
		if (length < 0)
		{
			fclose(file);
			return cgltf_result_io_error;
		}

		fseek(file, 0, SEEK_SET);
		file_size = (cgltf_size)length;

The size could be passed in to this function, but in this default code path, long is a signed 32-bit type on Windows, so ftell does not return a correct value for files larger than 2 GB.
A temp fix for Windows is

long long length = _ftelli64(file);

The second problem is cgltf_json_to_int() being used to read values stored in cgltf_size values, like buffer sizes and buffer offsets.

I locally created cgltf_json_to_size() that uses atoll() and replaced instances of cgltf_json_to_int() with it.
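A sketch of both fixes described above (portable_ftell64 and json_to_size are illustrative names; an actual patch would live inside cgltf's file and JSON helpers, and the start/end parameters mimic jsmn token bounds):

```c
#define _POSIX_C_SOURCE 200809L /* for ftello on POSIX systems */
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <stdlib.h>

/* 64-bit-safe replacement for the ftell() call above. */
static long long portable_ftell64(FILE* f)
{
#if defined(_WIN32)
    return _ftelli64(f);
#else
    return ftello(f);
#endif
}

/* Sketch of a cgltf_json_to_size-style helper: parse an unsigned 64-bit
   size with strtoull instead of the int-sized parser. */
static unsigned long long json_to_size(const char* json, int start, int end)
{
    char buf[32];
    int n = end - start;
    if (n <= 0 || n >= (int)sizeof(buf)) return 0;
    for (int i = 0; i < n; ++i) buf[i] = json[start + i];
    buf[n] = '\0';
    return strtoull(buf, NULL, 10);
}
```

With a 64-bit offset type and a 64-bit size parser, buffer sizes and byte offsets above 2^31 survive the round trip.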

cgltf_load_buffer_base64 returns cgltf_result_io_error for what seems to be valid base64

I have been using cgltf_load_buffer_base64 to decode images from the Khronos glTF samples repo. Images in some of the embedded glTF files, like DamagedHelmet and TextureCoordinateTest, can't be decoded using cgltf_load_buffer_base64.

Surprisingly, if I copy-paste the base64 data from the glTF file into https://www.base64decode.org, it also complains, yet https://gltf-viewer.donmccurdy.com happily accepts the file and loads the models correctly.

At this point I am not sure if it's actually cgltf_load_buffer_base64 that doesn't know this specific encoding, or the glTF viewer doing some magic that makes it work. I thought I should ask in case you know more.
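One thing worth checking (an assumption about this report, not a confirmed diagnosis): the text handed to the decoder must be the raw base64 payload, without the data-URI prefix that embedded glTF buffers carry. Pasting the full URI into a plain base64 decoder fails on the prefix:

```c
#include <string.h>

/* Return a pointer past the ";base64," marker of a data URI, or the input
   unchanged if no marker is present. Helper name is illustrative. */
static const char* skip_data_uri_prefix(const char* uri)
{
    const char* marker = strstr(uri, ";base64,");
    return marker ? marker + 8 : uri; /* 8 == strlen(";base64,") */
}
```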

Indices exceed number of vertices in .gltf files with .bin files

I'm using cgltf to load models but have run into an issue with models with associated .bin files, where some indices exceed the number of vertices. The result is the image below.

(image: broken render)

The image below shows what happens when I take the model above and export it from Blender; the exported copy has no .bin file.

(image: correct render)

cgltf_load_buffers is being called.

Heap Corruption in Debug mode - Windows 10

Hello.

The program reports heap corruption when cgltf_free is called in the following function

int ImportMainMenuGLTF (const char* Filename, Mesh** Meshes, uint32_t* MeshCount)

At this file in the repository
https://github.com/amadlover/Against/blob/master/Against/ImportAssets.c

This happens in debug mode only and not release mode.

The glTF is parsed into the cgltf_data. The cgltf_data is looped over for the relevant data (in this case meshes), and memory for positions, UVs and indices is allocated during this loop.

If the loop is commented out there is no crash. Does it have something to do with allocations for the attributes?

Build Environment.
Windows SDK Version: 10.0
Platform Toolset: Visual Studio 2019 v142

Please let me know if you need more information.

Thank you
Nihal

File system options

First off, thanks for an incredible library! Very much enjoying using it :)

Today I had a platform issue with the fopen call, probably something to do with UWP. This is easily avoided with all-in-one gltf files by just directly using cgltf_parse instead of cgltf_parse_file and passing file data I've loaded myself. Unfortunately, cgltf_load_buffer_file also uses fopen directly! This trips up any gltf files with references in them.

I would love to see something like file read callbacks in cgltf_options! This would solve my problem quite nicely :)
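A sketch of what such callbacks could look like in the options struct (all names here are illustrative, not cgltf's actual API):

```c
#include <stddef.h>

/* Read callback: fill *size/*data for `path`, return 0 on success. */
typedef int (*demo_file_read)(void* user, const char* path,
                              size_t* size, void** data);
/* Release callback: free whatever the read callback handed out. */
typedef void (*demo_file_release)(void* user, void* data);

typedef struct demo_file_options {
    demo_file_read read;       /* NULL could mean: fall back to fopen */
    demo_file_release release; /* NULL could mean: nothing to free */
    void* user_data;
} demo_file_options;

/* Example: serve every path from a static in-memory blob (e.g. for UWP or
   a VFS), so no fopen is ever needed. */
static int demo_memfs_read(void* user, const char* path,
                           size_t* size, void** data)
{
    static char blob[] = "glTF";
    (void)user; (void)path;
    *size = sizeof(blob) - 1;
    *data = blob;
    return 0;
}
```

cgltf_load_buffer_file would then call options->file.read when set, which fixes the external-reference case the same way cgltf_parse already fixes the all-in-one case.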

Best way to deal with dummy buffers?

For the sake of this issue, "dummy" buffer is a buffer with no data (cgltf_buffer::data == NULL).

This can happen in one of the following ways:

  • cgltf_load_buffers wasn't called on the file
  • File has buffer declaration with no uri, and the buffer isn't part of the GLB file.

There was some semi-recent discussion about the intended semantics of the latter, please refer to KhronosGroup/glTF#1686 (and linked issue 1684) for details. The TL;DR is that the expectation is that buffers with no uri are valid, but if they are referenced by the scene, their behavior is left to extensions / future glTF versions to define.

In cgltf, this comes up as a very simple problem - a file where a dummy buffer is referenced by an accessor loads fine, validates (via cgltf_validate) fine, but then cgltf_accessor_* functions crash when dereferencing a NULL pointer.

I can see a few options to resolve this.

  1. Add a data == NULL check to all cgltf_accessor_* functions, and return false. This seems unambiguously good, but isn't satisfactory on its own - it may obscure the error (such as forgetting to load buffer data), and it would be nice to be able to call some function up-front to not have to deal with this. Additionally, cgltf_accessor_read_index can't fail, so it must return 0.

  2. Change cgltf_validate to return "invalid data" when buffer data is NULL. The problem with this is that this makes it mandatory to call validate on a file with in-memory contents, which may not fit some use cases well.

  3. Move buffer data validation into cgltf_validate_buffers or some such, and in that function require that data is non-NULL. This also cleans up a weird ambiguity with validate - it's currently safe to call it before or after cgltf_load_buffers, but if you call it before, the buffer contents aren't validated.

  4. Add a bitmask flags to cgltf_validate where we can specify what parts of data to validate - initially we could have a single bit mask, "validate buffers", that will effectively do what (3) suggests, but within the same function.

Opinions? Any options I am missing? I am leaning toward doing (1) anyway, and doing either (3) or (4), with a weak preference for (4) since that allows us to incorporate more optional checks in the future.
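Option (1) could look roughly like this (field names simplify the real accessor -> buffer_view -> buffer chain; this is a sketch, not cgltf code):

```c
#include <stddef.h>

typedef struct demo_buffer { const void* data; } demo_buffer;
typedef struct demo_accessor { const demo_buffer* buffer; size_t count; } demo_accessor;

/* Fail soft when the backing buffer is a dummy (data == NULL). */
static int demo_accessor_read_float(const demo_accessor* a, size_t index, float* out)
{
    if (a->buffer == NULL || a->buffer->data == NULL) return 0; /* dummy buffer */
    if (index >= a->count) return 0;
    *out = ((const float*)a->buffer->data)[index];
    return 1;
}
```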

Why is cgltf_calc_size not exposed

I realize that the small number of API functions exposed is a feature, but when trying to get strides and sizes for my primitive.attributes accessors I find myself in need of that call, unless there is another way that I am missing.

Are there any examples?

Are there any examples for using cgltf? Sadly I couldn't find any in the repo, and the structs alone aren't very useful as documentation for anyone who hasn't deeply studied the gltf specs. (which is what I was hoping to avoid in the first place by using a library)

The following use cases would be interesting to see as examples:

  • load up a model and print all triangles and their vertex positions in the default pose (ignoring animation) to stdout/a terminal

  • load up a model and enumerate all animation names, material names, and texture names and print that out on the terminal

  • load up a model with animation data and everything, but all from memory (all files are provided as buffer pointer + buffer length size_t) instead of from disk

  • print out the skeleton rig of a model listing all bones and what vertices they influence with what weight

  • load up a specific animation frame and calculate all bone positions (if cgltf can do that mathematical part, can it?) and print out the transform matrix for each bone on the terminal

None of these tasks seem obvious to me right now. I see lots of buffers and accessors and whatnot, but how do they go together? I can't easily tell; an in-depth example would really be eye-opening.

Provide utility for data conversion?

This is a proposal for a utility function in the same vein as cgltf_node_transform_world and cgltf_node_transform_local.

Ideally all glTF buffers are blobs that get sent to the GPU and never need CPU-side processing. However in practice clients often need to convert some of the buffers (especially animation and skinning data) into simple arrays of 32-bit floats to avoid dealing with normalized integers, stride, etc. on the CPU.

The prototype could be:
cgltf_result cgltf_accessor_convert(const cgltf_accessor* in_data, cgltf_float* out_floats);
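The core of such a utility is denormalization. A sketch of the per-component conversions the glTF 2.0 spec defines for normalized integer types (function names are illustrative):

```c
#include <stdint.h>

/* glTF 2.0 normalization rules: unsigned types map onto [0, 1],
   signed types map onto [-1, 1] with the minimum value clamped. */
static float demo_unpack_normalized_u8(uint8_t v)   { return v / 255.0f; }
static float demo_unpack_normalized_u16(uint16_t v) { return v / 65535.0f; }

static float demo_unpack_normalized_s8(int8_t v)
{
    float f = v / 127.0f;
    return f < -1.0f ? -1.0f : f; /* clamp -128 to exactly -1.0 */
}

static float demo_unpack_normalized_s16(int16_t v)
{
    float f = v / 32767.0f;
    return f < -1.0f ? -1.0f : f;
}
```

The proposed cgltf_accessor_convert would apply the appropriate function per component while also walking the accessor's stride and offset, leaving the caller with a plain float array.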

Flexible extension support

We've got two different mechanisms for extensions as of now:

  1. A number of KHR extensions are directly built into cgltf.
  2. Unknown extensions can be accessed via the cgltf_extension array in most objects.

As the number of official extensions is growing and more vendor extensions pop up, we should consider implementing a more flexible and modular approach.
I'm open for ideas on how we could change this.

This is currently my best idea here:

  • We add a map to each node that can have extensions. It maps an extension name to a void* that can be cast to a struct that holds the respective extension's data.
  • Extensions can provide convenience functions taking the struct as a parameter to do the casting and map lookup in a safer way.
  • In order to fill these structs, extensions set up callback functions for a given extension name and node type in cgltf_options. These callbacks are then called while cgltf is parsing glTF data, in order to allocate and fill the structs that are stored in the nodes.
  • Extensions probably need to provide deallocation/destroy functions for the structs they stored in the nodes.

The bottom line is that, for every node type an extension can extend, the extension needs to provide three functions: one to allocate/initialize the struct holding the extension data, one to retrieve it from the extension map in the node, and one to free/destroy it.

The extension map could be, for now at least, just an array mapping a hash of the extension name to the data. Considering we probably won't have more than a handful of extensions on each node, this should be sufficient.
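The array-of-hashes idea could be sketched like this (FNV-1a is used as an example hash; all names are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Flat array of (name hash, data) pairs with linear lookup - plenty for a
   handful of extensions per node. */
typedef struct demo_ext_entry { uint32_t name_hash; void* data; } demo_ext_entry;

/* FNV-1a, chosen here only as an example of a cheap string hash. */
static uint32_t demo_fnv1a(const char* s)
{
    uint32_t h = 2166136261u;
    while (*s) { h ^= (unsigned char)*s++; h *= 16777619u; }
    return h;
}

static void* demo_ext_find(const demo_ext_entry* e, size_t count, const char* name)
{
    uint32_t h = demo_fnv1a(name);
    for (size_t i = 0; i < count; ++i)
        if (e[i].name_hash == h) return e[i].data;
    return NULL;
}
```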

What do you think? Did I miss something? Do you have a better idea?

Proposal for cgltf_write.h

I've made a good bit of progress on a tentative cgltf_write.h. It includes cgltf.h in order to re-use the POD structures.

This is similar to how stb_image_write.h is a complement to stb_image.h.

I will keep working on it, and I will make a PR when I'm done, but please feel free to steer me on a course.

Expected error handling semantics?

I'd like to make sure that errors are handled consistently, with an eventual goal of fuzzing the parser to make sure it is memory safe.

When cgltf_parse returns cgltf_result_success, cgltf_free frees all the data correctly (hopefully? :)).

However, when cgltf_parse returns an error, several issues may occur:

  • The error can be returned too early, in which case it's unsafe to use cgltf_free since some pointers aren't even filled!
  • The error can be returned mid-parse, in which case pointers aren't fixed up so it's possibly unsafe (?) to use cgltf_free - not 100% sure about this, I haven't studied the internal structures in detail.
  • Failure to call cgltf_free after a failed parse may or may not result in a memory leak.

I'd like to clean that up but I'm not sure what the expectations are. I can see two possibilities:

  • cgltf_free assumes a fully constructed correctly parsed object; cgltf_parse must clean up after itself in event of any error (possibly by calling some internal function, or even cgltf_free itself, in strategic places)
  • cgltf_parse makes sure that the data structures are coherent wrt pointers that cgltf_free may access, cgltf_free is safe to call in all cases

What's the expected behavior here?

Change cgltf_extra to copy data during parsing?

I'm looking into optimizing the amount of memory gltfpack spends on processing very large glTF files (e.g. a 400 MB .gltf plus a 400 MB .bin). One thing that I'd like to explore is unloading the glTF data early.

The way gltfpack works is that it parses glTF file into memory, loads the buffers, then extracts the data for meshes and animations into separate data structures, and then proceeds to optimize the scene data and output the result.

During this process, it needs to keep cgltf_data* around - I'd like to continue doing that, since a lot of complex glTF structure, especially around materials, is easier to preserve that way.

However, it can in theory unload the JSON data very early (immediately after parsing) and unload the buffer data earlier (after extracting mesh/animation/image data).

Unfortunately, it's currently impossible to unload cgltf_data::json because cgltf_extras references offsets in that blob. Right now it's the only explicit reference to JSON data, although #114 adds this for extensions as well.

Would it be reasonable to change extras storage to store char* (or a pair of char* + size_t), and copy the data out of raw JSON storage during parsing? This will slightly increase the number of memory allocations and the peak memory usage for cases where the JSON data isn't unloaded, but it would allow dropping the entire buffer before application processing. For .glb parsing we will have to keep the allocation around since it stores the binary chunk, although there's probably a way to deal with this as well.

JSMN_PARENT_LINKS might be important to define

jsmn is currently compiled without JSMN_PARENT_LINKS; when profiling gltfpack I found that the bottleneck on some relatively large files (2.6 MB JSON, 41 MB binary) is json parsing, specifically this backtracking loop:

(profiler screenshot of the backtracking loop)

I don't really understand jsmn but I suspect this is effectively quadratic for some JSON structures where there's a large array of objects, and whenever each object ends we end up scanning back for the start of the array.

jsmn_parse takes 229 ms on this file (~10 MB/s, which isn't great for a JSON tokenizer).

Enabling JSMN_PARENT_LINKS on this file results in jsmn_parse taking just ~5ms at a modest memory impact. I haven't checked whether all test files parse correctly with this define set.

README.md clarification

Hi,

please can you modify the includes in the examples in the README to:


#define CGLTF_IMPLEMENTATION
#include "cgltf.h"

Just so it's clear that CGLTF_IMPLEMENTATION is required.

Provide an installable shared library

I understand that for some people a single-header "library" is easier to use, but for some use-cases a proper shared library would be better. Would this be something you'd like to consider?

I think it would basically boil down to a simple .c file with CGLTF_IMPLEMENTATION defined compiled to a shared library.

I'm willing to contribute and maintain the shared library, if that helps.

Deeper file validation

This is moving the discussion from #6, incorporating the points from @zeux as well. The idea is to provide two separate validation levels:

  • Checking for basic file validity during cgltf_parse(). The goal is to ensure user code can trust basic data pointers/ranges and indices in the imported data structures, while keeping these checks efficient enough to avoid significantly slowing down import. In particular:

    • Ensuring all attributes required by the specs are present (or is this already done?)
    • Ensuring all data ranges in accessors, buffer views and buffers are in bounds of the actual buffer (this is one thing that tinygltf doesn't do and it's one of the main pain points as the user code has to check explicitly on each access)
    • Ensuring node, material, mesh, animation, ... indices are in bounds as well
    • Anything else?

    All these checks need only access to the JSON file (not to the data buffers). Failure in any of those checks would result in a hard parse error.

  • Providing further validation via a hypothetical new cgltf_validate() API. The goal of this API is deeper validation (maybe in line with what the official online glTF Validator does?), with mainly checks that would be slow to perform during the initial parse or that need more than just the JSON driver file. Examples:

    • Checking for vertex attribute arrays having the same number of elements
    • Checking for cycles in the node tree
    • Checking that mesh index buffer doesn't have out-of-bounds index values
    • Checking that mesh vertex data (animation data, ...) are in specified min/max bounds (if min/max is specified)
    • ... more?

    Some of these checks would need access to the data buffers. Besides passing the buffers, I can imagine a flags parameter allowing the user to specify what to check. Failure in any of these would not be fatal, as the user code should be able to recover from most of them.

Small warning when compiling with emscripten

Just a small detail, when compiling with emscripten I got a warning:

cgltf.h:446:3: warning: redefinition of typedef 'cgltf_node' is a C11 feature [-Wtypedef-redefinition]

The typedef can simply be omitted from the struct definition because it has already been declared previously.

I can send a PR if you want.

cgltf_parse_file can crash with custom file_open, should not call memory_free

cgltf_parse_file crashes when the parsing fails with a custom read/release callback in use.
A pointer acquired from file_read cannot be passed to memory_free, but that is done here.

	void* file_data = NULL;
	cgltf_size file_size = 0;
	cgltf_result result = file_read(&options->memory, &options->file, path, &file_size, &file_data);
	if (result != cgltf_result_success)
	{
		return result;
	}

	result = cgltf_parse(options, file_data, file_size, out_data);

	if (result != cgltf_result_success)
	{
                // BUG? this should be file_release as the file_data was acquired by file_read
		memory_free(options->memory.user_data, file_data); 
		return result;
	}

cgltf_accessor_read_float and others should verify buffer boundaries for the `index` parameter to avoid crashes

cgltf_accessor_read_float and the other functions of that type should verify the buffer boundaries. I get that this probably wasn't done for performance reasons, but just look at the typical workflow people will use:

  1. Get mesh indices via cgltf_accessor_read_index
  2. Get the mesh attribute of type cgltf_attribute_type_position
  3. Stuff the indices into cgltf_accessor_read_float to read actual position

Who is going to manually verify the mesh indices, especially given that the buffer accessor arithmetic is complicated enough that cgltf_accessor_read_float and friends exist in the first place?

I think that leads to the obvious conclusion that the most common use case for cgltf_accessor_read_float involves the offset coming directly from whatever unchecked numbers are written in the mesh index data. This means a malformed model could easily cause a crash, which I think is worth preventing.

(And I get that the library cannot reasonably prevent crashes due to errors on the engine side, but I think it should try to prevent crashes from malformed models. If only to make debugging broken models easier for artists, and to allow catching invalid load attempts sanely rather than everything exploding with a segmentation fault.)
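A sketch of the requested guard (field names are simplified stand-ins for the cgltf_accessor layout; this is not cgltf code):

```c
#include <stddef.h>

typedef struct demo_accessor {
    const unsigned char* data; /* buffer_view data + accessor offset */
    size_t stride;             /* bytes between consecutive elements */
    size_t count;              /* number of elements in the accessor */
} demo_accessor;

/* Validate `index` against the element count before computing a buffer
   offset, so a malformed index fails instead of reading out of bounds. */
static const void* demo_accessor_at(const demo_accessor* a, size_t index)
{
    if (a->data == NULL || index >= a->count)
        return NULL; /* malformed model or missing buffer data */
    return a->data + index * a->stride;
}
```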

Correct handling of non-required attributes that are non-zero

First of all thanks for the great library and all the active work that is going on here. The library was super easy to integrate and works like a charm.

Second, I just wanted to file an issue I discovered ;)

If the primitive mode is not specified in serialized json, the mode does not correctly take on the default of '4' (see 'https://github.com/KhronosGroup/glTF/tree/master/specification/2.0' look for 'primitive.mode').

I find this often happens with the khronos blender GLTF exporter, and you'll see something like the following in the source .gltf file (notice no explicit mode in there):

            "name" : "Cylinder.001",
            "primitives" : [
                {
                    "material" : 0,
                    "indices" : 29,
                    "attributes" : {
                        "POSITION" : 25,
                        "NORMAL" : 21
                    }
                }]

My local fix was to put this line into cgltf_parse_json_primitive before entering the main loop:
out_prim->type = cgltf_primitive_type_triangles;

I've verified this does the trick for me. Unfortunately, I'm not setup for PRs right now - but hopefully someone else can get this in.

Eyeballing the code in a few places, it looks like this may represent a class of issues for a few other attributes where the attribute is not required and the default value is non-zero, e.g. sampler.wrapS and sampler.wrapT, so it may be worth doing a general pass over the spec to uncover more.
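A sketch of the class of fix being suggested: initialize every not-required field with its non-zero spec default before the parse loop runs (struct and constant names are illustrative; the values come from the glTF 2.0 spec):

```c
/* glTF 2.0 defaults that are non-zero: primitive.mode = 4 (TRIANGLES),
   sampler.wrapS/wrapT = 10497 (REPEAT). */
enum { DEMO_MODE_TRIANGLES = 4, DEMO_WRAP_REPEAT = 10497 };

typedef struct demo_primitive { int mode; } demo_primitive;
typedef struct demo_sampler { int wrap_s, wrap_t; } demo_sampler;

/* Set spec defaults before any JSON fields are parsed, so an absent field
   keeps the correct value instead of zero. */
static void demo_primitive_init(demo_primitive* p) { p->mode = DEMO_MODE_TRIANGLES; }
static void demo_sampler_init(demo_sampler* s) { s->wrap_s = s->wrap_t = DEMO_WRAP_REPEAT; }
```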

Thanks again
-Justin

Consider exposing cgltf_load_buffer_base64

For Filament's WebGL build we were able to use cgltf as-is with this one exception.

We easily wrote our own alternative to cgltf_load_buffers (since the filesystem does not exist), but we didn't want to write our own base64 decoder, so we had to expose the one in our local copy of cgltf.

Wide string support

Would it be possible to get const wchar_t* support for most functions that take string parameters as const char*?

Thanks.

Support sparse accessors?

I know we don't want to add complexity to the library, but it would be soooo great if cgltf_accessor_read_float could somehow handle sparse accessors.

All spec-compliant clients of cgltf need to support sparse accessors one way or another, and it involves CPU work only. Therefore I think this would be a high-value feature, and would fit well with the library's scope.

I'm willing to have a stab at implementing this myself, but first I'm interested in other people's take on this idea.

Proposal for a simpler attribute structure

Right now the structure of attribute data is as follows:

typedef struct cgltf_attribute
{
	cgltf_attribute_type name;
	cgltf_accessor* data;
} cgltf_attribute;

typedef struct cgltf_morph_target {
	cgltf_attribute* attributes;
	cgltf_size attributes_count;
} cgltf_morph_target;

typedef struct cgltf_primitive {
	cgltf_attribute* attributes;
	cgltf_size attributes_count;
} cgltf_primitive;

This correctly reflects GLTF spec, but has two issues:

  • It's inconvenient to consume - you basically need to implement helper functions that find an attribute by type
  • It results in deeply nested structures for primitives and morph targets, requiring more allocations.

I was thinking that we can switch to an alternative structure - if cgltf_attribute_type enum had a count entry, we could instead use:

typedef struct cgltf_morph_target {
	cgltf_accessor* attributes[cgltf_attribute_type_count];
} cgltf_morph_target;

typedef struct cgltf_primitive {
	cgltf_accessor* attributes[cgltf_attribute_type_count];
} cgltf_primitive;

This fixes both issues with the current approach; however, it makes it impossible to support "flexible" attribute types - that is, attribute types with names not recognized by the current spec or any extensions. For example, if you export a mesh with more than 2 UV sets, I'm guessing you'd get TEXCOORD_2 or TEXCOORD_3. Right now we could support this in a generic fashion by adding char* custom to cgltf_attribute, but if we used arrays we'd need to extend an enum. However, extending an enum is easy :)

Thoughts? I'm mildly leaning towards reworking this to a flat array but not 100% sure.
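For illustration, the flat-array variant could look like this (the enum entries are an assumed subset; the real cgltf enum differs):

```c
#include <stddef.h>

/* Illustrative attribute types with a count terminator. */
typedef enum demo_attribute_type {
    demo_attribute_type_position,
    demo_attribute_type_normal,
    demo_attribute_type_tangent,
    demo_attribute_type_texcoord,
    demo_attribute_type_color,
    demo_attribute_type_joints,
    demo_attribute_type_weights,
    demo_attribute_type_count /* enables fixed-size per-type arrays */
} demo_attribute_type;

typedef struct demo_primitive {
    /* NULL entry == attribute absent; lookup is a plain array index
       instead of a search over (name, accessor) pairs. */
    const void* attributes[demo_attribute_type_count];
} demo_primitive;
```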

Handling escaped json strings

👋

cgltf doesn't decode encoded json strings, leaving escaped characters (some whitespace, unicode codepoints, etc.) in names. I've implemented this outside of cgltf, but I feel like this is inherently a library task to deal with the underlying format's string encoding.

Would you accept a PR that adds a function to perform the string decoding? This would be an entirely optional function to use in-place (similar to cgltf_decode_uri) on user-facing strings (mostly names and URIs). The glTF spec guarantees that a valid glTF file doesn't use encoded strings for keys and enums, so no changes needed to parsing code.
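A minimal sketch of such an in-place decoder, covering only the two-character escapes (\uXXXX handling omitted for brevity; this mirrors the cgltf_decode_uri style of API but is not cgltf code):

```c
/* Decode the simple two-character JSON escapes in place. The result is
   never longer than the input, so no allocation is needed. */
static void demo_json_unescape(char* s)
{
    char* w = s;
    while (*s) {
        if (s[0] == '\\' && s[1]) {
            char c = s[1];
            switch (c) {
            case 'n':  *w++ = '\n'; break;
            case 't':  *w++ = '\t'; break;
            case 'r':  *w++ = '\r'; break;
            case 'b':  *w++ = '\b'; break;
            case 'f':  *w++ = '\f'; break;
            case '/':  *w++ = '/';  break;
            case '"':  *w++ = '"';  break;
            case '\\': *w++ = '\\'; break;
            default:   *w++ = s[0]; *w++ = c; break; /* leave unknown escapes */
            }
            s += 2;
        } else {
            *w++ = *s++;
        }
    }
    *w = '\0';
}
```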

For reference, the allowed escape characters:

Possible memory leak with cgltf_data not cleaned up

Hello.

I was using the Visual Studio 2019 Diagnostic Tools' Memory Usage tool on Windows 10.

After a bit of digging and prodding I ended up with this test, where I call a function to import a gltf and return. This is from the main function.

There are two breakpoints: one before the importing function is called and one after. Snapshots of the heap allocations are taken and show the following. I am interpreting this as the cgltf_data that was allocated not being freed correctly.

Main function (screenshot)

The function which imports the glTF into a cgltf_data and frees the data (screenshot).

The snapshot of the heap at the breakpoint at the return statement; ideally there should be no values here (screenshot).

This is the call stack snapshot (screenshot).

I tried importing 3 glTF files, and the "count" parameter in the snapshot goes to 3.

Please let me know if you need more information, and if there is another way to verify this.

Best regards.
Nihal Kenkre.

Suggestion: add cgltf_load_buffers_memory, and rewrite cgltf_load_buffers to use it

I just looked more in-depth into how to load glTF files with cgltf from memory / a VFS library (like PhysFS).
It appears to me that for loading files with all external resources via cgltf there are two choices:

  1. Use cgltf_load_buffers (which requires file access)

  2. Duplicate the entire content of cgltf_load_buffers, specifically also the loop recognizing & handling data: URIs and dealing with the data->buffers detail members (which doesn't seem particularly future-proof), and adjust it to load from memory or wherever else

The second approach seems unnecessarily complicated, especially since I assume most versatile game engines and frameworks will need this. It also seems counter-intuitive to me that reading from memory isn't the regular case, with file reading being the specialization, rather than the other way around.

Therefore I suggest the following:

  1. There could be an implementation alike to this:

    cgltf_result cgltf_load_buffers_memory(
        const cgltf_options* options, cgltf_data* data,
        void (*data_callback)(const char *uri, cgltf_size *buffer_size, const void **buffer_data, void *userdata),
        void *userdata
    )
    

    which does what cgltf_load_buffers does now, except it calls the data_callback to retrieve the file contents instead of using disk access. The data_callback shall set *buffer_size = 0 on failure to obtain the data, and shall otherwise point *buffer_data = ... to an existing data buffer. It shall be up to the cgltf user, who supplied data_callback, to dispose of whatever the data_callback allocated during its use, after the entire run of cgltf_load_buffers_memory has returned and all data was safely copied into the cgltf structures.

  2. Once step 1 is implemented, possibly reimplement cgltf_load_buffers on top by just using a callback to read the files supplied to the new cgltf_load_buffers_memory to avoid code duplication

  3. (Very optional and nasty to do, I know. But just as an idea:) possibly fix the whole naming scheme with some deprecation plan. It's cgltf_parse and cgltf_parse_file right now, so shouldn't cgltf_load_buffers actually be cgltf_load_buffers_file? With possibly this new cgltf_load_buffers_memory function eventually becoming cgltf_load_buffers as consistently also the memory-based variant (or alternatively, maybe prefix all the memory-based loading functions with something like _memory, e.g. cgltf_parse_memory)
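A sketch of how point 1 could be implemented (demo_buffer is a simplified stand-in for cgltf_buffer; data: URI handling is omitted):

```c
#include <stddef.h>

typedef struct demo_buffer {
    const char* uri;
    const void* data;
    size_t size;
} demo_buffer;

typedef void (*demo_data_cb)(const char* uri, size_t* buffer_size,
                             const void** buffer_data, void* userdata);

/* Resolve every still-unloaded buffer through the callback. */
static int demo_load_buffers_memory(demo_buffer* buffers, size_t count,
                                    demo_data_cb cb, void* userdata)
{
    for (size_t i = 0; i < count; ++i) {
        if (buffers[i].data != NULL || buffers[i].uri == NULL)
            continue; /* already loaded, or GLB chunk / dummy buffer */
        size_t size = 0;
        const void* data = NULL;
        cb(buffers[i].uri, &size, &data, userdata);
        if (size == 0)
            return -1; /* callback signalled failure */
        buffers[i].data = data;
        buffers[i].size = size;
    }
    return 0;
}

/* Example callback serving every uri from one static blob. */
static void demo_memfs_cb(const char* uri, size_t* buffer_size,
                          const void** buffer_data, void* userdata)
{
    static const unsigned char blob[4] = { 1, 2, 3, 4 };
    (void)uri; (void)userdata;
    *buffer_size = sizeof(blob);
    *buffer_data = blob;
}
```

The existing cgltf_load_buffers could then become a thin wrapper whose callback simply reads the uri from disk, as point 2 suggests.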

parsing gltf file

Hi,
I am trying to decode triangles from a glTF file.
Unfortunately, I can't manage to reach the vertex data. What may be wrong?

for (int m = 0; m < mesh->primitives_count; m++)
{
	cgltf_primitive* primitive = &mesh->primitives[m];

	if (primitive->type == cgltf_primitive_type_triangles)
	{
		index_accessor = primitive->indices;
		for (int i = 0; i < primitive->attributes_count; i++)
		{
			cgltf_attribute attr = primitive->attributes[i];
			switch (attr.type)
			{
			case cgltf_attribute_type_position:
			{
				position_accessor = attr.data;
				const float* position_data = (float*)position_accessor->buffer_view->buffer->data;

				if (index_accessor->component_type == cgltf_component_type_r_16u)
				{
					for (int indice = 0; indice < index_accessor->count; indice++)
					{
						unsigned short t_indice = cgltf_accessor_read_index(index_accessor, indice);
						std::cout << "vertex1 " << *(position_data + t_indice) << " / " << *(position_data + t_indice + 1) << " / " << *(position_data + t_indice + 2) << std::endl;
					}
				}
				break;
			}
			case cgltf_attribute_type_normal:
				normal_accessor = attr.data;
				break;
			case cgltf_attribute_type_texcoord:
				uv_accessor = attr.data;
				break;
			}
		}

		if (index_accessor == nullptr || position_accessor == nullptr || normal_accessor == nullptr)
			throw std::runtime_error("Gltf model invalid or unsupported");
	}
}

Support custom malloc()/free() for memory allocation

Similar to stb libraries, it would be nice to support custom memory allocators:

#if defined(CGLTF_MALLOC) && defined(CGLTF_FREE)
// ok
#elif !defined(CGLTF_MALLOC) && !defined(CGLTF_FREE) 
// ok
#else
#error "Must define both or none of CGLTF_MALLOC and CGLTF_FREE"
#endif

#ifndef CGLTF_MALLOC
#define CGLTF_MALLOC(sz)       malloc(sz)
#define CGLTF_FREE(p)          free(p)
#endif

cgltf_free uses file release callback for base64 buffers

cgltf_load_buffers decodes base64-encoded buffers with the memory.alloc callback, but cgltf_free frees all buffers through file.release. This is incorrect when memory.free actually frees memory but file.release does not (e.g. when file.read returns pointers into statically embedded files), leading to a memory leak.

_ftelli64 does not compile on Windows when using zig cc (clang, mingw64)

I hit a compile error when compiling with zig cc: _ftelli64 is an MSVC-specific function, so the check for it should be:

#if defined(_WIN32) && defined(_MSC_VER)

Note:
zig cc by default uses clang and mingw64, so it defines __clang__ and __GNUC__.
On the other hand, clang-cl on Windows defines __clang__ and _MSC_VER.

Image and Buffer read

Hello, please look at providing image and buffer data as part of cgltf_data.

That said, is image and buffer data already part of cgltf_data when reading a GLB file?

Thank you!!!

Simplifying targets for extras/extensions?

As I've been reading extras/extensions code more I've been wondering:

Would it be worthwhile to reduce the scope of extras/extensions by removing them from places where they aren't obviously necessary?

Three specific targets come to mind:

  • cgltf_accessor_sparse, as I don't think extending sparse accessors is particularly interesting
  • cgltf_texture_view, as use of extras/extensions here adds friction (in terms of code size) to adding more texture views
  • cgltf_pbr_metallic_roughness (which has extras but not extensions), as it's inconsistent with other material structs

The motivation for the last two is that it would make it slightly less painful to add support for new KHR_ material extensions, since adding texture views and new material structs wouldn't require as much extra code.

I'm not aware of existing extensions that extend either of the above, except for KHR_texture_transform which is supported natively. Obviously there may exist unpublished extensions that do so...

(I had one other crazy idea but that's probably not actually a very good one, where extensions/extras could be stored in separate arrays along the lines of struct { void* key; char* name; char* value } where key would point to the cgltf_ element that it's attached to, which would make it easier to generalize the parsing and destruction of associated data - but that would make it harder to use these so it's probably not a great tradeoff)

Material AlphaCutoff triggers validator warnings

When setting materials to opaque, this writer systematically causes the glTF validator to throw this warning:

                "code": "MATERIAL_ALPHA_CUTOFF_INVALID_MODE",
                "message": "Alpha cutoff is supported only for 'MASK' alpha mode.",
                "severity": 1,
                "pointer": "/materials/0/alphaCutoff"

This is because the validator does not expect to see an alphaCutoff value in the JSON dictionary when the alpha mode is not MASK
(see validator code here)

The simplest fix is probably to wrap the alpha cutoff output in a simple conditional:

static void cgltf_write_material(cgltf_write_context* context, const cgltf_material* material)
{
	cgltf_write_line(context, "{");
	cgltf_write_strprop(context, "name", material->name);
	cgltf_write_floatprop(context, "alphaCutoff", material->alpha_cutoff, 0.5f); // <--- make this conditional on alpha mode

Improved error output

👋

cgltf error output is rather minimal, which can be an issue for user-facing glTF importing. 95% of broken files out there end up with a non-descriptive "invalid glTF" without any more context.

I'm mostly hoping to get a discussion started in this issue. How to improve human-readable error output without unnecessarily complicating or slowing down cgltf?

A non-exhaustive list of possible changes:

  • A CGLTF_ERROR macro that takes a string literal. By default does nothing, so the literal should be compiled away. Either letting the user override the macro and handle the error string, or store it in cgltf_data if some preprocessor define is set. Needs a bit of work to retrospectively add error messages everywhere, so could be made optional.
  • Making CGLTF_PTRFIXUP_REQ overridable so you can print somewhat useful errors like "data->textures[i].image (7) out of bounds for data->images (size 6)"
  • Return a line (and preferably column) number for all errors. This should be rather straightforward to map from the jsmn parser state. It could be set in cgltf_data or via an optional parameter to the cgltf_parse functions

I'd love to get some feedback on this, anything from "not needed" to concrete ideas. We're certainly willing to contribute work towards a PR since this has become a bit of a pain point.

KHR_texture_transform: how to check existence of texCoord?

texCoord inside KHR_texture_transform is not mandatory and replaces the normal texCoord only if present. However, I don't see a way to check if it exists or not.

Problematic example scenario: the original texCoord is 3 and texCoord in KHR_texture_transform is unset, but it becomes 0 in cgltf_texture_transform. Setting the texture channel to 0 would be wrong.

Not sure what would be a good way to expose this. Default value of -1? Extra bool?

Meshes with more than 65535 vertices (that need a 32-bit index buffer) not supported?

When I load a mesh with more than 65535 vertices, cgltf discards all the extra vertices. The indices struct's component_type stays at cgltf_component_type_r_16u and the position attribute struct's count says 65535.

glTF does support more than 65535 vertices; for example, when I export such a mesh from Blender and import it back, it all works. Are you planning on making this work?
