
gltf-pipeline's Issues

Uniform function templates

As we continue adding pipeline stages, it is going to be important to keep a uniform template for pipeline stage functions so that they can be used in the same way.

I would propose the following:

/**
 * Performs some operation on a glTF hierarchy.
 *
 * @param {Object} gltf An object holding a glTF hierarchy.
 * @param {Object} [options] Defines more specific behavior for this stage.
 *
 * @returns {Promise} A promise that resolves when the stage completes.
 */
function stage(gltf, options) {
    ...
}
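To make the template concrete, here is a hedged sketch of a hypothetical stage written against it; removeEmptyNodes and the option it reads are illustrative, not an existing pipeline stage:

```javascript
/**
 * Removes nodes that reference no meshes, camera, or children.
 * (Illustrative stage following the proposed template.)
 *
 * @param {Object} gltf An object holding a glTF hierarchy.
 * @param {Object} [options] Defines more specific behavior for this stage.
 *
 * @returns {Promise} A promise that resolves when the stage completes.
 */
function removeEmptyNodes(gltf, options) {
    options = options || {};
    return Promise.resolve().then(function() {
        var nodes = gltf.nodes || {};
        Object.keys(nodes).forEach(function(id) {
            var node = nodes[id];
            var isEmpty = !node.meshes && !node.camera &&
                (!node.children || node.children.length === 0);
            if (isEmpty && !options.keepEmptyNodes) {
                delete nodes[id];
            }
        });
        return gltf;
    });
}
```

Because every stage takes `(gltf, options)` and returns a promise for the processed glTF, stages compose naturally with `.then()` chaining.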

For #130:
I would classify the following as "stages"; they should be converted to use this template. Anyone should feel free to do them in any order; it doesn't need to be one big pull request.

  • addDefaults
  • removeUnused (and all substages of removeUnused)
  • mergeDuplicateVertices
  • removeUnusedVertices
  • mergeDuplicateAccessors
  • removeDuplicatePrimitives
  • convertDagToTree
  • combineMeshes
  • cacheOptimization
  • quantizedAttributes
  • encodeImages

the translation is "baked" into the geometry data

I converted a glTF model to glb using gltf-pipeline and then loaded it into CesiumJS, but the textures of the glb model are in chaos. Here is the model in .gltf and .glb format.
I have already asked in the glTF repo; javagl suggested that the translation is "baked" into the geometry data:
Thanks.

Cleanup

Here is a rough list of tasks for bringing this repo up to production. Feel free to edit or add new items to the list:

  • We should have a common set of models that we use for testing new features, especially models that have caused problems in the past. However, I'm not sure whether they should be hosted in this repo.
  • Refactor gltf-pipeline as a series of stages with dependencies
    • #99
    • #75
    • Use promises instead of callbacks
  • General cleanup, api standardizing, file renaming, folder organization
    • Many functions expect an options object. Instead of passing in {} in a lot of places, just do it the Cesium way and create the options object if it's not defined: options = defaultValue(options, defaultValue.EMPTY_OBJECT);
  • Use recommended libraries, #178
  • Use JSDoc everywhere
  • Progress events: #92
  • Something like jQuery for glTF to eliminate redundant glTF data structure traversal
  • Note: If there are any changes to the api we will need to update OBJ2GLTF and the website model converter
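The defaultValue pattern from the list above can be sketched as follows; defaultValue mirrors Cesium's helper and is inlined here so the example is self-contained, and someStage is an illustrative name:

```javascript
// Inlined stand-in for Cesium's defaultValue helper.
function defaultValue(a, b) {
    return (a !== undefined && a !== null) ? a : b;
}
defaultValue.EMPTY_OBJECT = Object.freeze({});

// Illustrative stage showing the pattern: normalize options once at the
// top, then read individual options with per-option defaults.
function someStage(gltf, options) {
    options = defaultValue(options, defaultValue.EMPTY_OBJECT);
    var verbose = defaultValue(options.verbose, false);
    return { gltf: gltf, verbose: verbose };
}
```

Freezing EMPTY_OBJECT means a stage that forgets it received the shared default cannot accidentally mutate it.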

Progress events

Especially when gltf-pipeline is run on a server, we'll want to be able to provide progress events so users know how much time is left, especially for slow stages like AO baking.

Gracefully handle missing textures

Currently, when loading URIs, if a file is missing, the pipeline throws an error. If the missing file is a texture, the reference should remain unchanged, and we should perhaps print a warning.

Build currently breaks due to browserify

Duplicate: see #12

The build is currently based on browserify, which seems to have issues with some of our code dependencies. @lasalvavida suggested it may be an issue he has encountered before, where browserify can't handle dynamically loaded dependencies.
"build": "browserify index.js --standalone gltfPipeline -o build/gltf-pipeline.js",

Browserify doesn't run on Windows

Richard@Richard-PC /cygdrive/c/Users/Richard/src/gltf-pipeline
$ npm run build

> [email protected] build C:\Users\Richard\src\gltf-pipeline
> browserify index.js --standalone gltfPipeline -o build/gltf-pipeline.js

Error: Cannot find module './process' from 'C:\Users\Richard\src\gltf-pipeline\node_modules\cesium\node_modules\requirejs\bin'
    at C:\Users\Richard\AppData\Roaming\npm\node_modules\browserify\node_modules\resolve\lib\async.js:55:21
    at load (C:\Users\Richard\AppData\Roaming\npm\node_modules\browserify\node_modules\resolve\lib\async.js:69:43)
    at onex (C:\Users\Richard\AppData\Roaming\npm\node_modules\browserify\node_modules\resolve\lib\async.js:92:31)
    at C:\Users\Richard\AppData\Roaming\npm\node_modules\browserify\node_modules\resolve\lib\async.js:22:47
    at FSReqWrap.oncomplete (fs.js:82:15)

npm ERR! Windows_NT 6.1.7601
npm ERR! argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "run" "build"
npm ERR! node v4.2.4
npm ERR! npm  v2.14.12
npm ERR! code ELIFECYCLE
npm ERR! [email protected] build: `browserify index.js --standalone gltfPipeline -o build/gltf-pipeline.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] build script 'browserify index.js --standalone gltfPipeline -o build/gltf-pipeline.js'.
npm ERR! This is most likely a problem with the gltf-pipeline package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     browserify index.js --standalone gltfPipeline -o build/gltf-pipeline.js
npm ERR! You can get their info via:
npm ERR!     npm owner ls gltf-pipeline
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     C:\Users\Richard\src\gltf-pipeline\npm-debug.log

Stage to convert DAG node hierarchy to a tree

For a test model and discussion, see CesiumGS/cesium#1754

For the unit tests, please make a few test models to cover all the cases instead of reusing that model. Note that we originally supported DAGs in glTF, but the final spec does not, so we need to convert it to a tree as part of the pipeline.

I did not implement this, but here are my notes, which I'm pretty confident will work:

Given a Directed Acyclic Graph (in this case, a glTF node hierarchy), how do we convert it into a tree such that nodes with multiple incident edges are duplicated to only have one incident edge? This is used for converting a 3D model data structure to a data structure used for rendering, e.g., to compute transforms based on each node's ancestors.

Assuming each node has a visited property, initialize this to false for all nodes. Traverse the graph breadth-first. If a node's visited property is true, duplicate it and the subgraph it is the root of (all nodes in this subgraph should have visited === false), and update its parent's pointer. Otherwise, set visited to true.

For example, D is visited twice below.

 A   // Pretend these have downward facing arrows
/ \
B C
\ /
 D

And the converted tree is:

  A
 / \
B   C
|   |
D   D1   // D1 is the only copy made

In the next graph, G is visited four times.

 A
/ \
B C
\ /
 D
/ \
E F
\ /
 G

The converted tree is:

   A
  /  \
 B     C
 |     |
 D     D1
/ \   /  \
E F   E1 F1
| |   |  |
G G1  G2 G3
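The algorithm above can be sketched as follows. Nodes live in a map keyed by id with a children array of ids, loosely mirroring the glTF node hierarchy; the naming scheme for copies is illustrative:

```javascript
// Converts a DAG node hierarchy into a tree by duplicating any subgraph
// that is reached through more than one incident edge.
function convertDagToTree(nodes, rootId) {
    var visited = {};
    var nextCopy = {};

    // Recursively duplicates the subgraph rooted at id, returning the
    // id of the fresh copy. Copies get suffixed ids like 'D_1'.
    function duplicateSubgraph(id) {
        nextCopy[id] = (nextCopy[id] || 0) + 1;
        var newId = id + '_' + nextCopy[id];
        var copy = { children: [] };
        nodes[newId] = copy;
        nodes[id].children.forEach(function(childId) {
            copy.children.push(duplicateSubgraph(childId));
        });
        return newId;
    }

    // Breadth-first traversal: the first edge into a node keeps the
    // original; every later edge gets a duplicated subgraph.
    var queue = [rootId];
    while (queue.length > 0) {
        var id = queue.shift();
        var node = nodes[id];
        node.children.forEach(function(childId, i) {
            if (visited[childId]) {
                node.children[i] = duplicateSubgraph(childId);
            } else {
                visited[childId] = true;
                queue.push(childId);
            }
        });
    }
    return nodes;
}
```

On the first diamond example this leaves B pointing at the original D and rewires C to a single copy, matching the converted tree shown above.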

Exposing helpers

As discussed offline, a lot of potentially useful gltf-pipeline helpers aren't being exposed right now. We can keep a running list here. Some ideas for exposing them:

  • Export a utils.js file from gltf-pipeline that exposes useful utilities, to avoid clutter in index.js
  • Expose requested utilities as "_private"
  • Just expose everything, or only a curated selection?

Wishlist:

  • readAccessor
  • getUniqueId
  • packArray
  • byteLengthForComponentType

How to get MBR from one gltf?

How can I get the MBR (minimum bounding region) from a glTF file?

The MBR format is shown below:

"root": {
    "boundingVolume": {
        "region": [
            -1.2960028825819643,
            0.7068325938171807,
            -1.286308749045535,
            0.7141023380151126,
            -11.892070104139751,
            547.7591871983744
        ]
    }
}
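One hedged approach is to union the min/max arrays that glTF accessors already declare for vertex data; computeBoundingBox is illustrative. Note that converting the resulting Cartesian box to a 3D Tiles "region" (radians plus heights) additionally requires the model's geolocation, which the glTF itself does not store:

```javascript
// Unions the declared min/max of all VEC3 accessors into one local-space
// axis-aligned bounding box.
function computeBoundingBox(accessors) {
    var min = [Infinity, Infinity, Infinity];
    var max = [-Infinity, -Infinity, -Infinity];
    Object.keys(accessors).forEach(function(id) {
        var accessor = accessors[id];
        if (accessor.type === 'VEC3' && accessor.min && accessor.max) {
            for (var i = 0; i < 3; i++) {
                min[i] = Math.min(min[i], accessor.min[i]);
                max[i] = Math.max(max[i], accessor.max[i]);
            }
        }
    });
    return { min: min, max: max };
}
```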

Improve reference doc

To build on #157, most of the docs need

  • More precise descriptions, e.g., more than "generates normals" or "oct-encodes normals"
  • Example code

JCGT paper on gltf-pipeline

@leerichard42 this is something we could consider writing and submitting in the fall if you are interested. JCGT is a great pragmatic journal. The paper would need to be very data focused, showing before/after size/performance for various models with various pipeline stages.

Review npm dependencies

  • Since we depend on bluebird, there's no reason to also depend on promise
  • fs-extra means we don't need mkdirp
  • There's no good reason to depend on underscore
  • object-values is literally 11 lines of code and provides no useful benefit
  • I'm pretty sure datauri usage is unnecessary and can be replaced with one line of code.

There might be more, but this is good low hanging fruit for anyone looking to contribute.
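For the datauri point, the one-line replacement alluded to above is just a MIME type plus base64-encoded bytes; toDataUri is an illustrative name:

```javascript
// Builds a data URI from a MIME type and a Node Buffer.
function toDataUri(mimeType, buffer) {
    return 'data:' + mimeType + ';base64,' + buffer.toString('base64');
}
```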

Speed up removeUnusedVertices

removeUnusedVertices, and by extension mergeDuplicateVertices, is currently a major speed bottleneck in the main pipeline.

My impression is that removeUnusedVertices is so slow because mergeDuplicateVertices ends up creating lots of holes in the buffer that need to be closed. In order to close those holes, the entire buffer has to be moved over which can be very time consuming for large models.

This could be faster by dividing the buffer up into chunks, closing the gaps in the smaller chunks, then concatenating the used parts of the chunks. It may also be faster to iteratively read through and copy chunks into a new buffer, though probably less memory efficient.
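The "copy each used range exactly once" idea can be sketched as follows: instead of shifting the tail of the buffer left once per hole, gather the used byte ranges and copy each into a new buffer a single time. compactBuffer and the range shape are illustrative:

```javascript
// Copies the used byte ranges (sorted, end-exclusive) of a buffer into a
// new, tightly packed buffer; each byte is copied at most once.
function compactBuffer(buffer, usedRanges) {
    var totalLength = usedRanges.reduce(function(sum, r) {
        return sum + (r.end - r.start);
    }, 0);
    var compacted = Buffer.alloc(totalLength);
    var offset = 0;
    usedRanges.forEach(function(r) {
        buffer.copy(compacted, offset, r.start, r.end);
        offset += r.end - r.start;
    });
    return compacted;
}
```

This trades the extra allocation for O(n) total copying, versus the potentially quadratic cost of closing holes one by one in place.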

Cleanup

I can do these at the code sprint if we don't get to them sooner.

  • Move OptimizationStatistics.prototype.print to a function in bin/gltf-pipeline.js since it doesn't make sense in the context of the browserify build.
  • To reduce code duplication, create a helper function for at least this part of unused object removal:

https://github.com/AnalyticalGraphicsInc/gltf-pipeline/blob/master/lib/removeUnusedImages.js#L22-L42

This will require replacing properties like stats.numberOfImagesRemoved with something like stats.numberRemoved['image'].

removeUnusedVertices Removing Used Vertices

Testing the gltfPipeline with more involved models than the CesiumBox, I noticed that removeUnusedVertices seems to be removing all primitives except the main primitive of the model (is my terminology correct here? Still getting a feel for glTF).

See attached images

gltfPipeline with removeUnusedVertices step

gltfPipeline without removeUnusedVertices

`removeUnusedVertices` relies on byte strides exclusively

I found out that if you run the pipeline on a model that has accessors with byteStride set to 0 but type set to VEC3, compressBuffers in removeUnusedVertices strips out too much information and causes an indexOutOfBounds error soon after.

From my reading of the glTF spec (this part), it seems like the pipeline should be able to infer from "byteStride": 0 and "type": "VEC3" that the data is tightly packed vec3s.
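A sketch of deriving the effective stride when byteStride is 0: component sizes come from the WebGL enum values and element counts from the accessor type, so a tightly packed float VEC3 has a stride of 12 bytes. getEffectiveByteStride is an illustrative name:

```javascript
// Byte sizes keyed by WebGL componentType enum:
// BYTE, UNSIGNED_BYTE, SHORT, UNSIGNED_SHORT, FLOAT.
var COMPONENT_SIZES = { 5120: 1, 5121: 1, 5122: 2, 5123: 2, 5126: 4 };
// Number of components per element for each accessor type.
var TYPE_COUNTS = { SCALAR: 1, VEC2: 2, VEC3: 3, VEC4: 4, MAT2: 4, MAT3: 9, MAT4: 16 };

// byteStride 0 means tightly packed, so the stride is one element's size.
function getEffectiveByteStride(accessor) {
    if (accessor.byteStride > 0) {
        return accessor.byteStride;
    }
    return COMPONENT_SIZES[accessor.componentType] * TYPE_COUNTS[accessor.type];
}
```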

Undefined variable "czm_sunDirectionEC"

I tried to convert a simple OBJ model to glTF, using https://github.com/AnalyticalGraphicsInc/OBJ2GLTF

The resulting fragment shader code contained a line

vec3 l = normalize(czm_sunDirectionEC);

but the variable czm_sunDirectionEC has not been declared at this point. After a bit of tracing and backward search, it appears that this variable is inserted in the shader by the gltf-pipeline, here:

https://github.com/AnalyticalGraphicsInc/gltf-pipeline/blob/master/lib/modelMaterialsCommon.js#L417

(At least this variable name appears nowhere else, and I can't imagine how the shader that is constructed there should be valid when this line is inserted)

BrainStem sample model combinePrimitives throws exception

Probably related to #83

buffer.js:835
throw new TypeError('value is out of bounds');
^

TypeError: value is out of bounds
at checkInt (buffer.js:835:11)
at Buffer.writeUInt16LE (buffer.js:893:5)
at mergeAccessors (C:\Users\rtaglang\AGI\git\gltf-pipeline\lib\combinePrimitives.js:233:36)
at combinePrimitiveGroup (C:\Users\rtaglang\AGI\git\gltf-pipeline\lib\combinePrimitives.js:91:36)
at combineMeshPrimitives (C:\Users\rtaglang\AGI\git\gltf-pipeline\lib\combinePrimitives.js:49:33)
at combinePrimitives (C:\Users\rtaglang\AGI\git\gltf-pipeline\lib\combinePrimitives.js:19:43)
at processJSONWithExtras (C:\Users\rtaglang\AGI\git\gltf-pipeline\lib\gltfPipeline.js:49:5)
at C:\Users\rtaglang\AGI\git\gltf-pipeline\lib\gltfPipeline.js:63:9
at C:\Users\rtaglang\AGI\git\gltf-pipeline\lib\readGltf.js:42:13
at C:\Users\rtaglang\AGI\git\gltf-pipeline\lib\loadGltfUris.js:18:9

gltfPrimitiveToCesiumGeometry doesn't account for compressed attributes

Currently, it just reads the values off of the buffer, even though the normals might be oct-encoded, the texture coordinates might be compressed, or any attribute may be quantized.

This creates bad geometry data and as a result, processes like ao that depend on geometry will produce bad results or fail entirely.

If you try to run ao on a quantized box model, the pipeline hangs.

Compress Integer attributes

JOINT attributes, for example, are integer values that are currently stored as floats in a lot of models. A compressIntegerAttributes stage could be written that checks whether an accessor marked as float only contains integers, then chooses a suitable int size (and sign) to contain the range.
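The integer check and type selection could look like the following sketch; the constants are WebGL componentType enums, and the function name is illustrative:

```javascript
// Given the values of a float accessor, returns the smallest integer
// componentType that holds them, or undefined if the values are not all
// integers (or exceed the 8/16-bit ranges).
function chooseIntegerComponentType(values) {
    var min = Infinity;
    var max = -Infinity;
    for (var i = 0; i < values.length; i++) {
        var v = values[i];
        if (v !== Math.floor(v)) {
            return undefined; // not all integers; leave as float
        }
        min = Math.min(min, v);
        max = Math.max(max, v);
    }
    if (min >= 0) {
        if (max <= 255) { return 5121; }   // UNSIGNED_BYTE
        if (max <= 65535) { return 5123; } // UNSIGNED_SHORT
    } else {
        if (min >= -128 && max <= 127) { return 5120; }     // BYTE
        if (min >= -32768 && max <= 32767) { return 5122; } // SHORT
    }
    return undefined;
}
```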

combineNodes has problems with oct-encoded or quantized normals

combineNodes doesn't oct-decode or unquantize the existing normals before applying transforms to them. One solution is to add checks in combineNodes to decode, transform, then re-encode. Another would be to store the decoded normals in the extras when the glTF is loaded.

AO Roadmap

  • Experiment to determine well-balanced settings for baking AO on building models
  • For baking to vertices: assess advanced baking approximations like least squares baking
  • Add an option to decouple the uniform grid cell size from maximum ray distance
  • treat the grid as a sparse voxel structure, voxelize rays and check for intersections along the line of voxels? Dispense with rays altogether? This starts to resemble voxel cone tracing
  • use a different datastructure, like an oct tree?
  • Apply some kind of inside-outside algorithm to correct model normals? (possibly not in this stage)
  • For baking to vertices: assess and document payload improvements of quantization
  • For baking to vertices: assess ways to reduce the number of attributes and varyings that need to be added to shaders
  • for AO on a diffuse texture, detect and use any existing TEX_COORD instead of a new varying
  • detect places AO can be packed in other attributes (VEC4 positions, VEC3/VEC4 normals)
  • For uniform grids, voxels, or octrees: experiment with space filling curves for improved triangle access.
  • For baking to textures: use conservative rasterization when sampling over triangles
  • Replace the current random ray generator (dependent on Javascript's random number generator) with something like a sobol sequence
  • For baking to textures: Add a stage to generate texture coordinates
  • possibly, unique AO texture coordinates with separate AO textures
  • also requires shader modification for texture baking
  • There are lots of opportunities for obvious parallelism, including GPU parallelism; is it worth it to improve the latency of baking one model?

-o option should recognize extension

-o example.glb should output Binary glTF, not regular glTF with a .glb extension.

Throw an exception if the -b option is provided and the extension doesn't match.

Address Memory Usage

This is something we didn't account for much when originally writing the pipeline, and something that I have tried to be better about more recently.

Particularly, AccessorReader should be used where it is possible instead of readAccessor. The difference is that AccessorReader reads off accessor elements one-by-one, while readAccessor reads off the entire accessor at once. Sometimes this is necessary, when a whole view of the accessor data is required, but if we're operating on an accessor an element at a time, it would be best not to need to allocate space for all of those elements.
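The contrast between the two access patterns can be sketched as follows; the element-at-a-time helper is a simplified stand-in for AccessorReader, not its actual API:

```javascript
// readAccessor-style: materialize every element at once (O(n) extra memory).
function readAll(typedArray) {
    return Array.prototype.slice.call(typedArray);
}

// AccessorReader-style: visit elements one at a time (O(1) extra memory),
// which is preferable when a stage only needs one element at a time.
function forEachElement(typedArray, callback) {
    for (var i = 0; i < typedArray.length; i++) {
        callback(typedArray[i], i);
    }
}
```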

Roadmap

@leerichard42

  • Stage to remove unused objects:
    • Execute these in the right order, and add a test case that removes an unused node, which then causes other newly unused objects down the hierarchy to be removed
    • accessor
    • buffer
    • bufferView
    • camera
    • image
    • material
    • mesh
    • node
    • program
    • sampler
    • shader
    • skin
    • technique
    • texture
  • Stage to convert glTF with separate resources to glTF with embedded resources
    • Update obj2gltf to use this.
    • Stage to go in the reverse order
  • Stage to convert glTF to binary glTF (also see binary-gltf-utils)
    • Stage to go in the reverse order
  • Stage to convert DAG to a tree, #40
  • Stage to remove unused vertices
  • Stage to combine primitives in a node, #43
  • Stage to combine nodes (that are not targeted for animation), KhronosGroup/glTF#20, #43

@pjcozzi

  • Set up command-line tool, tests, jshint, etc.
  • Add third-party licenses to LICENSE.md
  • Set up coverage
  • Set up browserify
  • Stage to add defaults. Replace gltfDefaults in Cesium.
    • Separate glTF 0.8 to 1.0 into another stage
    • Separate materials extension?
  • Stage to generate shaders/techniques/materials from KHR_materials_common. Move modelMaterialsCommon from obj2gltf and Cesium to here.
    • Update obj2gltf and Cesium to use gltf-pipeline via npm and browserify.
  • Streamline use of Cesium npm module, CesiumGS/cesium#2524.
  • Figure out in-memory representation so each stage doesn't have to write to and read from disk
  • Run jasmine and istanbul through gulp and karma
  • Reference doc
  • Publish to npm
  • Consider using ES6 features with Node 4. https://github.com/bevacqua/es6

Images are bigger after encoding with Jimp

For example, the CesiumMilkTruck.png from the glTF sample models is 902 KB.

If you run it through Jimp and re-encode it, it ends up as 1,441 KB.

This is obviously not ideal, and it would also be useful to be able to optimize texture images for the web in general.

I did take a brief look at imagemin this morning, but I was getting a failure that I think is probably due to AGI's firewall. The imagemin-optipng module has a vendor folder on git with native executables in it that is missing when I get it as an npm module.

Imagemin seems to be the most widely used solution for this, but if we can avoid it, I'd really rather not depend on node modules that have native dependencies.

Use extras._pipeline

We often stash data in a glTF object's extras object, e.g.,

bufferView.extras.id = // ...

This is the right approach, but it could overwrite application-specific data stored in extras if an app's content pipeline adds metadata to extras objects before using the glTF pipeline. To avoid this, let's create and use a _pipeline sub-object, e.g.,

bufferView.extras._pipeline.id = // ...

This could also simplify cleanup: we could have a final stage that walks all glTF objects and deletes extras._pipeline, and also deletes extras if removing extras._pipeline leaves it empty. This way we would not need a lot of custom cleanup code in each stage (an exception is when a large amount of data is stored in extras._pipeline; we would want to delete that right away).
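That final cleanup stage could be sketched like this; removePipelineExtras is an illustrative name:

```javascript
// Recursively walks a glTF object graph, deleting extras._pipeline and
// removing extras entirely when that deletion leaves it empty.
function removePipelineExtras(object) {
    if (object === null || typeof object !== 'object') {
        return object;
    }
    if (object.extras && object.extras._pipeline) {
        delete object.extras._pipeline;
        if (Object.keys(object.extras).length === 0) {
            delete object.extras;
        }
    }
    Object.keys(object).forEach(function(key) {
        removePipelineExtras(object[key]);
    });
    return object;
}
```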

Buggy model gets mangled by pipeline

While testing my own branch, I have found a model that is mangled in my branch as well as master:

Quickly commenting out each step and seeing the results seems to show that the model is getting mangled by the combinePrimitives step.

buggy.gltf.zip


Combine primitives/nodes

@leerichard42 let's do this next. It is a great performance optimization since it reduces the number of draw calls required at runtime, and some exporters can generate really sub-optimal models. For example, I've seen SketchUp export COLLADA where each triangle needed a separate draw call! After running an optimization like this, the model only needed one draw call.

  • Stage to combine primitives in a node (if they have the same attribute formats, material, and mode; this will also require combining accessors/bufferviews/buffers)
  • Stage to combine meshes in a node
  • Stage to combine nodes (if they are not targeted for animation and have the same material), KhronosGroup/glTF#20

@tfili any additional advice here?

Coverage doesn't work on Windows

Fails when run through npm, passes when run directly.

Richard@Richard-PC /cygdrive/c/Users/Richard/src/gltf-pipeline
$ npm run coverage

> [email protected] coverage C:\Users\Richard\src\gltf-pipeline
> ./node_modules/istanbul/lib/cli.js cover -x **/specs/** ./node_modules/jasmine-node/bin/jasmine-node specs

'.' is not recognized as an internal or external command,
operable program or batch file.

npm ERR! Windows_NT 6.1.7601
npm ERR! argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "run" "coverage"
npm ERR! node v4.2.4
npm ERR! npm  v2.14.12
npm ERR! code ELIFECYCLE
npm ERR! [email protected] coverage: `./node_modules/istanbul/lib/cli.js cover -x **/s                                         pecs/** ./node_modules/jasmine-node/bin/jasmine-node specs`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] coverage script './node_modules/istanbul/lib/cli.js cover -x **/specs/** ./node_modules/jasmine-node/bin/jasmine-node specs'.
npm ERR! This is most likely a problem with the gltf-pipeline package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     ./node_modules/istanbul/lib/cli.js cover -x **/specs/** ./node_modules/jasmine-node/bin/jasmine-node specs
npm ERR! You can get their info via:
npm ERR!     npm owner ls gltf-pipeline
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     C:\Users\Richard\src\gltf-pipeline\npm-debug.log

Richard@Richard-PC /cygdrive/c/Users/Richard/src/gltf-pipeline
$ ./node_modules/istanbul/lib/cli.js cover -x **/specs/** ./node_modules/jasmine-node/bin/jasmine-node specs
....................

Finished in 0.046 seconds
20 tests, 62 assertions, 0 failures, 0 skipped


=============================================================================
Writing coverage object [C:\Users\Richard\src\gltf-pipeline\coverage\coverage.json]
Writing coverage reports at [C:\Users\Richard\src\gltf-pipeline\coverage]
=============================================================================

=============================== Coverage summary ===============================
Statements   : 73.41% ( 243/331 )
Branches     : 48.63% ( 89/183 )
Functions    : 91.3% ( 21/23 )
Lines        : 73.41% ( 243/331 )
================================================================================
