
composer-suite's Introduction

Hi! 👋

I'm an open source developer from Germany working on tooling for web-first game development. Here's an overview of my active projects. Follow my public roadmap to see what I'm working on right now, and what I will be working on next.

If you enjoy my work, please consider sponsoring me on GitHub!

Demos, Games and Examples

Demos include: space scene, asteroid, floating island, fog, boids, revade, wonkout, pong, miniplex demo, splodybox.

Libraries

State Management

  • 🤖 Miniplex, an Entity Component System library designed for ease of use and development ergonomics. Includes React bindings, but can also be used without a framework.
  • 🚜 State Composer, a high-level finite state machine library for macro state in React applications.
  • 🎃 Eventery, a lightweight, dependency-free, typed publish-subscribe event emitter for JavaScript/TypeScript.
  • 🐝 Statery, a simple proxy-based state container for React.
  • โฐ Timeline Composer, a small collection of React components for orchestrating timelines.

Graphics

  • 🌈 Shader Composer, a library for creating GLSL shaders from a tree of JS primitives (think ShaderGraph et al, but code.)
  • 🎆 VFX Composer, a high-performance, game-ready visual effects library for Three.js and react-three-fiber.
  • 🖼 Render Composer, a preconfigured, customizable render pipeline for react-three-fiber games.

Noteworthy projects from the past

Get in touch!

There's a bunch of ways you can get in touch with me:

composer-suite's People

Contributors

codyjasonbennett, dependabot[bot], github-actions[bot], hmans, souporserious


composer-suite's Issues

bug: Particle simulations advance when tab is paused

E.g. when the tab is in the background for a longer period of time, the browser will throttle it (i.e. it will stop calling requestAnimationFrame), but since we're sourcing time from elapsedTime, that pause is ignored and the simulation advances as if time had kept running. To fix this, we need to move away from using elapsedTime and use an accumulated delta time instead. (This will also allow effects to integrate with time scaling.)
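A minimal sketch of that direction, assuming an R3F setup; the hook name and the maxDelta clamp are illustrative, not library API:

import { useRef } from "react"
import { useFrame } from "@react-three/fiber"

/* Accumulate per-frame deltas instead of reading clock.elapsedTime, so
   time simply stops advancing while requestAnimationFrame is throttled. */
const useSimulationTime = (maxDelta = 0.1) => {
  const time = useRef(0)

  useFrame((_, dt) => {
    /* Clamping dt keeps one huge post-pause frame from fast-forwarding
       the simulation; scaling dt here would enable time scaling. */
    time.current += Math.min(dt, maxDelta)
  })

  return time
}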

feat: Move from Context to HOC

At the moment, emitters connect to MeshParticle instances through context. This works great as long as you don't have overlapping contexts, which is bound to happen quickly in more complex effects. We should refactor this towards the setup we used in Instanza (a code sketch follows the list below):

  • Provide a higher-order function createParticles that returns a collection of components (Root, Particle, Emitter, ...)
  • Provide a hook useParticles that creates a memoized version of this
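A rough sketch of what that factory could look like. All names and types here are assumptions; the real components would of course wire up the actual particle system:

import React, { createContext, useContext, useEffect, useMemo } from "react"

/* Stand-in for whatever API the particle system actually exposes. */
type ParticlesAPI = { emit: (count: number) => void }

const createParticles = () => {
  /* Each factory call gets its own private context, so two effects can
     never accidentally share one. */
  const Context = createContext<ParticlesAPI | null>(null)

  const Root = ({ children }: { children: React.ReactNode }) => {
    const api = useMemo<ParticlesAPI>(() => ({ emit: () => {} }), [])
    return <Context.Provider value={api}>{children}</Context.Provider>
  }

  const Emitter = ({ count = 1 }: { count?: number }) => {
    const api = useContext(Context)
    useEffect(() => api?.emit(count), [api, count])
    return null
  }

  return { Root, Emitter }
}

/* The memoizing hook variant: */
const useParticles = () => useMemo(createParticles, [])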

[shadenfreude] Verbatim GLSL expressions via tagged template literals

Summary

Currently, we need to declare variable dependencies using the inputs property, which also makes these variables available as local variables with friendly names:

const Add = (a: Float, b: Float) => Float("a + b", { inputs: { a, b } })

We could use tagged template literals to both automate the dependency management, and also remove all the local variable declarations in the GLSL:

const Add = (a: Float, b: Float) => Float(expr`${a} + ${b}`)

Here, expr would be a function that builds the final string, renders the provided values to their GLSL representations, and makes sure the referenced variables and snippets are injected into the variable's dependencies list (currently the inputs object, which we could now change into a dependencies array.)
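A minimal sketch of such a tag function, using a stand-in type for the library's variables (none of these names are the real API):

type Variable = { _variable: true; name: string }

const isVariable = (value: unknown): value is Variable =>
  typeof value === "object" && value !== null && "_variable" in value

const renderToGLSL = (value: unknown) =>
  isVariable(value) ? value.name : String(value)

const expr = (strings: TemplateStringsArray, ...values: unknown[]) => {
  /* Every referenced variable automatically becomes a dependency... */
  const dependencies = values.filter(isVariable)

  /* ...and the final GLSL interleaves the strings with rendered values. */
  const glsl = strings
    .map((s, i) => (i < values.length ? s + renderToGLSL(values[i]) : s))
    .join("")

  return { glsl, dependencies }
}

/* expr`${a} + ${b}` then yields { glsl: "a + b", dependencies: [a, b] }. */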

Advantages:

  • Makes verbatim snippets clearer because it becomes more obvious where the outside dependencies are
  • We no longer need to explicitly provide inputs
  • We no longer need to explicitly declare snippet dependencies

feat: Support pre-spawning of particles

For this, we need to be able to set c.delay to negative numbers, which is currently not possible because the u_time uniform is being filled with Three's clock.elapsedTime. We need to change this so that the uniform uses performance.now() or similar.
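A sketch of that change, assuming the material exposes a u_time uniform object (the hook name is made up):

import { useFrame } from "@react-three/fiber"

/* Drive u_time from performance.now(), converted to seconds. Since this
   clock doesn't start at zero, a spawn time of "now + delay" remains a
   valid, non-negative time even for negative delays. */
const useWallClockTime = (uniform: { value: number }) => {
  useFrame(() => {
    uniform.value = performance.now() / 1000
  })
}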

Roadmap

Tracking the various bits and bobs that need doing. There's probably more to be found in the complete list of issues (filtered by help wanted); the issues tracked here are the ones that I currently deem most important (or possibly even absolutely critical in order to make this library useful.)

Features & Improvements

A lot of these go beyond my own understanding of WebGL, shaders et al, and would hugely benefit from some outside help, be it a complete PR, or just some pointers towards a potential implementation:

Bugs:

feat: Composable Shaders

As is:

As of now, the library uses two monolithic shader chunks that are injected into the built-in materials of Three.js through three-custom-shader-material. All animation input is passed to this shader code by way of vertex attributes attached to the material.

This approach has a couple of limitations:

  • The same shader code needs to support all possible effects, which is bound to make it more complex than we'd want it to be.
  • Since the only way to configure it is through uniforms and attribute buffers, we will quickly run into their respective limits.
  • All animations are currently implemented using linear interpolation; supporting easing functions would need additional uniforms/buffers; see above.

To be:

For these reasons (and, admittedly, because it feels like a fun challenge), one of the goals for this library has always been to change this into a system that generates per-effect shader code. Basically, each effect would declaratively build a pipeline of operations that would then be compiled to shader code; a rough code sketch follows the list below. The individual steps might be:

  • Billboarding
  • Velocity + Acceleration/Damping (with or without easing)
  • Animating Opacity
  • Animating Color
  • etc.
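To make this concrete, here is a very rough sketch of how such a pipeline could be composed and compiled. Every module name and shader chunk below is hypothetical; this shows the shape of the idea, not a proposed API:

/* A pipeline step contributes chunks to one or both shader programs. */
type Module = { vertex?: string; fragment?: string }

const billboard = (): Module => ({ vertex: "/* billboard chunk */" })

const velocity = (v: [number, number, number]): Module => ({
  vertex: `offset += vec3(${v.join(", ")}) * v_age;`
})

const animateOpacity = (from: number, to: number): Module => ({
  fragment: `csm_DiffuseColor.a *= mix(${from.toFixed(1)}, ${to.toFixed(1)}, v_progress);`
})

/* The compiler concatenates each program's chunks in pipeline order. */
const compileModules = (modules: Module[]) => ({
  vertexShader: modules.map((m) => m.vertex ?? "").join("\n"),
  fragmentShader: modules.map((m) => m.fragment ?? "").join("\n")
})

const shader = compileModules([
  billboard(),
  velocity([0, 5, 0]),
  animateOpacity(1, 0)
])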

bug: Effects don't start up properly when using r3f/drei loaders

Using useTexture (or other use* loader hooks) in the component that renders a <MeshParticles> component will cause that effect to break (stay invisible) on first render. When something causes a re-render of that component, the effect will appear correctly, and things will continue to work fine.

This might be a misunderstanding of how useTexture et al tie into loading and <React.Suspense> in particular, but from what I've gathered so far, we're using it "as intended".

How it should work:

import { useTexture } from "@react-three/drei"
import { MeshStandardMaterial } from "three"

export const MyEffect = () => {
  const texture = useTexture("/textures/particle.png")

  return (
    <VisualEffect>
      <MeshParticles>
        <planeGeometry />

        <ParticlesMaterial
          baseMaterial={MeshStandardMaterial}
          map={texture}
          billboard
        />
      </MeshParticles>
    </VisualEffect>
  )
}

What we currently need to do as a workaround:

import { MeshStandardMaterial, TextureLoader } from "three"

// (this, or any other loading mechanism)
const globalTexture = new TextureLoader().load("/textures/particle.png")

export const MyEffect = () => {
  return (
    <VisualEffect>
      <MeshParticles>
        <planeGeometry />

        <ParticlesMaterial
          baseMaterial={MeshStandardMaterial}
          map={globalTexture}
          billboard
        />
      </MeshParticles>
    </VisualEffect>
  )
}

[shadenfreude] Roadmap: Minimum Useful Package

Which nodes (and/or features) does shadenfreude need to reach a "Minimum Useful Package" state?

Nodes

Value Nodes:

  • Float
  • Vector2
  • Vector3
  • Color (it's just a Vector3)
  • Vector4
  • Mat3
  • Mat4

Variable Nodes:

  • Uniform
  • Attribute
  • Varying

Input Nodes:

  • VertexNormal
  • VertexPosition
    • configurable to different spaces, or with separate output values per space?
  • Time
  • Texture
  • Texture3D

Math Nodes:

  • Operator
  • Add
  • Subtract
  • Multiply
  • Divide
  • Mix
  • Sin
  • Cos
  • Tan
  • Blend
  • Lerp (== Mix)
  • Inverse Lerp
  • Clamp
  • Saturate (Clamp 0, 1)
  • Min
  • Max
  • OneMinus
  • Fraction
  • Round
  • Truncate
  • Floor
  • Ceil
  • Sign
  • Step
  • Smoothstep
  • Waves (Sine, Cosine, Triangle, Sawtooth, Square)

Easing:

  • all of them! Aaaaah

Vectors:

  • Split
  • Join
  • Normalize
  • Distance
  • Cross
  • Dot

Noise Nodes:

  • ...?

Effect Nodes:

  • Fresnel

Master Nodes:

  • CustomShaderMaterialMasterNode (for CustomShaderMaterial)
  • ShaderMaterialMasterNode

Other Stuff

  • Maybe remove the type shortcuts Float, Bool etc. again? They don't communicate well that they represent values; people might instead think they represent variables, considering we have variable creation helpers of the same names.
  • Make Uniform useful
  • Remove/rework only? - nah, it's fine. We will keep iterating on it.
  • title and name -> name and slug
  • Maybe rename inputs to something like variables or locals? (They are not inputs, but dependencies!) - nah, inputs is fine
  • Provide a mechanism for injecting functions into the GLSL without causing conflicts
    • Ideally, multiple instances of the same node should only declare the function once (global-scope id generator?)
    • The function name should be dynamically scoped (function name generator?)
  • Provide a good README
  • Provide a good starter CSB - https://codesandbox.io/s/github/hmans/shadenfreude-sandbox?file=/src/App.js
  • Move this library to its own repository
  • Find the final name (or stay with Shadenfreude)

feat: Make `<Emitter>` a scene object

<Emitter> should be a scene object (i.e. <object3D> or <group>) and, more importantly, pass its transform on to emitted particles.

In the configuration object passed to the particle setup function, the position prop should be set to the position inherited from the emitter. Then it is up to the user if they want to modify or completely reset it.
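A sketch of the intended usage under this design; the setup callback and c.position are as described above, while everything around them is assumed:

import { Vector3 } from "three"

export const Fountain = () => (
  /* The emitter is a scene object, so it has a transform of its own... */
  <group position={[0, 5, 0]}>
    <Emitter
      setup={(c) => {
        /* ...and c.position arrives pre-filled with the emitter's world
           position. The user may nudge it, or reset it entirely: */
        c.position.add(new Vector3(0, 0.5, 0))
      }}
    />
  </group>
)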

[examples] Water Example / Perlin Noise broken on Chrome for Android

It's looking very broken. Tested on Pixel 3a and OnePlus 6T. All other platforms/browsers I've tried so far don't have the issue.

Things to try:

  • Is this maybe related to the new postprocessing?
  • Could it be related to the new precision setting?
  • It only happens with the extra FBM noise. If I remove that, everything is fine (but of course the waves look a lot smoother, because the extra noise is missing.)
  • A possible culprit might be our Perlin Noise implementation, which the FBMNoise unit is using.


Project: That Big Composable Refactor

Summary:

Tracking progress on the refactor that will allow the library to gain some important features, including:

Bonus goals:

Draft:

  • Remove MeshParticles; we probably don't need it. All we will ever do is control an instance of InstancedMesh. All the code that currently lives in MeshParticles can live in ParticlesMaterial or some other class/component.
  • ParticlesMaterial currently doesn't do anything beyond injecting our custom, monolithic shader code into Three's built-in materials via TCSM. We could just as well let the user use TCSM directly here, and provide a nice API for having our code inject uniforms and shader chunks.
  • The composition feature needs to be built around a "configuration object" that is mutated by a series of "configuration mutators", in a similar fashion to how we assemble controllers in controlfreak. This series of sequential steps could then alternatively be expressed as React components for declarative goodness.

feat: Node-based Shader Composition

Three.js will eventually gain node-based materials, which will make all of the crazy custom shader shenanigans we do here obsolete, but they're not ready for use yet, so let's build our own riff on them. (We've done it before; let's do it properly now!)

Features:

  • Define shader nodes as objects (not classes)
  • Sitting on CSM
  • Vertex/Fragment header chunks
  • Vertex/Fragment bodies
  • Bodies are automatically scoped
  • Variables (they're objects)
  • Dependencies
  • Uniforms
  • Automatically Scoped Uniforms
  • Attributes
  • Varyings
  • Master nodes?!
  • When encountering an input variable with no node attached, instead of throwing the "dependency not found" error, just render the variable. (No longer necessary.)

feat: Point Particles

Up until now, all of this library's particle effects are based on instanced meshes. It would be good to implement an alternative that is purely gl.POINTS based, for situations where you really only need points of any size (or unscaled sprites.)

bug: Examples don't reset in production

In production -- and only there -- the examples from the examples app don't reset when switching between them using the navigation. It looks like the materials and/or emitters don't get properly un- and re-mounted. Not sure if this is an issue with the library, or just the examples app.

Consolidate `Vec3` and `vec3` (and 2, and 4, ...)

We currently have two constructors for each vector type: Vec3 (capitalized) and vec3 (non-capitalized).

Vec3 assumes a single input argument (which can be another Vec3 unit, or a THREE.Vector3), plus an optional unit configuration object.

vec3 assumes three input arguments (x, y and z), each of which can be a Float unit or a number value, plus an optional unit configuration object.

(The same for vector4s and vector2s.)

Naturally, it's not great that we have two functions (per vector type) doing mostly the same thing, and it would be nice to consolidate these, but it will probably require some typing stunts to do it nicely.
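One possible consolidation, sketched with stand-in types: a single constructor whose overloads cover both call styles.

import { Vector3 } from "three"

type Float = number /* stand-in for "Float unit or number" */
type Vec3Unit = { type: "vec3"; value: Vector3 }
type UnitConfig = { name?: string }

function Vec3(value: Vector3 | Vec3Unit, config?: UnitConfig): Vec3Unit
function Vec3(x: Float, y: Float, z: Float, config?: UnitConfig): Vec3Unit
function Vec3(...args: unknown[]): Vec3Unit {
  /* A leading scalar means the vec3(x, y, z) call style... */
  if (typeof args[0] === "number") {
    const [x, y, z] = args as [number, number, number]
    return { type: "vec3", value: new Vector3(x, y, z) }
  }

  /* ...anything else is the single-input style. */
  const input = args[0] as Vector3 | Vec3Unit
  return { type: "vec3", value: input instanceof Vector3 ? input : input.value }
}

The real thing would also need to accept Float units for x, y and z, which is where the typing stunts come in.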

Only put units with `varying: true` into varyings if they are used within the fragment shader

As is: any time any program uses a unit that is configured with varying: true, the value is written into a varying, even if it's not actually used in the fragment program.

To be: only if such a unit is actually used in the fragment program should it be made available as a varying.

Note: it is unclear if creating a varying that isn't actually declared and sourced in the fragment program actually causes the varying space to be used at all. In some naive tests on M1 Mac + Chrome, I was able to create literally hundreds of varyings in the vertex shader, but not use them in the fragment shader, without any problems. I have no idea if this behavior is generally applicable across platforms, but it can't hurt to optimize this.

bug: Intermittent visual artifacts

  • Looks like something in the vertex shader is getting confused.
  • Seems to happen randomly. At the very least, I have not found a way to reliably reproduce this.
  • Happens a lot more often when using post processing effects.
  • Seems to happen only with textured particles.
  • Confirmed in: Chrome, Firefox
Screen.Recording.2022-06-20.at.14.24.04.mov

(Video recorded in Chrome on an M1 Mac mini running macOS Monterey.)

CPU-controlled particles/instances

Sometimes you may wish to simulate particles on the CPU instead of the GPU. <Particles> is only a thin abstraction over Three's InstancedMesh class, so there's nothing stopping us from updating its instance matrix buffer directly. (A code sketch follows the notes below.)

Some ideas/notes:

  • We probably need a component that establishes a scene object (so it has a transform we can work with) and wraps an <Emitter>, or at least calls the particle system's emit function.
  • Unlike other particles, this controlled instance will update its instance matrix every frame. This clashes with our optimized code that will only update buffers for newly spawned particles. We have at least the following options:
    • Introduce a flag in Particles to disable the optimizations, and make the mesh upload all buffers in their entirety every frame (at least when there has been a change.) This probably comes at a significant performance cost.
    • Make Particles even more intelligent (is that even possible?!) by teaching it to upload multiple buffer chunks in a single frame. I remember storing some links somewhere to articles that were toying with similar concepts around explicitly calling gl.bufferSubData.
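Here is the sketch mentioned above: driving a single instance's matrix every frame through Three's standard InstancedMesh API (the hook name is illustrative):

import { InstancedMesh, Matrix4 } from "three"
import { useFrame } from "@react-three/fiber"

const matrix = new Matrix4()

const useControlledInstance = (mesh: InstancedMesh, index: number) => {
  useFrame(({ clock }) => {
    /* Write this instance's matrix every frame... */
    matrix.setPosition(Math.sin(clock.elapsedTime), 0, 0)
    mesh.setMatrixAt(index, matrix)

    /* ...and ask Three to re-upload the instance matrix buffer, which is
       exactly the whole-buffer upload our current optimizations avoid. */
    mesh.instanceMatrix.needsUpdate = true
  })
}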

docs: Initial Documentation!

For the 0.2.0 milestone, we should have an initial set of documentation. It can live in the repository's README file.

feat: Make the library usable from vanilla Three.js

It would be nice to make this library usable from vanilla Three.js. A couple of notes/ideas/etc.:

  • Currently, the only effect runner, <MeshParticles>, is implemented as a React component, assuming react-three-fiber. In #11, this component will probably undergo heavy refactoring. It would be a good opportunity to refactor the relevant code into a normal MeshParticles class that inherits from InstancedMesh. This class could then still be exposed to React via extend (or a normal component.)
  • For materials, we're using three-custom-shader-material, which already lets you use its functionality imperatively, so there's probably nothing we need to do here.
  • <Emitter> very conveniently is only a very thin wrapper around the spawnParticles function, which can also be called imperatively. There is probably no need to provide a vanilla equivalent to this React component.
  • The library currently contains a lot of React niceties for declaratively creating animation waterfalls through the <Delay>, <Repeat> and <Lifetime> components. It should not be a goal to replicate these for imperative use. When using this library imperatively, users will most likely have their own animation primitives they can use.

feat: Flipbook animations

Certain particle effects (like smoke, fire, ...) benefit from the particle texture being animated. Ideally, we would be able to attach a flipbook texture as a uniform and cycle through its tiles at a frequency specified either as a uniform, or as a per-particle attribute value.
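The tile-lookup math, sketched CPU-side for clarity; in practice it would live in the shader, with columns, rows and fps supplied as uniforms or per-particle attributes (all names assumed):

const flipbook = (time: number, columns: number, rows: number, fps: number) => {
  /* Advance through the tiles at the requested frequency, wrapping around. */
  const tile = Math.floor(time * fps) % (columns * rows)
  const column = tile % columns
  const row = Math.floor(tile / columns)

  return {
    /* Scale UVs down to a single tile... */
    scale: [1 / columns, 1 / rows],
    /* ...then offset to the current one (assuming frame 0 sits in the
       texture's top-left corner, with the UV origin at the bottom left). */
    offset: [column / columns, 1 - (row + 1) / rows]
  }
}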

Milestone: Initial Good Version

Functionality:

  • Implement basic functionality
  • Implement module system
  • Optimize buffer uploads so only the updated parts get uploaded
  • Allow the use of textures (map on MeshStandardMaterial et al)
  • Find a good pattern for reusing materials/shaders across multiple particle meshes

Animation Modules:

  • Particle Lifetime
  • Animate Scale
  • Animate Velocity
  • Animate Acceleration
  • Animate Color
  • Animate Alpha
  • Billboard
  • Soft Particles: #146
  • Rotation
  • Rotational Motion (could be the same as rotation after translation?)

Emitters:

  • Allow count to be a function, executed every time particles are emitted
  • Allow the user to specify the update priority at which to run useFrame

Others:

  • Provide easing functions (but should probably go into shader-composer-toybox?)
  • Default to MeshStandardMaterial for the VFXMaterial's baseMaterial
  • Lifetime should be ParticleLifetime

[shadenfreude] Dependency Pruning

We already have the only option and it does what it's supposed to do, but it causes a problem: sometimes the fragment shader will use a node like VertexPosition that sets a varying -- and in these situations, that node must still be rendered in the vertex program.

Possible solution: Instead of stopping the compiler recursion when encountering a variable with an only setting, set some compiler state and keep recursing; now only render variables that have varying set.

[shadenfreude] Rework compiler

At the moment, the compiler runs twice, once for each program, and iterates through all nodes and dependencies recursively. This complicates branch pruning somewhat, since there are nodes that still need to be rendered in one program even though they are only used in the other, like any nodes that create varying variables.

Let's rework the whole thing and make it simpler. Instead of the current recursive approach, how about the following (a code sketch follows the list):

  • First, we recurse through all the dependencies and mark where nodes are being used.
    • For variable dependencies (dependencies inherit the only status)
    • For vertex chunks
    • For fragment chunks
  • Now we should have a single array of dependencies (nodes, snippets, expressions) that we can loop through to build both programs. A decision on whether to include each of these entities can be made for every single one.
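A very rough sketch of this two-pass idea, using stand-in types (none of this is the current compiler's API):

type Program = "vertex" | "fragment"

type Node = {
  name: string
  dependencies: Node[]
  only?: Program
  varying?: boolean
}

/* Pass 1: walk the graph once per program and record, for every node,
   which programs it is (directly or transitively) used in. A node with
   an `only` restriction prunes itself and its subtree from the other
   program, so dependencies inherit the restriction. */
const markUsage = (root: Node) => {
  const usage = new Map<Node, Set<Program>>()

  const walk = (node: Node, program: Program) => {
    if (node.only && node.only !== program) return
    const programs = usage.get(node) ?? new Set<Program>()
    if (programs.has(program)) return
    programs.add(program)
    usage.set(node, programs)
    node.dependencies.forEach((dep) => walk(dep, program))
  }

  walk(root, "vertex")
  walk(root, "fragment")

  return usage
}

/* Pass 2: loop over the flat usage list and decide, per entity, whether
   it belongs in a given program. Nodes that write varyings must always
   be rendered in the vertex program. */
const buildProgram = (usage: Map<Node, Set<Program>>, program: Program) =>
  [...usage.entries()]
    .filter(
      ([node, programs]) =>
        programs.has(program) || (program === "vertex" && node.varying)
    )
    .map(([node]) => `/* render ${node.name} here */`)
    .join("\n")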

Draft: R3F component structure

Summary

With VFX Composer's imperative core, we want to leave integration details to the user, who will just create instances of Particles and VFXMaterial explicitly and then invoke emit as they please. In R3F projects, we need to provide a little more structure to make things nice, and make some things that are important for VFX as easy as possible, including:

  • Enforcing/encouraging material reuse (and making sure materials are ready before the first effect instance is created)
  • Running multiple instances of the same effect within the same effect mesh (1 draw call) while still allowing the user to declare stand-alone effect instances that can move around in the world

The normal mesh/material/geometry structure that is typical for most R3F use cases is not enough here.

Suggested Structure

Given a Sparks object that is created through a createEffect() factory, representing a specific effect type (sparks), let's imagine the following structure:

/* This allows us to get Root, Effect and Emitter components that are already bound to each other */
const Sparks = createEffect({
  geometry: <planeGeometry />,
  material: <VFXMaterial />
})

<Sparks.Root>
  {/*
  An Effect instance wraps a Particles mesh with the generated material already assigned to it.
  It will also receive an instance of the geometry configured for the effect.
  We declare a singular root effect that will house all particles with a "world" scope.
  It will always remain at the scene origin, identity quaternion, scale 1.
  */}

  <Sparks.Effect />

  {/* Any emitter we encounter now will use it by default: */}

  <Sparks.Emitter setup={...} />

  {/*
  If we're happy with all particles existing in the top-level global effect, then we can stop
  here. But sometimes, we may want to create individual instances of effects that have their own
  transform so we can move, rotate, scale them individually. To enable this, we can simply create
  more instances of the effect mesh:
  */}

  <Sparks.Effect>
    {/* Emitters inside this mesh will use it instead of the top-level one: */}
    <Sparks.Emitter setup={...} />
  </Sparks.Effect>
</Sparks.Root>

Notes:

  • Sparks.Root might already include the creation of the root Sparks.Effect so the user doesn't have to do it themselves.
  • We might provide a <VFXComposerRoot> root that provides makeEffect and automatically creates the required root/mesh/etc. components.

feat: `<Delay>` component

It would be nice to have a <Delay> component to declaratively declare a dastardly delay. This will make effect waterfalls nicely composable (compared to manually juggling initialDelay props.)
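A minimal sketch of what such a component might look like, independent of the particle system itself:

import React, { useEffect, useState } from "react"

/* Renders its children only after `seconds` have passed. */
export const Delay = ({
  seconds,
  children
}: {
  seconds: number
  children?: React.ReactNode
}) => {
  const [ready, setReady] = useState(false)

  useEffect(() => {
    const timeout = setTimeout(() => setReady(true), seconds * 1000)
    return () => clearTimeout(timeout)
  }, [seconds])

  return ready ? <>{children}</> : null
}

Nested <Delay> components then compose naturally into the effect waterfalls described above.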

feat: Allow re-use of existing materials

As of now, the materials for particles are always created from scratch (using <ParticlesMaterial>, which uses three-custom-shader-material to patch our shader code into newly created instances of Three's built-in materials.)

Ideally, we'd be able to patch existing materials. This is important, e.g., for using materials imported from a GLTF file.

This probably requires a PR to three-custom-shader-material in order for it to accept existing instances of materials.

demo: Saturn [Looping Animation, live config] (Meshes)

Simple common animations like gears spinning, the fountain demo etc.

The specific one I want to recreate is: https://codepen.io/Yakudoo/pen/qbyga

As it's not thousands of meshes, I'm not sure we need to instance it, or exactly how to play that out; I'll start with octo shapes or something.

  • Span
  • Persist
  • Animate
  • Respond to controls
  • size variance
  • color variance

Nice to have:

  • Shadows
  • Collisions (even fake)
  • User interaction

bug: Billboarding in vertex shader is not correct

Currently, the (optional) billboarding is implemented in the vertex shader like this:

vec3 billboard(vec2 v, mat4 view) {
  /* Camera up and right vectors, extracted from the view matrix. */
  vec3 up = vec3(view[0][1], view[1][1], view[2][1]);
  vec3 right = vec3(view[0][0], view[1][0], view[2][0]);
  /* Span the quad along those vectors. */
  vec3 p = right * v.x + up * v.y;
  return p;
}

void main() {
  /* ... */

  if (u_billboard) {
    csm_Position = billboard(csm_Position.xy, viewMatrix);
  }
}

This doesn't appear to be good enough: the vertices start being projected into the wrong positions once the camera rotates far enough away from the identity quaternion.
