
Comments (7)

tychedelia commented on July 18, 2024

@mitchmindtree Some more notes from my research.

True, one thing that comes to mind is that today, by default, we target an intermediary texture for each window (rather than targeting the swapchain texture from the draw pipeline directly). The idea is that we can re-use the intermediary texture between frames: 1. for that Processing-style experience of drawing onto the same canvas, and 2. for the larger colour channel bit-depth. I wonder if enough bevy parts are exposed to allow us to have a similar setup as a plugin 🤔

Bevy's view logic uses the same intermediate texture pattern, maintaining two internal buffers in order to prevent tearing, etc. You can disable the clear color to get the sketch-like behavior.

Color depth isn't configurable, but using an HDR camera provides the same bit depth as our default (Rgba16Float). Otherwise, bevy uses Rgba8UnormSrgb. Maybe they'd accept a contribution here, although I'd bet these two options work for the great majority of users.
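
A minimal sketch, assuming a Bevy 0.12-era API, of spawning a camera with HDR enabled and clearing disabled, which should give both the Rgba16Float view target and the persistent-canvas behaviour discussed above:

use bevy::core_pipeline::clear_color::ClearColorConfig;
use bevy::prelude::*;

fn spawn_sketch_camera(mut commands: Commands) {
    commands.spawn(Camera3dBundle {
        camera: Camera {
            // HDR cameras render to an Rgba16Float view target.
            hdr: true,
            ..default()
        },
        camera_3d: Camera3d {
            // Skip clearing between frames for the sketch-like canvas behaviour.
            clear_color: ClearColorConfig::None,
            ..default()
        },
        ..default()
    });
}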

They don't support MSAA 16x, not sure why.

In terms of pipeline invalidation, you can see all the options that would cause a pipeline switch in bevy's mesh pipeline. Basically, the key is generated and used to fetch the pipeline, so if the key changes, a new pipeline is created. I believe this supports everything we track: topology, blend state, and msaa.
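
To illustrate that pattern (the types below are hypothetical stand-ins, not Bevy's actual pipeline-key API): the key is hashed and used to look up or lazily create a pipeline, so any change in topology, blend state, or sample count produces a different key and therefore a pipeline switch.

use std::collections::HashMap;

// Hypothetical stand-ins for the state that selects a render pipeline.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Topology { TriangleList, LineList, PointList }

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Blend { Alpha, Additive, Opaque }

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct PipelineKey {
    topology: Topology,
    blend: Blend,
    msaa_samples: u32,
}

// Placeholder for a compiled GPU pipeline.
struct Pipeline;

#[derive(Default)]
struct PipelineCache {
    pipelines: HashMap<PipelineKey, Pipeline>,
}

impl PipelineCache {
    // Fetch the pipeline for a key, building it on first use. A changed key
    // (new topology, blend state, or sample count) means a new pipeline.
    fn specialize(&mut self, key: PipelineKey) -> &Pipeline {
        self.pipelines.entry(key).or_insert_with(|| {
            // ...build the actual wgpu render pipeline from `key` here...
            Pipeline
        })
    }
}

This mirrors the key-driven specialization that bevy's mesh pipeline does internally, so everything we currently track should map onto it.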

Scissoring seems to be the main thing that isn't supported by default in the mesh pipeline. I think it might be simple to implement as a custom node in the render graph though? Definitely need to do more investigation here. It's supported in their render pass abstraction, just isn't used in the engine or in any examples.
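
At the wgpu level the scissor is just a call on the render pass, so a custom node that records its own pass should be able to set it. A minimal sketch, assuming `pass` is the wgpu::RenderPass our node records into:

// Restrict rasterization to a pixel rectangle of the render target.
fn apply_scissor(pass: &mut wgpu::RenderPass<'_>, x: u32, y: u32, width: u32, height: u32) {
    pass.set_scissor_rect(x, y, width, height);
}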

I'm like... 70% of the way through implementing our existing rendering logic, but as I read through the bevy source in doing so, I'm continually like, oh they're doing the exact same thing already.

Yeah, currently I think our draw API just reconstructs meshes each frame anyway, though I think we do re-use buffers where we can. So maybe it's not so crazy to reconstruct meshes each frame?

Yeah, I don't think the performance would be worse than our existing pattern, so this is likely totally fine.
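
As a rough sketch of what that could look like in bevy (the DrawMesh marker and the vertex data are hypothetical, assuming a Bevy 0.12-era API), a system can simply overwrite the mesh asset's attributes every frame and let bevy re-upload the buffers:

use bevy::prelude::*;

// Hypothetical marker for the entity that holds our per-frame draw mesh.
#[derive(Component)]
struct DrawMesh;

// Runs every frame: discard last frame's geometry and write the new vertices.
fn rebuild_draw_mesh(
    mut meshes: ResMut<Assets<Mesh>>,
    query: Query<&Handle<Mesh>, With<DrawMesh>>,
) {
    for handle in &query {
        if let Some(mesh) = meshes.get_mut(handle) {
            // Replace the position attribute wholesale; bevy re-uploads the buffer.
            let positions: Vec<[f32; 3]> =
                vec![[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 100.0, 0.0]];
            mesh.insert_attribute(Mesh::ATTRIBUTE_POSITION, positions);
        }
    }
}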

Hmm. 🤔 Much to consider. I'm definitely enjoying getting into the fiddly wgpu bits of the renderer, but it would also be great to reduce the amount of custom rendering code we need to maintain as that's kinda the whole point of this refactor.


tychedelia commented on July 18, 2024

It lives!

[image: PoC screenshot]

Will push my PoC to a branch in a bit. Here are some details about what I've done:

  • We are using a ViewNode, which means we hook into Bevy's windowing. So we attach our nannou-specific components to a view and are able to target that (a minimal sketch of the component setup follows this list). This works really well and integrates cleanly with the renderer. This render sits at the end of bevy's core 3d pass. Still need to experiment more with mixing in bevy meshes just to see what happens, but it potentially "just works", which would be so cool.
  • We're using bevy's camera system and view uniforms here, which is nice.
  • This also means that bevy is managing all the textures for us. We write to one of their view target textures in place of the intermediate texture, and bevy writes to the swapchain automatically. Need to investigate whether that also means we can hook into MSAA.
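
A minimal sketch of the ECS side of that first point (the NannouCanvas marker and setup system are hypothetical, and the ViewNode wiring itself is omitted): we tag the camera entity so the render node can query for exactly the views it should draw into.

use bevy::prelude::*;

// Hypothetical marker: any camera/view carrying this component gets the
// nannou draw output rendered into its view target.
#[derive(Component)]
struct NannouCanvas;

fn spawn_window_view(mut commands: Commands) {
    commands.spawn((
        Camera3dBundle {
            camera: Camera {
                // HDR gives us the Rgba16Float view target, matching our old default.
                hdr: true,
                ..default()
            },
            ..default()
        },
        NannouCanvas,
    ));
}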

There are a few outstanding issues to deal with in my PoC:

  • Right now, I'm just binding a mesh instance per window, with no support for binding textures. It's not clear to me whether we want to keep the same pattern of passing mesh + commands to the final render, or whether we might want to explore creating multiple meshes and baking the texture info into them.
  • Not clear to me where the best place to compute the raw vertex data is. Should we compute it in the main world, or extract our main world components into the render world and compute it there? This probably doesn't matter for performance and is more of an architecture concern.
  • We can handle scissoring in the ViewNode 👍.
  • No idea about the text stuff, anything that deals with assets we'll want to lean on bevy for.
  • A bunch of misc. code organization / pattern questions I still have.

TL;DR: Aside from some outstanding questions about feature parity, this approach is working surprisingly well, and while it still requires us to manage some wgpu stuff, the surface area is reduced a lot and improved by some of the patterns bevy offers. It would still be really interesting to explore hooking into bevy's pbr mesh stuff completely, but this is definitely a viable approach that demonstrates some of the benefits of our refactor.


tychedelia commented on July 18, 2024

Have started doing a PoC of directly porting the existing rendering algorithm to bevy's mid-level render api. Most of the existing code maps pretty directly to the different pieces of bevy's api, but there's a bit of complexity when it comes to rendering the view within bevy's existing render graph. Namely, it requires using bevy's camera system (i.e. camera = view).

In my previous comment (#954 (comment)) I mentioned that we might need to implement our own render node, but going this far basically means we live entirely outside of bevy's renderer, and I'm concerned it will make it more difficult to take advantage of features like windowing. It also may lead to strange interactions if users want to use both our draw api and bevy's mesh api.

One option I'm exploring is to just use bevy's orthographic camera and hook into their view uniform for our shaders. This is pretty straightforward, but may mean we also need to do things like spawn lights, etc.

Another alternative is to explore just using bevy's existing pbr mesh pipeline. A simple example of what this might look like:

use bevy::prelude::*;
use bevy::render::{mesh::Indices, render_resource::PrimitiveTopology};

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    commands.insert_resource(AmbientLight {
        color: Color::WHITE,
        brightness: 1.0,
    });
    commands.spawn(Camera3dBundle {
        transform: Transform::from_xyz(0.0, 0.0, -10.0).looking_at(Vec3::ZERO, Vec3::Z),
        projection: OrthographicProjection::default().into(),
        ..Default::default()
    });

    let tris = vec![
        Vec3::new(-5.0, -5.0, 0.0).to_array(),
        Vec3::new(-5.0, 5.0, 0.0).to_array(),
        Vec3::new(5.0, 5.0, 0.0).to_array(),
    ];
    let indices = vec![0, 1, 2];
    let colors = vec![
        Color::RED.as_linear_rgba_f32(),
        Color::RED.as_linear_rgba_f32(),
        Color::RED.as_linear_rgba_f32(),
    ];
    let uvs = vec![Vec2::new(1.0, 0.0); 3];
    let mesh = Mesh::new(PrimitiveTopology::TriangleList)
        .with_inserted_attribute(Mesh::ATTRIBUTE_POSITION, tris)
        .with_inserted_attribute(Mesh::ATTRIBUTE_COLOR, colors)
        .with_inserted_attribute(Mesh::ATTRIBUTE_UV_0, uvs)
        .with_indices(Some(Indices::U32(indices)));

    println!("{:?}", mesh);
    let mesh_handle = meshes.add(mesh);

    commands.spawn(PbrBundle {
        mesh: mesh_handle,
        // The pbr shader will multiply our vertex color by this, so we just want white.
        material: materials.add(Color::rgb(1.0, 1.0, 1.0).into()),
        transform: Transform::from_xyz(0.0, 0.0, 0.0),
        ..default()
    });
}

The issue here is that we either need to cache geometry or clear the meshes every frame. This may or may not be a big deal, but bevy definitely doesn't assume that meshes are drawn in a kind of immediate mode.

I think it's worth trying to complete an as-is port of the renderer to the mid-level bevy api just to see what it looks like, but my experience so far is definitely generating more questions. Ultimately, seeing actual code will probably help clarify!

TL;DR:

  1. Can we get away with just using bevy's high-level pbr mesh api? What would need to change in our draw api to do so?
  2. If we want to use their mid-level render api, how can we interact with the parts of the engine we still want to use (windowing, input, etc.)?
  3. Will it be possible to support both our draw api and users doing arbitrary bevy ops, spawning meshes, etc.?


tychedelia commented on July 18, 2024

The bevy asset system is actually incredibly helpful for getting user-uploaded textures to work. When a user uploads a texture, bevy by default creates a Sampler, Texture, TextureView, etc. This means we can import these already-instantiated resources straight into our render pipeline. Configuration (e.g. for the sampler) is handled by bevy, so we may need to figure out how to manage additional configuration options there. One thing to note is that assets upload asynchronously, so there's a bit of additional complexity there.
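
A small sketch of the user-facing side (the asset path, resource, and system names are placeholders): loading an image yields a Handle<Image>, bevy creates the GPU Texture, TextureView, and Sampler once the asynchronous load finishes, and consumers have to tolerate the asset not being ready yet.

use bevy::prelude::*;

#[derive(Resource)]
struct UserTexture(Handle<Image>);

// Kick off the asynchronous load; the GPU resources appear once it completes.
fn load_texture(mut commands: Commands, asset_server: Res<AssetServer>) {
    let handle: Handle<Image> = asset_server.load("textures/user_upload.png");
    commands.insert_resource(UserTexture(handle));
}

// Because loading is async, check that the image exists before using it.
fn use_texture(user_texture: Res<UserTexture>, images: Res<Assets<Image>>) {
    if let Some(image) = images.get(&user_texture.0) {
        info!("user texture is ready: {:?}", image.size());
    }
}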


tychedelia commented on July 18, 2024

Doing a little more research — seems like we'll also need to implement a node in the render graph so we can manage some features of the render pass itself. There's an example of that here, but the main 3d pipeline source in the engine itself may be a better resource.


mitchmindtree commented on July 18, 2024

In my previous comment (#954 (comment)) I mentioned that we might need to implement our own render node, but going this far basically means we live entirely outside of bevy's renderer,

I love the idea of attempting to use bevy's camera and fitting the Draw API in at the highest level possible in order to work nicely alongside other bevy code, but I wouldn't be too surprised if it turns out we do need to target some mid- or lower-level API instead, due to the way that Draw kind of builds a list of "commands" that translate to fairly low-level GPU commands (e.g. switching pipelines depending on blend modes, setting different bind groups, changing the scissor, etc).
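
For context, a rough sketch of the kind of command list Draw builds up (the variant names are illustrative, not nannou's actual types): each frame, the draw calls flatten into an ordered list of low-level render commands that get replayed against a render pass.

// Illustrative only: an ordered, replayable list of low-level render state
// changes and draw calls, roughly what the Draw API boils down to each frame.
enum RenderCommand {
    // Switch pipelines, e.g. because the blend mode or topology changed.
    SetPipeline { pipeline_id: usize },
    // Bind a different texture/sampler group for the following draws.
    SetBindGroup { group_id: usize },
    // Restrict rasterization to a rectangle of the target.
    SetScissor { x: u32, y: u32, width: u32, height: u32 },
    // Draw a range of indices from the shared mesh buffers.
    DrawIndexed { start: u32, end: u32 },
}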

and I'm concerned will make it more difficult to take advantage of features like windowing.

True, one thing that comes to mind is that today, by default, we target an intermediary texture for each window (rather than targeting the swapchain texture from the draw pipeline directly). The idea is that we can re-use the intermediary texture between frames: 1. for that Processing-style experience of drawing onto the same canvas, and 2. for the larger colour channel bit-depth. I wonder if enough bevy parts are exposed to allow us to have a similar setup as a plugin 🤔

The issue here is that we either need to cache geometry or clear the meshes every frame. This may or may not be a big deal, but bevy definitely doesn't assume that meshes are drawn in a kind of immediate mode.

Yeah, currently I think our draw API just reconstructs meshes each frame anyway, though I think we do re-use buffers where we can. So maybe it's not so crazy to reconstruct meshes each frame? Hopefully this turns out to gel OK with bevy 🙏

Looking forward to seeing where your bevy spelunking takes this!!


tychedelia commented on July 18, 2024

Closing this and opening a new ticket to move us to the "mid level" render APIs.

