
three-d's People

Contributors

adhami3310, asny, bananaturtlesandwich, bonsaiden, breynard0, bwanders, coderedart, d1plo1d, dbuch, emilk, finomnis, haraldreingruber-dedalus, hoijui, iwanders, jbrd, joshburkart, mdegans, mikayex, paul2t, raevenoir, ronan-milvue, rynti, silverlyra, stepancheg, surban, swiftcoder, thundr67, wlgys8, x3ro, yeicor


three-d's Issues

Sound engine

Hi,

Really cool project you got there :)

As I am searching for a 3D game engine that is WASM compatible and easy to use, three-d seems to check all the boxes. I am just wondering about audio. Do you know if there has been any attempt to add audio to three-d? It is surely possible, but I would like to avoid reinventing the wheel here. So if you happen to have done experiments in this area, or know someone who has, please let me know.

strange egui Window behavior

The egui SidePanel, as demonstrated in examples/lighting, has been very useful for me, and I'm now adding more egui features on other parts of my interface. Adding an egui Window causes strange behavior:

  • mouseclick coordinates seem to be mis-mapped
  • contained text is drawn outside the window
  • the window changes aspect ratio as it is dragged

This can easily be reproduced with the example code: at examples/lighting/main.rs:220, add

egui::Window::new("My Window").show(gui_context, |ui| {
    ui.label("Hello World!");
});

The result will look something like this:
(screenshot: Screenshot_2021-04-05_11-00-07)

Headless window

It should be possible to create a new graphics context (and maybe event handling) without an actual window. In that case there is no screen to write to (you have to write to a render target), so the Screen struct should probably be coupled to the window.

The CameraState First doesn't seem to work.

I have tried it in both my own project and the examples. When switching to that state in the examples, the rendering seems to freeze, and when a project starts with the CameraState set to First, nothing seems to be rendered. Running on the latest commit on the master branch (b04c17f).

Improve error handling

  • Improve the error types and make them more consistent - suggestions are welcome; should it be one ThreeDError struct with an enum defining the kind of error, or several error types, and in that case at what level?
  • Avoid unwrap() and other panics
  • Use thiserror for less boilerplate code (see the sketch below)

See also #118
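
For the thiserror point, a minimal sketch of what a thiserror-based error type could look like; the variant names here are hypothetical, not three-d's actual errors:

use thiserror::Error;

#[derive(Debug, Error)]
pub enum CoreError {
    #[error("failed to compile shader: {0}")]
    ShaderCompilation(String),
    #[error("failed to find uniform {name}")]
    UniformNotFound { name: String },
    #[error("io error: {0}")]
    Io(#[from] std::io::Error),
}

The #[error(...)] attributes replace hand-written Display implementations, which is where most of the boilerplate goes.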

[Question] Compiler wasm fail

Hi @asny, thanks for your creative work. 😄

I am reimplementing the triangle demo. I import three-d in Cargo.toml as below:

[dependencies.three-d]
version="0.6.2"
default-features = false
features = ["glutin-window", "canvas"]

and build as below

wasm-pack build --target web --out-name me --out-dir pkg

but I got this error:

error[E0422]: cannot find struct, variant or union type `WindowSettings` in this scope
 --> src/lib.rs:8:30
  |
8 |     let window = Window::new(WindowSettings {
  |                              ^^^^^^^^^^^^^^ not found in this scope

My code is the same as examples/triangle.

And I can't find the WindowSettings struct in the crates.io docs 😂

Is there anything I have got wrong?

egui scrollable windows draw features outside window bounds

Some parts of an egui window's contents are still drawn when scrolled out of the window.

(screenshot: window_scroll_bug)

Also notice the titlebar in the right image, which is partially overdrawn with text from inside the window.
This occurs in both desktop and web mode.

Note: An easy workaround is to disable window scrolling, if the UI can be made to fit.

How to implement FXAA?

Hi,

Me again :)

The game I am making with three-d is starting to look great. Yet, since I target WebGL and most browsers don't provide MSAA, my game is heavily aliased, especially since I have a very long edge that goes across the screen. Since my OpenGL is kinda rusty (pun not intended), I would like to know if it is possible to add FXAA to solve this. The shader itself looks quite simple, as shown here, but I am not sure how it fits into the rendering pipeline. Is this something you have experience with? If yes, do you mind adding a sample to the repo to show how to use it?
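
One possible shape for this, as a hedged sketch: render the scene into an offscreen color texture (e.g. a ColorTargetTexture2D, as used elsewhere in these issues) and then run a full-screen post-process pass over it into the default framebuffer. The shader below is a heavily simplified, luminance-based edge blur in the spirit of FXAA, not the full algorithm, and the wiring into three-d's render-target/image-effect API is an assumption rather than confirmed usage:

// Simplified FXAA-style post-process shader, kept as a Rust string constant.
// Assumed pipeline (not verified against three-d's API): 1) render the scene
// into an offscreen color texture, 2) draw a full-screen quad to the screen
// with this shader sampling that texture. Version/precision directives omitted.
const FXAA_FRAGMENT_SHADER: &str = r#"
uniform sampler2D colorTexture;   // the offscreen scene render
uniform vec2 inverseResolution;   // 1.0 / viewport size in pixels
in vec2 uv;
out vec4 outColor;

float luma(vec3 c) { return dot(c, vec3(0.299, 0.587, 0.114)); }

void main() {
    vec3 c = texture(colorTexture, uv).rgb;
    vec3 n = texture(colorTexture, uv + vec2( 0.0,  1.0) * inverseResolution).rgb;
    vec3 s = texture(colorTexture, uv + vec2( 0.0, -1.0) * inverseResolution).rgb;
    vec3 e = texture(colorTexture, uv + vec2( 1.0,  0.0) * inverseResolution).rgb;
    vec3 w = texture(colorTexture, uv + vec2(-1.0,  0.0) * inverseResolution).rgb;

    float lumaMin = min(luma(c), min(min(luma(n), luma(s)), min(luma(e), luma(w))));
    float lumaMax = max(luma(c), max(max(luma(n), luma(s)), max(luma(e), luma(w))));

    // Blend only across strong luminance edges; otherwise keep the pixel as-is.
    if (lumaMax - lumaMin < 0.1) {
        outColor = vec4(c, 1.0);
    } else {
        outColor = vec4((c + n + s + e + w) / 5.0, 1.0);
    }
}
"#;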

Simple objects (square, box, sphere etc.)

There should be a way to draw dots and lines (which is helpful for getting started). I have not found a way to do that, but it must be easy somehow. Perhaps the triangle example could be extended with these.

Also, I guess there's a way to render text (because it can render a GUI), which is helpful for debugging (e.g. to print various values on the screen). Again, an example of this might be helpful for ramping up.

example/texture: inconsistent aspect ratio (in browser)

The aspect ratio is inconsistent during browser window resizing with example/texture (and probably other examples).

It seems that the aspect ratio is determined (indirectly) in texture/main.rs with
camera.set_size(frame_input.screen_width as f32, frame_input.screen_height as f32);
which uses the browser window/tab geometry. However, the actual render area is determined in index.html:
<canvas id="canvas" height="720" width="1280" />

When resizing the window with this fixed canvas, some 'unintuitive' scaling effects can be observed.

Is there an easy way to replace the frame_input.screen_* values with the actual render area dimensions? This would be useful not only for pages with a fixed canvas size, but also for pages with a border, sidebar, or any other difference between window and canvas size.
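
One possible workaround, assuming the canvas element keeps the id "canvas" from index.html and that the corresponding web-sys features ("Window", "Document", "Element", "HtmlCanvasElement") are enabled, is to read the canvas drawing-buffer size directly instead of the screen_* values:

use wasm_bindgen::JsCast;
use web_sys::HtmlCanvasElement;

// Returns the canvas drawing-buffer size in pixels (the actual render area),
// or None if the element cannot be found.
fn canvas_size() -> Option<(u32, u32)> {
    let canvas: HtmlCanvasElement = web_sys::window()?
        .document()?
        .get_element_by_id("canvas")?
        .dyn_into()
        .ok()?;
    Some((canvas.width(), canvas.height()))
}

// In the render loop, this could then replace the screen_* values:
// if let Some((w, h)) = canvas_size() {
//     camera.set_size(w as f32, h as f32);
// }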

shading module is not published in crates.io

The entire shading module is missing from the published crate, as are other additions such as this one from object/Mesh.rs:

    pub fn new_with_material(
        context: &Context,
        cpu_mesh: &CPUMesh,
        material: &Material,
    ) -> Result<Self, Error> {
        let mut mesh = Self::new(context, cpu_mesh)?;
        mesh.material = material.clone();
        Ok(mesh)
    }

Edit: typo

Support for non-integer device pixel ratios

When setting up the FrameInput structure, the device pixel ratio is cast from f32 to usize, so if you happen to be using a display scaling factor of, say, 1.5, it will be cast to 1.

This leads to a bug in which egui layouts (such as TopPanel) don't fill the entire screen, because there is a disparity between the viewport size and the window size being passed to the egui integration.
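
To make the rounding problem concrete, a small sketch (the variable names are illustrative, not three-d's actual FrameInput code):

fn main() {
    // With a 1.5 display scaling factor:
    let device_pixel_ratio: f32 = 1.5;
    let logical_width: u32 = 1280;

    // Truncating the ratio to an integer loses the fractional part entirely.
    let truncated = device_pixel_ratio as usize;                   // == 1
    let wrong_physical_width = logical_width as usize * truncated; // == 1280

    // Keeping the ratio as a float and converting at the end preserves it.
    let physical_width = (logical_width as f32 * device_pixel_ratio).round() as u32; // == 1920
    assert_ne!(wrong_physical_width as u32, physical_width);
}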

vsync error message (+ partial fix?)

On the latest master commit (bd52fac), when running examples/texture, I get

thread 'main' panicked at 'called Result::unwrap() on an Err value: WindowCreationError(CreationErrors([NoAvailablePixelFormat, OsError("Couldn't find any available vsync extension")]))', examples/texture/main.rs:8:47

I was able to fix this by adding .with_vsync(true) to glutin_window.rs:70:

            Ok(ContextBuilder::new()
                .with_vsync(true)
                .build_windowed(window_builder, event_loop)?)

I don't understand why explicitly turning vsync on would solve a problem of a missing vsync extension, but it works.

missing pixel formats: RGBA32F + RGBA8

My project uses the RGBA32F and RGBA8 pixel formats. Example:

let tmp_texture = ColorTargetTexture2D::new(
    &context,
    frame_input.viewport.width,
    frame_input.viewport.height,
    Interpolation::Nearest,
    Interpolation::Nearest,
    None,
    Wrapping::ClampToEdge,
    Wrapping::ClampToEdge,
    Format::RGBA32F,
).unwrap();

let out_texture = ColorTargetTexture2D::new(
    &context,
    frame_input.viewport.width,
    frame_input.viewport.height,
    Interpolation::Nearest,
    Interpolation::Nearest,
    None,
    Wrapping::ClampToEdge,
    Wrapping::ClampToEdge,
    Format::RGBA8,
).unwrap();

Both of these formats have disappeared from cpu_texture's Format enum sometime in the last two weeks. Can they be easily re-added?

possible regression: web view adaptively resizes too large for current window

I updated my project to three-d 0.7.3 and tested web mode. (I admit, I haven't tested web mode in too long, so I'm not sure how old this effect is.)

When I resize my browser window, the three-d rendering area resizes with it, but now, it is always slightly too large for the window and scrollbars are always visible regardless of the window size.

(screenshot: Screenshot_20210607_180834)

I don't think I've changed any code related to the three-d window setup in quite a while, so I'm guessing it's a new three-d behavior.

Camera Controls not working on Mobile (Wasm)

It looks like the input RefCell is mutably borrowed somewhere else; here's the trace:

    at std::panicking::rust_panic_with_hook::h9d91fa0fae16500f (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[3427]:0x30db4a)
    at std::panicking::begin_panic_handler::{{closure}}::h2d82b5b2db976f3f (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[5345]:0x362b65)
    at std::sys_common::backtrace::__rust_end_short_backtrace::h24539600b7e2452c (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[14215]:0x4175f6)
    at rust_begin_unwind (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[12798]:0x4054ef)
    at core::panicking::panic_fmt::h91d2023e5afe1929 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[14217]:0x41765a)
    at core::option::expect_none_failed::h203584ddbc785ca2 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[6247]:0x3802e2)
    at core::result::Result<T,E>::expect::h2cecc8562387c163 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[4601]:0x346426)
    at core::cell::RefCell<T>::borrow_mut::hac48a91c65b3f340 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[5226]:0x35e8d0)
    at WASM::window::canvas::Window::add_touchmove_event_listener::{{closure}}::h664de35dc4bfa048 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[240]:0xeaf13)
    at <dyn core::ops::function::FnMut<(A,)>+Output = R as wasm_bindgen::closure::WasmClosure>::describe::invoke::h7ed178cfd3254e2f (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[3649]:0x3197c3)
    at __wbg_adapter_24 (http://192.168.1.115:3000/static/js/0.chunk.js:898:174)
    at HTMLDocument.real (http://192.168.1.115:3000/static/js/0.chunk.js:860:14)

Every touch event / add_touch* function in canvas::Window throws this.
The Wasm VM should be single-threaded, so I really do not understand how two RefCells can be borrowed simultaneously.

Consider splitting the crate

This crate is really two projects with a clear border between them:

  • platform-independent OpenGL bindings
  • higher level utilities like Mesh or light/shadows

Perhaps separating them would make it better.

Improve CameraControl

@asny This issue is mostly to report that I pulled bbe2a99 and tested all the examples on linux desktop: every example works, no problems were encountered.

I noticed that the mousewheel zoom step has changed; in practice, this makes fog, lighting, fractal, texture, and wireframe a bit hard to use. I'm using an exponential zoom model in my project and it works well (for 2D data); this might make sense for these examples.
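
For reference, the exponential model mentioned above is just a constant multiplicative factor per wheel step; a purely illustrative helper, not part of three-d's CameraControl:

/// Exponential zoom: every wheel "notch" multiplies the camera distance by a
/// constant factor, so a zoom step feels the same at any scale.
fn zoom_distance(distance: f32, wheel_delta: f32, sensitivity: f32) -> f32 {
    // wheel_delta > 0 zooms in (distance shrinks), < 0 zooms out.
    distance * (-sensitivity * wheel_delta).exp()
}

// e.g. with sensitivity = 0.1, each notch changes the distance by roughly 10%.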

API request: RenderTarget functions

@asny: related to discussion #93
I've implemented a custom multipass render mode (averaging pixel colors for overlapping photos) using three-d's RenderTarget and ImageEffect structs. However, I had to temporarily extend three-d's API locally.

RenderTarget uses copy and copy_to_screen (and related _color and _depth functions) to render its content to another RenderTarget or to the screen display buffer, respectively. They do this with

  • an ImageEffect to run a copy fragment shader on every pixel
  • a RenderStates struct

To implement my custom render mode, I made more general variants of the color-only versions of these functions, which I tentatively called apply_effect_color and apply_effect_to_screen_color. These differ from the copy functions only in that the ImageEffect and RenderStates are provided as parameters, instead of being set inside the function.

pub fn apply_effect_color(&self, other: &Self, effect: &ImageEffect, viewport: Viewport) -> Result<(), Error>
pub fn apply_effect_to_screen_color(&self, effect: &ImageEffect, render_states: &RenderStates, viewport: Viewport) -> Result<(), Error>

My request is to add RenderTarget functions that expose this functionality; however, we can probably do better than the interface I used here. I'm not sure if this is consistent with the API's style, so feel free to ignore or modify this if it doesn't match:

All of the RenderTarget::copy* functions could be expressed as one function, like:

pub fn apply_effect(&self, buffers: &Buffers, destination: &Destination, image_effect: &ImageEffect, render_states: &RenderStates, viewport: Viewport) -> Result<(), Error>

with something like

pub enum Buffers {
    Color,
    Depth,
    ColorAndDepth,
}
pub enum Destination<'a> {
    Screen,
    Other(&'a RenderTarget),
}

and possibly enums to switch between default and custom values for the ImageEffect and RenderState, if non-awkward names can be found.

This could be used to reimplement the existing functions, as well.
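
For illustration, a call with the proposed signature might look like this (entirely hypothetical, reusing the names sketched above):

// Runs a caller-supplied effect into the default framebuffer.
render_target.apply_effect(
    &Buffers::Color,
    &Destination::Screen,
    &averaging_effect,          // an ImageEffect built by the caller
    &RenderStates::default(),   // assuming RenderStates implements Default
    viewport,
)?;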

Feature Request: Mesh with "material groups"

A building mesh needs a different material for the roof, a box may have 6 different side textures,
and a large static game level may have a great many faces with several different materials.
This could be done with multiple meshes too, but that takes more memory (the Safari browser gets angry)
and CPU load, and maybe even GPU load.

In ThreeJS a mesh may have "Groups"; in BabylonJS meshes have SubMeshes. And three-d?
There is always a vector of vertices and a vector of indices, each 3 indices defining a face.
Currently they all get the same Material (one may do funny things with UVs, but not this time).

Feature Request:
The mesh material may also be a vector of materials.
A mesh may have a vector of materialGroups (the name is a placeholder).
A group has a face start index, a face count, and a material index.
For safety, the method addMaterialGroup may also take a faceCount parameter
to check that the mesh and all groups fit together.

Now I start guessing (about WebGL):
The materials, vertices, and indices are sent to the GPU only once.
But for each material, a draw call to the GPU is needed in the render cycle.
So the faces using a certain material should be sorted into one group by the user.
Is it really necessary to do each draw call separately?
Or could one send a vector/list of them to the GPU and only "call" this list?
If yes, we could draw a game level with a single "call" :-)

See also #106
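
A minimal sketch of the proposed grouping data, with illustrative names only:

/// One "material group": a consecutive run of faces that share a material.
pub struct MaterialGroup {
    pub face_start: usize,     // index of the first face (triangle) in the run
    pub face_count: usize,     // number of consecutive faces in the run
    pub material_index: usize, // index into the mesh's vector of materials
}

// A mesh would then hold `materials: Vec<Material>` and
// `material_groups: Vec<MaterialGroup>`, and issue one draw call per group
// (or one per material, if the groups are sorted/merged by material first).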

Specify UV coordinates for a texture

Hi, great project btw! I'm using this as a base for a proof-of-concept web map editor for an old game. I managed to get models from that game rendering with this code 🎉

(image)

But I can't seem to figure out textures: I have rendered a texture onto a mesh, but it seems there's no way to specify a UV map for it. Is this possible right now?

Here's what it should look like:

(image)

And one other slightly related thing: we have some meshes with multiple textures, and thus multiple sets of indices for specifying surfaces. Is there a way to specify this using the current codebase?

Thanks!

FailedToFindUniform error

In bd52fac, I get the following error from examples/mandelbrot:

thread 'main' panicked at 'called Result::unwrap() on an Err value: FailedToFindUniform { message: "Failed to find uniform normalMatrix" }', examples/mandelbrot/main.rs:55:75

and this from examples/statues

thread 'main' panicked at 'called Result::unwrap() on an Err value: FailedToFindUniform { message: "Failed to find uniform normalMatrix" }', examples/statues/main.rs:58:12

I'm also getting something similar in my own project code, as I'm replacing the phong renderer with code based on Mesh::render. I tried to look for a cause and I haven't found one yet; hopefully the cause is recognizable.

request: add mouse position to Event::MouseWheel

My project's UI recenters zoom actions such that the cursor remains on a constant location in the displayed data (like OSM/google maps): for this, I need the mouse position when handling mousewheel events. Right now, I'm recording it when it comes in from move and click events, but it would be more convenient to get it in the MouseWheel event itself.
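
For context, the recentering math itself is simple; a purely illustrative 2D helper (not a three-d API), ignoring y-axis flipping and camera conventions:

/// Zoom about the cursor so the world point under it stays fixed on screen.
/// `scale` is world units per pixel; `zoom > 1.0` zooms in.
fn zoom_about_cursor(
    center_world: (f32, f32),     // world position at the screen center
    cursor_px: (f32, f32),        // cursor position in pixels
    screen_center_px: (f32, f32), // screen center in pixels
    scale: f32,
    zoom: f32,
) -> ((f32, f32), f32) {
    let new_scale = scale / zoom;
    let dx = cursor_px.0 - screen_center_px.0;
    let dy = cursor_px.1 - screen_center_px.1;
    // world_under_cursor = center + d * scale must match before and after,
    // so the center shifts by d * (scale - new_scale).
    let new_center = (
        center_world.0 + dx * (scale - new_scale),
        center_world.1 + dy * (scale - new_scale),
    );
    (new_center, new_scale)
}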

examples/texture shader problem

Hi, let me start by saying thanks for writing three-d. I'm hoping to use it (or something like it) for a project that renders panorama photos with a measurement overlay, and it looks like an easy way to get a web implementation going quickly.

I'm looking through the examples for code I can use, and I'm getting a shader error from examples/texture:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: FailedToFindUniform { message: "Failed to find uniform viewProjectionInverse" }', examples/texture/main.rs:94:16

I noticed viewProjectionInverse in the most recent commit (https://github.com/asny/three-d/commit/4340b3ea1240d22454b41ff747022ab7a0e74224); is this still a work-in-progress?

change render_states::BlendParameters::default() values

Current defaults for BlendParameters act as a non-blending overwrite even if blending is enabled. There's nothing technically wrong with this, but it might be convenient to set the values to the traditional intuitive blend settings:

impl Default for BlendParameters {
    fn default() -> Self {
        Self {
            source_rgb_multiplier: BlendMultiplierType::SrcAlpha,
            source_alpha_multiplier: BlendMultiplierType::One,
            destination_rgb_multiplier: BlendMultiplierType::OneMinusSrcAlpha,
            destination_alpha_multiplier: BlendMultiplierType::Zero,
            rgb_equation: BlendEquationType::Add,
            alpha_equation: BlendEquationType::Add
        }
    }
}
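
For reference, these proposed defaults correspond to standard "source over" alpha blending: out_rgb = src_rgb * src_alpha + dst_rgb * (1 - src_alpha), and out_alpha = src_alpha * 1 + dst_alpha * 0 = src_alpha.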

examples on linux desktop: "InvalidNumberOfSamples" error

On linux desktop, I tried the fog, fireworks, and lighting examples in ace624a, and they all fail to start with an error like
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidNumberOfSamples', examples/fog/main.rs:6:56.
It's a new error code that was added since 0.6.0 and only exists in glutin_window.rs.

Possible to clear each color channel in a render target separately

@asny
I'm investigating some strange transparency problems that may only exist on web+linux. In areas of my scene which use transparency, I can see through my browser window and read text in other windows behind it. To continue my diagnosis, I need a way to clear the entire display buffer to 1.0 alpha without touching the RGB contents of the buffer. Does three-d have some way to do this? Or, can the display buffer be changed to RGB-only?

I'm not sure if color mask applies to a buffer clear operation, but that might be part of the solution.

Originally posted by @kklibo in #60
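
Regarding the color mask question: in plain OpenGL the color write mask does apply to clears, so one possible approach, sketched here with the glow crate directly rather than three-d's own context wrapper (which may or may not expose these calls), would be:

use glow::HasContext;

// Clears only the alpha channel to 1.0, leaving the RGB contents untouched.
unsafe fn clear_alpha_to_one(gl: &glow::Context) {
    gl.color_mask(false, false, false, true); // write alpha only
    gl.clear_color(0.0, 0.0, 0.0, 1.0);
    gl.clear(glow::COLOR_BUFFER_BIT);
    gl.color_mask(true, true, true, true);    // restore full color writes
}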

Apple support issues

I recently uploaded a small web demo of a three-d -based web application, and several people tried it from a few different platforms and browsers. Almost everyone saw the expected functionality, but there were two people who used Apple products that had some strange problems:

  • One Apple user was using Safari on an iPad: after installing the WebGL2 extension, my code was able to run, but the egui sidebar shrank as the program ran. I think it started at the correct size, but slowly decreased its width and font size until it was no longer legible. Also, when testing the three-d lighting example on this same platform, UI touch inputs seemed to be mapped to the wrong parts of the screen (it's possible that this happened after a runtime switch to portrait mode, but I'm not sure).
  • The other Apple user was using Safari on a desktop machine: after installing the WebGL2 extension, the program ran, but anything that was rendered with a three-d Mesh was missing.

I wish I could have tracked down a more definite set of steps to reproduce these bugs, but I don't use any Apple products personally.

Forest example panics

9c580fe
On start of examples/forest:

thread 'main' panicked at 'called Result::unwrap() on an Err value: FailedToFindAttribute { message: "The attribute uv_coordinates is sent to the shader but never used." }', examples/forest/main.rs:45:39

.3d files?

Hi Asny, this is a very cool tool, thanks a lot for sharing.

This is not really an issue, more of a question. I see that in some of your examples you're using ".3d" files for your assets, and I don't have a clue what kind of format that is. I am a noob at 3D modelling, so that doesn't help much. Do you mind giving some clarification? Which tool are you using to create those 3D assets?

Offscreen Canvas Support for WASM

It would be really nice if this crate offered support for rendering via the OffscreenCanvas API, so that the potentially compute-intensive work can be offloaded to e.g. a web worker and thus run on a separate thread from the main thread.

If I see it correctly, this would just require something like the current canvas renderer, with the difference that the context can be passed in instead of the renderer creating the canvas by itself. Any calls to JS APIs that are not available in a web worker (window, document, etc.) should be avoided.

The rest (transferring control/context to the offscreen renderer, setting up the worker, etc.) could be handled by the user of this crate.

Aside from the offscreen use case, I think this change could also be useful for users who want or need to set up the <canvas> in their application code, and thus don't need three-d to create the canvas element (e.g. I currently want to integrate three-d into a React app, where I would create the canvas element via React and basically just want to pass that canvas element to three-d via wasm-bindgen).
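
For what it's worth, the hand-over on the web-sys side might look roughly like this (a sketch under assumptions: the web-sys features "HtmlCanvasElement", "OffscreenCanvas", and "WebGl2RenderingContext" are enabled, and the worker/postMessage plumbing is omitted):

use wasm_bindgen::JsCast;
use web_sys::{HtmlCanvasElement, OffscreenCanvas, WebGl2RenderingContext};

// Main thread: detach the rendering surface from the DOM canvas. The returned
// OffscreenCanvas would then be transferred to a web worker via postMessage.
fn detach(canvas: &HtmlCanvasElement) -> OffscreenCanvas {
    canvas.transfer_control_to_offscreen().unwrap()
}

// Worker: create a WebGL2 context from the transferred OffscreenCanvas; this
// is the context the issue asks three-d to accept instead of creating its own.
fn context_from(offscreen: &OffscreenCanvas) -> WebGl2RenderingContext {
    offscreen
        .get_context("webgl2")
        .unwrap()
        .unwrap()
        .dyn_into::<WebGl2RenderingContext>()
        .unwrap()
}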

Fix statues example

There are multiple errors in the statues example:

  • In the 0.7.3 release, only one of the statues has shadows (fixed in 7be0866)
  • On master, mip mapping is not working for the statues' textures in Chrome (fixed in 3381bde)
  • On master, the lighting is very dark (fixed in 604fa45)
