asny / three-d
2D/3D renderer - makes it simple to draw stuff across platforms (including web)
License: MIT License
Hi,
Really cool project you got there :)
As I am searching for a 3D game engine that is WASM compatible and easy to use, three-d seems to check all the boxes. I am just wondering about audio. Do you know if there has been any attempt to add audio to three-d? It is surely possible, but I would like to avoid reinventing the wheel here. So if you happen to have done experiments in this area, or know someone who did, just let me know.
RenderStates::BlendParameters::TRANSPARENCY
(actually, its equivalent before the refactor) was originally set to the "traditional alpha blend equations" described here: https://www.khronos.org/opengl/wiki/Blending#Colors
In bc2c3c3, the source and destination alpha values were reversed. Was this intentional? I haven't extensively tested this default mode, but it seems like it would not work properly for most normal blending tasks.
The egui SidePanel, as demonstrated in examples/lighting, has been very useful for me, and I'm now adding more egui features on other parts of my interface. Adding an egui Window causes strange behavior:
This can be reproduced with example code easily: at examples/lighting/main.rs:220, add
egui::Window::new("My Window").show(gui_context, |ui| {
ui.label("Hello World!");
});
It should be possible to create a new graphics context (and maybe event handling) without an actual window. In that case there is no screen to write to (you have to write to a render target), so the Screen struct should probably be coupled to the window.
I have tried it in both my own project and the examples. When switching to the state in the examples, the rendering seems to freeze, and when a project starts with the camera state set to FIRST, nothing seems to be rendered. Running on the latest commit on the master branch (b04c17f).
Hi @asny, thanks for your creative work!
I am reimplementing the triangle demo. I imported three-d in Cargo.toml as below
[dependencies.three-d]
version="0.6.2"
default-features = false
features = ["glutin-window", "canvas"]
and built as below
wasm-pack build --target web --out-name me --out-dir pkg
but I got this error:
error[E0422]: cannot find struct, variant or union type `WindowSettings` in this scope
--> src/lib.rs:8:30
|
8 | let window = Window::new(WindowSettings {
| ^^^^^^^^^^^^^^ not found in this scope
My code is the same as examples/triangle, and I can't find the WindowSettings struct in the crates.io docs. Is there anything I have gotten wrong?
Some parts of an egui window's contents are still drawn when scrolled out of the window.
Also notice the titlebar in the right image, which is partially overdrawn with text from inside the window.
This occurs in desktop and web mode.
Note: An easy workaround is to disable window scrolling, if the UI can be made to fit.
Hi,
Me again :)
The game I am making with three-d is starting to look great. Yet, since I target WebGL and most browsers don't provide MSAA, my game is heavily aliased, especially since I have a very long edge that goes across the screen. Since my OpenGL is kinda rusty (pun not intended), I would like to know if it is possible to add FXAA to solve this. The shader itself looks quite simple as shown here, but I am not sure how it fits into the rendering pipeline. Is this something you have experience with? If yes, do you mind adding a sample to the repo showing how to use it?
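In case it helps scope the work: the first step of FXAA is just a perceptual-luma contrast test that decides which pixels get the expensive edge blending. Here is a CPU-side sketch of that step for illustration only (these are not shader or three-d functions, just the math):

```rust
// FXAA's first step, sketched on the CPU: compute perceptual luma per
// pixel, then skip pixels whose local contrast with their neighbors is
// below a threshold. Only high-contrast (edge) pixels would receive
// the expensive directional blending in the real shader.
fn luma(rgb: (f32, f32, f32)) -> f32 {
    // Standard Rec. 601 luma weights.
    0.299 * rgb.0 + 0.587 * rgb.1 + 0.114 * rgb.2
}

fn needs_aa(center: f32, neighbors: &[f32], threshold: f32) -> bool {
    let max = neighbors.iter().cloned().fold(center, f32::max);
    let min = neighbors.iter().cloned().fold(center, f32::min);
    max - min > threshold
}

fn main() {
    // A black pixel next to a white one is clearly an edge.
    let edge = needs_aa(luma((0.0, 0.0, 0.0)), &[luma((1.0, 1.0, 1.0))], 0.3);
    println!("edge pixel needs AA: {}", edge);
}
```

In a three-d pipeline this would presumably run as a post-process over a full-screen quad sampling the rendered color texture, which is why I'm asking how it fits in.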
There should be a way to draw dots and lines (which is helpful for getting started). I have not found a way to do that, but it must be easy somehow. Perhaps the triangle example could be extended with these.
Also, I guess there is a way to render text (since the GUI can be rendered), which is helpful for debugging (e.g. to print various values on the screen). Again, an example of this might be helpful for ramping up.
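For lines, one workaround that needs no new API is to expand each segment into a thin quad of two triangles and render it as a normal mesh. A sketch of just the geometry step (illustrative names, not three-d API):

```rust
// Expand a 2D line segment into two triangles of a given thickness.
// The resulting positions could be fed to any regular triangle mesh.
fn segment_to_quad(a: (f32, f32), b: (f32, f32), thickness: f32) -> Vec<(f32, f32)> {
    let (dx, dy) = (b.0 - a.0, b.1 - a.1);
    let len = (dx * dx + dy * dy).sqrt();
    // Unit normal to the segment, scaled to half the thickness.
    let (nx, ny) = (-dy / len * thickness * 0.5, dx / len * thickness * 0.5);
    // Two triangles: (a-, b-, b+) and (a-, b+, a+).
    vec![
        (a.0 - nx, a.1 - ny),
        (b.0 - nx, b.1 - ny),
        (b.0 + nx, b.1 + ny),
        (a.0 - nx, a.1 - ny),
        (b.0 + nx, b.1 + ny),
        (a.0 + nx, a.1 + ny),
    ]
}

fn main() {
    let verts = segment_to_quad((0.0, 0.0), (10.0, 0.0), 0.2);
    assert_eq!(verts.len(), 6); // two triangles per segment
}
```

Dots could be handled the same way with small quads or triangles, so a triangle-example extension might only need this kind of helper.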
Aspect ratio is inconsistent during browser window resizing with example/texture (and probably other examples):
It seems that the aspect ratio is determined (indirectly) in texture/main.rs with
camera.set_size(frame_input.screen_width as f32, frame_input.screen_height as f32);
which uses the browser window/tab geometry. However, the actual render area is determined in index.html:
<canvas id="canvas" height="720" width="1280" />
When resizing the window with this fixed canvas, some 'unintuitive' scaling effects can be observed.
Is there an easy way to replace the frame_input.screen_* values with the actual render area dimensions? This would be useful not only for pages with a fixed canvas size, but also one with a border, sizebar, or any other difference between window and canvas size.
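As an interim workaround, the camera size could be taken from the canvas rather than the window, with a letterboxed viewport to keep the aspect ratio stable during resizing. A sketch of the math only (none of these names are three-d API):

```rust
// Given a fixed canvas size, compute a viewport (x, y, width, height)
// inside an arbitrary window that preserves the canvas aspect ratio,
// adding bars on the sides or top/bottom as needed.
fn letterbox(canvas: (u32, u32), window: (u32, u32)) -> (u32, u32, u32, u32) {
    let canvas_aspect = canvas.0 as f32 / canvas.1 as f32;
    let window_aspect = window.0 as f32 / window.1 as f32;
    if window_aspect > canvas_aspect {
        // Window is wider than the canvas: pillarbox left/right.
        let w = (window.1 as f32 * canvas_aspect).round() as u32;
        ((window.0 - w) / 2, 0, w, window.1)
    } else {
        // Window is taller than the canvas: letterbox top/bottom.
        let h = (window.0 as f32 / canvas_aspect).round() as u32;
        (0, (window.1 - h) / 2, window.0, h)
    }
}

fn main() {
    // A 2560x720 window around a 16:9 canvas gets 640 px bars left/right.
    assert_eq!(letterbox((1280, 720), (2560, 720)), (640, 0, 1280, 720));
}
```

The real fix would still be for frame_input to report the canvas dimensions directly, as asked above.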
The entire shading module is missing from the crate, as are other implementations like this one from object/Mesh.rs:
pub fn new_with_material(
context: &Context,
cpu_mesh: &CPUMesh,
material: &Material,
) -> Result<Self, Error> {
let mut mesh = Self::new(context, cpu_mesh)?;
mesh.material = material.clone();
Ok(mesh)
}
Edit: typo
When setting up the FrameInput structure, the device pixel ratio is cast from f32 to usize, so if you happen to be using a display scaling factor of, say, 1.5, it will be cast to 1.
This leads to a bug in which egui layouts (such as TopPanel) don't fill the entire screen, because there is a disparity between the viewport size and the window size being passed to the egui integration.
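The truncation is easy to reproduce in isolation, and the fix is to keep the ratio in floating point and round only the final pixel sizes. A minimal sketch (the function name is illustrative, not three-d's):

```rust
// Scale a logical size to physical pixels using the device pixel
// ratio, rounding only at the end instead of truncating the ratio.
fn physical_size(logical: (u32, u32), device_pixel_ratio: f64) -> (u32, u32) {
    (
        (logical.0 as f64 * device_pixel_ratio).round() as u32,
        (logical.1 as f64 * device_pixel_ratio).round() as u32,
    )
}

fn main() {
    // The bug: casting the ratio to an integer type truncates 1.5 to 1.
    assert_eq!(1.5f32 as usize, 1);
    // The fix: scale in floating point, round the final sizes.
    assert_eq!(physical_size((800, 600), 1.5), (1200, 900));
}
```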
On the latest master commit (bd52fac), when running examples/texture, I get
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: WindowCreationError(CreationErrors([NoAvailablePixelFormat, OsError("Couldn't find any available vsync extension")]))', examples/texture/main.rs:8:47
I was able to fix this by adding .with_vsync(true)
to glutin_window.rs:70:
Ok(ContextBuilder::new()
.with_vsync(true)
.build_windowed(window_builder, event_loop)?)
I don't understand why explicitly turning vsync on would solve a problem of a missing vsync extension, but it works.
My project uses RGBA32F and RGBA8 pixel formats.
example:
let tmp_texture = ColorTargetTexture2D::new(
&context,
frame_input.viewport.width,
frame_input.viewport.height,
Interpolation::Nearest,
Interpolation::Nearest,
None,
Wrapping::ClampToEdge,
Wrapping::ClampToEdge,
Format::RGBA32F,
).unwrap();
let out_texture = ColorTargetTexture2D::new(
&context,
frame_input.viewport.width,
frame_input.viewport.height,
Interpolation::Nearest,
Interpolation::Nearest,
None,
Wrapping::ClampToEdge,
Wrapping::ClampToEdge,
Format::RGBA8,
).unwrap();
Both of these formats have disappeared from cpu_texture's Format enum sometime in the last 2 weeks. Can they be easily re-added?
I updated my project to three-d 0.7.3 and tested web mode. (I admit, I haven't tested web mode in too long, so I'm not sure how old this effect is.)
When I resize my browser window, the three-d rendering area resizes with it, but now, it is always slightly too large for the window and scrollbars are always visible regardless of the window size.
I don't think I've changed any code related to the three-d window setup in quite a while, so I'm guessing it's a new three-d behavior.
It looks like the input RefCell is mutably borrowed somewhere else; here's the trace:
at std::panicking::rust_panic_with_hook::h9d91fa0fae16500f (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[3427]:0x30db4a)
at std::panicking::begin_panic_handler::{{closure}}::h2d82b5b2db976f3f (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[5345]:0x362b65)
at std::sys_common::backtrace::__rust_end_short_backtrace::h24539600b7e2452c (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[14215]:0x4175f6)
at rust_begin_unwind (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[12798]:0x4054ef)
at core::panicking::panic_fmt::h91d2023e5afe1929 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[14217]:0x41765a)
at core::option::expect_none_failed::h203584ddbc785ca2 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[6247]:0x3802e2)
at core::result::Result<T,E>::expect::h2cecc8562387c163 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[4601]:0x346426)
at core::cell::RefCell<T>::borrow_mut::hac48a91c65b3f340 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[5226]:0x35e8d0)
at WASM::window::canvas::Window::add_touchmove_event_listener::{{closure}}::h664de35dc4bfa048 (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[240]:0xeaf13)
at <dyn core::ops::function::FnMut<(A,)>+Output = R as wasm_bindgen::closure::WasmClosure>::describe::invoke::h7ed178cfd3254e2f (http://192.168.1.115:3000/2c704d02c033148a506a.module.wasm:wasm-function[3649]:0x3197c3)
at __wbg_adapter_24 (http://192.168.1.115:3000/static/js/0.chunk.js:898:174)
at HTMLDocument.real (http://192.168.1.115:3000/static/js/0.chunk.js:860:14)
Every touch event / add_touch* function in canvas::Window throws this.
The Wasm VM should be single-threaded, so I really do not understand how the RefCell can be mutably borrowed in two places at the same time.
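For what it's worth, no second thread is needed: RefCell tracks borrows dynamically, so a re-entrant event callback on a single thread is enough to trigger exactly this panic. A minimal single-threaded sketch:

```rust
use std::cell::RefCell;

// RefCell's rules are violated by re-entrancy, not by threads. If a
// touch callback fires while the input RefCell is still mutably
// borrowed (e.g. a nested event during event handling), the second
// borrow_mut() panics just like the trace above.
fn main() {
    let input = RefCell::new(0u32);
    let outer = input.borrow_mut(); // the handler still holds this borrow
    // A re-entrant handler calling input.borrow_mut() here would panic;
    // try_borrow_mut shows the conflict without aborting.
    assert!(input.try_borrow_mut().is_err());
    drop(outer);
    assert!(input.try_borrow_mut().is_ok());
}
```

So a likely culprit is one of the canvas event closures invoking something that touches the same RefCell before its own borrow is released.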
This crate is two projects with a clear border between them: things like Mesh on one side, and things like light/shadows on the other. Perhaps separating them would make it better.
@asny This issue is mostly to report that I pulled bbe2a99 and tested all the examples on linux desktop: every example works, no problems were encountered.
I noticed that the mousewheel zoom step has changed; in practice, this makes fog, lighting, fractal, texture, and wireframe a bit hard to use. I'm using an exponential zoom model in my project and it works well (for 2D data); this might make sense for these examples.
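For reference, the exponential model I mean is just a constant factor per wheel step, so zoom steps feel uniform whether zoomed far out or far in. A sketch (illustrative, not three-d API):

```rust
// Exponential zoom: each wheel step multiplies the camera distance by
// a constant factor. Positive steps zoom in, negative steps zoom out,
// and the distance can never cross zero.
fn zoom(distance: f32, wheel_steps: f32, factor_per_step: f32) -> f32 {
    distance * factor_per_step.powf(-wheel_steps)
}

fn main() {
    // One step in: 10.0 / 1.25 = 8.0; one step out: 10.0 * 1.25 = 12.5.
    println!("{} {}", zoom(10.0, 1.0, 1.25), zoom(10.0, -1.0, 1.25));
}
```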
Use scissor
Needs to load shaders at runtime.
@asny: related to discussion #93
I've implemented a custom multipass render mode (averaging pixel colors for overlapping photos) using three-d's RenderTarget and ImageEffect structs. However, I had to temporarily extend three-d's API locally.
RenderTarget uses copy and copy_to_screen (and related _color and _depth functions) to render its content to another RenderTarget or to the screen display buffer, respectively. They do this with an internal ImageEffect and fixed RenderStates.
To implement my custom render mode, I made more general variants of the color-only versions of these functions, which I tentatively called apply_effect_color and apply_effect_to_screen_color. These differ from the copy functions only in that the ImageEffect and RenderStates are provided as parameters, instead of being set inside the function.
pub fn apply_effect_color(&self, other: &Self, effect: &ImageEffect, viewport: Viewport) -> Result<(), Error>
pub fn apply_effect_to_screen_color(&self, effect: &ImageEffect, render_states: &RenderStates, viewport: Viewport) -> Result<(), Error>
My request is to add RenderTarget functions that expose this functionality: however, we can probably do better than the interface I used here. I'm not sure if this is consistent with the API's style, so feel free to ignore or modify this if it doesn't match:
All of the RenderTarget::copy* functions could be expressed as one function, like:
pub fn apply_effect(&self, buffers: &Buffers, destination: &Destination, image_effect: &ImageEffect, render_states: &RenderStates, viewport: Viewport) -> Result<(), Error>
with something like
pub enum Buffers {
Color,
Depth,
ColorAndDepth,
}
pub enum Destination<'a> {
Screen,
Other(&'a RenderTarget),
}
and possibly enums to switch between default and custom values for the ImageEffect and RenderState, if non-awkward names can be found.
This could be used to reimplement the existing functions, as well.
I am wondering, after using wasm, how the performance compares to the native version?
A building mesh needs a different material for the roof, a box may have 6 different side textures, and a large static game level may have a great many faces with several different materials.
This could be done with multiple meshes too, but that takes more memory (the Safari browser gets angry) and CPU load, maybe even GPU load?
In ThreeJS a mesh may have "Groups"; in BabylonJS meshes have SubMeshes. And three-d?
There is always a vector of vertices and a vector of indices, each 3 defining a face.
Currently all faces get the same Material. (One may do funny things with UV, but not this time.)
Feature request:
The mesh material may also be a vector of materials.
A mesh may have a vector of material groups (the name is a template).
A group has a face start index, a face count, and a material index.
For safety, the addMaterialGroup method may also take a faceCount parameter to check that the mesh and all groups fit together.
Now I start guessing (about WebGL):
The materials, vertices, and indices are sent to the GPU only once, but for each material a separate draw call is needed in the render cycle. So the faces using a certain material should be sorted into one group by the user.
Is it really necessary to make each draw call separately? Or could one send a vector/list of them to the GPU and only "call" this list? If yes, we could draw a game level with a single "call" :-)
See also #106
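To make the request concrete, here is a minimal sketch of the proposed group bookkeeping; all type and method names are templates from the description above, not existing three-d API:

```rust
// A group selects a contiguous range of faces and a material index.
struct MaterialGroup {
    face_start: usize,
    face_count: usize,
    material_index: usize,
}

// Sketch of a mesh that owns several groups over one index buffer.
struct GroupedMesh {
    face_count: usize, // total faces = indices.len() / 3
    groups: Vec<MaterialGroup>,
}

impl GroupedMesh {
    // The safety check from the request: a group must fit inside the mesh.
    fn add_material_group(&mut self, g: MaterialGroup) -> Result<(), String> {
        if g.face_start + g.face_count > self.face_count {
            return Err(format!(
                "group ends at face {} but mesh has only {} faces",
                g.face_start + g.face_count,
                self.face_count
            ));
        }
        self.groups.push(g);
        Ok(())
    }
}

fn main() {
    let mut mesh = GroupedMesh { face_count: 10, groups: Vec::new() };
    mesh.add_material_group(MaterialGroup { face_start: 0, face_count: 6, material_index: 0 }).unwrap();
    mesh.add_material_group(MaterialGroup { face_start: 6, face_count: 4, material_index: 1 }).unwrap();
    // This one overruns the mesh and is rejected:
    assert!(mesh.add_material_group(MaterialGroup { face_start: 8, face_count: 3, material_index: 2 }).is_err());
}
```

Rendering would then presumably issue one draw call per group over the shared vertex/index buffers, which is the part the guessing above is about.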
Hi, great project btw! I'm using this as a base for a proof-of-concept web map editor for an old game. I managed to get models from that game rendering with this code.
But I can't seem to figure out textures, I have rendered a texture onto a mesh but it seems there's no way to specify a UV map for the mesh. Is this possible or not right now?
Here's what it should look like:
And one other slightly related thing is that we have some meshes with multiple textures and thus multiple sets of indices for specifying surfaces, is there a way to specify this using the current codebase?
Thanks!
In bd52fac, I get the following error from examples/mandelbrot
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: FailedToFindUniform { message: "Failed to find uniform normalMatrix" }', examples/mandelbrot/main.rs:55:75
and this from examples/statues
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: FailedToFindUniform { message: "Failed to find uniform normalMatrix" }', examples/statues/main.rs:58:12
I'm also getting something similar in my own project code, as I'm replacing the phong renderer with code based on Mesh::render. I tried to look for a cause and I haven't found one yet; hopefully the cause is recognizable.
My project's UI recenters zoom actions such that the cursor remains on a constant location in the displayed data (like OSM/google maps): for this, I need the mouse position when handling mousewheel events. Right now, I'm recording it when it comes in from move and click events, but it would be more convenient to get it in the MouseWheel event itself.
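For context, the recentering math itself is simple once the cursor position is available; a 2D sketch with illustrative names (not three-d API):

```rust
// Keep the world point under the cursor fixed while the zoom changes.
// With world = center + cursor_offset / zoom, solving for the new
// center after the zoom change gives the recentering formula below.
fn recenter(
    center: (f32, f32),
    cursor_offset: (f32, f32), // cursor relative to screen center, in pixels
    old_zoom: f32,
    new_zoom: f32,
) -> (f32, f32) {
    let world = (
        center.0 + cursor_offset.0 / old_zoom,
        center.1 + cursor_offset.1 / old_zoom,
    );
    (
        world.0 - cursor_offset.0 / new_zoom,
        world.1 - cursor_offset.1 / new_zoom,
    )
}

fn main() {
    // Doubling the zoom with the cursor 100 px right of center moves
    // the view center so the same world point stays under the cursor.
    assert_eq!(recenter((0.0, 0.0), (100.0, 0.0), 1.0, 2.0), (50.0, 0.0));
}
```

The missing ingredient is just cursor_offset at wheel time, hence the request to include the position in the MouseWheel event.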
Hi @asny:
I want to watch click events on some document divs. Could you guide me on how to capture those events? I can't find anything similar in the event code. Is that still not supported?
Hi, let me start by saying thanks for writing three-d. I'm hoping to use it (or something like it) for a project that renders panorama photos with a measurement overlay, and it looks like an easy way to get a web implementation going quickly.
I'm looking through the examples for code I can use, and I'm getting a shader error from examples/texture:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: FailedToFindUniform { message: "Failed to find uniform viewProjectionInverse" }', examples/texture/main.rs:94:16
I noticed viewProjectionInverse in the most recent commit (https://github.com/asny/three-d/commit/4340b3ea1240d22454b41ff747022ab7a0e74224); is this still a work-in-progress?
Current defaults for BlendParameters act as a non-blending overwrite even if blending is enabled. There's nothing technically wrong with this, but it might be convenient to set the values to the traditional intuitive blend settings:
impl Default for BlendParameters {
fn default() -> Self {
Self {
source_rgb_multiplier: BlendMultiplierType::SrcAlpha,
source_alpha_multiplier: BlendMultiplierType::One,
destination_rgb_multiplier: BlendMultiplierType::OneMinusSrcAlpha,
destination_alpha_multiplier: BlendMultiplierType::Zero,
rgb_equation: BlendEquationType::Add,
alpha_equation: BlendEquationType::Add
}
}
}
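For clarity, here is what those proposed defaults compute for a single pixel, evaluated on the CPU (a sketch of the equations only, not three-d code):

```rust
// The "traditional" alpha blend from the Khronos wiki, matching the
// multipliers above:
//   out_rgb = src_rgb * src_a + dst_rgb * (1 - src_a)
//   out_a   = src_a * 1       + dst_a * 0
fn blend(src: (f32, f32, f32, f32), dst: (f32, f32, f32, f32)) -> (f32, f32, f32, f32) {
    let sa = src.3;
    (
        src.0 * sa + dst.0 * (1.0 - sa),
        src.1 * sa + dst.1 * (1.0 - sa),
        src.2 * sa + dst.2 * (1.0 - sa),
        sa,
    )
}

fn main() {
    // Half-transparent red over opaque blue -> half red, half blue.
    assert_eq!(blend((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)), (0.5, 0.0, 0.5, 0.5));
}
```

With the current overwrite defaults, the same input would instead produce the source color unchanged, which is what makes enabled-but-default blending surprising.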
On linux desktop, I tried the fog, fireworks, and lighting examples in ace624a, and they all fail to start with an error like
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidNumberOfSamples', examples/fog/main.rs:6:56.
It's a new error code that was added since 0.6.0 and only exists in glutin_window.rs.
See this thread.
Add support for texture and render target.
@asny
I'm investigating some strange transparency problems that may only exist on web+linux. In areas of my scene which use transparency, I can see through my browser window and read text in other windows behind it. To continue my diagnosis, I need a way to clear the entire display buffer to 1.0 alpha without touching the RGB contents of the buffer. Does three-d have some way to do this? Or, can the display buffer be changed to RGB-only?
I'm not sure if color mask applies to a buffer clear operation, but that might be part of the solution.
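If color masks do apply to clears (in raw GL this would be glColorMask with only alpha enabled, followed by glClear), the effect I'm after is the following, shown here as a CPU-side sketch for illustration:

```rust
// CPU-side illustration of an alpha-only masked clear: with only the
// alpha channel write-enabled, clearing to (0, 0, 0, 1) sets every
// pixel's alpha to 1.0 and leaves RGB untouched.
fn masked_clear(
    pixels: &mut [(f32, f32, f32, f32)],
    clear: (f32, f32, f32, f32),
    mask: (bool, bool, bool, bool), // (red, green, blue, alpha) write-enables
) {
    for p in pixels.iter_mut() {
        if mask.0 { p.0 = clear.0; }
        if mask.1 { p.1 = clear.1; }
        if mask.2 { p.2 = clear.2; }
        if mask.3 { p.3 = clear.3; }
    }
}

fn main() {
    let mut buffer = vec![(0.2, 0.4, 0.6, 0.0); 4];
    masked_clear(&mut buffer, (0.0, 0.0, 0.0, 1.0), (false, false, false, true));
    assert_eq!(buffer[0], (0.2, 0.4, 0.6, 1.0));
}
```

If three-d's RenderStates write mask is honored during clears, that would be exactly the tool I need; otherwise an RGB-only display buffer would also work.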
I recently uploaded a small web demo of a three-d -based web application, and several people tried it from a few different platforms and browsers. Almost everyone saw the expected functionality, but two people who used Apple products had some strange problems:
I wish I could have tracked down a more definite set of steps to reproduce these bugs, but I don't use any Apple products personally.
9c580fe
On start of examples/forest:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: FailedToFindAttribute { message: "The attribute uv_coordinates is sent to the shader but never used." }', examples/forest/main.rs:45:39
Hi Asny, this is a very cool tool, thanks a lot for sharing.
This is not really an issue but more of a question. I see in some of your examples you're using ".3d" files for your assets, and I don't have a clue what kind of format that is. I am a noob at 3D modelling, so it doesn't help much. Do you mind giving some clarification? Which tool are you using to create those 3D assets?
It would be really nice if this crate offered support for rendering using the OffscreenCanvas API, so that the potentially compute-intensive work can be offloaded to e.g. a web worker and thus run on a separate thread from the main thread.
If I see it correctly, this would just require something like the current canvas renderer, with the difference that the context can be passed in instead of the renderer creating the canvas by itself. Any calls to JS APIs that are not available in a web worker (window, document, etc.) should be avoided.
The rest (transferring control/context to the offscreen renderer, setting up the worker, etc.) could be handled by the user of this crate.
Aside from the offscreen use case, I think this change could also be useful for users who want or need to set up the <canvas> in their application code and thus don't need three-d to create the canvas element (e.g. I currently have the case that I want to integrate three-d in a React app, where I would create the canvas element via React and basically just pass that canvas element via wasm-bindgen to three-d).
Is the three-d Loader in a binary app able to load from the web?
let _url = std::path::Path::new("https://www.osmgo.org/test2.pbf");
Loader::load(&[_url], move |loaded| {
    let bytes = loaded.bytes(_url).unwrap();
    // ...
});
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: FailedToLoad { message: "Could not load resource: https://www.osmgo.org/test2.pbf" }', src/main.rs:57:44
Use the gltf crate