webgpu / webgpu-samples
WebGPU Samples
Home Page: https://webgpu.github.io/webgpu-samples/
License: BSD 3-Clause "New" or "Revised" License
The samples worked a few days ago. However, after updating Chrome Canary to Version 89.0.4387.3 (Official Build) canary (64-bit), suddenly none of the samples work; all fail with the same error:
TypeError: Cannot read property 'requestDevice' of null
at rotatingCube-6f3dd6430b02cabc591c.js:1
at l (1be8be7c52e2349f8660686032d2042ae0b6bb9e.08d4626ed8f6bcf6d877.js:1)
at Generator._invoke (1be8be7c52e2349f8660686032d2042ae0b6bb9e.08d4626ed8f6bcf6d877.js:1)
at Generator.next (1be8be7c52e2349f8660686032d2042ae0b6bb9e.08d4626ed8f6bcf6d877.js:1)
at r (5e050aff538434ccde2cc97438110cf81079bf6a.910cf116a21df707471b.js:1)
at s (5e050aff538434ccde2cc97438110cf81079bf6a.910cf116a21df707471b.js:1)
It seems that it can get the WebGPU context but not the device.
I'd like to turn these samples into performance tests that we can check into WebKit, but this repository doesn't appear to have any license.
I hope you can pick a license that's compatible with WebKit so we can use these to check our performance.
I was trying to understand what the row pitch computation was doing, so I built the textured cube sample locally and tried it with a different image. I simply scaled the provided image down from 512 x 512 px to 500 x 500 px. Here is the result:
I'm not sure what is going on, as the row pitch parameter is not described in the WebGPU Editor's Draft. Is this normal behavior in the current state of the implementation?
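For what it's worth, this looks like the 256-byte row alignment requirement for buffer-to-texture copies: a 512 px RGBA row is 2048 bytes, already a multiple of 256, while a 500 px row is 2000 bytes and must be padded. A sketch of the repacking (helper names are mine, not from the samples):

```javascript
// WebGPU requires bytesPerRow in buffer<->texture copies to be a
// multiple of 256. A 512px RGBA row is 2048 bytes (already aligned),
// but a 500px row is 2000 bytes and must be padded to 2048.
const BYTES_PER_ROW_ALIGNMENT = 256;

function alignedBytesPerRow(widthPx, bytesPerPixel = 4) {
  const unpadded = widthPx * bytesPerPixel;
  return Math.ceil(unpadded / BYTES_PER_ROW_ALIGNMENT) * BYTES_PER_ROW_ALIGNMENT;
}

// Repack tightly-packed RGBA pixel data into a padded staging layout
// suitable for copyBufferToTexture.
function padImageRows(pixels, widthPx, heightPx, bytesPerPixel = 4) {
  const unpadded = widthPx * bytesPerPixel;
  const padded = alignedBytesPerRow(widthPx, bytesPerPixel);
  const out = new Uint8Array(padded * heightPx);
  for (let y = 0; y < heightPx; ++y) {
    out.set(pixels.subarray(y * unpadded, (y + 1) * unpadded), y * padded);
  }
  return out;
}
```

Skipping this repacking for a 500 px image makes every row start at the wrong offset, which matches the skewed output in the screenshot.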
On my display I usually use a browser scale factor >100%, which means the difference between the MSAA and non-MSAA triangles is hard to see (edges are blurred either way). If we set the canvas size according to devicePixelRatio then we'll get sharper rendering and clearer results.
Doing this properly will require the canvas to change size at runtime though, and Chrome at least might still have issues with that, I'm not sure.
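A sketch of the sizing math (the helper is hypothetical; the DOM wiring is shown in comments):

```javascript
// Size the canvas backing store by devicePixelRatio so MSAA vs.
// non-MSAA edges stay visible even at >100% browser zoom.
// Pure function so the math is separable from the DOM.
function physicalCanvasSize(cssWidth, cssHeight, dpr) {
  return {
    width: Math.round(cssWidth * dpr),
    height: Math.round(cssHeight * dpr),
  };
}

// In the page, roughly:
// const canvas = document.querySelector('canvas');
// const { width, height } = physicalCanvasSize(
//   canvas.clientWidth, canvas.clientHeight, window.devicePixelRatio);
// canvas.width = width;
// canvas.height = height;
```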
Hi, your code helps me a lot. I have a question for you.
I hope you can add a case study of a texturedSphere example.
Thanks a lot!
And it would be nice to have a toggle button for testing on browsers that don't support one of the two shading-language options.
According to the WebGPU spec, WebGPU objects such as buffers should be able to move between worker threads.
Should there be an example of doing that?
This would make the shaders nicer to read!
For example, based on this code.
The NDC coordinate system of WebGPU is left-handed; the NDC coordinate system of WebGL is right-handed... left-handed? (I'm confused again.)
Right now our samples use everything right-handed: model coordinates (cube.ts), world coordinates (several transformations), and the perspective matrix (from gl-matrix). So the result turns out to be correct.
I'm wondering what a good common convention for the future WebGPU dev community should be (likely somewhat influenced by these webgpu-samples). Although this seems to be something that engine-level apps need to consider (KhronosGroup/UnityGLTF#257: the Unity engine uses left-handed, glTF models use right-handed, and Unity takes care of the conversion at import time).
If most model assets, inherited from the WebGL age, target right-handed coordinates, does it make sense to just keep everything right-handed?
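One concrete piece of this, independent of the handedness convention: WebGPU clip space uses z in [0, 1] while WebGL uses [-1, 1]. A WebGL-style projection matrix can be adapted by remapping z' = 0.5*z + 0.5*w (newer gl-matrix versions have mat4.perspectiveZO, which builds such a matrix directly). A sketch with column-major Float32Array matrices, gl-matrix style:

```javascript
// WebGPU clip-space depth runs [0, 1]; WebGL's runs [-1, 1].
// Remap a WebGL-style projection matrix by replacing its z row with
// z' = 0.5 * z + 0.5 * w, so z_ndc' = 0.5 * z_ndc + 0.5 after the
// perspective divide. Matrices are column-major Float32Array(16).
function toZeroToOneDepth(m) {
  const out = Float32Array.from(m);
  // New row 2 (the z row) = 0.5 * old z row + 0.5 * w row, per column.
  for (let col = 0; col < 4; ++col) {
    out[col * 4 + 2] = 0.5 * m[col * 4 + 2] + 0.5 * m[col * 4 + 3];
  }
  return out;
}
```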
Firefox Nightly can run the wgpu-rs demos.
However, running the webgpu-samples makes Firefox crash...
I'd like to modify the example and try again, e.g. to change the triangle colour.
When I save, Chrome makes an HTML page and a directory of supporting files.
When I load the HTML file, any time I click on an example in the pane on the left, it loads from your website rather than from the local files, so I don't get the effect of my source code change.
This prevents us from running all of the samples except the triangle one.
At this point, it could be a helper function that creates a buffer with data and issues a copy.
cc @kainino0x
GPUBuffer.setSubData was removed from the spec a while ago, but it continued to work until at least the last time I posted about a WebGPU breakage, which was a few weeks ago. All of the examples that use setSubData no longer work, though the method does still exist. The triangle examples don't use the method, so they are fine.
It's unclear whether there will be any attempt to fix or replace setSubData, since I believe the decision to deprecate it was related to poor performance. (Maybe they're trying to re-implement it?)
If it's not fixed within the next version of Chrome or the next few, it might be worth looking into some of the alternative ways of uploading buffer data.
This change log shows what the developers did to replace setSubData for a few use cases:
https://trac.webkit.org/changeset/246217/webkit
Here's some example code from the WebGPU design github repository:
https://github.com/gpuweb/gpuweb/blob/master/design/BufferOperations.md
I hope they're still thinking about how to get a higher level buffer upload function into the API, but for now these lower level functions do seem to work. I used them to fix my own example.
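For reference, a sketch of the two lower-level paths (function names are mine; the API calls follow the current spec, so availability in a given build may vary):

```javascript
// Two replacements for the removed GPUBuffer.setSubData:
//
// 1. Initial upload: create the buffer mapped and copy into it.
function createBufferWithData(device, data, usage) {
  const buffer = device.createBuffer({
    size: align4(data.byteLength), // mapped buffer sizes must be 4-byte multiples
    usage,
    mappedAtCreation: true,
  });
  new Uint8Array(buffer.getMappedRange()).set(
    new Uint8Array(data.buffer, data.byteOffset, data.byteLength));
  buffer.unmap();
  return buffer;
}

// 2. Later updates: let the implementation stage the copy.
function updateBuffer(device, buffer, data, offset = 0) {
  device.queue.writeBuffer(buffer, offset, data);
}

function align4(n) {
  return Math.ceil(n / 4) * 4;
}
```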
Hi Austin,
I was curious whether it's possible to do atomic operations (e.g. atomicAdd) with WGSL. I see these functions in the spec; however, I have been unable to make use of them in WGSL (they work with SPIR-V). Could it be that Chrome's implementation of WGSL does not contain the atomics part of the spec yet?
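For reference, in the WGSL spec the atomic built-ins operate on atomic&lt;u32&gt; / atomic&lt;i32&gt; values in the storage or workgroup address spaces; whether a given Canary build accepts them depends on how much of the spec Tint has implemented. A minimal counter sketch in current syntax (builds from this period used [[...]]-style attributes instead):

```wgsl
// Minimal atomic counter: every invocation increments a shared
// storage-buffer value. Names are illustrative, not from the samples.
struct Counter {
  count : atomic<u32>,
}
@group(0) @binding(0) var<storage, read_write> counter : Counter;

@compute @workgroup_size(64)
fn main() {
  atomicAdd(&counter.count, 1u);
}
```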
Hi,
I got the following error when running the examples on Chrome Canary (64-bit) version 89.0.4380.0:
Uncaught (in promise) TypeError: m.device.createBufferMapped is not a function.
It just showed a black screen.
Please advise.
Just curious: that line seems to make a new object every frame. Is there a better way, so it doesn't grow memory each frame? Can that function be called once, outside the frame loop? Or does it need to be called every time?
If it needs to be called every time, maybe creating the object argument outside of frame() would help a little.
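A sketch of hoisting the descriptor out of the frame loop, since only the swap-chain view changes per frame (names follow the samples' style but are illustrative, and use the loadValue/endPass API of this era):

```javascript
// Build the render pass descriptor once; only the swap-chain texture
// view changes per frame, so mutate that one field instead of
// rebuilding the whole object every frame.
const renderPassDescriptor = {
  colorAttachments: [{
    view: undefined, // filled in each frame
    loadValue: { r: 0, g: 0, b: 0, a: 1 },
    storeOp: 'store',
  }],
};

function frame(device, swapChain) {
  renderPassDescriptor.colorAttachments[0].view =
    swapChain.getCurrentTexture().createView();
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass(renderPassDescriptor);
  // ... draw calls ...
  pass.endPass();
  device.queue.submit([encoder.finish()]);
}
```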
At this time both Chromium and Firefox have reasonable support for WGSL and are close to the upstream specification (except I/O in Firefox, but that's coming soon?). @austinEng, what do you think of removing the SPIR-V path?
Hello, even though the first page mentions that these work on Chrome Canary, the first two demos also work fine on FF Nightly without any errors, but the rest of the demos break with this error:
Uncaught (in promise) TypeError: d.getMappedRange is not a function
I don't think it's worth spending time resolving this, but just in case anyone else hits the issue where only the first two demos work: most probably FF's WebGPU hasn't implemented GPUBuffer.getMappedRange yet.
Here's my environment:
When I open any example in the browser, the console logs the following:
Push constants aren't supported.
Object is an error.
I searched the issues and found this one https://github.com/austinEng/webgpu-samples/issues/27.
So the root cause is in the Metal driver on Intel Macs, right?
There were a number of adjustments in the API, like the bindings -> entries change, that it would be nice to see reflected here.
Hi, your code helps me a lot. I have a question for you.
How do I render multiple views?
I tried the following code, but only the last command renders!
class Renderer {
  sceneView;

  constructor(sceneView) {
    this.sceneView = sceneView;
  }

  frame() {
    let gpuCommands = [];
    for (let i = 0, len = this.sceneView.viewList.length; i < len; i++) {
      let viewport = this.sceneView.viewList[i];
      viewport.preRender.raiseEvent();
      // gpuCommands.splice(0, 0, this.renderViewport(viewport));
      gpuCommands.push(this.renderViewport(viewport));
      viewport.postRender.raiseEvent();
    }
    this.sceneView.device.queue.submit(gpuCommands);
  }

  renderViewport(viewport) {
    // CAMERA BUFFER
    const cameraViewProjectionMatrix = viewport.camera.getCameraViewProjMatrix();
    this.sceneView.device.queue.writeBuffer(
      viewport.cameraUniformBuffer,
      0,
      cameraViewProjectionMatrix.buffer,
      cameraViewProjectionMatrix.byteOffset,
      cameraViewProjectionMatrix.byteLength
    );
    viewport.renderPassDescriptor.colorAttachments[0].view =
      this.sceneView.swapChain.getCurrentTexture().createView();
    const commandEncoder = this.sceneView.device.createCommandEncoder();
    const passEncoder = commandEncoder.beginRenderPass(viewport.renderPassDescriptor);
    passEncoder.setViewport(viewport.viewRect.x, viewport.viewRect.y, viewport.viewRect.width, viewport.viewRect.height, 0, 1);
    passEncoder.setScissorRect(viewport.viewRect.x, viewport.viewRect.y, viewport.viewRect.width, viewport.viewRect.height);
    for (let object of viewport.scene.Primitives) {
      object.draw(passEncoder, this.sceneView.device, viewport.camera);
    }
    passEncoder.endPass();
    return commandEncoder.finish();
  }
}
I get an error:
TypeError: Cannot read properties of null (reading 'requestDevice')
Any ideas?
Chrome Canary Version 95.0.4630.2 (Official Build) canary (64-bit)
Windows 10
GTX 560
GeForce driver 378.92 (could the driver be too old? With any later GeForce driver my GPU downclocks to 50 MHz, so I cannot update)
For example, Apple has hooked up compute support:
https://trac.webkit.org/changeset/246427/webkit
https://trac.webkit.org/browser/webkit/trunk/LayoutTests/webgpu/whlsl-compute.html?rev=246427
and the sample in that patch shows it supports feeding WHLSL directly.
Because WebGPU has an implicit present step, the code is always a frame late.
PSA for Chromium / Dawn WebGPU API updates 2020-07-28 mentions a number of API breaking changes. We should implement them in the samples so that Chromium can start removing the "old" paths.
In the ImageBlur example, the compute shader layout is

layout(set = 1, binding = 1) uniform texture2D inputTex;
layout(set = 1, binding = 2, rgba8) uniform writeonly image2D outputTex;
layout(set = 1, binding = 3) uniform Uniforms {
  uint uFlip;
};

where inputTex's internal usage is constants and outputTex's internal usage is storage-write; they are not in the same compatible-usage list. Binding the same texture to these two entries should put both the passEncoder and the commandEncoder into an error state.
But in the following code, textures[0] and textures[1] are bound to inputTex and then outputTex repeatedly in the same passEncoder.
const computeBindGroup0 = device.createBindGroup({
  layout: blurPipeline.getBindGroupLayout(1),
  entries: [
    {
      binding: 1,
      resource: cubeTexture.createView(),
    },
    {
      binding: 2,
      resource: textures[0].createView(),
    },
    {
      binding: 3,
      resource: {
        buffer: buffer0,
      },
    },
  ],
});

const computeBindGroup1 = device.createBindGroup({
  layout: blurPipeline.getBindGroupLayout(1),
  entries: [
    {
      binding: 1,
      resource: textures[0].createView(),
    },
    {
      binding: 2,
      resource: textures[1].createView(),
    },
    {
      binding: 3,
      resource: {
        buffer: buffer1,
      },
    },
  ],
});

const computeBindGroup2 = device.createBindGroup({
  layout: blurPipeline.getBindGroupLayout(1),
  entries: [
    {
      binding: 1,
      resource: textures[1].createView(),
    },
    {
      binding: 2,
      resource: textures[0].createView(),
    },
    {
      binding: 3,
      resource: {
        buffer: buffer0,
      },
    },
  ],
});

const computePass = commandEncoder.beginComputePass();
computePass.setPipeline(blurPipeline);
computePass.setBindGroup(0, computeConstants);
computePass.setBindGroup(1, computeBindGroup0);
computePass.dispatch(
  Math.ceil(srcWidth / blockDim),
  Math.ceil(srcHeight / batch[1])
);
computePass.dispatch(2, Math.ceil(srcHeight / batch[1]));
computePass.setBindGroup(1, computeBindGroup1);
computePass.dispatch(
  Math.ceil(srcHeight / blockDim),
  Math.ceil(srcWidth / batch[1])
);
for (let i = 0; i < settings.iterations - 1; ++i) {
  computePass.setBindGroup(1, computeBindGroup2);
  computePass.dispatch(
    Math.ceil(srcWidth / blockDim),
    Math.ceil(srcHeight / batch[1])
  );
  computePass.setBindGroup(1, computeBindGroup1);
  computePass.dispatch(
    Math.ceil(srcHeight / blockDim),
    Math.ceil(srcWidth / batch[1])
  );
}
How to generate a mipmap texture?
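WebGPU has no generateMipmap(), so the usual approach is to allocate the texture with the full mip chain and fill each level from the previous one with a small blit pipeline. A sketch, using the loadValue/endPass API of this era (blitPipeline and sampler are assumed to exist: a pipeline drawing a fullscreen triangle that samples the bound texture; the texture needs both render-attachment and sampled usages):

```javascript
// Full mip chain length for a texture of the given size.
function numMipLevels(width, height) {
  return Math.floor(Math.log2(Math.max(width, height))) + 1;
}

// Outline: render each level from the level above it.
function generateMips(device, texture, width, height, blitPipeline, sampler) {
  const encoder = device.createCommandEncoder();
  for (let level = 1; level < numMipLevels(width, height); ++level) {
    const src = texture.createView({ baseMipLevel: level - 1, mipLevelCount: 1 });
    const dst = texture.createView({ baseMipLevel: level, mipLevelCount: 1 });
    const pass = encoder.beginRenderPass({
      colorAttachments: [{ view: dst, loadValue: 'load', storeOp: 'store' }],
    });
    pass.setPipeline(blitPipeline);
    pass.setBindGroup(0, device.createBindGroup({
      layout: blitPipeline.getBindGroupLayout(0),
      entries: [
        { binding: 0, resource: sampler },
        { binding: 1, resource: src },
      ],
    }));
    pass.draw(3); // fullscreen triangle sampling the previous level
    pass.endPass();
  }
  device.queue.submit([encoder.finish()]);
}
```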
Now that we've decided on the coordinate systems in the spec, we need to update these samples. gpuweb/gpuweb#458
It immediately creates a dialog that claims the platform does not support WebGPU. It works perfectly in Chrome Canary on the same device.
When I was writing a program inspired by the deferred rendering sample, I noticed that the current UV mapping algorithm has a minimum resolution of 2x2 pixels.
Here is how to reproduce the issue:
Add the following lines at the end of "fragmentDeferredRendering.wgsl", before the return statement:
let fragCoordI = vec2<i32>(round(coord.xy));
if (fragCoordI.x > 500) {
  result = vec3<f32>(1.0, 0.0, 0.0); // red
}
if (fragCoordI.x > 501) {
  result = vec3<f32>(0.0, 1.0, 0.0); // green
}
if (fragCoordI.x > 502) {
  result = vec3<f32>(0.0, 0.0, 1.0); // blue
}
if (fragCoordI.x > 503) {
  result = vec3<f32>(1.0, 1.0, 0.0); // yellow
}
if (fragCoordI.x > 504) {
  result = vec3<f32>(0.0, 1.0, 1.0); // cyan
}
if (fragCoordI.x > 505) {
  result = vec3<f32>(1.0, 0.0, 1.0); // pink
}
The expected result is 1px-wide red, green, blue, yellow and cyan lines, with the right side of the image pink.
Here is a screenshot of the actual results:
When zooming in very close, it is visible that only the green and yellow lines are drawn, both being 2px wide:
The 2x2 pixel resolution is also clearly visible in the aliasing of the edges of the squares on the sample page.
This may be a bit much to ask (in which case no worries, I totally understand), but I figure I'll ask anyway: I think it would be quite useful to have an example of rendering text, specifically one that first renders the text to a canvas and then renders that with WebGPU via a texture.
A more complicated but also perhaps more useful example would render some chars to a canvas, create a texture from it, and then use that texture to construct various strings on-the-fly by sampling from different parts of the texture for each letter. (Note that a WebGL version of this is described in https://webglfundamentals.org/webgl/lessons/webgl-text-glyphs.html, which might be helpful.)
I'm happy to help with the implementation of this if wanted and as I'm able, but unfortunately I think my knowledge is too lacking in both WebGPU and graphics programming in general to do the full PR myself.
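For the glyph-atlas variant, the core of the technique is grid indexing: render a fixed range of characters into a canvas grid once, upload it as a texture, then compute a UV rectangle per character when building the string's quads. A sketch of the UV math (layout parameters are illustrative):

```javascript
// UV rectangle for a character in a fixed-grid glyph atlas laid out
// row-major, starting at ASCII space (charCode 32), `cols` glyphs wide.
function glyphUVRect(ch, cols, rows, firstChar = 32) {
  const index = ch.charCodeAt(0) - firstChar;
  const col = index % cols;
  const row = Math.floor(index / cols);
  return {
    u0: col / cols,
    v0: row / rows,
    u1: (col + 1) / cols,
    v1: (row + 1) / rows,
  };
}

// Each character of a string becomes one quad whose texture
// coordinates are (u0, v0)..(u1, v1) from its glyphUVRect.
```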
I enabled both the dom.webgpu.enabled and gfx.webrender.all flags. The GPU is an AMD R9 380, Vulkan version 1.2.148. All samples result in this error:
Uncaught (in promise) TypeError: GPUDevice.createRenderPipeline: Missing required 'layout' member of GPUPipelineDescriptorBase.
createRenderPipeline index.js:27
createRenderPipeline index.js:27
i computeBoids.ts:23
r computeBoids-0af943.js:1
promise callback*c computeBoids-0af943.js:1
r computeBoids-0af943.js:1
promise callback*c computeBoids-0af943.js:1
r computeBoids-0af943.js:1
Works fine in Chrome Canary on macOS.
I tried to display the following samples in Chrome Canary for Mac.
https://austineng.github.io/webgpu-samples/#helloTriangle
https://austineng.github.io/webgpu-samples/#helloTriangleMSAA
However, there is no error in the console and nothing is displayed on the screen.
Strangely, samples other than the above are displayed without problems.
The environment I tried is as follows.
MacBook Air + Intel UHD Graphics 617 + MacOS X + Chrome Canary 81.0.3998.0 : NG
MacBook Air + Intel UHD Graphics 617 + Win10(BootCamp) + Chrome Canary 81.0.3998.0 : OK
When running in Chromium Canary, compute boids fails with the following error:
Tint WGSL reader failure:
Parser: error: 19:46 error: variables declared in the <storage> storage class must be of an [[access]] qualified structure type
[[binding(1), group(0)]] var<storage_buffer> particlesA : Particles;
^^^^^^^^^^
3:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(0)]] pos : vec2<f32>;
^^^^^^
4:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(8)]] vel : vec2<f32>;
^^^^^^
7:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(0)]] deltaT : f32;
^^^^^^
8:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(4)]] rule1Distance : f32;
^^^^^^
9:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(8)]] rule2Distance : f32;
^^^^^^
10:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(12)]] rule3Distance : f32;
^^^^^^
11:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(16)]] rule1Scale : f32;
^^^^^^
12:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(20)]] rule2Scale : f32;
^^^^^^
13:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(24)]] rule3Scale : f32;
^^^^^^
16:5 warning: use of deprecated language feature: [[offset]] has been replaced with [[size]] and [[align]]
[[offset(0)]] particles : [[stride(16)]] array<Particle, 1500>;
^^^^^^
Uncaught TypeError: GPUCommandEncoder.beginRenderPass: Missing required 'view' member of GPURenderPassColorAttachmentDescriptor.
According to the spec, the view parameter is required now (https://gpuweb.github.io/gpuweb/#color-attachments), so just passing it in beginRenderPass should fix the issue.
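Concretely, the attachment's texture view now goes under the view key; a sketch (the helper name is mine, and loadValue/storeOp follow the samples as they are today):

```javascript
// Build a render pass descriptor with the now-required `view` member
// on the color attachment (previously named `attachment`).
function makeRenderPassDescriptor(view) {
  return {
    colorAttachments: [{
      view, // required per the current spec
      loadValue: { r: 0, g: 0, b: 0, a: 1 },
      storeOp: 'store',
    }],
  };
}

// Per frame, roughly:
// const descriptor = makeRenderPassDescriptor(
//   swapChain.getCurrentTexture().createView());
// const pass = commandEncoder.beginRenderPass(descriptor);
```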
Hello Austin, this example is broken: https://austineng.github.io/webgpu-samples/#twoCubes
rotatingCube.ts:66 Uncaught (in promise) TypeError: Failed to execute 'createRenderPipeline' on 'GPUDevice': required member stencilBack is undefined.
at Module.<anonymous> (rotatingCube.ts:66)
at Generator.next (<anonymous>)
at o (rotatingCube.js:1)
Also, the first one with the red triangle works only the first time I load the page.
I am on Chrome 79.0.3928.4 (Official Build) dev (64-bit).
I tried opening the examples (https://austin-eng.com/webgpu-samples/samples/helloTriangle) in Firefox Nightly 94.0a1 (2021-09-24) (64-bit) and I get the following:
Hello Triangle
See it on Github!
Shows rendering a basic triangle.
Is WebGPU Enabled?
TypeError: can't access property "requestAdapter", navigator.gpu is undefined
I've also tried setting the flags dom.webgpu.enabled=true and gfx.webrender.all=true, as suggested in one of the articles on the subject. Then, counterintuitively, I get the following:
Hello Triangle
See it on Github!
Shows rendering a basic triangle.
Is WebGPU Enabled?
InvalidStateError: WebGPU is not enabled!
The new version of Chrome Canary apparently introduced breaking changes, as I see a black screen in each of your examples. Would you have any idea of what needs to be changed now?
First off, thanks so much for putting these samples up! 🎉 🎉 It's been a great resource to learn from, really appreciate the time you put into this.
This is more a question, really, and not anything wrong with the samples themselves. I've basically been building myself a toy framework based on these samples. One thing I noticed today while following the textured cube example is that image loading seems to be weird in the version of Canary I have; it's possible this is more a question for the Chromium team, but I figured I'd start here since this is where I first encountered this behavior.
Anyways, basically, on my computer, it seems like image loading in general ends up blocking WebGPU from rendering things to the screen for some reason. With the code I have now, nothing renders to the screen. The weird thing is, when I resize the window, things render as expected. Even something as simple as this
let loadImage = async () => {
  let img = new Image();
  img.src = "test.jpg";
  await img.decode();
  return img;
};

async function run() {
  // if this line is un-commented, nothing renders until I resize the window.
  let img = await loadImage();
  // more WebGPU code to render something
}
causes this behavior to manifest.
I was hoping that you might have some idea of what is happening? If not, that's cool, but I figured it couldn't hurt to ask.
Thanks!
Any chance the source of the dist/utils.js wasm will be provided?
Currently no WGSL sample covers this GLSL use case in a vertex shader:
layout(std430, set = 0, binding = 0) readonly buffer myBuffer {
  float numbers[];
} myBuffer;
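A WGSL counterpart might look like this (a sketch in current syntax; builds from this period used [[block]]/[[stage(vertex)]]-style attributes instead, and the names and layout are illustrative):

```wgsl
// Read-only, runtime-sized storage buffer accessed from a vertex shader.
struct MyBuffer {
  numbers : array<f32>,
}
@group(0) @binding(0) var<storage, read> myBuffer : MyBuffer;

@vertex
fn main(@builtin(vertex_index) i : u32) -> @builtin(position) vec4<f32> {
  let x = myBuffer.numbers[i];
  return vec4<f32>(x, 0.0, 0.0, 1.0);
}
```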
Such an example could be useful.
Some code with readPixel is here, but I don't understand yet how to do it.
It worked before. However, now it gives a black cube.