google / model-viewer
Easily display interactive 3D models on the web and in AR!
Home Page: https://modelviewer.dev
License: Apache License 2.0
It should be possible to create new types of views (e.g., VR, off-screen rendering) as mixins.
Consider renaming 'views' to 'modes' - views is overloaded, and we already refer to these as modes in some of the docs.
Note that it may make sense to have tasks or new issues to create those mixins when we pick this work up.
Especially noticeable on mobile with a large element; even a 400x300 element can make it impossible to scroll.
Perhaps create these as optional addons? Would vignette fit into that category?
While we could use an ImageBitmapRenderingContext on the main thread to blit textures more efficiently than with Canvas2DRenderingContexts, the bitmap rendering and OffscreenCanvas APIs are related: if one is supported, both should be available, meaning we can always do rendering in a worker for this code path. We could transferControlToOffscreen for each model element, which would give us multiple WebGL contexts in a worker (or multiple workers), but it's unclear whether that is more performant than a single context in a worker sending out textures. It's also unclear whether the bitmaps created there can be cropped.
More info: #10 (comment)
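A minimal sketch of the capability check implied above, assuming the OffscreenCanvas and ImageBitmapRenderingContext APIs ship together (the function name and injected global are illustrative, not the component's API):

```javascript
// Hypothetical feature check: commit to the worker rendering path only when
// OffscreenCanvas, ImageBitmapRenderingContext, and Worker all exist. The
// global scope is injected so the check can be exercised outside a browser.
function supportsWorkerRendering(globalScope) {
  return typeof globalScope.OffscreenCanvas === 'function' &&
      typeof globalScope.ImageBitmapRenderingContext === 'function' &&
      typeof globalScope.Worker === 'function';
}
```

In a browser this would be called as `supportsWorkerRendering(self)` before choosing between the worker and main-thread code paths.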
Dependent on #10 in some cases, but after writing this, now I'm not sure.
While some content publishers want full control over the scene, features like shadows and lights carry specific performance costs. Do we want something like an allow-model-lights attribute to import dynamic lights found in a glTF model? Would this enable shadows as well? Should it be disabled, even if set, on mobile?
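The policy sketched in those questions could look something like this (attribute and flag names are assumptions for illustration, not a decided API):

```javascript
// Hypothetical policy helper: dynamic lights from the glTF are imported only
// when the element opts in via an allow-model-lights attribute, and even then
// are skipped on mobile, where the perf cost of lights/shadows is highest.
function shouldImportModelLights({hasAllowAttribute, isMobile}) {
  if (!hasAllowAttribute) return false;
  return !isMobile;
}
```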
Can be resolved via the -p flag for chokidar (polling), but non-ideal.
paulmillr/chokidar#237
I have a 3D model, and when I tried to load it with <xr-model> it renders at the bottom and a portion of the model gets cut off. See this screenshot:
The same model renders nicely in https://gltf-viewer.donmccurdy.com/ which is what I would expect.
I tried a few 3D models from poly.google.com and they all seem to render a bit off in <xr-model>. It would be nice if <xr-model> could render the model without needing to adjust it by hand.
We may be able to check the size of the element every frame instead.
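A minimal sketch of that per-frame check, assuming the element's client size is sampled inside the render loop (the helper name and shape are illustrative):

```javascript
// Compare the element's current client size against the last observed size.
// Returns the new size when it changed, or null when no resize is needed,
// so the render loop only reconfigures the canvas on actual changes.
function detectResize(lastSize, element) {
  const width = element.clientWidth;
  const height = element.clientHeight;
  if (lastSize && lastSize.width === width && lastSize.height === height) {
    return null;
  }
  return {width, height};
}
```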
For release, we need:
How do we want this interaction to work if both values are set on a model?
Also include a browser compatibility support/platform requirements matrix
Use UpdatingElement to save a few bytes: https://github.com/Polymer/lit-element/blob/master/src/lib/updating-element.ts
Meta bug, can break out if not handled here. Taken from current readme:
There is currently no way to tell whether an iOS device has AR Quick Look support. Possibly check for other features added in Safari iOS 12 (like CSS font-display): https://css-tricks.com/font-display-masses/ Are there any better ways?
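One possible detection path (an assumption offered for illustration, not a confirmed answer to this issue): Safari on iOS 12 added support for rel="ar" on anchor elements, which is observable via DOMTokenList.supports. The document is injected so the check can be exercised with a mock:

```javascript
// Hypothetical AR Quick Look detection: an <a rel="ar"> anchor is how AR
// Quick Look is triggered, so supports('ar') on an anchor's relList may be
// a tighter signal than probing unrelated Safari 12 features.
function supportsARQuickLook(doc) {
  const anchor = doc.createElement('a');
  return Boolean(anchor.relList &&
      typeof anchor.relList.supports === 'function' &&
      anchor.relList.supports('ar'));
}
```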
Since there are no USDZ three.js loaders (and seems that it'd be difficult to do), Safari iOS users would either need to load a poster image, or if they load the 3D model content before entering AR, they'd download both glTF and USDZ formats, which are already generally rather large. Not sure if this is solvable at the moment, so we may need to just document the caveats.
With native AR Quick Look, the entire image triggers an intent to the AR Quick Look. Currently in this component implementation, the user must click the AR button. Unclear if we want to change this, as interacting and moving the model could cause an AR Quick Look trigger. Do we want this to appear as native as possible?
The size of the AR Quick Look native button scales to some extent based on the wrapper. We could attempt to mimic this, or leverage the native rendering, possibly with a transparent base64 image. I've played around with this, but someone with an OSX/iOS device should figure out the rules for the AR button size. I did some prelim tests here: https://ar-quick-look.glitch.me/
The list probably includes auto-rotate, orbit-controls, ar, preload (although that list could be trimmed if it reduces scope or complexity). This will help tease out patterns for extensibility.
Use an IntersectionObserver to lazily draw only the models in view.
Has decent browser support: https://caniuse.com/#feat=intersectionobserver
Probably should depend on #10.
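A sketch of the bookkeeping this implies, assuming one observer shared across model elements (the wiring is an assumption; the entry shape matches IntersectionObserverEntry):

```javascript
// Track which model elements are currently visible; the render loop would
// then draw only members of this set. The callback receives entries shaped
// like IntersectionObserverEntry ({target, isIntersecting}).
const visibleModels = new Set();

function onIntersection(entries) {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      visibleModels.add(entry.target);
    } else {
      visibleModels.delete(entry.target);
    }
  }
}

// In a browser:
// const observer = new IntersectionObserver(onIntersection);
// observer.observe(modelElement);
```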
For inline, WebXR, and Magic Leap, we'll need a glTF/glb file. There's a scenario where only the USDZ is provided, with a poster, which is more or less no different from iOS's native AR Quick Look.
This may already work due to the mixin pattern, but something we should confirm, and/or clarify in docs.
Current options on the table include:
This one was tricky earlier, and I was never fully happy with a solution.
Models can be large, and we may want to lazy-load them until an interaction occurs. We can display a poster image in the meantime, but if preload is the default, our out-of-the-box model (<xr-model src="..."></xr-model>) would not preload; and if it does preload, it has no poster to display. We could have some smart options (like never preloading on mobile), but balancing the out-of-the-box default params with the ideal user experience is something we'll need to figure out.
Related is what kind of messaging is provided if the user needs to interact with the element before loading a model? "Click to display model"?
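One way the trade-off above could shake out, as a hedged sketch (the attribute and flag names are invented for illustration, not a decided default):

```javascript
// Hypothetical preload policy: preload when explicitly requested; otherwise
// let a poster cover the wait and defer the download; with no poster at all,
// preload only on desktop, where large downloads are less costly.
function shouldPreload({preloadAttr, hasPoster, isMobile}) {
  if (preloadAttr) return true;
  if (hasPoster) return false;
  return !isMobile;
}
```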
While our vision includes a generic component that supports multiple views (embedded, AR and VR), this first release will support only embedded and AR.
Is there room for a specific embedded-and-AR component, perhaps with a simpler interface? And should we adjust the name to reflect our initial focus, perhaps to ar-model?
Instead, these should be generated as part of a prepublish or prepare NPM script.
Currently several CE lifecycle callbacks have empty implementations. If these aren't needed, we should just remove them.
Our goal is to make it simple, so let's define defaults that look good and are performant. Advanced developers (or users with specific needs) can turn to three.js.
A simple version (recommended by @mrdoob) might be to only use an environment map.
We've been calling this a composer, but we need a simple way to at least:
Even with WebXR flags enabled and ARCore installed, the ARCore bindings do not ship in Chrome release or beta. This means checking the existence of XRSession.prototype.requestHitTest is insufficient, since it will exist whenever the flags are enabled, even in release. That test was originally used because the initial releases of AR support had no flag to distinguish an AR session from a typical VR session, but now there is the temporary session option environmentIntegration that we can use.
requestSession requires a user gesture, but we can call supportsSession({ environmentIntegration: true }) outside of a gesture to test for AR support (if WebXR is detected). We may also need to pass the XRPresentationContext in the outputContext session config key.
Related bug: https://bugs.chromium.org/p/chromium/issues/detail?id=898980
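A sketch of the detection just described, assuming the then-current WebXR shape where supportsSession resolves when the session options are supported and rejects otherwise (the navigator-like object is injected for testing; this is not a confirmed final API):

```javascript
// Hypothetical AR-capability probe: resolve to true only when a WebXR
// implementation exists and accepts {environmentIntegration: true}.
async function detectARSupport(nav) {
  if (!nav.xr || typeof nav.xr.supportsSession !== 'function') return false;
  try {
    await nav.xr.supportsSession({environmentIntegration: true});
    return true;
  } catch (error) {
    return false;
  }
}
```

Because this probe needs no user gesture, it can run at element connection time, gating whether the AR button is shown at all.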
This may be an addon/mixin that adds a new attribute (some version of source-magicleap, magicleap-source, or source-fbx), and abstracts the <ml-model> tag (passing down parameters from the enclosing <xr-model> into an <ml-model> nested in a shadow root).
We'll be using GLTFLoader for this, which handles both, but maybe we should be clear on what we're supporting. Related to dynamic lights (#34), since 2.0 has a different light extension.
We're not confident that our current approach is stable long-term. Is there a better mechanism we can use to discover what capabilities the current device has?
We discussed that having a hosted version of the component on a CDN that could be easily included would be helpful.
There are a few use cases we should support:
Where to host this is TBD. We should include links in the README so this is discoverable.
The content author should decide if this is the correct polyfill, or if one's needed at all depending on which browsers they're targeting.
In previous iterations we offered a background-image attribute that allows setting an equirectangular image to be used as the skybox, as well as an environment map. Is this something we want?
Related to #56 maybe, but perhaps we can consider only allowing rotation on the Y axis, and a limit on zooming (or removing zoom, or using a different gesture due to #56?). When using a provided image, it's possible to zoom out too far and see the skysphere, or to zoom in too close until the near plane kicks in, or to 'get lost', so to speak, with no way of resetting.
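An illustrative clamp for the problems described above (the limits and shape are made-up, not the component's camera API):

```javascript
// Restrict orbiting to rotation about Y (the polar angle stays fixed) and
// keep the camera radius inside a range, so the user can neither back out
// past the skysphere nor push through the near plane.
function clampOrbit({theta, radius}, {minRadius, maxRadius}) {
  return {
    theta,  // free rotation around Y only
    radius: Math.min(maxRadius, Math.max(minRadius, radius)),
  };
}
```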
It's pedantically correct but potentially confusing (including requiring users to know/specify MIME types).
Instead, use a simpler, attribute-based approach. The main glTF resource can remain source, and we can have other optional attributes for source-usdz and potentially source-ml (or source-fbx).
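A sketch of how the attribute-based selection might resolve at runtime (the attribute names come from the proposal above; the platform flags are assumptions):

```javascript
// Pick the platform's preferred format from the element's attributes,
// falling back to the main glTF source when no platform-specific one exists.
function selectSource(attrs, {isIOS, isMagicLeap}) {
  if (isIOS && attrs['source-usdz']) return attrs['source-usdz'];
  if (isMagicLeap && attrs['source-ml']) return attrs['source-ml'];
  return attrs['source'];
}
```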
The current/previous WebXR+AR experience was the most crude thing imaginable: a circle appears on a surface when found and tapping places the model there. No other visuals or indications. Very MVP. Open questions surrounding this, and what we need or can do for an initial release:
In the WebXR+AR codelab, a gif is displayed to users indicating they should "look around" with their device as a pose is found. This can take a few seconds in some environments on some phones.
We use this gif with the caveat that there's some compositing jank with it.
In the Chacmool AR experience, instructions are rendered to describe how to interact.
In the Chacmool AR experience, the reticle becomes the footprint of the model, and displays additional information regarding the dimensions (once placed, see the other Chacmool image here).
Apparently it's difficult to touch the screen and press two buttons to take a screenshot, but you can see some styled touch marks when interacting, and can slide (translate) and rotate the model (via two finger rotate). These interactions are/were non-trivial to get working, probably out of scope for MVP. I think the "pin" button halts the movements.
This seems easy and something we should probably have. The "X" in the upper left in Chacmool example.
Thoughts on removing the ar attribute? Rationale being that this should just come for "free" on platforms that can support it, rather than being something one explicitly has to ask for.
Right now, there are cases where each model having its own GL renderer is more performant than using one renderer (via #50) blitting to 2D canvases. Most cases, actually, but harder to test without a way to flip between rendering strategies.
While working on environment maps, each model will need access to whatever renderer it's using to create the appropriate maps, which prompted this issue.