
model-viewer's Issues

Create a generalized notion of 'views'

It should be possible to create a new type of view (e.g., VR, off-screen rendering) as mixins.

Consider renaming 'views' to 'modes' - views is overloaded, and we already refer to these as modes in some of the docs.

Note that it may make sense to have tasks or new issues to create those mixins when we pick this work up.

Add ImageBitmapRendering + OffscreenCanvas (on main thread) rendering mode

While we could use an ImageBitmapRenderingContext on the main thread to blit textures more efficiently than a Canvas2DRenderingContext, the ImageBitmap rendering and OffscreenCanvas APIs are related: if one is supported, both should be available, meaning we can always do rendering in a worker for this codepath. Alternatively, we could call transferControlToOffscreen for each model element, giving us multiple WebGL contexts in a worker (or multiple workers), but it's unclear whether that's more performant than a single context in a worker sending out textures, or whether the bitmaps created there can be cropped.
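
A minimal sketch of the main-thread half of this codepath, assuming a hypothetical `render-worker.js` that renders into an OffscreenCanvas and posts ImageBitmaps back:

```js
// Main thread: blit bitmaps produced in a worker; no 2D canvas copy needed.
// 'render-worker.js' is a hypothetical worker script, not part of this repo.
const canvas = document.querySelector('canvas');
const bitmapContext = canvas.getContext('bitmaprenderer');
const worker = new Worker('render-worker.js');

worker.onmessage = (event) => {
  // transferFromImageBitmap consumes the bitmap; ownership moves to the canvas.
  bitmapContext.transferFromImageBitmap(event.data);
};
```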

More info: #10 (comment)

Dependent on #10 in some cases, but after writing this, now I'm not sure.

Allow users to bring their own lights

While some content publishers want full control over the scene, features like shadows and dynamic lights carry specific performance costs. Do we want something like an allow-model-lights attribute to import dynamic lights found in a glTF model? Would this enable shadows as well? Should this be disabled on mobile, even if set?
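
A hedged sketch of what honoring such an attribute could look like in three.js terms (the attribute name and opt-in semantics are proposals from this issue, not an implemented API):

```js
// Disable punctual lights in a loaded glTF scene unless the author opted in.
function applyLightPolicy(element, gltfScene) {
  if (element.hasAttribute('allow-model-lights')) return;
  gltfScene.traverse((node) => {
    if (node.isLight) {
      node.visible = false; // leave the node in place, just disable it
    }
  });
}
```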

The model is not centered correctly

I have a 3D model, and when I try to load it with <xr-model> it renders at the bottom and a portion of the model gets cut off. See this screenshot:

[Screenshot (2018-10-26, 10:45 AM): the model rendered in <xr-model>, cut off at the bottom]

The same model renders nicely in https://gltf-viewer.donmccurdy.com/ which is what I would expect.

[Screenshot (2018-10-26, 10:42 AM): the same model rendered correctly in the glTF viewer]

I tried a few 3D models from poly.google.com and they all seem to render a bit off in <xr-model>. It would be nice if <xr-model> could render the model without needing to adjust it by hand.
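
For reference, one common way viewers handle this in three.js is to recenter the loaded scene on its bounding box before framing the camera (a sketch, not necessarily how <xr-model> should do it):

```js
import { Box3, Vector3 } from 'three';

// Recenter a loaded model so arbitrary glTF origins still frame correctly.
function recenter(model) {
  const box = new Box3().setFromObject(model);
  const center = box.getCenter(new Vector3());
  model.position.sub(center); // move the model's center to the world origin
}
```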

Documentation/Website

For release, we need:

  • Docs (can be README, or on website) -- any patterns/tools for documenting web component APIs?
    • APIs, caveats, the polyfill story, compatibility matrix, best practices
    • #8
  • Ensure all example pages work, and ideally some nice way of navigating them (e.g. three.js demos)
  • Maybe a gif for the README

`auto-rotate` and `controls` at odds

How do we want this interaction to work if both values are set on a model?

  • Does the model continue spinning even after interacting with orbit controls?
  • Can only one exist at the same time?
  • Does the model auto-rotate until interacting with it, then orbit controls take over, and then after inactivity, go back to rotating? Do we switch back to the lazy susan camera view, or start from where the user left off via the controls?
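
One possible resolution to the last question, sketched against three.js OrbitControls' 'start'/'end' events (the idle timeout value is an assumption, and this variant resumes from where the user left off rather than resetting the camera):

```js
const IDLE_MS = 3000; // assumed inactivity window before auto-rotate resumes
let idleTimer = null;

controls.addEventListener('start', () => {
  // User grabbed the controls: stop spinning immediately.
  controls.autoRotate = false;
  clearTimeout(idleTimer);
});

controls.addEventListener('end', () => {
  // After a period of inactivity, resume from wherever the user left off.
  idleTimer = setTimeout(() => { controls.autoRotate = true; }, IDLE_MS);
});
```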

We should use lit-element's `UpdatingElement` as the base class
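
A sketch of what that base class would buy us: reactive attribute/property handling without lit-html templating, which a canvas-backed element doesn't need (`loadModel` is a hypothetical method for illustration):

```js
import { UpdatingElement } from 'lit-element';

class XRModelElement extends UpdatingElement {
  static get properties() {
    return { src: { type: String } };
  }

  update(changedProperties) {
    super.update(changedProperties);
    if (changedProperties.has('src')) {
      this.loadModel(this.src); // hypothetical: (re)load the glTF resource
    }
  }
}
```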

iOS Quick Look - make sure that it works as expected, and that the caveats outlined here are documented in the README

Meta bug, can break out if not handled here. Taken from current readme:

  • There is currently no way to tell whether an iOS device has AR Quick Look support. We could possibly check for other features added in Safari iOS 12 (like CSS font-display): https://css-tricks.com/font-display-masses/ Are there any better ways? (See the detection sketch after this list.)

  • Since there are no USDZ three.js loaders (and it seems one would be difficult to write), Safari iOS users would either need to load a poster image or, if they load the 3D model content before entering AR, download both the glTF and USDZ formats, which are already generally rather large. Not sure if this is solvable at the moment, so we may need to just document the caveats.

  • With native AR Quick Look, tapping anywhere on the image triggers the AR Quick Look intent. Currently in this component's implementation, the user must tap the AR button. It's unclear whether we want to change this, since interacting with and moving the model could accidentally trigger AR Quick Look. Do we want this to appear as native as possible?

  • The size of the native AR Quick Look button scales to some extent based on its wrapper. We could attempt to mimic this, or possibly leverage the native rendering with a transparent base64 image. I've played around with this, but someone with an OSX/iOS device should figure out the rules for the AR button size. I did some preliminary tests here: https://ar-quick-look.glitch.me/
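
On the detection question in the first bullet, a hedged sketch: relList.supports('ar') is the anchor-based check WebKit exposes for <a rel="ar"> links, with the Safari iOS 12 feature test proposed above as a possible fallback heuristic:

```js
// Detect AR Quick Look support via the rel="ar" anchor capability.
function supportsQuickLook() {
  const anchor = document.createElement('a');
  return Boolean(anchor.relList &&
                 anchor.relList.supports &&
                 anchor.relList.supports('ar'));
}
```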

Require 'src' attribute

For inline, WebXR, and Magic Leap, we'll need a glTF/glb file. There's also a scenario where only the USDZ is provided, along with a poster, which is more or less no different from iOS's native AR Quick Look.

This may already work due to the mixin pattern, but something we should confirm, and/or clarify in docs.

Decide on licensing model to use

Current options on the table include:

  • Google licensing model (currently in place)
  • Polymer licensing model (BSD 3-Clause, etc.)

Revisit poster & preload behavior

This one was tricky earlier, and I was never fully happy with any solution.

Models can be large, and we may want to lazy-load them until an interaction occurs. We can display a poster image in the meantime, but if preloading is the default, our out-of-the-box markup (<xr-model src="..."></xr-model>) would not preload; or, if it does preload, it has no poster to display. We could have some smart options (like never preloading on mobile), but balancing the out-of-the-box default params against the ideal user experience is something we'll need to figure out.

Related is what kind of messaging is provided if the user needs to interact with the element before loading a model? "Click to display model"?
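
A sketch of one possible default policy (the attribute names and the `loadModel` helper are assumptions for illustration, not a decided API):

```js
// Preload unless a poster is present; with a poster, defer until interaction.
function initLoading(element) {
  const src = element.getAttribute('src');
  const hasPoster = element.hasAttribute('poster');

  if (element.hasAttribute('preload') || !hasPoster) {
    loadModel(src); // hypothetical loader
  } else {
    // Any "Click to display model" messaging would be shown here.
    element.addEventListener('click', () => loadModel(src), { once: true });
  }
}
```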

Rename xr-model to model-viewer throughout project

While our vision includes a generic component that supports multiple views (embedded, AR and VR), this first release will support only embedded and AR.

Is there room for a specific embedded and AR component, perhaps with a simpler interface, and should we adjust the name to reflect our initial focus - perhaps to ar-model?

Simple editor/composer

We've been calling this a composer, but we need a simple way to at least:

  • Import multiple models, and export as a single model
  • Perform simple movements (changing the origin, scaling)
  • Preview how the model will appear embedded or in AR

Gate WebXR support on supportsSession

Even with WebXR flags enabled and ARCore installed, the ARCore bindings do not ship in Chrome release or beta. This means checking for the existence of XRSession.prototype.requestHitTest is insufficient, since it will exist whenever the flags are enabled, even in release. That test was originally used because the initial releases of AR support had no flags to distinguish an AR session from a typical VR session, but now there is a temporary session option, environmentIntegration, that we can use.

requestSession requires a user gesture, but we can call supportsSession({ environmentIntegration: true }) outside of a gesture to test for AR support (if WebXR is detected). We may also need to pass the XRPresentationContext in the outputContext session config key.
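
A sketch of the proposed gate. The WebXR API was in flux at the time (e.g., whether supportsSession hangs off navigator.xr or an XRDevice), so treat the exact call shape as an assumption taken from this issue:

```js
// Probe for AR support without a user gesture; requestSession still needs one.
async function hasARSupport() {
  if (!navigator.xr) return false;
  try {
    await navigator.xr.supportsSession({ environmentIntegration: true });
    return true;
  } catch (error) {
    return false; // VR-only, or no XR device at all
  }
}
```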

Related bug: https://bugs.chromium.org/p/chromium/issues/detail?id=898980

Target Magic Leap's ml-model

This may be an addon/mixin that adds a new attribute (some version of source-magicleap, magicleap-source, or source-fbx), and abstracts the <ml-model> tag (passing down parameters from the enclosing <xr-model> into an <ml-model> nested into a shadow root).
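
A rough sketch of the mixin's work (the attribute name is one of the candidates above and is not settled; <ml-model> is the Helio browser's element, not ours):

```js
// Mirror the enclosing element's source onto a nested <ml-model>.
function attachMLModel(element) {
  const mlSource = element.getAttribute('source-magicleap'); // name TBD
  if (!mlSource) return;

  const mlModel = document.createElement('ml-model');
  mlModel.setAttribute('src', mlSource);
  mlModel.setAttribute('style', 'width: 100%; height: 100%;');
  element.shadowRoot.appendChild(mlModel);
}
```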

glTF spec support

We'll be using GLTFLoader for this, which handles both, but maybe we should be clear about what we're supporting. Related to dynamic lights (#34), since glTF 2.0 uses a different lights extension.

CDN release, provided builds

We discussed that having a hosted version of the component on a CDN that could be easily included would be helpful.

There are a few use cases we should support:

  • A fully bundled release (including all polyfills and three.js)
  • An unbundled release (for users who already have three.js)
  • A specifically versioned URL for stability
  • A 'latest stable' URL to always get the latest & greatest

Where to host this is TBD. We should include links in the README so this is discoverable.

Externalize dependency on polyfills

The content author should decide if this is the correct polyfill, or if one is needed at all, depending on which browsers they're targeting.

'background-image' feature?

We have in previous iterations offered a background-image attribute that allows setting an equirectangular image to be used as the skybox, as well as an environment map. Is this something we want?
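
In three.js terms, the attribute boils down to something like the following (a sketch assuming a recent three.js; scene.environment landed in later releases than where this project started):

```js
import { TextureLoader, EquirectangularReflectionMapping } from 'three';

// One equirectangular texture doubles as skybox and environment map.
function applyBackgroundImage(scene, url) {
  const texture = new TextureLoader().load(url);
  texture.mapping = EquirectangularReflectionMapping;
  scene.background = texture;   // skybox
  scene.environment = texture;  // PBR environment map
}
```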

Limit OrbitControls?

Related to #56, maybe, but perhaps we can consider allowing rotation only around the Y axis, plus a limit on zooming (or removing it, or using a different gesture due to #56?). When using a provided image, it's possible to zoom out too far and see the skysphere, or to zoom in so close that the near plane kicks in, or to 'get lost', so to speak, with no way of resetting.
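
The knobs for this already exist on OrbitControls; a sketch with placeholder values (the actual limits would need tuning):

```js
// Lock orbiting to rotation around the Y axis only.
controls.minPolarAngle = Math.PI / 2;
controls.maxPolarAngle = Math.PI / 2;

// Clamp dolly/zoom so users can't hit the near plane or see past the skysphere.
controls.minDistance = 1;  // placeholder value
controls.maxDistance = 10; // placeholder value
```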

Remove <source> configuration API

It's pedantically correct but potentially confusing, including requiring users to know/specify MIME types.

Instead, use a simpler, attribute-based approach. The main glTF resource can remain source, and we can have other optional attributes for source-usdz and potentially source-ml (or source-fbx).

WebXR+AR experience design

The current/previous WebXR+AR experience was the crudest thing imaginable: a circle appears on a surface when one is found, and tapping places the model there. No other visuals or indications. Very MVP. Open questions surround this, and what we need or can do for an initial release:

Stabilizing notification

In the WebXR+AR codelab, a gif is displayed to users indicating they should "look around" with their device as a pose is found. This can take a few seconds in some environments on some phones.

We use this gif with the caveat that there's some compositing jank with it.

Instructions

In the Chacmool AR experience, instructions are rendered to describe how to interact.

Footprint Reticle

In the Chacmool AR experience, the reticle becomes the footprint of the model, and displays additional information regarding the dimensions (once placed, see the other Chacmool image here).

Interaction Controls

Apparently it's difficult to touch the screen and press two buttons to take a screenshot, but you can see some styled touch marks when interacting, and you can slide (translate) and rotate the model (via a two-finger rotation). These interactions are/were non-trivial to get working and are probably out of scope for the MVP. I think the "pin" button halts the movements.

Exit controls

This seems easy and something we should probably have. The "X" in the upper left in Chacmool example.

Refactor renderer to handle different strategies

Right now, there are cases where each model having its own GL renderer is more performant than using one renderer (via #50) blitting to 2D canvases. Most cases, actually, but it's hard to test without a way to flip between rendering strategies.

While working on environment maps, each model will need access to whatever renderer it's using in order to create the appropriate maps, which is what prompted this issue.
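
A hedged sketch of what a pluggable strategy could look like, so we can flip between approaches for testing (all names here are assumptions, not existing code):

```js
import { WebGLRenderer } from 'three';

// Each model asks its strategy for a renderer instead of assuming a singleton.
class PerModelStrategy {
  rendererFor(model) {
    model.renderer = model.renderer || new WebGLRenderer();
    return model.renderer;
  }
}

class SharedStrategy {
  constructor() {
    this.renderer = new WebGLRenderer();
  }
  rendererFor(model) {
    return this.renderer; // output gets blitted to the model's 2D canvas
  }
}
```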
