bjornbytes / lovr
Lua Virtual Reality Framework
Home Page: https://lovr.org
License: MIT License
Having a general Buffer object in the graphics module is really powerful since it maps directly onto what the graphics API is doing and can be used to work more effectively with several other graphics objects.
I am using the Sept 11 master. I built two versions: one using -DLOVR_OCULUS_PATH (and thus the OculusVR SDK), the other defaulting to OpenVR. I run the same test program in both of them.
They are 180 degrees off from each other. If I face my monitor and run the test program, in the OpenVR mode the scene renders behind me, and if I do this in OculusVR mode the scene renders in front of me. Otherwise everything works correctly in both modes.
Expected behavior: The scene should appear identical in both OpenVR and OculusVR.
There was talk in Slack about the value of exposing the standard shaders to Lua, or having some way to easily insert commonly used chunks of shader functionality.
If you clone the OpenVR repo as it currently stands, you are unable to compile LÖVR from source. It appears LÖVR is targeting v1.0.3 (and I have confirmed the build works as expected when compiling against this release).
I will submit a PR that updates the README to include this requirement, but feel free to ignore and directly support the newest versions :)
Note: This has been tested only on Mac OS X.
I loaded a glTF from Adobe Animate into LÖVR. It renders fine on https://gltf-viewer.donmccurdy.com but is barely visible in LÖVR using the Animation sample with Scene_1.glb. What is wrong with the code?
function lovr.load()
  model = lovr.graphics.newModel('Scene_1.glb')
  animator = lovr.graphics.newAnimator(model)
  animation = animator:getAnimationNames()[1]
  assert(animation, 'No animation found!')
  animator:play(animation)
  animator:setLooping(animation, true)
  model:setAnimator(animator)
  shader = lovr.graphics.newShader([[
    vec4 position(mat4 projection, mat4 transform, vec4 vertex) {
      return projection * transform * lovrPoseMatrix * vertex;
    }
  ]], nil)
end

function lovr.update(dt)
  animator:update(dt)
end

function lovr.draw()
  lovr.graphics.setShader(shader)
  model:draw(0, 0, -4, 0.2)
  lovr.graphics.setShader()
end
I felt like it was worth writing these up on GitHub as opposed to leaving the thoughts in the eternal history of the Slack channel.
The general idea listed here is to turn LOVR into an opinionated engine instead of a loose framework. The purpose of this would be to allow developers to spend virtually (heh) all of their time in VR instead of constantly cycling their headsets on and off of their faces.
I think that the pros of having all of these cool features generally outweigh the cons that I was able to think up. Being told how to design your code by the engine might seem rude, but it is definitely the trend we're in. The only people this might turn away would be people who generally aren't willing to experiment and learn in the first place. "I do things my way, and only my way" is probably not the kind of attitude you want to foster in the community (I am aware of the irony here).
In the Lua bindings (the src/api folder), the luaL_check* and luaL_opt* families of functions are generally used to check argument types. However, if they are used with a negative argument index (like when reading a table field), you'll get a bad error message that looks like this:
Bad argument #-1 to Object:function (expected number, got string)
We should replace these with custom error messages using lovrAssert, or use luaL_typerror with a better stack index.
Need a way to get the 'animated bounding box' of a Model. Either Model:getAABB or Animator:getAABB could work. Performance is definitely a concern for huge models, so there are other techniques like threads/heuristics/lower-poly model approximations that could be used, but those could be explored in Lua-land.
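A Lua-land sketch of the cheap-heuristic route mentioned above: sample a handful of animated points (joint positions would be a natural choice) and fold them into a box. The `computeAABB` name and the point/box table layouts are invented for illustration; this is not LÖVR API:

```lua
-- Fold a set of points (e.g. joint positions sampled from the current
-- animation pose) into an axis-aligned bounding box.
local function computeAABB(points)
  local box = {
    minx = math.huge, miny = math.huge, minz = math.huge,
    maxx = -math.huge, maxy = -math.huge, maxz = -math.huge
  }
  for _, p in ipairs(points) do
    box.minx, box.maxx = math.min(box.minx, p[1]), math.max(box.maxx, p[1])
    box.miny, box.maxy = math.min(box.miny, p[2]), math.max(box.maxy, p[2])
    box.minz, box.maxz = math.min(box.minz, p[3]), math.max(box.maxz, p[3])
  end
  return box
end
```

For huge models, this could run over a lower-poly proxy mesh or on a thread, as the issue suggests.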
Here is a test file I have:
local shader = lovr.graphics.newShader([[
  out vec3 lightDirection;
  out vec3 normalDirection;

  vec3 lightPosition = vec3(0, 3, 3);

  vec4 position(mat4 projection, mat4 transform, vec4 vertex) {
    vec4 vVertex = transform * vec4(lovrPosition, 1.);
    vec4 vLight = lovrView * vec4(lightPosition, 1.);
    lightDirection = normalize(vec3(vLight - vVertex));
    normalDirection = normalize(lovrNormalMatrix * lovrNormal);
    return projection * transform * vertex;
  }
]], [[
  in vec3 lightDirection;
  in vec3 normalDirection;

  vec3 cAmbient = vec3(.25);
  vec3 cDiffuse = vec3(.75);
  vec3 cSpecular = vec3(.35);

  vec4 color(vec4 graphicsColor, sampler2D image, vec2 uv) {
    float diffuse = max(dot(normalDirection, lightDirection), 0.);
    float specular = 0.;
    if (diffuse > 0.) {
      vec3 r = reflect(lightDirection, normalDirection);
      vec3 viewDirection = normalize(-vec3(gl_FragCoord));
      float specularAngle = max(dot(r, viewDirection), 0.);
      specular = pow(specularAngle, 5.);
    }
    vec3 cFinal = vec3(diffuse) * cDiffuse + vec3(specular) * cSpecular;
    cFinal = clamp(cFinal, cAmbient, vec3(1.));
    return vec4(cFinal, 1.) * graphicsColor * texture(image, uv);
  }
]])
local tim = 0

function lovr.load()
  lovr.graphics.setBackgroundColor(.8, .8, .8)
  lovr.headset.setClipDistance(0.1, 3000)
  --error("ok")
end

function lovr.update(dt)
  --print(dt)
  tim = tim + dt
  for i, controller in ipairs(lovr.headset.getControllers()) do
    if controller:isDown("touchpad") then
      tim = 0
    end
  end
end

function lovr.draw()
  local gs = 30
  local far = 1 * gs
  local grid = 2 * gs
  for y = -grid, grid, gs do
    for z = -grid, grid, gs do
      lovr.graphics.line(-far, y, z, far, y, z)
      lovr.graphics.line(y, -far, z, y, far, z)
      lovr.graphics.line(y, z, -far, y, z, far)
    end
  end

  lovr.graphics.setShader(shader)
  local count = 30
  for i = 1, count do
    local stim = (tim * i) / count
    local stim1 = stim + 1
    lovr.graphics.translate(0, stim, -stim)
    lovr.graphics.rotate(stim, 0, 1, 0.1)
    lovr.graphics.scale(stim1, stim1, stim1)
    lovr.graphics.cube('fill', 0, 0, 0, .25)
  end
end
The "local shader =" value is taken exactly from a file shader.lua that I got off the lovr.org docs at some point. I use this file in several of my programs.
I build two copies of lovr: lovr-march is built from commit 58e59d9 (Mar 3 2018), lovr-sept from 48dcb50 (Sep 11 2018). In the test program, many cubes spiral around the space, and you can click the touchpad to restart the animation. I run my test program, turn around 180 degrees, and look "up and to the left" (the upper southeast corner). Using lovr-march, this looks fine. Using lovr-sept, the cube shading looks "wrong" and the stereoscopy is broken.
Expected behavior: Because the shader is not dependent on stereo instancing, lovr-march and lovr-sept should look the same. The fact they are different suggests some sort of bug.
Hey, the Get Started and Documentation pages can't be accessed. Did their server perhaps crash?
A module for development utilities that is (by default) only loaded in non-fused mode. Contains things like:
Add support for graphics stenciling.
Currently, to rebuild boot.lua (and other embedded resources), you have to manually cd into a directory and run xxd. It would be nice if CMake did this. We could probably have CMake invoke the xxd command (if it's available).
I also found this SO answer that suggests it could maybe be done from CMake directly without invoking xxd, which would make it more cross-platform.
Following the compilation instructions for Mac OS X, I run into the following cmake error:
-- Checking for one of the modules 'openal'
CMake Error at /usr/local/Cellar/cmake/3.9.1/share/cmake/Modules/FindPkgConfig.cmake:640 (message):
  None of the required 'openal' found
Call Stack (most recent call first):
  CMakeLists.txt:152 (pkg_search_module)
Brew has installed openal-soft version 1.18.1.
I've started on some fake headset/controller support. It uses mouselook and WASD to move about.
There's a fake controller, locked to the head position/orientation, with left mouse button acting as trigger.
I'm working in this branch: https://github.com/bcampbell/lovr/tree/fake-headset
Not quite ready for a pull request, but I'm pleasantly surprised how little I've had to change.
The changes:
- lovrHeadset functions in headset/fake.c, analogous to the webvr/openvr split.
- Changes to boot.lua.
- Changes in src/graphics, and a special case in boot.lua to only initialise the headset after window creation. (ugh)

TODO:
- Decide what lives in src/headset, or in api/headset.c.
- Put boot.lua back how it was.

It would be nice to extend math objects (vec3, mat4, quat) with custom functionality. That way, if LÖVR doesn't provide something you need, you can add it yourself and have it work nicely. I have four ideas for how to do this and I'm not sure which one is the best, so I'm deferring here until I have enough information or desperation to make a decision:
1. The __index key of the metatable for math objects could be the value of lovr.math.vec3, and then we give it its own metatable with a __call key. This works and is sort of the ideal API, since you can just write lovr.math.vec3.customMethod = function() end.
2. A lovr.math.mixin function where you can pass in all your custom functions and they'll be "installed" into the math objects.
3. A lovr.conf setting that lets you mix in your own math functions.

I had a look through the API and I can't tell if the appropriate data (bones, weights, etc.) is available from loaded Models, and there's no Data object like I use with LÖVE for IQM, so I can't use this without making it build tables instead.
Also, is it possible to set all the matrix values manually in a Transform object? It looks like I'd need to do this when sending matrices to shaders for things like shadow mapping... which also needs support for render targets, which appear to be absent.
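Going back to the math-object extension ideas above, the first one (a shared metatable whose __index is the constructor table itself) can be sketched in plain Lua. This `vec3` is a stand-in, not lovr.math.vec3:

```lua
-- 'vec3' is a constructor table; instances use it as their __index, so
-- assigning a function to vec3 instantly adds a method to every vector.
local vec3 = {}
local instanceMeta = { __index = vec3 } -- shared by all instances

setmetatable(vec3, {
  __call = function(_, x, y, z)
    return setmetatable({ x = x, y = y, z = z }, instanceMeta)
  end
})

-- A "built-in" method:
function vec3.length(v)
  return math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z)
end

-- A user extension, exactly as the issue describes:
vec3.customMethod = function(v)
  return math.abs(v.x) + math.abs(v.y) + math.abs(v.z)
end

local v = vec3(1, -2, 2)
assert(v:length() == 3 and v:customMethod() == 5)
```

Sharing one instance metatable keeps per-vector allocation down, which matters for a math library.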
It would be nice to add OSVR support to LÖVR, since a lot of devices support it.
The ClientKit is a good starting point.
This is an alternative way of doing single pass stereo rendering that's particularly useful for Android and Vulkan. It shouldn't require very many changes to the way the renderer is structured, but will involve some new Canvas code and OpenGL extensions.
I had many issues while compiling lovr on Linux Mint (an Ubuntu derivative), most of them related to older system headers/libs being used instead of the ones supplied in the deps directory.
In the end I managed to stumble through it, so here's a list of the issues I had, with my solution/workaround for each:
/home/user/lovr/deps/msdfgen/core/msdfgen-c.cpp: In function ‘void msGenerateMSDF(uint8_t*, int, int, msShape*, float, float, float, float, float)’:
/home/user/lovr/deps/msdfgen/core/msdfgen-c.cpp:72:29: error: ‘isfinite’ was not declared in this scope
data[i++] = isfinite(r) ? clamp(int(r * 0x100), 0xff) : 0xff;
I replaced all isfinite instances with std::isfinite in msdfgen-c.cpp.
/home/user/lovr/src/api/graphics.c: In function ‘luax_readvertices’:
/home/user/lovr/src/api/graphics.c:65:5: error: ‘for’ loop initial declarations are only allowed in C99 or C11 mode
for (int i = 1; i <= count; i++) {
Added CMAKE_C_FLAGS:STRING=-std=c99 to CMakeCache.txt.
/home/user/lovr/deps/enet/unix.c: In function ‘enet_address_set_host’:
/home/user/lovr/deps/enet/unix.c:121:21: error: storage size of ‘hints’ isn’t known
struct addrinfo hints, * resultList = NULL, * result = NULL;
Added #define _XOPEN_SOURCE 600 to the top of that file.
/home/user/lovr/src/audio/audio.c: In function ‘lovrAudioInit’:
/home/user/lovr/src/audio/audio.c:20:10: error: unknown type name ‘LPALCRESETDEVICESOFT’
static LPALCRESETDEVICESOFT alcResetDeviceSOFT;
I had a different version of libopenal-dev from the package manager, so I had to compile a local version in deps/openal-soft and point OPENAL_INCLUDE_DIRS:INTERNAL in CMakeCache.txt at the correct dir.
I had to do the same thing for assimp, glfw, and physfs. For glfw, the static lib didn't work; I had to build the dynamic lib. I also needed to include the -lX11 -ldl -lXxf86vm linker flags.
I'm out of my depth with CMake so I'm not able to submit a decent pull request.
Right now lovr.graphics.present is in lovr.draw, but that's what renders to the headset, not the window. The stuff rendered to the window happens later, after the present. Does this mean that the mirror window is one frame behind?
I've been bitten several times over the past few days by vector mutations leaking into places I didn't expect. It seems that some operations yield new temporary vectors, while others just mutate the first argument and return it for chaining. It is not clear which operations do this, so I've found myself wrapping lots of things in vec3 out of paranoia.
While I understand that a mutation-based API might be good for performance reasons, I don't think it's a great default behavior, since it leads to sneaky issues that are hard to debug. I also don't think it's really necessary for most things, since you have this temporary vector system that reduces allocation and GC load.
At the moment, my preferred API design would be: mutation-free operations by default, with explicitly mutating variants like :mutCross, :mutNormalize, etc. Or something along those lines. This makes it easy to opt into mutation for the cases where it matters, like with permanent vectors that you are frequently changing.

However, I understand that this would be a big change that is liable to break, well, everything. I just want to bring it up for discussion and get your thoughts on it.
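For discussion's sake, here's a tiny plain-Lua sketch of that split: pure operations allocate and return a fresh vector, while the mut* variants are the only ones allowed to modify their receiver. The names and layout are hypothetical, not LÖVR API:

```lua
local Vec = {}
Vec.__index = Vec

local function vec3(x, y, z)
  return setmetatable({ x = x, y = y, z = z }, Vec)
end

-- Pure: leaves both inputs untouched, returns a new vector.
function Vec:add(other)
  return vec3(self.x + other.x, self.y + other.y, self.z + other.z)
end

-- Mutating: opt-in, modifies self and returns it for chaining.
function Vec:mutAdd(other)
  self.x, self.y, self.z = self.x + other.x, self.y + other.y, self.z + other.z
  return self
end

local a, b = vec3(1, 0, 0), vec3(0, 1, 0)
local c = a:add(b)      -- a is unchanged
a:mutAdd(b)             -- a is now (1, 1, 0)
```

The mut prefix makes the side effect visible at every call site, which is the debuggability argument above.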
A fresh download of the mac lovr.app fails with a libphysfs.dylib error. The issue seems to be that libphysfs.dylib is referenced by both @rpath/libphysfs.dylib and @rpath/libphysfs.1.dylib. A quick hack to fix it was to duplicate the libphysfs.dylib to libphysfs.1.dylib and then set the @rpath to the new name.
This person saw a failure with the Break demo on Pixel 2 + Daydream + VR Chrome:
https://twitter.com/aaryte/status/1195809179841134592
The console contained:
The error was: lovr.js:formatted:1100 PANIC: unprotected error in call to Lua API (Temporary vector space exhausted. Try using lovr.math.drain to drain the vector pool periodically.)
No Lua stack trace. The demo does do a fair bit of math with temporaries, but why does it only fail on this one specific platform?
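For anyone unfamiliar with the mechanism behind that error, here's a plain-Lua caricature of how a fixed temporary-vector pool behaves (pool size and names invented): temporaries are handed out from a fixed region, a drain-style call reclaims it, and if the drain point is never reached the pool overflows.

```lua
local POOL_SIZE = 4
local pool, cursor = {}, 0

-- Hand out a temporary vector from the pool, reusing table slots.
local function tempVec3(x, y, z)
  assert(cursor < POOL_SIZE, 'Temporary vector space exhausted')
  cursor = cursor + 1
  pool[cursor] = pool[cursor] or {}
  local v = pool[cursor]
  v.x, v.y, v.z = x, y, z
  return v
end

-- Reset the pool; normally called once per frame.
local function drain()
  cursor = 0
end
```

If a platform's event loop keeps Lua running without ever reaching the per-frame drain (which is one guess at what VR Chrome is doing), the pool fills up exactly like the panic describes.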
I've found the lack of a math library to be really painful. It seems that > 90% of projects end up bringing in maf or cpml, so it makes sense to just provide one directly. Thoughts:
- Fields like v.x should be directly accessible. Function calls like v:setX() would be pretty heavy and annoying to work with.

OpenVR and Oculus APIs both have stencil masks available, and I'm told they can save around 10% of pixel fill. Seems valuable for any canvas that will be used for eye buffers.
If lovr.filesystem got support for reading and writing archives, then it would be possible for the lovr CLI to support a --pack option that packages your game into a .lovr/.zip for you. This could go even further and create a full executable (to the best of its ability on the current platform). It should be possible to implement this in boot.lua just using stuff that LÖVR provides.
I think the benefits of this are immense, because for every project I work on, a huge sticking point at the end is trying to figure out all those command line incantations I need to perform to finally package up the project.
One thing to think about is what archive formats to support. I think zip is a requirement, even though it sucks because it's the most complicated. I would want to use miniz for this. Other possibilities include tar and asar, which are simpler to extract things from.
Finally, having more archive functionality in lovr means that using physfs starts to make less sense, since I think one of its main benefits is its ability to mount archives to create a virtual filesystem. It would be possible to write this virtual filesystem stuff in lovr directly (rxi/lovedos shows that it can be made very simple). From there, I think it would be possible to remove physfs if the platform layer had some filesystem operations added to it.
Currently the API for 3D models doesn't expose very much. It would be great to add more functionality here, but I'm interested in collecting thoughts on what people actually want and collaborating more on it.
Some of the different areas that can be exposed:
ModelData:getVertexData
ModelData:getTriangleCount
ModelData:getTriangle
ModelData:getNodeCount
ModelData:getNodeName
ModelData:getLocalNodeTransform
ModelData:getGlobalNodeTransform
ModelData:getNodeParent
ModelData:getNodeComponentCount
ModelData:getNodeComponent
ModelData:getAnimationCount
An Animation object?
KeyframeData with a getPointer hook for FFI?
ModelData:getMaterialCount
ModelData:getMetalness
ModelData:getRoughness
ModelData:getDiffuseColor
ModelData:getEmissiveColor
ModelData:getDiffuseTexture
ModelData:getEmissiveTexture
ModelData:getMetalnessTexture
ModelData:getRoughnessTexture
ModelData:getOcclusionTexture
ModelData:getNormalTexture
LÖVR already supports streaming for audio Sources. I would like to be able to get the samples of an audio source, which I think would require support for static audio sources. I'm basing this off of the LÖVE audio Source constructor that accepts 'static', and the SoundData documentation with SoundData:getSample(i).
Would it be possible to add support for mesh instancing?
This would be beneficial for scenes with a lot of geometry that is the same.
Hey,
just wanted to try the latest version, but I get the following error:
In the report we get the following details:
Termination Reason: DYLD, [0x1] Library missing
Application Specific Information:
dyld: launch, loading dependent libraries
Dyld Error Message:
Library not loaded: /Users/*/Documents/*/libglfw.3.dylib
Referenced from: /Applications/LÖVR.app/Contents/MacOS/lovr
Reason: image not found
I'm running MacOS High Sierra.
Add buttons to the error screen to make it suck less:
These could be selected with the mouse, controllers (either laser pointer or line of sight), or the headset view vector (with a timer).
What does "view docs" do? If we can determine that a LÖVR function threw an error internally from an assert, and we can determine the docs key for that function, we can link to the specific docs page for that function, which I think is what people usually do in this situation anyway.
Here's how we can get the docs key:
- The debug library can give us the name of the function that raised the error, e.g. getWidth if you were inside Font:getWidth. That's half the battle!
- To get the Font: part needed to link to https://lovr.org/docs/Font:getWidth, we can give every LÖVR function an upvalue that contains its "parent". For Font:getWidth we would specify a parent of "Font", and for lovr.graphics.cube it would be "graphics".
- luax_registertype already knows the name of the thing it's registering functions for, and it just uses luaL_register to register functions, which could be easily modified to support a string to set as an upvalue for all the functions.
- The upvalue could be named something like __lovrparent_ to prevent collisions with functions that have actual upvalues.

Note that this issue intentionally does not include some other important work to improve error messages.
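Here's a plain-Lua sketch of the upvalue scheme (withParent and getParent are illustrative names, not LÖVR internals; in C this would be lua_pushcclosure with an upvalue):

```lua
-- Wrap a function so it carries its "parent" name in an upvalue.
local function withParent(parent, fn)
  local __lovrparent = parent
  return function(...)
    local _ = __lovrparent -- reference it so it's captured as an upvalue
    return fn(...)
  end
end

-- Dig the parent name back out with the debug library, e.g. at error time.
local function getParent(fn)
  for i = 1, math.huge do
    local name, value = debug.getupvalue(fn, i)
    if not name then return nil end
    if name == '__lovrparent' then return value end
  end
end

-- Registering Font:getWidth would look something like:
local getWidth = withParent('Font', function(self, text) return #text * 10 end)

-- Combining the parent with the function name yields the docs key,
-- i.e. the "Font:getWidth" needed for https://lovr.org/docs/Font:getWidth.
assert(getParent(getWidth) == 'Font')
```

The error handler would pair this with the function name from debug.getinfo to assemble the full link.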
Would be cool to have HTTP(S) support. But it would be even cooler if resources could optionally be "Request" objects or URLs, similar to how Blob objects work, where they can be specified as filenames or Blobs. Imagine the possibilities:
lovr.graphics.newModel('https://example.com/asset.gltf')
It's interesting to think about synchronous vs. asynchronous requests: Maybe assets could be loaded asynchronously and just "pop in" once they've loaded.
Currently looking at mongoose as a potential single-file library to use for this.
This belongs in the data module, e.g. lovr.data.newRequest.
Use libtheora most likely (other video formats don't seem to be free to use).
Specs if they help out:
Windows 10
8GB RAM
Intel "HD" Graphics Card
Tried to run it on my computer; it didn't even open, no window or anything. Tried compatibility mode for Windows 7 & 8, as well as running as administrator. I also tested both the 32- and 64-bit releases. I'd assume it should work without a VR headset connected, as the FAQ claims it would run in a simulated VR window.
Here's my specs:
Processor: i5 3230-M 2.6GHz
RAM: 8GB
GPU: Intel HD Graphics 4000
OS: Windows 10 Home x64
Hello!
I just wanted to give this a try, but the only thing I get running is the no game screen.
I've downloaded the mac binary from the website and tried to run the hello world, just like I would with LÖVE, with ./LÖVR.app/Contents/MacOS/lovr foldername.
I also tried to zip the main.lua and lovr.conf and run it with the zip, but it didn't work either.
I'm getting some terminal output when running LÖVR, don't know if it's related:
Unable to read VR Path Registry from /Users/bla/Library/Application Support/OpenVR/.openvr/openvrpaths.vrpath
Thanks for checking it out!
On Android (Oculus Quest), I get a hard crash when my shader fails to compile instead of seeing a LÖVR error message.
LÖVR should give you the tools you need to do frustum culling (without actually trying to do it for you).
This is an interesting problem for VR because there are two cameras and each one has a frustum. Some magic math will need to be used to combine the two frusta into a single one. That combined frustum can be exposed by lovr.headset. I'm not sure what format is best to represent a frustum: mat4? 6 planes? How are planes represented?
From there, there's probably something in lovr.math like mat4:contains or lovr.math.testFrustum that is able to test things against frusta. You could use your own frustum or the one returned from lovr.headset.
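As a sketch of one possible representation, here's a plain-Lua point-vs-frustum test using six inward-facing planes stored as {a, b, c, d} coefficients; whether LÖVR would expose planes, a mat4, or something else is exactly the open question above.

```lua
-- A point is inside if it's on the positive side of all six planes:
-- a*x + b*y + c*z + d >= 0, with plane normals pointing inward.
local function pointInFrustum(planes, x, y, z)
  for _, p in ipairs(planes) do
    if p[1] * x + p[2] * y + p[3] * z + p[4] < 0 then
      return false
    end
  end
  return true
end

-- For testing: the unit cube around the origin expressed as six planes.
local unitBox = {
  {  1, 0, 0, 1 }, { -1, 0, 0, 1 },
  { 0,  1, 0, 1 }, { 0, -1, 0, 1 },
  { 0, 0,  1, 1 }, { 0, 0, -1, 1 },
}
```

A sphere test is the same loop with `< -radius` instead of `< 0`, which is the usual cheap first pass for culling whole objects.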
When doing the actual tests, need to think about:
This sample program prints information about the current controllers:

function lovr.draw(eye)
  local s = string.format("%s\n%d controllers", eye, lovr.headset.getControllerCount())
  for i, controller in ipairs(lovr.headset.getControllers()) do
    s = string.format("%s\n%d: %s %s %1.1f, %1.1f", s, i, controller:isTouched("touchpad") and "Y" or "N", controller:isDown("touchpad") and "Y" or "N", controller:getAxis("touchx"), controller:getAxis("touchy"))
  end
  lovr.graphics.print(s, 0, 2, -3, .5)
end

If you run this and move your fingers back and forth on the Vive touchpad on current lovr master, the text eventually becomes garbled.
The example uses multiline rendering, but the problem can be reproduced without newlines.
GL_LINES... it burns
Maybe with some kind of manual camera control? Would be pretty neat.
Hi there,
I just came across LOVR today, and I have to say this project looks awesome! Keep up the good work!
I am a big fan of LOVE and have played around with OpenVR as well, so I just had to try this out.
Unfortunately, it seems LOVR did not compile out of the box on my Linux system (Arch Linux); however, it seems to have been just an issue of adding some platform-specific stuff to src/filesystem/filesystem.c.
Here is a patch I made which fixed it for me: linux_filesystem_patch.txt
I also found out (at least on my system) there was a small issue with the PhysFS path state.fullSavePath being used as both the source and destination of a snprintf call, which is undefined behavior and was causing some weird behavior for me, so that is also fixed in the patch.
Unfortunately, I do not have my HTC Vive set up at the moment, so I cannot check whether all the VR functionality works on Linux. However, I did a few tests of just the graphics system, and the normal drawing functions seem to work fine in windowed mode at least.
There seem to be some random segfaults on launch, but that might be due to not having a headset connected, or some Linux-specific problems. Not sure.
I will be able to set up my Vive and try it out for real in a week or two, I am very excited to see it working!
I also noticed that you seem to be doing all this work on your own, which is incredible. I would just like to let you know that if you ever need any help, just add some issues and I would be more than happy to help out if I can!
Thanks!
It'd be helpful to have a few more texture slots available in Material; just having "diffuse" and "environment" is pretty limiting. Assuming most folks these days will use a nominally PBR shading model, the most useful additions would be:
The default material doesn’t need to do anything with these, of course, but having them available would make it possible for people to set up their own more advanced materials.
Currently, when you get samples from a Microphone using :getData, it always creates a new SoundData object.
This is "bad" because it uses more memory and does more allocation work to create and destroy/garbage-collect all of these objects, even in situations where you only need the samples temporarily (performing one-time analysis on chunks, streaming them to another API/system, etc.).
Can we add an interface that lets you pass in your own Blob or SoundData object to act as a destination for the microphone samples? That way, one object could be reused as the destination over and over. It would be optional, but it gives you the ability to manage memory better if you're willing to do so.
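The destination-buffer pattern being asked for looks roughly like this in plain Lua (readMicrophone stands in for a hypothetical Microphone:getData overload that accepts a destination; the real API would presumably take a Blob or SoundData):

```lua
-- The caller owns 'dest' and the producer fills it in place, so no new
-- object is created per call.
local function readMicrophone(samples, dest)
  for i = 1, #samples do
    dest[i] = samples[i]
  end
  dest.count = #samples
  return dest
end

local buffer = {}                         -- allocated once, reused forever
readMicrophone({ 0.1, -0.2, 0.3 }, buffer) -- e.g. once per frame
```

Each call overwrites the same buffer instead of producing garbage, which is the whole point of the request.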
I build two copies of lovr: lovr-march is built from commit 58e59d9 (Mar 3 2018), lovr-sept from 48dcb50 (Sep 11 2018).
I load up this example.
https://lovr.org/docs/Animation
I have to download scyther, rename some of the files, and delete the shader = line because lovrPoseMatrix appears to no longer be supported. Whatever, no problem.
The problem: It does not animate. If I run the sample in lovr-march, it animates. If I run it in lovr-sept, it remains in t-pose. I am seeing similar effects with my existing code based on the lovr animator documentation.