BonVision is an open-source closed-loop visual environment generator developed by the Saleem Lab and Solomon Lab at the UCL Institute of Behavioural Neuroscience in collaboration with NeuroGEARS.
The current default composition order for DrawQuad, DrawImage and DrawCheckerboard appears to be Translation > Scale > Rotation, rather than the more conventional Translation > Rotation > Scale.
Because the non-uniform scale is applied after the rotation, quad vertices are sheared and right angles are no longer preserved for strongly elongated quads.
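A minimal sketch in plain Python (the helper functions are illustrative, not BonVision code) of why the composition order matters: a non-uniform scale applied before rotation keeps right angles intact, while the same scale applied after rotation shears the quad.

```python
import math

def rotate(v, deg):
    """Rotate a 2-D vector by deg degrees."""
    r = math.radians(deg)
    return (v[0] * math.cos(r) - v[1] * math.sin(r),
            v[0] * math.sin(r) + v[1] * math.cos(r))

def scale(v, sx, sy):
    """Apply a non-uniform scale to a 2-D vector."""
    return (v[0] * sx, v[1] * sy)

def angle_between(a, b):
    """Angle between two vectors, in degrees."""
    dot = a[0] * b[0] + a[1] * b[1]
    return math.degrees(math.acos(dot / (math.hypot(*a) * math.hypot(*b))))

# Two perpendicular edges of a unit quad.
ex, ey = (1.0, 0.0), (0.0, 1.0)

# Conventional T*R*S: scale first, then rotate -> the right angle survives.
trs = angle_between(rotate(scale(ex, 4, 1), 45), rotate(scale(ey, 4, 1), 45))

# Current T*S*R: rotate first, then scale -> the edges are sheared.
tsr = angle_between(scale(rotate(ex, 45), 4, 1), scale(rotate(ey, 45), 4, 1))

print(round(trs, 1), round(tsr, 1))  # 90.0 151.9
```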
Currently most angle properties can be configured in degrees, but the ViewWindow rotation is an exception: it uses radians. Since this node is often configured by hand during manual screen calibration, it would be very useful to have it accept degrees.
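Until ViewWindow accepts degrees directly, a conversion upstream of the node (e.g. in an expression or scripting node) can bridge the gap. A minimal sketch; the function name is illustrative, not part of the BonVision API:

```python
import math

def view_window_rotation(x_deg, y_deg, z_deg):
    """Convert a rotation specified in degrees into the radians that the
    ViewWindow Rotation property currently expects."""
    return tuple(math.radians(a) for a in (x_deg, y_deg, z_deg))

rotation = view_window_rotation(90.0, 0.0, -45.0)
print(rotation)  # (1.5707963267948966, 0.0, -0.7853981633974483)
```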
Looking inside the definition of the gamma correction node, it can be seen that the LUT for each correction node is only loaded and bound once. This means only the last gamma LUT will be applied to all displays.
Similar to SphereMapping and MeshMapping, perspective mapping is a generally useful operator to warp the final render output, for example when mapping the output of a projector onto the surface of a planar screen.
I followed these instructions to make a series of flashing gratings. Here's the Bonsai-Rx code for it: gratings.zip. Here's a screenshot of the workflow as well.
I would like an Arduino to send a TTL pulse each time a grating is shown, to sync the stimulus with an ephys rig.
I cannot figure out where to insert the digital output module so that it flags every single time a grating is shown (once every second for 5 minutes).
I will also cross-post this issue on the Bonsai-Rx GitHub, in case it's more of a general issue that can be solved over there.
The support for specifying ViewWindow calibration in degrees (#11) has broken type compatibility by exposing the Rotation property as a Point3f instead of a Point3d.
It is important to restore this compatibility as the output from automatic calibration routines is specified using 64-bit rather than 32-bit precision, which breaks several legacy workflows.
Currently, drawing 3D objects requires a specialized structure containing the view matrix, projection matrix and light source for rendering the scene. It would be useful to include a node that converts 2D (or 3D) projection matrices directly into a fixed 3D viewpoint, e.g. eye at (0,0,0) looking down -Z, with a fixed or configurable light source.
This would make it easy to draw 3D objects in an eye-centric coordinate frame, which is very useful in certain situations.
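A sketch in plain Python (nested lists standing in for the matrix types; the names are illustrative, not BonVision API) of what the fixed viewpoint amounts to: with the eye at the origin looking down -Z and up = +Y, the OpenGL view matrix is simply the identity, so the proposed node would only need to supply a projection matrix and a light source.

```python
import math

def identity4():
    """4x4 identity: the view matrix for an eye at the origin looking down -Z."""
    return [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

def perspective(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2)
    m = [[0.0] * 4 for _ in range(4)]
    m[0][0] = f / aspect
    m[1][1] = f
    m[2][2] = (far + near) / (near - far)
    m[2][3] = 2 * far * near / (near - far)
    m[3][2] = -1.0
    return m

view = identity4()                          # fixed eye-centric viewpoint
projection = perspective(60, 16 / 9, 0.1, 100)
light = (0.0, 0.0, 0.0)                     # light co-located with the eye
```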
The GammaCorrection node expects 8-bit images as input. This can be an issue and makes the post-gamma luminance of the screen step-like. Exposing the InternalFormat property would solve the issue.
If you do expose the property, a small hint in the description saying which pixel types are most likely to be useful would be nice.
Use case examples where we cannot use the whole range and would want more than 256 values:
1. We have a projector with a very poor black level: the first 50 gray values have exactly the same luminance, so to gamma-correct we stretch the remaining 206 values, which introduces visible steps.
2. When using multiple monitors, we normalise the brightness of all screens by cropping the top and bottom of the curve (so that they all have the same absolute luminance).
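A small numeric sketch (plain Python; the gamma value of 2.2 and the quantization model are assumptions for illustration) of why an 8-bit internal format makes the output step-like: quantizing the gamma-corrected values back to 256 levels collapses distinct input levels, while a 16-bit internal format preserves all of them.

```python
def gamma_correct(levels, gamma=2.2, bits=8):
    """Gamma-correct normalized levels, quantized to 2**bits - 1 steps."""
    steps = 2 ** bits - 1
    return [round((v ** (1 / gamma)) * steps) / steps for v in levels]

inputs = [i / 255 for i in range(256)]
distinct8 = len(set(gamma_correct(inputs, bits=8)))
distinct16 = len(set(gamma_correct(inputs, bits=16)))
print(distinct8, distinct16)  # 8-bit output loses levels; 16-bit keeps all 256
```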
Currently, sphere mapping of 2D to 3D is always centered on the equator. In some cases it can be useful to center the origin of the environment map on the poles, or at other points on the sphere.
Normal matrix calculation is currently repeated on a per-vertex basis, but it could instead be computed once on the CPU and uploaded to the shader on a per-draw basis.
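A sketch (plain Python, 3x3 nested lists) of the per-draw alternative: compute the normal matrix, i.e. the inverse transpose of the upper-left 3x3 of the model-view matrix, once on the CPU and upload it as a uniform, instead of recomputing it for every vertex in the shader.

```python
def inverse_transpose_3x3(m):
    """Return the inverse transpose of a 3x3 matrix (the normal matrix)."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    inv = [
        [(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
        [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
        [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det],
    ]
    # Transpose of the inverse.
    return [[inv[col][row] for col in range(3)] for row in range(3)]

# For a non-uniform scale diag(2, 1, 1) the normal matrix is diag(0.5, 1, 1),
# which is what keeps normals perpendicular to the scaled surface.
normal = inverse_transpose_3x3([[2, 0, 0], [0, 1, 0], [0, 0, 1]])
print(normal[0][0])  # 0.5
```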
The DrawText operator fails to display with default parameters on the latest BonVision release. This seems to be related to the new layer order, since there was a fixed offset of -2 on the Z-axis.
The issue arises largely from the way grating angle is implemented in the vertex shader. The workflow passes a transformation matrix that rotates the texture coordinates vt. Because that matrix depends only on the orientation property, a 45-degree grating is always oriented along the quad diagonal.
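A simplified numeric model (not the actual shader code; the aspect handling below is an assumption for illustration) of what goes wrong: because the rotation is applied in texture space, the on-screen angle of the grating depends on the quad's width/height aspect ratio rather than matching the requested orientation.

```python
import math

def onscreen_angle(orientation_deg, aspect):
    """On-screen angle of a texture-space direction at orientation_deg after
    it is drawn on a quad with the given width/height aspect ratio."""
    r = math.radians(orientation_deg)
    # Mapping texture space onto the quad stretches x by the aspect ratio.
    return math.degrees(math.atan2(math.sin(r), aspect * math.cos(r)))

print(round(onscreen_angle(45, 1), 1))  # 45.0: correct on a square quad
print(round(onscreen_angle(45, 2), 1))  # 26.6: flattened on a 2:1 quad
```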
When using non-linear display surfaces such as spherical projection domes, each pixel in the projector output is independently mapped to a viewing direction in the world. Currently the mesh mapping solution corrects this distortion using a grid of vertices that warps the final image texture.
However, given that we have a cubemap handy, the precision of this mapping could be improved by mapping each pixel fragment independently to a direction in the cubemap, essentially performing the distortion of the view directions in the fragment shader instead of interpolating across the vertex grid.
Similar to DrawImage, it can be useful to generate a wrap-around animation with DrawCheckerboard directly by simply shifting the texture coordinates. This is currently possible via RenderTexture > DrawImage, but that is both less performant and risks aliasing if the texture resolution is not high enough.
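A one-line sketch of the proposed behaviour: offsetting the texture coordinates modulo 1 scrolls the repeating pattern without rendering it to an intermediate texture first.

```python
def shift_texcoord(u, offset):
    """Shift a texture coordinate and wrap it back into [0, 1)."""
    return (u + offset) % 1.0

print(shift_texcoord(0.75, 0.5))  # 0.25: the pattern wraps around
```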
Currently the automatic display calibration is limited to rectangular fiducial markers which require presentation on a flat display.
The goal of this feature request is to support multiple smaller markers (or circular, non-rectangular targets) to approximate auto-calibration of non-flat projection surfaces.
Although the Mercator projection is extremely convenient for sphere-mapping 2D primitives, it has the drawback of exhibiting large distortions for stimuli presented near the poles.
One straightforward solution would be to allow rendering stimuli onto spherical segments placed around the unit sphere.
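A quick numeric illustration of the distortion: under the Mercator projection the horizontal scale is inflated by sec(latitude), so a stimulus drawn near the poles is stretched severely compared to one at the equator.

```python
import math

def mercator_stretch(latitude_deg):
    """Horizontal stretch factor of the Mercator projection at a latitude."""
    return 1.0 / math.cos(math.radians(latitude_deg))

print(round(mercator_stretch(0), 2))   # 1.0 at the equator
print(round(mercator_stretch(60), 2))  # 2.0 at 60 degrees
print(round(mercator_stretch(85), 2))  # 11.47 near the pole
```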
Some BonVision operators (e.g. DrawText) depend on Bonsai.Vision or Bonsai.Vision.Design. However, these do not seem to be listed among the package's dependencies.
It seems the examples and demos on the BonVision website are down. Moreover, I can't seem to find them in the repo or in the Bonsai gallery either.
Any chance they could be reuploaded?
This would make it easier to combine images to generate certain types of stimuli. Since DrawVideo uses DrawImage under the hood, it could be exposed for this primitive too.