niivue / niivue
A WebGL2 based medical image viewer. Supports over 30 formats of volumes and meshes.
Home Page: https://niivue.github.io/niivue/
License: BSD 2-Clause "Simplified" License
At the beginning of loadVolumes we should draw a "loading..." message on the canvas to give the user a visual indication that images are loading in the background. This message can be set or cleared by the progress events from #45.
I imagine the canvas UI could look something like this...
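In code, a minimal sketch of the idea (not the actual NiiVue API; addVolumeFromUrl is a hypothetical helper):

```javascript
// sketch: toggle a loading flag around the async fetches and redraw so the
// canvas shows "loading..." in the meantime
Niivue.prototype.loadVolumes = async function (volumeList) {
  this.loading = true
  this.drawScene() // drawScene paints "loading..." while this.loading is true
  for (const vol of volumeList) {
    await this.addVolumeFromUrl(vol) // hypothetical helper; the #45 progress events could also clear the flag
  }
  this.loading = false
  this.drawScene()
}
```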
It would be nice to enable picking in the timeline section of the canvas so that a mouse click and drag in that region scrolls through the volumes and re-renders the selected volume in the timeline.
The line plot shows the time series of the voxel under the crosshair.
The y-axis min and max could come from cal_min/cal_max (or the global min/max).
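A sketch of pulling that time course out of a 4D volume, assuming the intensities sit in one flat typed array with x varying fastest:

```javascript
// sketch: time series of the voxel under the crosshair; img, dims, and the
// voxel coordinates (x, y, z) are assumptions about the data layout
function voxelTimeSeries(img, dims, x, y, z) {
  const [nx, ny, nz, nt] = dims
  const nvox3D = nx * ny * nz
  const series = new Float32Array(nt)
  for (let t = 0; t < nt; t++) {
    series[t] = img[t * nvox3D + z * ny * nx + y * nx + x]
  }
  return series
}
```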
Allow niivue to load textures and .nii files from remote sources.
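A minimal sketch, assuming the remote server sends the CORS headers needed for cross-origin requests:

```javascript
// sketch: fetch a remote NIfTI file into an ArrayBuffer for parsing
async function fetchNifti(url) {
  const response = await fetch(url)
  if (!response.ok) throw new Error(`failed to load ${url}: ${response.status}`)
  return await response.arrayBuffer()
}
```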
@neurolabusc, figured creating this shader might be a worthwhile task. Shall we meet to discuss?
see https://github.com/hanayik/niivue/blob/main/src/components/glviewer.vue for mouse and touch events to add in.
A clip volume will consist of a cuboid mesh and lines that change color when a face of the cuboid has focus.
When using the selection box, it would be nice to pass in a parameter (via some widget) so that you can choose which overlay the selection box selects from. Alternatively, we could implement an "active layer" property on the niivue instance, so that a user interface can set it to a volume index.
@cdrake, what do you think of this for the future?
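A sketch of the property approach (setActiveLayer and activeLayer are placeholder names, not a settled API):

```javascript
// sketch: the selection box would read intensities from
// this.volumes[this.activeLayer] instead of a fixed layer
Niivue.prototype.setActiveLayer = function (index) {
  if (index < 0 || index >= this.volumes.length) return
  this.activeLayer = index
}
```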
Allow objects to be picked. This will form the basis for future UI interactions.
Use mocha to implement simple unit tests for niivue. We may be able to use a headless browser to run the WebGL tests, but a real, local browser may be OK as a first pass.
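A minimal mocha spec might look like this; the import path and the calMinMax export are assumptions about how the module could be exposed:

```javascript
const assert = require('assert')
const { calMinMax } = require('../src/niivue') // path/export are assumptions

describe('calMinMax', function () {
  it('returns the minimum and maximum intensity', function () {
    const img = new Float32Array([3, 1, 2])
    assert.deepStrictEqual(calMinMax(img), [1, 3])
  })
})
```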
NiiVue should include operations to remove haze from a volume. This can improve the appearance of the volume rendering and ensure good behavior for depth picking. I would suggest copying the method from MRIcroGL. As an aside, I wonder if there are any JavaScript tools that do something like FSL's BET.
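One possible strategy (an assumption, not necessarily MRIcroGL's exact method) is to zero out voxels below a dark threshold so the air around the head becomes transparent in the rendering:

```javascript
// sketch: suppress dim "haze" voxels; the threshold fraction is arbitrary
function removeHaze(img, cal_min, cal_max, fraction = 0.05) {
  const threshold = cal_min + fraction * (cal_max - cal_min)
  for (let i = 0; i < img.length; i++) {
    if (img[i] < threshold) img[i] = 0
  }
}
```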
Add handler to allow clip plane to be dragged along each axis. This will update the relative position of the visible plane as well as change the distance from the origin.
To speed up loading, if cal_min and cal_max are set, trust that the user (and the image creator) want to use those values. If cal_min/max are not set (zero), then run calMinMax as normal.
@neurolabusc, what do you think of this strategy? We'd need to work out what to do with global_min and global_max, but perhaps they could be set to cal_min and cal_max when cal_min/max > 0?
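A sketch of the proposed fast path (the header property names are assumptions):

```javascript
// sketch: trust header cal_min/cal_max when present; only scan every voxel
// when they are unset (both zero)
if (hdr.cal_min === 0 && hdr.cal_max === 0) {
  this.calMinMax() // slow path: visits every voxel
} else {
  this.cal_min = hdr.cal_min
  this.cal_max = hdr.cal_max
  // per the question above, the global min/max could simply mirror these
  this.global_min = hdr.cal_min
  this.global_max = hdr.cal_max
}
```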
We can use web worker transferable objects to share data to and from web workers quickly (no need to copy data, which is slow).
This would enable us to spawn new web workers to run long-running calculations such as calMinMax, which visits every voxel. Right now, if we want more than one niivue instance on a page, they share a single thread when running these long tasks (this is slow).
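A sketch of the transfer (the worker script name is hypothetical): ownership of the ArrayBuffer moves to the worker instead of the bytes being copied.

```javascript
// sketch: transfer a volume's backing buffer to a worker and back
const worker = new Worker('calMinMaxWorker.js') // hypothetical worker script
worker.postMessage({ buffer: img.buffer }, [img.buffer]) // transferred, not copied
worker.onmessage = (e) => {
  const { min, max, buffer } = e.data
  img = new Float32Array(buffer) // the worker transfers the buffer back when done
}
```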
This is because of the selection box drawing mechanism. The simplest option is to disable drawing the selection box when in 3D mode.
I work on Windows, macOS, and Linux regularly. Our niivue npm scripts contain cp and / path separators, which don't play nicely with a standard Windows PowerShell environment.
The feature request is to add optional crosshairs to the volume rendering and to allow depth picking, where clicking on the volume rendering adjusts the crosshair position.
I had formerly thought this was tricky, but it turns out it only requires a few lines of code.
This was inspired by this feature request:
You can try this out with the latest MRIcroGL pre-release (v1.2.20210816). When you choose "Multi-Planar A+C+S+R" from the "Display" menu, you will see all four views show a crosshair and interactively move to the location of a mouse click:
The simple formula is described here:
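One common way to implement the picking (a sketch, not necessarily the formula linked above) is to render the volume's texture coordinates into an offscreen framebuffer and read back the pixel under the mouse:

```javascript
// sketch: fbo is assumed to hold fractional volume coordinates as colors
function readPickPixel(gl, fbo, x, y) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo)
  const rgba = new Uint8Array(4)
  gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, rgba)
  gl.bindFramebuffer(gl.FRAMEBUFFER, null)
  return [rgba[0] / 255, rgba[1] / 255, rgba[2] / 255] // crosshair position in 0..1
}
```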
This feature would enable users to zoom in on 2D slices.
We already devote mouse wheel events to slice scrolling in 2D view modes, so to add slice zooming as well, I propose that we implement a "z + scroll" event handler.
We would update the zoom level under the following conditions:
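Whatever the exact conditions, a minimal sketch of the handler could look like this (the nv.volScale zoom property and the event wiring are assumptions):

```javascript
// sketch: zoom only while "z" is held; plain scroll keeps slice scrolling
let zKeyDown = false
document.addEventListener('keydown', (e) => { if (e.key === 'z') zKeyDown = true })
document.addEventListener('keyup', (e) => { if (e.key === 'z') zKeyDown = false })
canvas.addEventListener('wheel', (e) => {
  if (!zKeyDown) return
  e.preventDefault()
  nv.volScale = Math.max(1, nv.volScale + (e.deltaY < 0 ? 0.1 : -0.1)) // hypothetical zoom property
  nv.drawScene()
})
```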
If a user has zoomed in on an image, it would be nice to pan around the 2D slice.
Since left and right mouse click events are already used, I propose the following user interaction (a sketch follows the list).
Panning happens when:
- mobile gesture: two-finger touch and drag
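For desktop, a sketch using the middle button (an assumption, since left and right are taken; nv.panOffset is a hypothetical pan state):

```javascript
// sketch: middle-button drag pans the zoomed 2D slice
let panning = false
canvas.addEventListener('pointerdown', (e) => { if (e.button === 1) panning = true })
canvas.addEventListener('pointerup', () => { panning = false })
canvas.addEventListener('pointermove', (e) => {
  if (!panning) return
  nv.panOffset[0] += e.movementX // pan state in canvas pixels
  nv.panOffset[1] += e.movementY
  nv.drawScene()
})
```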
Images with RGB/RGBA voxels use 3 and 4 values per voxel, respectively. The min and max functions currently assume a single value per voxel when determining the min and max.
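A sketch of a stride-aware scan that tracks min and max per component (taking the luminance instead would be another option):

```javascript
// sketch: per-component min/max for RGB (nComponents = 3) or RGBA (4) data
function calMinMaxRGBA(img, nComponents) {
  const mins = new Array(nComponents).fill(Infinity)
  const maxs = new Array(nComponents).fill(-Infinity)
  for (let i = 0; i < img.length; i += nComponents) {
    for (let c = 0; c < nComponents; c++) {
      const v = img[i + c]
      if (v < mins[c]) mins[c] = v
      if (v > maxs[c]) maxs[c] = v
    }
  }
  return { mins, maxs }
}
```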
Include @neurolabusc's docs from the old niivue repo, and separate the live niivue demos into individual pages for each demo. We will have one index.html with links to other pages that show off features.
The readme needs to be updated to reflect the latest information.
To add:
niivue uses WebGL2, which supports features unavailable in WebGL1 (3D textures and the ability to write to the depth buffer). These result in lean code that is easy to maintain and leverages modern graphics cards well. The timing is nice, as Safari has just started to support WebGL2.
However, since @cdrake is working on making our GL code modular, it might be worth keeping the WebGL code separate from the other logic, in case at some future date we decide to target newer graphics APIs.
WebGPU looks like the next-generation web graphics API. Will Usher (who wrote some of the code we already leverage) just released a minimal WebGPU volume renderer. It is not yet supported by many browsers.
Reading the code reveals some interesting limitations of the current WebGPU implementation.
I suspect this is years away from being prime time, and that WebGL 2 will be supported for many years. However, it is worth keeping an eye on emerging technologies.
The NVImage class will become the standardized image container, with documented properties. This is cleaner than our current image object approach, where we add properties within various functions.
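A sketch of the container idea; the fields shown are assumptions drawn from the NIfTI header, not the final NVImage definition:

```javascript
/** Standardized image container with documented properties. */
class NVImage {
  /**
   * @param {ArrayBuffer} dataBuffer - raw NIfTI bytes
   * @param {string} name - display name for this layer
   */
  constructor(dataBuffer, name) {
    this.name = name
    this.hdr = null    // parsed NIfTI header
    this.img = null    // typed array of voxel intensities
    this.cal_min = NaN // display range, set here once rather than ad hoc
    this.cal_max = NaN
  }
}
```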
A common task in FSL and AFNI quality assurance is to test the alignment of different images, e.g. is an fMRI scan aligned to the individual's T1 scan, or is one individual's T1 scan aligned to a group template? Both FSL and AFNI's chauffeur provide methods to show the edges of one image on another. AFNI provides references for the method it uses, which seems to be very similar to a Sobel filter. Here is my implementation of a 3D Sobel. While the wiki describes a separable formula, it does seem to require a lot of memory, and the direct 3D computation is very quick.
I think a good first application for NiiVue is to support these FSL and AFNI QA displays on an interactive web page.
MRIcroGL Sobel:
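For reference, a direct (non-separable) 3D Sobel gradient magnitude in JavaScript might look like this sketch; border voxels are skipped for brevity, and the flat x-fastest layout is an assumption:

```javascript
// sketch: 3D Sobel = smoothing kernel on two axes, derivative on the third
function sobel3D(img, nx, ny, nz) {
  const s = [1, 2, 1]  // smoothing
  const d = [-1, 0, 1] // derivative
  const out = new Float32Array(img.length)
  const at = (x, y, z) => img[x + nx * (y + ny * z)]
  for (let z = 1; z < nz - 1; z++)
    for (let y = 1; y < ny - 1; y++)
      for (let x = 1; x < nx - 1; x++) {
        let gx = 0, gy = 0, gz = 0
        for (let k = -1; k <= 1; k++)
          for (let j = -1; j <= 1; j++)
            for (let i = -1; i <= 1; i++) {
              const v = at(x + i, y + j, z + k)
              gx += d[i + 1] * s[j + 1] * s[k + 1] * v
              gy += s[i + 1] * d[j + 1] * s[k + 1] * v
              gz += s[i + 1] * s[j + 1] * d[k + 1] * v
            }
        out[x + nx * (y + ny * z)] = Math.sqrt(gx * gx + gy * gy + gz * gz)
      }
  return out
}
```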
Create a data structure to store objects that will be rendered in 3D. Render each object in the order they are stored in the array.
Add functionality to render on either side of clip plane
Add a visible clip plane that will clip the volume in the appropriate plane.
Adding a feature like niivue.syncWith(otherNiivueInstance) could allow one niivue instance to drive another. This is useful for viewing images that are coregistered but viewed in two different panels, similar to the MRIcroGL yoke window functionality.
This could allow the crosshair locations to be updated together in real time.
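A sketch of what syncWith could look like (scene.crosshairPos and the sync hook are assumptions):

```javascript
// sketch: yoke two instances; call this.sync() from every handler that
// moves the crosshair
Niivue.prototype.syncWith = function (otherNV) {
  this.otherNV = otherNV
}
Niivue.prototype.sync = function () {
  if (!this.otherNV) return
  this.otherNV.scene.crosshairPos = this.scene.crosshairPos.slice() // copy, don't alias
  this.otherNV.drawScene()
}
```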
Documentation should exist as a static web page and should be generated from the niivue source files using jsdoc.
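One possible invocation (the output directory is a choice, not settled):

```sh
npx jsdoc src/niivue.js --destination docs
```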
There is a bug in calMinMaxCore that I introduced. I have removed it and will push the fix momentarily. Creating this issue to document it.
frac2vox should return integers that correspond to voxel indices
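A sketch of the rounding, assuming a fractional position in 0..1 maps across the voxel grid:

```javascript
// sketch: round to the nearest voxel index; dims holds the voxel counts
function frac2vox(frac, dims) {
  return frac.map((f, i) => Math.round(f * (dims[i] - 1)))
}
```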
How did I get this list?
cat src/niivue.js | grep prototype | tr -d '{' | sed -e 's/^/- [ ] /'