
navigate's Issues

Concurrency Tools - Python Versionitis

Interestingly enough, the concurrency tools' dependency on threading.excepthook has some strange Python version-specific availability.

3.10.1 - threading.excepthook available
3.9.9 - threading.excepthook available.
3.8.12 - threading.excepthook available.
3.7.12 - Not available!
3.6.15 - Not available!
3.5.9 - Not available!

I believe we are using Python 3.7 (threading.excepthook was added in Python 3.8). Would love to hear what version of Python everyone else has in their environments.
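If we stay on 3.7, a minimal sketch of how we might guard for this is below: use threading.excepthook where it exists and fall back to wrapping Thread.run on older interpreters. The logging behavior is just a placeholder.

import sys
import threading
import traceback

def _log_thread_exception(args):
    # args carries exc_type, exc_value, exc_traceback, and thread (Python 3.8+ hook signature).
    traceback.print_exception(args.exc_type, args.exc_value, args.exc_traceback)

if sys.version_info >= (3, 8):
    # threading.excepthook was added in Python 3.8.
    threading.excepthook = _log_thread_exception
else:
    # Fallback for 3.7 and earlier: wrap Thread.run so uncaught exceptions are at least logged.
    _original_run = threading.Thread.run

    def _patched_run(self):
        try:
            _original_run(self)
        except Exception:
            traceback.print_exc()

    threading.Thread.run = _patched_run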

Particle Detection

Ultimately, we will likely use a convolution-based particle detection approach to find cells of interest for follow-up imaging. It would be nice to develop a Python or C++ equivalent of a common function used in the Danuser Lab:

https://github.com/DanuserLab/u-track3D/blob/master/software/pointSourceDetection3D.m

The only reason I mention C++ is that this is typically a slow operation, so perhaps we could call a compiled C++ object. If I recall correctly, this function does a Laplacian-of-Gaussian convolution, finds the peaks, fits the peaks and evaluates their statistical likelihood, and then only returns the statistically robust events.

Scikit-Image seems to have most of what we need in the blob_log function.
https://github.com/scikit-image/scikit-image/blob/v0.19.0/skimage/feature/blob.py#L401-L564
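A minimal sketch of using blob_log for 3D Laplacian-of-Gaussian particle detection is below. The sigma and threshold values are placeholders, and unlike pointSourceDetection3D.m there is no Gaussian fitting or statistical significance test here; that step would still need to be added on top.

import numpy as np
from skimage.feature import blob_log

def detect_particles(volume, min_sigma=1, max_sigma=4, threshold=0.02):
    # Laplacian-of-Gaussian blob detection on a 3D stack.
    # Returns an (N, 4) array of (z, y, x, sigma) plus an estimated radius per blob.
    blobs = blob_log(volume, min_sigma=min_sigma, max_sigma=max_sigma,
                     num_sigma=5, threshold=threshold)
    # For 3D data the estimated blob radius is roughly sigma * sqrt(3).
    radii = blobs[:, -1] * np.sqrt(3)
    return blobs, radii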

Redundancy in experiment and configuration loading?

@annie-xd-wang - I ran into a strange problem today where it didn't seem like changes that I was making to the experiment configuration in the controller were propagating to the experiment configuration in the model.

Upon closer look, it seems as if we are creating two separate instances of the experiment and configuration objects, one in the model and one in the controller. Is there a reason for this?

Thanks,
Kevin

User Task - Add Sub_GUI_Controllers

Given that the View is not completely done, it may be a bit early. But at the very least, for organizational purposes, it would be great if we could introduce the sub_gui_controllers as you did in the separate branch. I'm not sure if we can simply merge the branches, but that would be a good project that doesn't require an intimate knowledge of how the microscope is working.

Unit Testing

  • Test folder that mirrors the source folder
  • A test file for each Python file that needs testing
  • Codecov
  • PyTest
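A minimal sketch of what one mirrored test might look like is below. The import path and the assumption that normalized_dct_shannon_entropy takes a single 2D image are hypothetical and depend on the final package layout.

# test/model/analysis/test_image_contrast.py  (mirrors model/analysis/image_contrast.py)
import numpy as np

# Hypothetical import path; adjust to wherever the module actually lives.
from model.analysis.image_contrast import normalized_dct_shannon_entropy

def test_entropy_is_higher_for_a_sharp_image_than_a_flat_one():
    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    flat = np.full((64, 64), sharp.mean())
    assert normalized_dct_shannon_entropy(sharp) > normalized_dct_shannon_entropy(flat)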

Camera View Functionality

We will also want to populate the camera view tab with a bunch of functionality. This includes:

  • The ability to load an image into the viewer
  • The ability to look at different z-planes within a single image.
  • The ability to overlay analysis results (pretty standard with matplotlib imshow).
  • The ability to change the lookup table.
  • The ability to automatically autoscale the lookup table.
  • The ability to look at different acquisition channels.
  • Autoscale making the intensity spinboxes read-only; see the comment below
  • The ability to recognize that the mouse is over the camera window, and to use the scroll wheel to make small adjustments to the focusing stage (see the sketch after this list)
  • The ability to double-click a region on the image, determine the distance the stages need to move laterally, move the stages, and position the selected/double-clicked region in the center of the field of view
  • Triple-click or right-click the image to perform an autofocus routine (can use @Rapuris's image sharpness metrics, analysis/contrast.py -> normalized_dct_shannon_entropy()). The overall goal is to adjust the focus by a user-specified distance, collect an image, calculate the DCT Shannon entropy, and repeat; then find the maximum DCT Shannon entropy and move the stage to that position
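On the scroll wheel item above, a minimal tkinter sketch of binding the wheel to small focus moves is below. The stage.move_relative call and the 'f' axis name are assumptions about the stage API.

def bind_focus_wheel(canvas, stage, step_um=1.0):
    # Nudge the focus stage when the wheel is rolled while the pointer is over the camera canvas.
    def on_wheel(event):
        # On Windows, event.delta is +/-120 per wheel notch; Linux delivers Button-4/5 events instead.
        direction = 1 if event.delta > 0 else -1
        stage.move_relative('f', direction * step_um)   # hypothetical stage method and axis name

    # The binding only fires while the mouse is over the canvas widget.
    canvas.bind('<MouseWheel>', on_wheel)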


GPU/CPU Decision at Runtime

We need to create a parent class for the analysis module; going forward, the software will decide whether to run on the GPU or the CPU depending on the OS.
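A minimal sketch of such a parent class is below, assuming CuPy as the GPU backend and a simple OS check; the actual decision logic and backend are open questions.

import platform

class Analysis:
    # Parent class for analysis routines; selects the array backend at runtime.
    def __init__(self, use_gpu=None):
        if use_gpu is None:
            # Only attempt the GPU path on Windows/Linux hosts; fall back if CuPy is absent.
            use_gpu = platform.system() in ('Windows', 'Linux')
        if use_gpu:
            try:
                import cupy as xp
            except ImportError:
                import numpy as xp
                use_gpu = False
        else:
            import numpy as xp
        self.xp = xp
        self.use_gpu = use_gpu

    def normalize(self, image):
        # Subclasses call self.xp so the same code runs on either backend.
        image = self.xp.asarray(image)
        return (image - image.min()) / (image.max() - image.min() + 1e-12)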

Camera Display - GUI/Controller/Model

General Info

Need the GUI to display images from the camera. This requires creating a controller and updating both the synthetic and base models for the Camera class. The task list below outlines some of the required tasks, benchmarks, and optimizations needed to ensure proper and efficient functionality. Any comments on my process or logic are appreciated and welcome. I don't want to miss anything.

Todo List

Benchmarking

  • Frame save time to disk (after it has been pulled from the buffer)
  • Spool time (how fast does the camera offload frames to the buffer?)
  • Pull time from buffer (how long does it take to get a frame from the buffer)
  • Displaying frame time (how long does it take to display a frame to GUI)
  • Display Frames Per Second

Optimizations

  • Pulling from buffer is faster than spooling to buffer
  • Saving a frame is faster than, or at least always happens before, displaying the frame
  • Framerate of GUI display needs to be at least 24fps
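To get the benchmark numbers listed above, a crude timing helper might look like the sketch below; the camera/controller method names in the usage comments are hypothetical.

import time

def time_call(label, fn, *args, n=100):
    # Crude benchmark: average wall-clock time of fn(*args) over n calls.
    start = time.perf_counter()
    for _ in range(n):
        fn(*args)
    elapsed = (time.perf_counter() - start) / n
    print(f'{label}: {elapsed * 1e3:.2f} ms per call ({1 / elapsed:.1f} per second)')
    return elapsed

# Hypothetical usage against the camera/model API:
# time_call('pull from buffer', camera.get_new_frame)
# time_call('display frame', camera_view_controller.display_image, frame)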

Remote Focusing Popup

Creating the popup discussed in Slack. It will use the popup base class; going forward, the code structure can be reused for other popups.


Stack Acquisition Settings Not Updating Experiment

When changing the step size, start position, etc., it does not appear to be updating the MicroscopeState dictionary in the experiment module.

Also, if one briefly deletes a value such that the tkinter .get function retrieves an empty string, it throws an error. I tried to put something in there to fix it, but wasn't successful.

Validation of Spinbox/Entry widgets that only accept integer/float numbers

There are several Spinbox and Entry widgets that should not accept arbitrary strings, only integer/float numbers. Otherwise, they will trigger errors in computation functions or even in the model when dealing with devices. I decided to solve this problem in three steps:

  1. When loading an experiment file, we cannot guarantee that the user will load a completely correct file, so the values will be checked when that information is populated.
  2. When a user inputs something from the GUI, the widget itself will do the validation. It will show errors in red and will not propagate errors to upper-level functions. @codeCollision4 Right now I am changing the color of the input to red, but later on, maybe we could use other solutions.
  3. When a user clicks the 'Acquire' button, there will be a check during the prepare-acquisition step. A window will pop up if something is wrong; otherwise, it will tell the model to do the acquisition.

Another thing we should do is add range limits to the configure.yml file.
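For step 2, a minimal sketch of widget-level validation using tkinter's validatecommand mechanism is below. It assumes a classic tk Spinbox/Entry (ttk widgets would change the color through a Style instead), and the red-text behavior mirrors what is described above.

def add_float_validation(widget):
    # Restrict a Spinbox/Entry so only (partial) float input is accepted;
    # turn the text red instead of propagating bad values upstream.
    def validate(proposed):
        if proposed in ('', '-', '.', '-.'):        # allow intermediate typing states
            widget.configure(foreground='red')
            return True
        try:
            float(proposed)
        except ValueError:
            return False                            # reject the keystroke outright
        widget.configure(foreground='black')
        return True

    vcmd = (widget.register(validate), '%P')        # %P = the value if the edit is allowed
    widget.configure(validate='key', validatecommand=vcmd)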

Two separate yaml files

  • DEFAULT SETTINGS: pre-populates all of the GUI parameters. This includes COM ports, analog outputs, digital outputs, everything.
  • NON-DEFAULT SETTINGS: the user-modified state of the microscope, e.g., the number of channels actually used and the exposure times.
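A minimal sketch of loading the two files and overlaying the user settings on the defaults is below; the file names are placeholders, and a recursive merge may be needed for nested sections.

import yaml

def load_settings(default_path='configuration.yml', user_path='experiment.yml'):
    # Load the default (hardware) settings, then overlay the user-modified state.
    with open(default_path) as f:
        settings = yaml.safe_load(f)
    with open(user_path) as f:
        user = yaml.safe_load(f) or {}
    settings.update(user)   # shallow overlay; nested dictionaries may need a deep merge
    return settings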

GUI User Entry Validation/Automation

Opening up this one so that I don't forget. The goal is to incorporate data validation for user inputs to prevent wonky behavior. Widgets will reject bad input or notify users.

Acquire Bar Popup Dependencies

Currently the software has four different modes: Continuous, Z-Stack, Single Acquisition, and Projection.

For Z-Stack, Single Acquisition, and Projection, the popup should only appear if the 'Save Data' button is enabled in the Channel Settings dialog. Otherwise, the data is acquired and held in RAM for inspection by the user (e.g., using the maximum intensity projection interface).

For Continuous, it should never pop up but just display the data live in the window.


User Task - Camera View Tab

We will also want to populate the camera view tab with a bunch of functionality. This includes:

  • The ability to load an image into the viewer
  • The ability to look at different z-planes within a single image.
  • The ability to overlay analysis results (pretty standard with matplotlib imshow).
  • The ability to change the lookup table.
  • The ability to automatically autoscale the lookup table.
  • The ability to look at different acquisition channels.


User Task - Proper Meta Data in Saved Data

We typically use the tifffile module to save images, but we have not been controlling our metadata: https://pypi.org/project/tifffile/

We will need to adopt the proper metadata architecture as specified here: https://docs.openmicroscopy.org/ome-model/5.6.3/ome-tiff/specification.html

According to tifffile, "Numpy arrays can be written to TIFF, BigTIFF, OME-TIFF, and ImageJ hyperstack compatible files in multi-page, volumetric, pyramidal, memory-mappable, tiled, predicted, or compressed form." We will need to look into how we can do this with the proper descriptors in the metadata.
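A minimal sketch of writing a stack as OME-TIFF with basic pixel-size metadata is below. The metadata keys follow the OME schema, but exactly which keys tifffile passes through depends on the tifffile version, and the pixel/step sizes here are placeholders.

import numpy as np
import tifffile

def save_ome_stack(path, stack, pixel_size_um=0.167, z_step_um=0.5, channel_names=None):
    # Write a (Z, C, Y, X) stack as OME-TIFF with basic pixel-size metadata.
    metadata = {
        'axes': 'ZCYX',
        'PhysicalSizeX': pixel_size_um,
        'PhysicalSizeY': pixel_size_um,
        'PhysicalSizeZ': z_step_um,
    }
    if channel_names is not None:
        metadata['Channel'] = {'Name': channel_names}
    # A '.ome.tif' filename tells tifffile to embed OME-XML in the first page.
    tifffile.imwrite(path, stack.astype(np.uint16), metadata=metadata)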

User Task - Automatically calculate the stack acquisition time

Based upon the number of channels selected, the camera exposure time of each channel, and the size of each image stack, we need to calculate the time necessary to acquire a stack for all of the selected channels.

Thus it will have to determine how many channels are selected, the number of z-steps, and the laser cycling mode. If we are set to perZ cycling, we need to allow a finite amount of time for the filter wheel to change between adjacent image slices.

If we are in the perStack mode, then this finite amount of time only needs to be accounted for once per image stack, as in the sketch below.
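A minimal sketch of that calculation; the filter wheel delay value and the argument names are assumptions for illustration.

def estimate_stack_time(exposure_times_s, n_z_steps, cycling_mode,
                        filter_wheel_delay_s=0.05):
    # Rough estimate of the time needed to acquire one multi-channel z-stack.
    # exposure_times_s : exposure times of the selected channels
    # cycling_mode     : 'perZ' (switch channels at every z step) or 'perStack'
    n_channels = len(exposure_times_s)
    imaging_time = n_z_steps * sum(exposure_times_s)
    if cycling_mode == 'perZ':
        # The filter wheel moves between every channel at every z position.
        switching_time = n_z_steps * n_channels * filter_wheel_delay_s
    else:  # 'perStack'
        # The filter wheel only moves once per channel, between stacks.
        switching_time = n_channels * filter_wheel_delay_s
    return imaging_time + switching_time

# e.g. 3 channels at 10 ms exposure, 200 z-steps, perStack cycling:
# estimate_stack_time([0.01, 0.01, 0.01], 200, 'perStack')  ->  ~6.15 s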


GUI Formatting Tweaks

  • Make the Channel Settings columns even
  • Align the Channels tab label frames and make them more even
  • Fit the Channels tab label frames more tightly to their widgets
  • Move the Stage Control widgets to a more natural position on the GUI

User Task - Interactive Image Window

When we finally have the image displayed in the view, there are a couple of nice features that we could implement (hopefully).

The first would be for us to be able to double-click a region in the image, and have it determine what distance the stages would need to move laterally, and to then move the stages and position that region in the center of the field of view.

The second would be for us to position the mouse over the window, and then have it adjust the focus of the microscope when we roll the wheel on the mouse.

A potential third is for us to triple-click or right-click on the image, and then perform an autofocus routine. Sampath has coded some metrics that calculate the image sharpness (analysis/contrast.py -> normalized_dct_shannon_entropy()). The goal would be to adjust the focus by some user-specified distance, collect an image, calculate the DCT Shannon entropy, and repeat. We then find the maximum DCT Shannon entropy and move the stage to that position.
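A minimal sketch of the double-click-to-center idea is below. It assumes the displayed image fills the canvas and that the stage exposes a hypothetical move_relative(axis, distance_um) method; the axis orientation and sign conventions would need to be checked against the real hardware.

def center_on_click(event, stage, pixel_size_um, zoom, canvas_width, canvas_height):
    # Move the stage so that the double-clicked point ends up in the center of the FOV.
    effective_pixel_um = pixel_size_um / zoom
    dx_pixels = event.x - canvas_width / 2
    dy_pixels = event.y - canvas_height / 2
    stage.move_relative('x', dx_pixels * effective_pixel_um)   # hypothetical stage method
    stage.move_relative('y', dy_pixels * effective_pixel_um)

# canvas.bind('<Double-Button-1>',
#             lambda e: center_on_click(e, stage, 6.5, 1.0, 2048, 2048))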

Real-Time Analysis Test Scenario

We would like to test the feasibility of doing online or real-time analysis. What is the best architecture to do this? And how can we best leverage GPU speedups? For the latter, I have been working quite a bit with TensorFlow and cucim, and using these should be pretty easy. But moving forward, should we use sub-processes, threads, or something else?

To start easy, we will do an autofocusing routine. Assume a single channel for now; we may independently measure the focus parameter for each channel in the future.

We can make an autofocus menu item. We also talked about possibly making it accessible by right-clicking on the image canvas to bring up a popup menu.

The user selects the channel (CH1, CH2, or CH3) and the device we want to adjust. For the low-resolution imaging mode, this is the focus ("F") stage. The user must also specify how many images to acquire and how far apart they should be.

For example, a 1-stage optimization: if we want to adjust F over a total range of 50 microns with 5-micron steps, we would acquire one image at
0, 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50 microns = 11 steps (always an odd number). For each image, we would then calculate the Shannon entropy of the discrete cosine transform. Then we find the maximum response and move the stage to that position.

2-stage optimization. Adjust F for a total range of 500 microns, with 50 micron steps.
0, 50, 100, 150... 500

Find the local maxima, then do a second stage optimization step over 50 microns with 5 micron steps.

Maybe having the option to do either 1-stage or 2-stage optimization would be nice. Your call.

Local maxima detection: the simplest approach is to take the index of the maximum response. Alternatively, we could do some sort of curve fitting to find the maximum response (is the response Gaussian-like with focus?). Start with the index, and then move forward with more complex approaches in the future.
6x_focus.pdf

model/analysis/image_contrast -> normalized_dct_shannon_entropy()
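A minimal sketch of the one- and two-stage search described above; acquire_image, move_focus, and entropy are placeholders for the real camera, focus stage, and normalized_dct_shannon_entropy calls.

import numpy as np

def autofocus(acquire_image, move_focus, entropy, center, total_range_um, step_um):
    # One optimization stage: sweep the focus symmetrically around `center`,
    # score each image with the DCT Shannon entropy, and return the best position.
    positions = np.arange(center - total_range_um / 2,
                          center + total_range_um / 2 + step_um, step_um)
    scores = []
    for pos in positions:
        move_focus(pos)
        scores.append(entropy(acquire_image()))
    return positions[int(np.argmax(scores))]   # simplest "index of the maximum" approach

def two_stage_autofocus(acquire_image, move_focus, entropy, start):
    # Coarse: 500 um range with 50 um steps; fine: 50 um range with 5 um steps.
    coarse = autofocus(acquire_image, move_focus, entropy, start, 500, 50)
    fine = autofocus(acquire_image, move_focus, entropy, coarse, 50, 5)
    move_focus(fine)
    return fine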

Tiling Acquisition Pop-Up Window/Wizard

Need a pop-up window that allows the user to specify the spatial limits of an acquisition: X0, X1, Y0, Y1, Z0, Z1. What is the % overlap?

For example, you might want 10% overlap, or 25% overlap. Useful depending upon the sparsity of the thing being imaged.

Pop-up window would come up when the user selects the option in the menu of the Pandastable data object.

Hypothetical 2D case: calculate the field-of-view size from the pixel size (6.5 microns), the zoom value (e.g., 1x), and the number of pixels: 2048 * 6.5 / 1x = 13.3 mm FOV. If we wanted to image a 50 mm square field, that is 3.8 images in x and y, but that does not account for the overlap. With 10% overlap: 0,0 -> 12,0 -> 24,0 -> 36,0... 0,12 -> 12,12....

Really we also need to account for tiling in the z dimension, which we will pull from the stack acquisition settings. If the z-stack is 250 microns thick, a 10% overlap would be 25 microns.
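A minimal sketch of the tile-position calculation, assuming the returned values are tile start positions along one axis and that the overlap is a fraction of the field of view:

import math

def tile_positions(start, stop, fov_size, overlap=0.10):
    # 1D tile start positions covering [start, stop] with fractional overlap.
    # Units are whatever the stage uses (e.g., mm).
    step = fov_size * (1.0 - overlap)
    n_tiles = max(1, math.ceil((stop - start - fov_size) / step) + 1)
    return [start + i * step for i in range(n_tiles)]

# 2D example from above: 6.5 um pixels, 2048 pixels, 1x zoom -> ~13.3 mm FOV
fov_mm = 2048 * 6.5e-3 / 1.0
x_positions = tile_positions(0.0, 50.0, fov_mm, overlap=0.10)   # ~0, 12, 24, 36, 48 mm
grid = [(x, y) for y in tile_positions(0.0, 50.0, fov_mm) for x in x_positions]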

Thread Error

Attempted to test out the new autofocus mode with the hardware. When trying to acquire continuously, or even collect a single image acquisition, I run into some thread issues:

GUI Custom Widgets

Please feel free to add ideas here so we can discuss and keep track of a "wishlist" of sorts for widget functionality we want.

  • Generic Popup
  • Entry widget with red as a warning
  • Bubble Label above entries to notify user of something?

The module of menus

We need to reorganize the menu code so that it works more like a module, and add a controller so the menus are functional. This includes:

  • Move menus from main_application_window.py to a single python file.
  • Adjust the menu items; these menus need to be extended as more functions are supported.
    1) Add 'Load Experiment', 'Save Experiment'
    2) Add 'Load Positions', 'Export Positions', 'Move to Select Position', 'Append current Position', 'Sort Positions'
  • Add a sub-controller to make the menus functional.

ETL popup sub-controller

We will need a sub-controller for the ETL popup dialog.

It will need to automatically populate the imaging modes and their respective magnifications. For the low-resolution mode, there is only one magnification. For the high-resolution, there are 0.63x, 1x, 2x, 3x, 4x, 5x, and 6x. I updated the ETL constants file.

The percent duty cycle and related parameters will be pulled from the configuration file.

It will also need to enable saving of the ETL parameters once they have been changed.

Improve Responsiveness

Let's improve the software responsiveness by properly laying out a threading/sub-module strategy for our devices. Ideally, we want certain functions, like presenting the image to the viewer after each acquisition, to occur in real time. When the software is too laggy, it can be quite difficult to optimize a microscope's configuration and alignment (e.g., changing the focus, adjusting the ETL, etc.).

We should decide whether we want to make the camera an ObjectInSubprocess, and other items, such as the stage, filter_wheel, etc., ResultThreads. Once these settings are configured, having some readout of the improvement in performance would be reassuring.
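A rough sketch of how this could look, assuming the concurrency_tools module referenced elsewhere (the one that depends on threading.excepthook) and that its ObjectInSubprocess and ResultThread classes keep their usual interfaces; the device constructors and method names are hypothetical.

from concurrency_tools import ObjectInSubprocess, ResultThread

# Heavy, stateful hardware gets its own process (and its own GIL):
# camera = ObjectInSubprocess(HamamatsuCamera, camera_id=0)   # hypothetical constructor

# Quick, I/O-bound device calls can be ResultThreads whose results we collect later:
# move = ResultThread(target=stage.move_absolute, args=(position,))
# move.start()
# ...do other work while the stage moves...
# move.get_result()   # joins the thread and surfaces any exception it raised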

User Task - Drop Down for Sensor Mode

In the camera settings tab we need a drop-down menu that includes sensor modes for light-sheet mode and normal mode. If in light-sheet mode, the user should be able to select the readout direction from another drop-down list: top-to-bottom, bottom-to-top, and bidirectional. Also in light-sheet mode, the user should be able to select the number of pixels in the rolling shutter via a spinbox with integer steps. In a separate frame, show all of the ROI info as discussed in the Slack conversation.

User Task - Continuous Acquisition Mode

If the user hits acquire in the continuous mode (1 laser, 1 filter position, 1 exposure time):

  1. Get the channel properties (laser, filter wheel, exposure time).
  2. Set the laser.
  3. Set the filter wheel.
  4. Set the exposure time.
  5. Prepare the data acquisition card (sends and receives voltages).
  6. Once everything has prepared itself, send out the voltage that triggers the camera.
  7. Grab the image and display it to the user.

Repeat until the user hits the acquire button again.
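A minimal sketch of that loop is below, assuming hypothetical device method names on the model, a stop event that the Acquire button sets on the second press, and a queue consumed by the view for display.

def continuous_acquisition(model, channel, stop_event, display_queue):
    # stop_event:    threading.Event set by the Acquire button handler to end the loop.
    # display_queue: queue.Queue consumed by the camera view controller.
    model.laser.set_active(channel['laser'])                 # hypothetical device methods
    model.filter_wheel.set_filter(channel['filter'])
    model.camera.set_exposure_time(channel['exposure_time'])
    model.daq.prepare_acquisition()                          # configure analog/digital waveforms

    while not stop_event.is_set():
        model.daq.run_acquisition()                          # send the camera-trigger voltage
        frame = model.camera.get_new_frame()
        display_queue.put(frame)                             # the view pops this and displays it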

Efficient Volume Search

Distant goal/idea for the autonomous mode of imaging operation.

The tissues are oddly shaped, so imaging them in a set grid (e.g., x0-x1, y0-y1, z0-z1) is often a stupid idea. Thus, we will need an effective way to map out the tissue boundaries in a coarse imaging mode (e.g., with 0.63x or 1x magnification), and then do follow-up imaging at a slightly higher resolution (e.g., 6x magnification). Remember that the low-resolution arm of the microscope has a motorized zoom servo that can automatically change the magnification of the imaging system.

So, what is the best way to do a search? Perhaps an R-Tree? It is slightly different because we aren't dealing with points, but images, which also need to be analyzed for the presence or absence of tissue.
https://en.wikipedia.org/wiki/R-tree

Couple other interesting ideas here: https://blog.mapbox.com/a-dive-into-spatial-search-algorithms-ebd0c5e39d2a

At some point, we will have to start thinking about this.

GUI Dicts/Array Sweep

Ensure that all widgets and variables tied to widgets are inside of dicts or arrays so that data can easily be called/passed to the sub-controller modules.

  • Camera View Tab
  • Camera Settings Tab
  • Stage Control Tab
  • Multi Position Frame
  • Timepoint Frame
  • Laser Cycling Frame
  • Stack Acq Settings Frame
  • Acquire Bar Frame
  • Waveform Settings Tab

Initial settings not properly parsed?

I've noticed that when we launch the software, 3 channels are selected by default. If we do not change anything but acquire a single image, only 2 frames are acquired, not 3. The second time we hit acquire, it collects 3. Not sure what causes this.

Resolution estimation

One possible feedback mechanism is to use the measured resolution. In such a case, if the resolution falls below some threshold, then we could flag those regions for reanalysis or for automatic optimization of the imaging parameters.

Ideally, this would be performed in a way that is agnostic to the sample characteristics. One method is decorrelation analysis. This could possibly be a nice project for our St. Marks team.

https://github.com/Ades91/ImDecorr
https://www.nature.com/articles/s41592-019-0515-7

One possibility is to try to rewrite the code in a format where the data can be pushed to the GPU (we have one Titan RTX GPU, and possibly two if we want). pyclesperanto is a nice package that makes it easy to use the GPU, and I have had excellent results with it. The other option is CuPy.
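For reference, a minimal sketch of the push/compute/pull pattern with pyclesperanto is below; the decorrelation analysis itself would still need to be ported to GPU-friendly operations.

import numpy as np
import pyclesperanto_prototype as cle

image = np.random.random((2048, 2048)).astype(np.float32)    # stand-in for a real frame
gpu_image = cle.push(image)                                   # transfer to the GPU once
blurred = cle.gaussian_blur(gpu_image, sigma_x=2, sigma_y=2)  # runs on the GPU
result = cle.pull(blurred)                                    # back to a numpy array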

User Task - Toolbar Functionality

I would like to start to build out the toolbar's functionality.

I think to do this, we should first build out a sub-controller for the toolbar using @annie-xd-wang's format. This will pass the commands to the controller, and be initialized in the controller.

Under File, we would like to be able to load a particular 'experiment' yaml file and populate the experiment settings in the model. Likewise, we would also like to save an experiment yaml file so that it can be reused later on. We do not have code for saving the experiment, but it should essentially give the user a standard save dialog so that they can choose the name and save location. When the user chooses to load the config, I believe our 'session' class (https://github.com/AdvancedImagingUTSW/ASLM/blob/b9f1583878343670452077cad681f2cfadfb01c7/base/1.0.0/model/aslm_model_config.py) can be used to parse it.


Under File, we will also provide the opportunity for a user to load images. Just like we had a save dialog to save the experiment, we would give a similar popup window that allows the user to explore the file tree and choose the image that they would like. As we get further along, this will allow the user to load an image, evaluate it using some of the computer vision tools that we will build out.

We can change 'Edit' to 'Mode'. Two options should be 'Mesoscale' and 'Nanoscale'. This will ultimately decide which shutters, cameras, etc. to operate. It will toggle the operation between the left half and the right half of the microscope, which we ultimately hope to do automatically in the future.

The Zoom buttons will need to work properly. In reality, it only influences the operation in the Mesoscale mode. These buttons will interact with the set_zoom function within the DynamixelZoom Device: https://github.com/AdvancedImagingUTSW/ASLM/blob/develop/base/1.0.0/model/devices/zoom/dynamixel/DynamixelZoom.py

The values that we send to this function are located in the model.configuration.ZoomParameters.zoom_position dictionary.

def set_zoom(self, zoom, wait_until_done=False):
    """
    Changes zoom after checking that the commanded value exists
    """
    if zoom in self.zoomdict:
        self._move(self.zoomdict[zoom], wait_until_done)
        self.zoomvalue = zoom
    else:
        raise ValueError('Zoom designation not in the configuration')
    if self.verbose:
        print('Zoom set to {}'.format(zoom))

ZoomParameters:
  # Dynamixel or SyntheticZoom
  type: SyntheticZoom
  servo_id: 1
  COMport: COM21
  baudrate: 1000000
  zoom_position: {
    0.63x: 0,
    1x: 627,
    2x: 1711,
    3x: 2301,
    4x: 2710,
    5x: 3079,
    6x: 3383
  }
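With this dictionary loaded into self.zoomdict, a toolbar Zoom button handler would effectively make a call like the following (zoom_device standing in for the DynamixelZoom instance):

zoom_device.set_zoom('1x', wait_until_done=True)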

User Task - Dynamic Display Size

The GUI will need to be resized to dynamically fit any display size/resolution. For example, because the current setup is static, the acquire bar is not showing up on some displays due to the sizing of the total GUI. This will only be further exacerbated as more dense features are added.
