acq4's Issues

Multipatch log should provide basic information about logged devices

Currently, lines in the multipatch log file give the name of the manipulator to which they belong, but there is no way to determine anything else about the manipulator, such as:

  • What is being manipulated (a headstage with pipette? a stim electrode?)
  • Orientation
  • Association with ephys data

Some of this information might belong elsewhere, but it would be nice if we could at least graphically reconstruct the pipette orientations when reading the log file.

An easy fix for orientation might be to include the complete transform for each pipette as part of its initial state. We could also have each pipette provide a describe method that returns more custom information (like MIES channel associations).
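
A minimal sketch of what such an initial description could look like, assuming one JSON record per line in the log; the key names, the helper function, and the example values are all hypothetical, not existing acq4 API:

```python
import json
import time

def initial_pipette_state(name, transform, ephys_channel=None):
    """Build a one-time description record for a pipette (hypothetical format)."""
    return {
        'event': 'pipette_description',   # distinguishes this from position updates
        'device': name,                   # manipulator name already present in the log
        'transform': transform,           # e.g. 4x4 matrix as nested lists -> orientation
        'ephys_channel': ephys_channel,   # e.g. MIES channel association, if any
        'timestamp': time.time(),
    }

# One line appended to the multipatch log when recording starts:
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
line = json.dumps(initial_pipette_state('Manipulator1', identity, ephys_channel=0))
```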

Allow saving MosaicEditor state to file

The main purpose is to be able to reload the complete state of a MosaicEditor session, including the list/order of files loaded, transparency, filters, etc.

  • Selecting this file in the data manager should give you an image preview of the canvas when the file was saved.
  • Possibly image filters / levels / LUT should be stored along with the image itself?
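
A rough sketch of what a saved session file could contain, assuming a JSON side-car file; the field names and example path are illustrative only:

```python
import json

session_state = {
    'items': [
        {
            'file': 'site_000/image_000.ma',   # hypothetical path, relative to the storage dir
            'order': 0,                        # stacking order on the canvas
            'opacity': 0.8,
            'filter': 'max_projection',
            'levels': [120, 4000],             # display levels at save time
        },
    ],
    'viewRange': [[-1e-3, 1e-3], [-1e-3, 1e-3]],   # canvas view rectangle (meters)
}

with open('mosaic_session.json', 'w') as fh:
    json.dump(session_state, fh, indent=2)
```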

Using MultiPatch log files in MosaicEditor

The primary purpose is to extract 3D cell locations from manipulator position data.

  • Load a MultiPatch log file into MosaicEditor and allow the user to see movement of pipettes/targets over time
    • Pipettes should record an initial description into the log file that gives their orientation and associated ephys channel, if any
  • Allow user to create cell markers from pipettes (or from targets?)
    • Cell markers should be movable in x/y and should have another mechanism to set their z depth
    • Export cell positions and surface depth
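
A sketch of how the MosaicEditor side might pull pipette tracks out of the log, assuming one JSON record per line with 'device', 'event', 'position', and 'timestamp' fields (the field and event names are assumptions about the log format):

```python
import json
from collections import defaultdict

def load_pipette_tracks(path):
    """Return {device_name: [(time, x, y, z), ...]} extracted from a multipatch log."""
    tracks = defaultdict(list)
    with open(path) as fh:
        for line in fh:
            rec = json.loads(line)
            if rec.get('event') == 'pipette_position':      # assumed event name
                x, y, z = rec['position']
                tracks[rec['device']].append((rec['timestamp'], x, y, z))
    return dict(tracks)
```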

Store LED/filter settings with images

Every image should have metadata on the illumination / filtering / imaging devices used to generate the image (when available).

This feature should be implemented in the Microscope class.

Should build on acq4#45
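
A hedged sketch of the per-image metadata this would add; the keys and nesting are suggestions only, not the current Microscope API:

```python
# Example of what the Microscope class might attach to each frame's info dict.
illumination_info = {
    'lightSource': {
        'active': ['Blue LED'],          # names of sources that were on (hypothetical names)
        'power': {'Blue LED': 0.35},     # fractional or calibrated power, if known
    },
    'filterWheel': {'position': 2, 'name': 'GFP'},
    'objective': '40x water',            # already tracked by the Microscope device
}
```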

Camera module should save background frames and levels/LUT with images

Goal: when images are opened in MosaicEditor, they should look exactly as they looked onscreen when they were saved.

  • Images should get userLevels and userLookupTable info keys. These may be modified by MosaicEditor like userTransform.
  • Images should also get a backgroundImage info key that references a stored background frame, and a 'backgroundRemovalMode' key that can be either 'divide' or 'subtract'.
  • Don't save duplicate background frames; multiple images may reference the same background file.
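
A small sketch of the proposed info keys as they might appear in a saved frame's metadata; the key names follow the issue text and the values are illustrative:

```python
frame_info = {
    'userLevels': [100, 3800],               # display levels at save time
    'userLookupTable': 'grayscale',          # or a serialized LUT array
    'backgroundImage': 'background_000.ma',  # reference to a shared background frame
    'backgroundRemovalMode': 'subtract',     # 'divide' or 'subtract'
}
```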

Record image transform for every frame in a recording

The primary use is for tracking the focus depth for every frame in a stack. If the transform changes rapidly, then there will be some lag between each frame and its recorded transform (but we can live with this).

Complications:

  1. Potentially very expensive
    • Do we record the entire transform stack for each frame?
    • Can we optionally just record the global transform, or just the global translation, or just the focal depth?
  2. Messes with MetaArray (currently can't record more than one axis value per frame)
  3. Messes with ImageView (want to display depth OR time on the z axis)

Suggestions:

  1. Just record the global translation (x,y,z) values
  2. Add a new key to the metaarray info: metaarray._info[0]['translation'] would be an (N, 3) array. When recording image stacks, we currently append a new frame time to _info[0]['values'] whenever a new frame arrives; we can extend this mechanism to optionally append values to the translation field as well. Perhaps it would look something like ma.write(filename, appendAxis='Time', appendKeys=['translation']) (this would go in acq4/util/imaging/record_thread.py : writeFrames()). A rough sketch follows this list.
  3. We can kick this one down the road; maybe it doesn't need to be handled by ImageView at all.
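
A minimal sketch of suggestion 2, assuming MetaArray's axis-info structure; the proposed appendKeys option does not exist yet, so the append logic below only illustrates what writeFrames() would need to do per frame:

```python
import numpy as np

# Axis info as it might look for an image stack: frame times plus per-frame translation.
info0 = {'name': 'Time', 'values': np.zeros(0), 'translation': np.zeros((0, 3))}

def append_frame(info0, frame_time, translation):
    """Append one frame's time and global (x, y, z) translation to the axis info."""
    info0['values'] = np.append(info0['values'], frame_time)
    info0['translation'] = np.vstack([info0['translation'], np.asarray(translation)])

append_frame(info0, 0.033, (1.2e-3, 0.5e-3, -6.0e-5))
```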

Record pipette positions in Multipatch module

I always forget to hit the record button in the multipatch module to record pipette positions during an experiment. Could recording be activated automatically when a "New Site" is created in the Data Manager, on the assumption that a new site means a new experiment is starting?

Make manipulator search configuration more intuitive

Right now we have two config options, tipHeight and tipSearchHeight, that are confusing and interact in unexpected ways. Let's replace them with something more straightforward, like:

  • minSearchHeight - minimum distance above the slice surface when bringing pipettes in for search
  • searchFocusOffset - Z distance from pipette tip to the focal plane in search mode (positive values give the pipette extra room and help prevent crashes into the objective; negative values make the pipette easier to find)

For backward compatibility, we can catch the old options and raise a super-helpful error message that provides the new parameters and values to use.
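
A sketch of the backward-compatibility check; the old and new option names come from the text above, but where the check lives and the exact wording are just suggestions:

```python
def check_search_config(config):
    """Reject the removed options and point the user at their replacements."""
    old_keys = {'tipHeight', 'tipSearchHeight'} & set(config)
    if old_keys:
        raise ValueError(
            "Config options %s are no longer supported. Use 'minSearchHeight' "
            "(minimum distance above the slice surface during search) and "
            "'searchFocusOffset' (Z distance from pipette tip to the focal plane "
            "in search mode) instead." % ', '.join(sorted(old_keys))
        )
```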

MosaicEditor image features

Would like to see several features added to MosaicEditor image handling:

  • Flexible/customizable filters to replace the fixed set of filter buttons (max/mean/edge/etc). User can add any number of "preset" buttons for single-click filter changes. In progress here: #8 (a possible preset structure is sketched after this list)
  • Button to enable / disable auto gain control while moving time slider. Can we use the camera module's code for this? (and can it be implemented in pyqtgraph instead of acq4?)
  • Quick control for disabling filter (maybe a check by the filter groupbox title)
  • Option to apply settings to a selection of images (adjust levels / LUT, set filter, etc.)
  • Option to link z-axes across images (depends on #2)
  • Remember last used levels / LUT / filter (but remember initially recorded values as well?)
  • Mask painting - user can manually paint masked areas over regions of a stack to facilitate 2D projection (for example, to make a 2D morphology image from a z-stack where fine dendrites are not visible in a max-projected image)
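
A possible shape for the user-defined filter presets mentioned in the first bullet above; the structure and filter identifiers are assumptions, not the in-progress implementation from #8:

```python
filter_presets = [
    {'name': 'Max Z',  'filter': 'max',  'axis': 'Z'},
    {'name': 'Mean Z', 'filter': 'mean', 'axis': 'Z'},
    {'name': 'Edges',  'filter': 'edge'},   # hypothetical filter identifier
]
# Each entry would become a single-click button that applies its filter to the
# currently selected image items.
```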

Locking manipulator from multipatch module should disable rotary controller

Manipulators that are locked in the multipatch module are currently excluded from any automatic movements, but it is still possible to move these pipettes manually with the rotary controllers. Ideally, manual movement should be blocked as well, to prevent accidental pipette movement.

Behavior should be:

  • Manipulators can only be moved manually if they are unlocked or in solo mode
  • Manipulator selection has no effect on manual movement
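
A hedged sketch of that rule as a single predicate the manual-input handling could consult; the attribute names on the pipette object are hypothetical:

```python
def manual_move_allowed(pipette, solo_pipette=None):
    """Rotary-controller moves are allowed only for unlocked or solo pipettes."""
    if solo_pipette is not None:
        return pipette is solo_pipette   # solo mode: only the solo pipette may move
    return not pipette.locked            # selection state is deliberately ignored
```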

Record target positions, surface depth in multipatch module

I think right now we are only recording pipette positions.

Each value needs to be recorded once when the log file is created, and subsequently whenever the value changes.

  • Surface height is stored in the Microscope device and can be monitored with Microscope.sigSurfaceDepthChanged. Use man.listInterfaces('microscope') to list the names of all configured microscopes.
  • Target positions are managed by individual PatchPipette devices and can be monitored with PatchPipette.sigTargetChanged.
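
A rough sketch of wiring this up; the signal names and listInterfaces call come from the issue text, while the accessor methods (getSurfaceDepth, targetPosition), the signal argument signatures, and the write_record helper are assumptions:

```python
def connect_multipatch_logging(man, pipettes, write_record):
    """Log surface depth and target positions once, then again whenever they change."""
    for scope_name in man.listInterfaces('microscope'):
        scope = man.getDevice(scope_name)
        write_record({'device': scope_name, 'surface_depth': scope.getSurfaceDepth()})
        scope.sigSurfaceDepthChanged.connect(
            lambda depth, name=scope_name:
                write_record({'device': name, 'surface_depth': depth}))

    for pip in pipettes:
        write_record({'device': pip.name(), 'target': pip.targetPosition()})
        pip.sigTargetChanged.connect(
            lambda dev, target: write_record({'device': dev.name(), 'target': target}))
```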

Module for assessing connectivity in multipatch experiments

The primary purposes of this module will be to:

  • track the state of all patch recordings and assess connectivity
  • visually display connectivity between pipettes (matrix and/or graph)
  • display averaged synaptic responses and evoked spikes
  • keep track of which protocols have been run on which synapses
  • schedule a sequence of protocols to be executed and let the user rearrange them
  • pack all data from the experiment into a single NWB file

LED control on XKey Pad

Implement on/off control of LEDs with buttons on the XKeys pad for easier access when switching between light sources.

Multipatch module

The main purpose of this module is to facilitate patching cells and monitoring cell health when using multiple patch electrodes.

  • Allow easy control of multiple manipulators for patching (mostly complete, but see #12)
  • Track state of each pipette -- sealing, cell-attached, whole-cell, etc
  • Store a log of all pipette state / qc / position data (see #1, #2, #3)
  • Plot basic QC measurements from test pulse for each pipette -- input/access resistances, holding values, etc.
  • Visual alerts when QC values are out of desired range

Current development is in https://github.com/campagnola/acq4/tree/multipatch

Resetting stage coordinate system causes pipette calibrations to become invalid

Currently, making changes to the coordinate system of a stage (such as when we click the "zero" button) causes the calibration of pipettes to become invalid.

Probably the correct solution is to compensate for the sudden CS change by introducing the opposite change into the stage's transform, but then do we need to automatically update the config file as well?

Or maybe we just need to add a button that lets the user write the transform back out to config (this would be really nice when calibrating stage/scope/camera anyway).
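
A sketch of the compensation idea for the simple case where the reset is a pure translation (e.g. zeroing); the function and the assumption that the stage's global position is a base offset plus the raw reading are illustrative, not the actual Stage transform code:

```python
import numpy as np

def compensated_offset(base_offset, old_reading, new_reading):
    """Fold the opposite of a coordinate-system jump into the stage's base offset.

    If global = base_offset + reading, keeping global fixed across the jump means
    subtracting (new_reading - old_reading) from the offset.
    """
    jump = np.asarray(new_reading) - np.asarray(old_reading)
    return np.asarray(base_offset) - jump
```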

Add reseal button to multipatch module

Scientifica's new motion cards have a very low minimum speed, so we should be able to automate pipette resealing.

  • Add a "reseal' / "pull off" / "outside-out" button to multipatch module
  • Add a config option to set the speed for pulling off
  • Another option to set the distance? Can we just hardcode this at 200 um or so?
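
A sketch of the config options this might add; the names, units, and defaults are only suggestions (the 200 um fallback mirrors the question above):

```python
reseal_defaults = {
    'resealSpeed': 1e-6,       # m/s -- very slow retraction enabled by the new motion cards
    'resealDistance': 200e-6,  # m -- how far to retract while pulling the outside-out patch
}
```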

LIMS upload for multipatch pipeline

Generate a single NWB file per experiment and send to LIMS

  • Start from MIES NWB file
  • Pack in multipatch timeline
  • Pack in and label image data
  • Upload to LIMS
  • Upload queryable JSON file

Should be part of a module for streamlining multipatch connectivity experiments (#13)

Camera module z-stack does not work with cameras

This feature was developed for 2p imaging devices but never tested with cameras. Should be fixable with minor effort.

  • Make z-stack / timelapse work with camera devices
  • Test changes on 2p imaging device

Race condition in Camera.getScopeState()

Camera.getScopeState() currently returns the scopeState dictionary stored by the camera. This should probably return a copy of the dict instead, but we should do a quick check to be sure performance is not affected.
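
A minimal sketch of the proposed change, assuming the state dict is small enough that a shallow copy is cheap; the lock shown here is an assumption about how scopeState is updated, not the current implementation:

```python
def getScopeState(self):
    """Return a snapshot of the scope state rather than the live dict."""
    with self.scopeStateLock:          # assumed lock guarding scopeState updates
        return self.scopeState.copy()  # shallow copy; profile before reaching for deepcopy
```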
