
plant-seg's Introduction

PlantSeg

Illustration of Pipeline

PlantSeg is a tool for cell instance-aware segmentation in densely packed 3D volumetric images. The pipeline uses a two-stage segmentation strategy (neural-network boundary prediction followed by segmentation). The pipeline is tuned for plant cell tissue acquired with confocal and light-sheet microscopy. Pre-trained models are provided.


Getting Started

For detailed usage check out our documentation 📖.

Documentation · Napari GUI · Legacy GUI · Command Line

Installation

Please go to the documentation for more detailed instructions. In short, we recommend using mamba to install PlantSeg, which is currently supported on Linux and Windows.

  • GPU version, CUDA=12.x

    mamba create -n plant-seg -c pytorch -c nvidia -c conda-forge pytorch pytorch-cuda=12.1 pyqt plant-seg --no-channel-priority
  • GPU version, CUDA=11.x

    mamba create -n plant-seg -c pytorch -c nvidia -c conda-forge pytorch pytorch-cuda=11.8 pyqt plant-seg --no-channel-priority
  • CPU version

    mamba create -n plant-seg -c pytorch -c nvidia -c conda-forge pytorch cpuonly pyqt plant-seg --no-channel-priority

Each of the above commands creates a new conda environment named plant-seg together with all required dependencies.

Repository Index

The PlantSeg repository is organised as follows:

  • plantseg: Contains the source code of PlantSeg.
  • conda-recipe: Contains all necessary code and configuration to create the anaconda package.
  • docs: Contains the documentation of PlantSeg.
  • evaluation: Contains all scripts required to reproduce the quantitative evaluation in Wolny et al..
  • examples: Contains the files required to test PlantSeg.
  • tests: Contains automated tests that ensure PlantSeg functionality is not compromised during an update.

Citation

@article{wolny2020accurate,
  title={Accurate and versatile 3D segmentation of plant tissues at cellular resolution},
  author={Wolny, Adrian and Cerrone, Lorenzo and Vijayan, Athul and Tofanelli, Rachele and Barro, Amaya Vilches and Louveaux, Marion and Wenzl, Christian and Strauss, S{\"o}ren and Wilson-S{\'a}nchez, David and Lymbouridou, Rena and others},
  journal={Elife},
  volume={9},
  pages={e57613},
  year={2020},
  publisher={eLife Sciences Publications Limited}
}

plant-seg's People

Contributors

constantinpape · fynnbe · jackyko1991 · k-dominik · lorenzocerrone · mohinta2892 · pre-commit-ci[bot] · qin-yu · tibuch · wolny


plant-seg's Issues

evaluation

hello author,

I want to know about your evaluation metrics code. Can I use it if my dataset has raw images and segmentation labels as PNGs, or do I need some other annotation format for the labels? Which approach is better?

thank you

LiftedMulticut error

Hi all! First, congrats for this amazing and friendly tool.

I have been trying the lifted_multicut protocol as you described in the wiki.
https://github.com/hci-unihd/plant-seg/blob/master/plantseg/resources/nuclei_predictions_example.yaml
https://github.com/hci-unihd/plant-seg/blob/master/plantseg/resources/lifted_multicut_example.yaml

but when the program executes the segmentation module of the second step, it crashes with "Unsupported algorithm name LiftedMulticut".

Do I have to download or add this segmentation algorithm to the preinstalled PlantSeg program?

Thank you in advance.

Pedro

New file structure for pipeline execution

Currently, executing the PlantSeg pipeline creates a directory structure that depends on which pipeline stages were enabled, with directory names corresponding to the network names / segmentation algorithms; this can make automatic post-processing difficult.
To tackle this we should use a hierarchical file format such as HDF5 or N5, and store the output of the PlantSeg pipeline in a single h5/n5 file.

TODO:

  1. Design and document an h5 file structure for the plantseg pipeline
  2. Implement new file structure
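As a sketch of what such a single-file layout could look like (the group and dataset names here are illustrative assumptions, not the final PlantSeg design), assuming h5py:

```python
import h5py
import numpy as np

# Hypothetical single-file layout: one group per pipeline stage,
# one dataset per network / segmentation algorithm.
shape = (4, 8, 8)
with h5py.File("plantseg_output.h5", "w") as f:
    f.create_dataset("raw", data=np.zeros(shape, dtype="float32"))
    f.create_dataset("preprocessing/gaussian", data=np.zeros(shape, dtype="float32"))
    f.create_dataset("predictions/generic_confocal_3d_unet",
                     data=np.zeros(shape, dtype="float32"))
    f.create_dataset("segmentation/GASP", data=np.zeros(shape, dtype="uint32"))

# Downstream tools can then address results by dataset path
# instead of scanning a stage-dependent directory tree.
with h5py.File("plantseg_output.h5", "r") as f:
    print(sorted(f["segmentation"].keys()))
```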

Add `LiftedMulticut` to the list of supported algorithms in UI

LiftedMulticut is now supported from within the YAML file configuration. It should also be added to the UI.
If added to the UI, we would need an additional field to specify the nuclei_predictions_path option. In order to keep the UI generic, we could just add a generic text field additional_attributes where the user could provide comma-separated values, e.g.
nuclei_predictions_path: /home/user/nuclei_pmaps, some_attr: XYZ.
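A minimal sketch of how such a free-form field could be parsed into config options, assuming simple "key: value" pairs separated by commas (the function name is hypothetical):

```python
# Hypothetical parser for a generic "additional_attributes" text field.
def parse_additional_attributes(text: str) -> dict:
    attributes = {}
    for pair in text.split(","):
        if ":" not in pair:
            continue  # skip malformed fragments instead of crashing the UI
        key, value = pair.split(":", 1)
        attributes[key.strip()] = value.strip()
    return attributes

print(parse_additional_attributes(
    "nuclei_predictions_path: /home/user/nuclei_pmaps, some_attr: XYZ"))
```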

KeyError: 'cnn_prediction' when opening plantseg --gui

Hello
I've been using plantseg for a while now and am generally very happy with the program.
Not sure if this is the right place to ask this, but I hope you might be able to help
Since today, when I try to open the GUI, I get the KeyError described below:

File "/home/usr/miniconda3/envs/plant-seg/bin/plantseg", line 11, in <module>
sys.exit(main())
File "/home/usr/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/run_plantseg.py", line 24, in main
PlantSegApp()
File "/home/usr/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/gui/plantsegapp.py", line 75, in __init__
self.build_all()
File "/home/usr/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/gui/plantsegapp.py", line 89, in build_all
self.init_frame1()
File "/home/usr/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/gui/plantsegapp.py", line 189, in init_frame1
self.post_obj) = self.init_menus(show_all=True)
File "/home/usr/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/gui/plantsegapp.py", line 382, in init_menus
font=self.font, show_all=show_all)
File "/home/usr/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/gui/gui_widgets.py", line 210, in __init__
default=config[self.module]["model_name"],
KeyError: 'cnn_prediction'

I tried to re-install the package twice but the error persists. I'm using plant-seg with conda and Python 3.7.6 on Ubuntu LTS 18.04. I had to re-install my CUDA environment yesterday; not sure if this might be linked.
Thanks for your work, and looking forward to getting back to segmentation

crop_volume error in GUI Plantseg 1.3

Hello
I just updated the Plantseg environment on my System, using Linux, and wanted to test some of the new models with the GUI.
When I set up the image and press run, Plantseg returns an error message with the last line reading:

RuntimeError: key : 'crop_volume' is missing, plant-seg requires 'crop_volume' to run

I tried to enter several values in the new crop volume field, from leaving it empty to [0:, 0:, 0:], and tried various other numbers to specify the range, like [0:10, 0:10, 0:10]. The returned error is always the same. I could not find any updated entries explaining this new parameter. Could you add that to the documentation or look into the error?
Greetings Moritz
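If the config validator simply requires the key to be present, a fragment along these lines might unblock the run; the placement under preprocessing and the slice-string syntax are my assumptions, not documented behaviour:

```yaml
preprocessing:
  # hypothetical: keep the full volume, i.e. no cropping
  crop_volume: '[:, :, :]'
```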

Can't run the segmentation pipeline based on description in README

I tried to run the segmentation pipeline as described in the README, but ran into several issues
that seem to be caused by hard-coded paths.
For example, it is not enough to clone the pytorch-3dunet repository into the plant-seg repository, because it also needs to be added to the PYTHONPATH.
Also, the imports in plantseg.py don't work if the script is called from the parent directory.

pipeline step: 'cnn post processing' - input files path is empty

Hiya sorry have another issue:

The cnn predictions appear to be saved in the PreProcessing folder:
....
Executing pipeline step: 'cnn_prediction'. Parameters: '{'state': True, 'model_name': 'generic_confocal_3d_unet', 'device': 'cpu', 'patch': [8, 32, 32], 'stride': 'Balanced', 'version': 'best', 'model_update': False}'. Files ['/root/datasets/Crop/PreProcessing/Stack7crop51.h5'].

But when Executing pipeline step: 'cnn_postprocessing', the 'input file path is empty':
Executing pipeline step: 'cnn_postprocessing'. Parameters: '{'state': True, 'tiff': False, 'output_type': 'data_uint8', 'factor': [0.41504768632991873, 0.226928895612708, 0.24748391354561955], 'order': 2, 'save_raw': False}'. Files [].

Not sure why, because the .h5 file in the PreProcessing folder is 75Mb so definitely there. Any advice?

More graceful error for incorrect patch size

Currently, if one of the patch size dimensions is bigger than the corresponding image dimension, plantseg will fail.
[Screenshot from 2020-02-21 14-31-55]

This isn't very descriptive for the user. We should give a better error, or auto-adjust the patch dimensions to fit the image dimensions.
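A minimal sketch of the auto-adjust option, clamping each patch dimension to the corresponding image dimension (the function name is hypothetical):

```python
# Hypothetical auto-adjustment: clamp patch dims to image dims instead of failing.
def fit_patch_to_image(patch_shape, image_shape):
    adjusted = tuple(min(p, i) for p, i in zip(patch_shape, image_shape))
    if adjusted != tuple(patch_shape):
        # Warn the user so the change is not silent.
        print(f"Patch size {patch_shape} does not fit image {image_shape}; "
              f"using {adjusted} instead.")
    return adjusted

print(fit_patch_to_image((32, 128, 128), (20, 512, 512)))
```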

failing to launch plantseg gui on macOS

Hi people,
I am trying to use plantseg on my Mac and I am having trouble launching it. From what I understand, the GUI display is failing, but I can't pin-point the cause. I will eventually move to a Linux system, but it would be nice to get it working on the Mac.
I would really appreciate your help.
best,
Sourabh
[email protected]

macOS Catalina version 10.15.6
conda 4.7.12
Python 3.7.4
Docker version 19.03.12
XQuartz 2.7.11 (xorg-server 1.18.4)

bash-3.2$ docker run -it --rm -v /Users/saubhi/Desktop/SembLab/Data:/root/datasets -e DISPLAY=$DISPLAY wolny/plantseg

ERROR conda.cli.main_run:execute(31): Subprocess for 'conda run ['plantseg', '--gui']' command failed. (See above for error)
Traceback (most recent call last):
File "/root/miniconda3/envs/plant-seg/bin/plantseg", line 11, in <module>
sys.exit(main())
File "/root/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/run_plantseg.py", line 24, in main
PlantSegApp()
File "/root/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/gui/plantsegapp.py", line 43, in __init__
self.plant_segapp = tkinter.Tk()
File "/root/miniconda3/envs/plant-seg/lib/python3.7/tkinter/__init__.py", line 2023, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: couldn't connect to display "/private/tmp/com.apple.launchd.mAtyr7ic37/org.macosforge.xquartz:0"

bash-3.2$


Allow 'model_name' to be specified from the GUI

For users who want to try their own networks, we should allow pointing to the directory where the network resides from within the UI. Currently this is possible only when editing the config.

__init__() got multiple values for keyword argument 'basic_module'

Hi,

I followed the Linux install instructions and am getting the following error using the GUI:

Unknown Error. Error type: <class 'TypeError'>
__init__() got multiple values for keyword argument 'basic_module'

Tried running with and without data preprocessing (by preprocessing it in Fiji) and get the same.
Dataset is a confocal z-stack

Is there something I can do to fix this?

Thank you in advance.

Config is as follows:
path: /home/user/Desktop/TC/samples/sample-1.tif
preprocessing:
  factor:
  - 2.000688936170213
  - 3.172168
  - 3.172168
  filter:
    filter_param: 1.0
    state: false
    type: gaussian
  order: 2
  save_directory: preprocessing
  state: false
segmentation:
  beta: 0.5
  name: MultiCut
  post_minsize: 50
  postprocessing:
    factor:
    - 1
    - 1
    - 1
    order: 0
    state: false
    tiff: false
  run_ws: true
  save_directory: MultiCut
  state: true
  ws_2D: true
  ws_minsize: 50
  ws_sigma: 2.0
  ws_threshold: 0.5
  ws_w_sigma: 0.0
unet_prediction:
  device: cuda
  model_name: generic_confocal_3d_unet
  model_update: false
  patch:
  - 32
  - 128
  - 128
  postprocessing:
    factor:
    - 1
    - 1
    - 1
    order: 2
    state: false
    tiff: false
  state: true
  stride:
  - 20
  - 100
  - 100
  version: best

Software crash upon segmentation

Hi! Loving the software thus far. I have some inexplicable software crashes for some files when trying to segment them. Preprocessing works, but segmentation just fails without any particular error code. The plantseg output is the following:

(plant-seg) [~] plantseg --config .plantseg_models/configs/npa_plants-test.yaml                                                                                                                           
2020-08-31 16:39:31,389 [MainThread] INFO PlantSeg - Running the pipeline on: ['<path-to-file>']
2020-08-31 16:39:31,389 [MainThread] INFO PlantSeg - Executing pipeline, see terminal for verbose logs.
2020-08-31 16:39:31,389 [MainThread] INFO PlantSeg - Executing pipeline step: 'preprocessing'. Parameters: '{'factor': [1.1063829787234043, 1.171450898200888, 1.171450898200888], 'filter': {'filter_param': 1.0, 'param': 1.0, 'state': True, 'type': 'gaussian'}, 'order': '2', 'save_directory': 'PreProcessing', 'state': False}'. Files [<path-to-file>].
2020-08-31 16:39:31,390 [MainThread] INFO PlantSeg - Skipping 'DataPreProcessing3D'. Disabled by the user.
2020-08-31 16:39:31,390 [MainThread] INFO PlantSeg - Executing pipeline step: 'cnn_prediction'. Parameters: '{'device': 'cuda', 'model_name': 'confocal_unet_bce_dice_ds3x', 'model_update': False, 'patch': [16, 64, 64], 'state': False, 'stride': 'Accurate (slowest)', 'version': 'best'}'. Files [<path-to-file>].
2020-08-31 16:39:31,390 [MainThread] INFO PlantSeg - Skipping 'UnetPredictions'. Disabled by the user.
2020-08-31 16:39:31,390 [MainThread] INFO PlantSeg - Executing pipeline step: 'cnn_postprocessing'. Parameters: '{'factor': [0.9038461538461539, 0.8536422666419891, 0.8536422666419891], 'order': 2, 'output_type': 'data_float32', 'save_raw': False, 'state': False, 'tiff': False}'. Files [<path-to-file>].
2020-08-31 16:39:31,390 [MainThread] INFO PlantSeg - Skipping 'DataPostProcessing3D'. Disabled by the user.
2020-08-31 16:39:31,390 [MainThread] INFO PlantSeg - Executing pipeline step: 'segmentation'. Parameters: '{'beta': 0.6, 'name': 'GASP', 'post_minsize': 50, 'run_ws': True, 'save_directory': 'GASP', 'state': True, 'ws_2D': True, 'ws_minsize': 50, 'ws_sigma': 1.0, 'ws_threshold': 0.1, 'ws_w_sigma': 0.0}'. Files [<path-to-file>].
2020-08-31 16:39:31,464 [MainThread] INFO PlantSeg - Loading stack from <path-to-file>
2020-08-31 16:39:31,480 [MainThread] INFO PlantSeg - Found 'predictions' dataset inside <path-to-file>
2020-08-31 16:39:37,743 [MainThread] INFO PlantSeg - Clustering with GASP...
[1]    33805 killed     plantseg --config .plantseg_models/configs/config-test.yaml

I've tried all segmentation methods, but all generate the same error. I can segment other files just fine. Copy of the file in question here:

https://drive.google.com/file/d/1oi5cu0HYp7XbGgt_QkAyn6UY1uB38Fa8/view?usp=sharing

Any suggestions?

Is this OK for RTX 3060 ti?

Hi,
I tested the software on my PC (Ryzen 3800x 3.9GHz, 48GB RAM, RTX 3060ti) and it took 26 minutes to perform the pipeline on a confocal stack of 145x145x50um (50 slices).
Is this performance normal for the PC? I feel it was too slow. In fact, the GPU was not used much.
Does the software support the RTX 30 series?
Thank you very much!

Cannot skip CNN processing step from the pipeline

If the boundary maps are available it should be possible to just disable the cnn processing (state: False in the config) and just run the segmentation algorithm directly, but currently even if the step is disabled in the config, PlantSeg still tries to run it.
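For reference, the intended behaviour corresponds to a config along these lines (keys abbreviated, values illustrative):

```yaml
cnn_prediction:
  state: false   # boundary maps already exist; prediction should be skipped
segmentation:
  state: true
  name: GASP
```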

Change the names of the hyper-params to more readable

  1. Filter -> Filter (Optional)
  2. Beta -> Under-/Over-segmentation factor
  3. WS -> Watershed
  4. WS Threshold -> Probability Maps Threshold (?)
  5. WS Minimum Size -> Superpixel Minimum Size (?)
  6. Minimum Size -> Cell Minimum Size

AssertionError: Not enough patch overlap for stride

Dear lorenzocerrone,

Thank you for providing this great image analysis tool for plant researchers.

I am planning to use PlantSeg to auto-segment my confocal images from inflorescence meristems.
First, I succeeded in installing the software via conda in WSL2 (Windows 10). Then I tested the example file "sample_ovule.h5" from https://github.com/hci-unihd/plant-seg/tree/master/tests/resources. It works well when I set each step to "True" in the file "test_config.yaml".

Next, I tried to test the inflorescence meristem images from your OSF project: MutX_DR5v2_6_4_M1.tif, https://mfr.de-1.osf.io/render?url=https://osf.io/edx2f/?direct%26mode=render%26action=download%26mode=render. In the cnn_prediction section,
I set model_name: "confocal_PNAS_3d", patch size patch: [2, 128, 128] and stride: 'Balanced', and it always gives me this error message:
"AssertionError: Not enough patch overlap for stride: [1, 96, 96] and halo: (4, 8, 8)"

I also tried different models, patch sizes and strides, but it always shows this error message. Did I misunderstand some point in the instructions? How can I solve this problem?
Thanks very much in advance if you could help.

Best regards,
Guojian
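For illustration, one plausible form of the failing check (the exact rule in pytorch-3dunet may differ): the per-axis overlap between neighbouring patches, patch - stride, has to cover the halo on both sides. With patch [2, 128, 128] the z-overlap is only 1 voxel, far less than the halo of 4, so increasing the z patch size should help:

```python
# Sketch of the assumed overlap requirement behind the assertion above.
def check_patch_overlap(patch, stride, halo):
    overlap = [p - s for p, s in zip(patch, stride)]
    # overlap must cover the halo region on both sides of each patch
    return all(o >= 2 * h for o, h in zip(overlap, halo))

print(check_patch_overlap([2, 128, 128], [1, 96, 96], [4, 8, 8]))    # the failing case
print(check_patch_overlap([32, 128, 128], [24, 96, 96], [4, 8, 8]))  # a larger z patch
```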

Add patch_halo to plantseg config (CNN predictions)

Currently the patch_halo can only be configured from within the CNN prediction template file, i.e. plantseg/resources/config_predict_template.yaml. We should allow it in the main plantseg config for convenience.
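A sketch of what exposing the option in the main config could look like; the key placement and values are illustrative, not the final design:

```yaml
cnn_prediction:
  model_name: generic_confocal_3d_unet
  patch: [16, 64, 64]
  patch_halo: [4, 8, 8]   # hypothetical new key mirroring the template file
```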

conda recipe problems

I found the following two problems with the current install recipe:

  • gasp, even if installed from pip, still does not work with its own nifty fork
  • the current installation instructions install only the CPU build of pytorch; in order to install pytorch compiled with cuda we would need to change the instruction to something like:
    conda create -n plant-seg -c lcerrone -c abailoni -c cpape -c awolny -c pytorch -c nvidia -c conda-forge pytorch cudatoolkit=11.1 plantseg
    The issue seems to be somehow in the pytorch recipe from conda-forge

Docker container not working

Hi,

I wanted to install plant-seg on Ubuntu 18 with a GeForce RTX 2070. There were a lot of dependency issues due to other software relying on specific versions of CUDA, hence I decided to use Docker. Nvidia Docker is working, and the plantseg GUI is also starting. However, when I try to run any predictions, there are two issues:

1. Torch cannot recognize the GPU, even when I include --gpu all in the docker command:
key: device has got value: cuda, but torch can not detect a valid cuda device. defaulting default value: cpu

2. plantseg fails after downloading the model files and starting network prediction.
I can see that the memory usage goes up to ~25 GB (64 GB RAM in the PC) for a few seconds, and then this error message:
File "/root/miniconda3/envs/plant-seg/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 999, in _try_get_data raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e RuntimeError: DataLoader worker (pid(s) 39, 73) exited unexpectedly

I am using one of the example tif for the setup: Col0_03_T1.tif with default setting. I tried several model files and other images, same results.

I would appreciate some help to make the software work with docker. Thanks in advance.

Cheers,
Gergo

Slices are skipped + ghost signal

problems PlantSeg Sepal.zip

Hello,
I am testing PlantSeg with my sepal samples (only Prediction, I don't use Segmentation yet), and I got some issues:

  • Slices were frequently skipped; black slices were returned. I tried with a cropped image which contains only the sample (less black space), and reduced the patch size, but the problem persisted.
  • A ghost signal appears in the first slices, where there is no fluorescent signal. Of course I can remove them, but how does the ghost signal appear?
  • There is a halo surrounding the true signal.
  • Sometimes the program stays for a very long time at the "Sending the model to cuda:0" step (>30 min). Sometimes it works very fast, which I assume is how it should be (finished in <6 min). There is some inconsistency there.

I attached the input and output images, as well as PlantSeg parameter here so that you can have a look.
For those slices that are treated, the results seem to be excellent.
Thanks for your time.
Cheers,

Use 'cuda' device by default in PlantSeg

Device Type should be set to 'cuda' by default. This way, if a CUDA device is not available, PlantSeg will fail early and the user can choose to predict on CPU. If a CUDA device is available and the user forgets to choose 'cuda' as Device Type, the prediction will be unnecessarily slow.
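A sketch of the proposed fail-early behaviour; in practice cuda_available would come from torch.cuda.is_available(), but here it is a plain argument to keep the example self-contained:

```python
# Hypothetical device resolution: default to 'cuda', fail fast with a clear
# message when no CUDA device is detected, instead of silently running slow.
def resolve_device(requested="cuda", cuda_available=False):
    if requested == "cuda" and not cuda_available:
        raise RuntimeError(
            "Device 'cuda' requested but no CUDA device was detected; "
            "set Device Type to 'cpu' to predict on the CPU.")
    return requested

print(resolve_device("cpu", cuda_available=False))
```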

Install fail with pytorch3dunet=1.2.5

Traceback (most recent call last):
  File "/home/lcerrone_local/miniconda3/envs/plant-seg/bin/plantseg", line 7, in <module>
    from plantseg.run_plantseg import main
  File "/home/lcerrone_local/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/run_plantseg.py", line 3, in <module>
    from plantseg.pipeline.raw2seg import raw2seg
  File "/home/lcerrone_local/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/pipeline/raw2seg.py", line 5, in <module>
    from plantseg.predictions.predict import UnetPredictions
  File "/home/lcerrone_local/miniconda3/envs/plant-seg/lib/python3.7/site-packages/plantseg/predictions/predict.py", line 6, in <module>
    from pytorch3dunet.datasets.hdf5 import get_test_loaders
ImportError: cannot import name 'get_test_loaders' from 'pytorch3dunet.datasets.hdf5' (/home/lcerrone_local/miniconda3/envs/plant-seg/lib/python3.7/site-packages/pytorch3dunet-1.2.5-py3.7.egg/pytorch3dunet/datasets/hdf5.py)

Installation instructions don't work

The current installation instructions don't work:

(main38) pape@gpu7:~/Work/my_projects$ conda create -n plant-seg -c lcerrone -c abailoni -c cpape -c awolny -c conda-forge nifty=vplantseg1.0.8 pytorch-3dunet=1.2.5 plantseg
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: \ 
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed                                                                                                                                  

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package python conflicts for:
pytorch-3dunet=1.2.5 -> h5py -> python[version='2.7.*|3.5.*|3.6.*|>=2.7,<2.8.0a0|>=3.6,<3.7.0a0|>=3.9,<3.10.0a0|>=3.8,<3.9.0a0|>=3.5,<3.6.0a0|3.4.*|>=3.6']
pytorch-3dunet=1.2.5 -> python[version='>=3.7,<3.8.0a0']

Package pytorch-3dunet conflicts for:
pytorch-3dunet=1.2.5
plantseg -> pytorch-3dunet

Package pillow conflicts for:
pytorch-3dunet=1.2.5 -> pillow[version='<7']
pytorch-3dunet=1.2.5 -> scikit-image -> pillow[version='>=1.7.8|>=2.1.0|>=4.3.0|>=4.1.1']

Package scipy conflicts for:
pytorch-3dunet=1.2.5 -> scipy
pytorch-3dunet=1.2.5 -> scikit-image -> scipy[version='>=0.17|>=0.19|>=0.9']

Package libgfortran5 conflicts for:
pytorch-3dunet=1.2.5 -> scipy -> libgfortran5[version='>=9.3.0']
nifty=vplantseg1.0.8 -> hdf5[version='>=1.10'] -> libgfortran5[version='>=9.3.0']

Package blas conflicts for:
nifty=vplantseg1.0.8 -> numpy[version='>=1.15'] -> blas[version='*|1.1|1.0|1.0',build='openblas|mkl|openblas']
pytorch-3dunet=1.2.5 -> scikit-learn -> blas[version='*|*|1.1|1.0|1.0',build='mkl|openblas|mkl|openblas']
plantseg -> pytorch -> blas[version='*|*|1.0|1.1|1.0',build='mkl|openblas|openblas|mkl|openblas']

Package nifty conflicts for:
nifty=vplantseg1.0.8
plantseg -> gasp -> nifty=1.0.9

Btw, I put nifty on conda-forge recently and I would very much encourage you to use it:
https://github.com/conda-forge/nifty-feedstock.

Also, this: -c lcerrone -c abailoni -c cpape -c awolny -c conda-forge doesn't look like a good idea for reproducible environments ...

Issue when running the GUI in version 1.3.5

When running the GUI in the new version 1.3.5, the following error appears:

File "...plantsegv1.3.5/lib/python3.7/site-packages/plantseg/pipeline/config_validation.py", line 182, in recursive_config_check
raise RuntimeError(f"key: '{key}' is missing, plant-seg requires '{key}' to run")
RuntimeError: key: 'crop_volume' is missing, plant-seg requires 'crop_volume' to run

The command line runs well though. We tested a previous GUI version (v.1.1.8) and it was working well as well.

Thanks!

Best

Pau

manifest not found for tag latest with docker run

Hi,

I tried running the docker image following the readme and this happened:

$ docker run -it --rm -v /Descargas/data:/root/datasets -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY wolny/plantseg
Unable to find image 'wolny/plantseg:latest' locally
docker: Error response from daemon: manifest for wolny/plantseg:latest not found: manifest unknown: manifest unknown.
See 'docker run --help'.

After that I checked the tags available for the project in this url: https://hub.docker.com/v2/repositories/wolny/plantseg/tags

I tried again with the tag '1.0.1' and it worked:

$ docker run -it --rm -v /Descargas/data:/root/datasets -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY wolny/plantseg:1.0.1
Unable to find image 'wolny/plantseg:latest' locally
1.0.1: Pulling from wolny/plantseg

test the network using tif image of root/leaf

I tried to test the network using a root/leaf tissue image (tif) at cellular resolution, and I got an error. It produces an .h5 prediction for each test file, which is not in the folder. Can someone tell me how to run a test image from custom data?

PlantSeg Crashes before finishing predictions for each slice

Hi, I am using PlantSeg on Windows 10 via Docker/Xlaunch and I can't get past this stage without it crashing. I have tried reducing the size of the input file. Any advice?

`.....
2021-03-05 06:20:39,262 [ThreadPoolExecutor-0_0] INFO UNet3DPredictor - Saving predictions for slice:(slice(0, 1, None), slice(0, 32, None), slice(0, 128, None), slice(230, 358, None))...
2021-03-05 06:21:31,205 [ThreadPoolExecutor-0_0] INFO UNet3DPredictor - Saving predictions for slice:(slice(0, 1, None), slice(0, 32, None), slice(0, 128, None), slice(345, 473, None))...
2021-03-05 06:22:33,596 [ThreadPoolExecutor-0_0] INFO UNet3DPredictor - Saving predictions for slice:(slice(0, 1, None), slice(0, 32, None), slice(0, 128, None), slice(460, 588, None))...

/root/miniconda3/envs/plant-seg/lib/python3.7/site-packages/scipy/ndimage/interpolation.py:605: UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed.
"the returned array has changed.", UserWarning)
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server "[my ip]:0.0"
after 10886 requests (10648 known processed) with 0 events remaining.`

[Help] PNAS model questions

Hi, thanks again for providing this tool - I'm having much use of it in my projects.

I have some questions about the PNAS models, which are trained on the Willis and Refahi et al. (2016) dataset. I know that this dataset had its resolution manually corrected due to artificial stretching stemming from plant growth during imaging, and therefore the resolution varies quite a bit both from plant to plant and between time-points for the same plant. How have you accounted for this, given that the resolution you list for the model is (.25, .25, .25) um? I think some plants also differ in terms of XY resolution, and I think plant 1 in the dataset has some wounded cells in its first time-points...

Secondly, the PNAS dataset suffers quite a bit from signal penetration bias, where the epidermal signal is strong, but the internal signal is much more sketchy. The authors conducted manual corrections to improve the segmentation, but AFAIK only for cells in the epidermis, within something like 45 um from the apex.

I would ideally like to use this model to segment confocal SAM data in lower resolution ~(.5, .25, .25) um, but would like to achieve decent segmentations also for L2 and L3. Due to my questions above, I would like to avoid any biases due to the data the model is trained on. For achieving better segmentation of the subepidermal cells, would it be better to use the ovule models given the layer-bias in the PNAS dataset?

Thanks!

/ Henrik
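One way to approach the resolution mismatch described above (my assumption about the workflow, not official guidance) is to rescale the stack to the model's listed voxel size before prediction, e.g. with scipy:

```python
import numpy as np
from scipy.ndimage import zoom

# Rescale a ~(0.5, 0.25, 0.25) um stack to the (0.25, 0.25, 0.25) um
# voxel size listed for the PNAS models (dummy data for illustration).
stack = np.random.rand(10, 64, 64).astype("float32")
voxel_size = (0.5, 0.25, 0.25)     # z, y, x in um
target_size = (0.25, 0.25, 0.25)
factors = [v / t for v, t in zip(voxel_size, target_size)]  # [2.0, 1.0, 1.0]
rescaled = zoom(stack, factors, order=1)  # linear interpolation
print(rescaled.shape)
```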

lmc from nuclei segmentation

Hey,

I wanted to add the possibility to configure plantseg to solve the lmc problem from a nuclei segmentation instead of cnn predictions (#84),
but I get this weird error:

Traceback (most recent call last):
  File "run_plantseg.py", line 4, in <module>
    main()
  File "/home/lcerrone/PycharmProjects/plant-seg/plantseg/run_plantseg.py", line 28, in main
    raw2seg(config)
  File "/home/lcerrone/PycharmProjects/plant-seg/plantseg/pipeline/raw2seg.py", line 72, in raw2seg
    output_paths = pipeline_step()
  File "/home/lcerrone/PycharmProjects/plant-seg/plantseg/pipeline/steps.py", line 58, in __call__
    return [self.read_process_write(input_path) for input_path in self.input_paths]
  File "/home/lcerrone/PycharmProjects/plant-seg/plantseg/pipeline/steps.py", line 58, in <listcomp>
    return [self.read_process_write(input_path) for input_path in self.input_paths]
  File "/home/lcerrone/PycharmProjects/plant-seg/plantseg/segmentation/lmc.py", line 97, in read_process_write
    output_data = self.process(pmaps)
  File "/home/lcerrone/PycharmProjects/plant-seg/plantseg/segmentation/lmc.py", line 67, in process
    self.ws_minsize)
  File "/home/lcerrone/PycharmProjects/plant-seg/plantseg/segmentation/lmc.py", line 184, in segment_volume_lmc_from_seg
    node_labels = lmc.lifted_multicut_kernighan_lin(rag, costs, lifted_uvs, lifted_costs)
  File "/home/lcerrone/miniconda3/envs/plant-seg/lib/python3.7/site-packages/elf/segmentation/lifted_multicut.py", line 72, in lifted_multicut_kernighan_lin
    objective.setCosts(lifted_uv_ids, lifted_costs)
TypeError: setCosts(): incompatible function arguments. The following argument types are supported:
    1. (self: nifty.graph.opt.lifted_multicut._lifted_multicut.LiftedMulticutObjectiveUndirectedGraph, uv: numpy.ndarray[numpy.uint64], weight: numpy.ndarray[numpy.float64], overwrite: bool = False) -> None

Invoked with: <nifty.graph.opt.lifted_multicut.LiftedMulticutObjectiveUndirectedGraph object at 0x7fcdc8bee070>, array([[5973, 6092],
       [5973, 6762]], dtype=uint64), array([[18.849344, 18.849344],
       [18.849344, 18.849344]], dtype=float32)

@wolny Do you have any idea on where this could come from?
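Judging from the error message alone, setCosts receives a (2, 2) float32 array where it expects one float64 cost per lifted edge; a plausible pre-processing fix (an assumption, not the confirmed resolution) would be:

```python
import numpy as np

# The invocation above passes a (2, 2) float32 array with a duplicated
# column; collapse it to one cost per lifted edge and cast to float64,
# which is the signature setCosts advertises.
lifted_costs = np.array([[18.849344, 18.849344],
                         [18.849344, 18.849344]], dtype=np.float32)
if lifted_costs.ndim == 2:
    lifted_costs = lifted_costs[:, 0]
lifted_costs = lifted_costs.astype(np.float64)
print(lifted_costs.shape, lifted_costs.dtype)
```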
