
ZeroCostDL4Mic's People

Contributors

alonsaguy, aminrezaei-img, brunomsaraiva, ckspahn, constantinpape, ctr26, esgomezm, fynnbe, guijacquemet, ivanhcenalmor, johannarahm, krentzd, lucpaul, mariana-gferreira, oeway, paxcalpt, romain-laine


ZeroCostDL4Mic's Issues

Network choice

Hi,

How does one choose which network to use, for example for artificial labelling, since fnet, CARE 2D and pix2pix can all do it?
What are the advantages of using one over the others?

Thank you very much!

Single plane label-free prediction (fnet)

Hi,

I am interested in predicting the position of nuclei from single plane bright-field images using fnet.
However, according to the documentation " [...] Stacks with fewer than 32 slices will only successfully train the model when they contain 16, 8 or 4 slices and other values will throw a tensordimensions error. [...]".

But is there a trick to convert a single plane into a synthetic 3D stack and make fnet work? Gaussian blur on Z axis?

The final aim is to be able to do nuclei segmentation and tracking from brightfield or phase-contrast images instead of fluorescence.
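In case it is useful, a minimal sketch of that idea (an assumption, not something the notebook supports out of the box), replicating a single plane into a 16-slice stack with numpy and tifffile:

# Hypothetical workaround: replicate a single 2D plane into a 16-slice stack so that
# fnet's 3D patch sampling sees a valid Z dimension. Untested for actual training.
import numpy as np
import tifffile

plane = tifffile.imread("brightfield_single_plane.tif")   # (Y, X), hypothetical file name
stack = np.repeat(plane[np.newaxis, ...], 16, axis=0)     # (16, Y, X)
tifffile.imwrite("brightfield_synthetic_stack.tif", stack)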

Thank you

U-net: trained model applied to "unseen images" of bigger xy size not working

Dear Developers,

For membrane segmentation I was able to train the U-Net network and obtain a model using 512x512-pixel images. My understanding was that, after training on a small portion of the dataset, one could apply the trained network to the whole image. But I discovered that this does not work: the model trained on 512x512-pixel images, applied to "unseen images" of 3732 x 4720 pixels, produced a very fuzzy prediction with no recognizable membrane. However, after cropping a random region of the same size as the training images (i.e. 512x512 pixels), it seems to work and I got a membrane segmentation.

So I would like to ask whether this is simply how the algorithm is set up at the moment, or whether it is really an issue. How would it be possible to apply the trained model to the whole image?
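One possible way to apply a 512x512-trained model to the full image is tile-wise prediction. A rough, untested sketch follows; the model object and whatever normalisation the notebook applies are not reproduced here, so treat the names as placeholders:

import numpy as np

def predict_tiled(model, image, tile=512):
    # Pad the large image so both sides are multiples of the tile size,
    # predict each 512x512 tile separately, and stitch the results back.
    h, w = image.shape
    pad_h = (tile - h % tile) % tile
    pad_w = (tile - w % tile) % tile
    padded = np.pad(image, ((0, pad_h), (0, pad_w)), mode="reflect")
    out = np.zeros(padded.shape, dtype=np.float32)

    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            patch = padded[y:y + tile, x:x + tile].astype(np.float32)
            patch = patch[np.newaxis, ..., np.newaxis]      # (1, tile, tile, 1)
            pred = model.predict(patch)[0, ..., 0]
            out[y:y + tile, x:x + tile] = pred

    return out[:h, :w]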

Thank you very much for any suggestion.

Enable local runtime support

Hi,

I think this project is absolutely fantastic. However, I would like to try to run some of these files locally, and I have had issues using the exact same code provided in the Google Colab notebook. Attempting to run the 3D U-Net locally, I receive an error when trying to run the data augmentation.

I assume this is because I am running a different version of one of the Python libraries, as it works fine in Google Colab. Would it be possible to provide conda env files for the associated projects?

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-22-efd9551e10a5> in <module>
     87                                        binary_target=binary_target)
     88 
---> 89 sample_src_aug, sample_tgt_aug = train_generator.sample_augmentation(random.randint(0, len(train_generator)))
     90 
     91 def scroll_in_z(z):

2 frames
<ipython-input-1-8a00038ccc25> in augment_volume(self, src_vol, tgt_vol)
    335 
    336         for i in range(src_vol.shape[3]):
--> 337             src_vol_aug[:,:,:,i,0], tgt_vol_aug[:,:,:,i,0] = self.seq(images=src_vol[:,:,:,i,0].astype('float16'), 
    338                                                                       segmentation_maps=tgt_vol[:,:,:,i,0].astype(bool))
    339         return self._min_max_scaling(src_vol_aug), tgt_vol_aug

~/anaconda3/envs/unet/lib/python3.8/site-packages/imgaug/augmenters/meta.py in __call__(self, *args, **kwargs)
   2006     def __call__(self, *args, **kwargs):
   2007         """Alias for :func:`~imgaug.augmenters.meta.Augmenter.augment`."""
-> 2008         return self.augment(*args, **kwargs)
   2009 
   2010     def pool(self, processes=None, maxtasksperchild=None, seed=None):

~/anaconda3/envs/unet/lib/python3.8/site-packages/imgaug/augmenters/meta.py in augment(self, return_batch, hooks, **kwargs)
   1977         )
   1978 
-> 1979         batch_aug = self.augment_batch_(batch, hooks=hooks)
   1980 
   1981         # return either batch or tuple of augmentables, depending on what

~/anaconda3/envs/unet/lib/python3.8/site-packages/imgaug/augmenters/meta.py in augment_batch_(self, batch, parents, hooks)
    594         elif isinstance(batch, UnnormalizedBatch):
    595             batch_unnorm = batch
--> 596             batch_norm = batch.to_normalized_batch()
    597             batch_inaug = batch_norm.to_batch_in_augmentation()
    598         elif isinstance(batch, Batch):

~/anaconda3/envs/unet/lib/python3.8/site-packages/imgaug/augmentables/batches.py in to_normalized_batch(self)
    203             heatmaps=nlib.normalize_heatmaps(
    204                 self.heatmaps_unaug, shapes),
--> 205             segmentation_maps=nlib.normalize_segmentation_maps(
    206                 self.segmentation_maps_unaug, shapes),
    207             keypoints=nlib.normalize_keypoints(

~/anaconda3/envs/unet/lib/python3.8/site-packages/imgaug/augmentables/normalization.py in normalize_segmentation_maps(inputs, shapes)
    202         return None
    203     if ntype in ["array[int]", "array[uint]", "array[bool]"]:
--> 204         _assert_single_array_ndim(inputs, 4, "(N,H,W,#SegmapsPerImage)",
    205                                   "SegmentationMapsOnImage")
    206         _assert_exactly_n_shapes_partial(n=len(inputs))

~/anaconda3/envs/unet/lib/python3.8/site-packages/imgaug/augmentables/normalization.py in _assert_single_array_ndim(arr, ndim, shape_str, to_ntype)
     63 def _assert_single_array_ndim(arr, ndim, shape_str, to_ntype):
     64     if arr.ndim != ndim:
---> 65         raise ValueError(
     66             "Tried to convert an array to list of %s. Expected "
     67             "that array to be of shape %s, i.e. %d-dimensional, but "

ValueError: Tried to convert an array to list of SegmentationMapsOnImage. Expected that array to be of shape (N,H,W,#SegmapsPerImage), i.e. 4-dimensional, but got 3 dimensions instead.
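For what it's worth, the ValueError suggests the locally installed imgaug expects segmentation maps passed as an array to be 4-dimensional, (N, H, W, #SegmapsPerImage). A minimal standalone check of that assumption, showing that adding a channel axis avoids the error:

# Assumption: the local imgaug is stricter than the Colab one about segmentation-map
# array dimensionality; a 3-D (N, H, W) bool array triggers the ValueError above,
# while a 4-D (N, H, W, 1) array is accepted.
import numpy as np
import imgaug.augmenters as iaa

seq = iaa.Fliplr(0.5)
images = (np.random.rand(2, 64, 64) * 255).astype("uint8")   # (N, H, W)
masks = np.random.rand(2, 64, 64) > 0.5                      # (N, H, W) -> too few dims

masks_4d = masks[..., np.newaxis]                             # (N, H, W, 1)
images_aug, masks_aug = seq(images=images, segmentation_maps=masks_4d)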

Error Transport endpoint is not connected: '/content/gdrive/My Drive/Colab Notebooks/Stardist_v2/Stardist/Models/My_model/config.json'

Describe the bug
After running the StarDist model without changing the default settings, but with new images in the training set, every time I tried to test the model (step 5) that was just created, it seems that the connection with Google Drive doesn't work, since it complains about not finding the config.json file.
The way I managed to work around it was by restarting the session, running the first steps and skipping the model training.
Then in step 5 I unchecked "use current model" and pointed to the folder where the model was, and it worked.
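For reference, one thing that sometimes clears "Transport endpoint is not connected" errors in Colab without restarting the whole session (a suggestion, not a confirmed fix) is force-remounting Google Drive:

# Remount Google Drive in the running Colab session instead of restarting it.
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)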

Screenshots

OSError                                   Traceback (most recent call last)
<ipython-input-13-95cd261c8237> in <module>()
     48   if n_channel > 1:
     49     print("Normalizing image channels %s." % ('jointly' if axis_norm is None or 2 in axis_norm else 'independently'))
---> 50   model = StarDist2D(None, name=inference_model_name, basedir=inference_model_path)
     51   #model.optimize_thresholds(X_val, Y_val)
     52 

7 frames
/usr/local/lib/python3.6/dist-packages/stardist/models/model2d.py in __init__(self, config, name, basedir)
    251     def __init__(self, config=Config2D(), name=None, basedir='.'):
    252         """See class docstring."""
--> 253         super().__init__(config, name=name, basedir=basedir)
    254 
    255 

/usr/local/lib/python3.6/dist-packages/stardist/models/base.py in __init__(self, config, name, basedir)
    164 
    165     def __init__(self, config, name=None, basedir='.'):
--> 166         super().__init__(config=config, name=name, basedir=basedir)
    167         threshs = dict(prob=None, nms=None)
    168         if basedir is not None:

/usr/local/lib/python3.6/dist-packages/csbdeep/models/base_model.py in __init__(self, config, name, basedir)
     90             # config was provided -> update before it is saved to disk
     91             self._update_and_check_config()
---> 92         self._set_logdir()
     93         if config is None:
     94             # config was loaded from disk -> update it after loading

/usr/local/lib/python3.6/dist-packages/csbdeep/models/base_model.py in wrapper(*args, **kwargs)
     28                 warn is False or warnings.warn("Suppressing call of '%s' (due to basedir=None)." % f.__name__)
     29             else:
---> 30                 return f(*args, **kwargs)
     31         return wrapper
     32     return _suppress_without_basedir

/usr/local/lib/python3.6/dist-packages/csbdeep/models/base_model.py in _set_logdir(self)
    122         config_file =  self.logdir / 'config.json'
    123         if self.config is None:
--> 124             if config_file.exists():
    125                 config_dict = load_json(str(config_file))
    126                 self.config = self._config_class(**config_dict)

/usr/lib/python3.6/pathlib.py in exists(self)
   1334         """
   1335         try:
-> 1336             self.stat()
   1337         except OSError as e:
   1338             if e.errno not in (ENOENT, ENOTDIR):

/usr/lib/python3.6/pathlib.py in stat(self)
   1156         os.stat() does.
   1157         """
-> 1158         return self._accessor.stat(self)
   1159 
   1160     def owner(self):

/usr/lib/python3.6/pathlib.py in wrapped(pathobj, *args)
    385         @functools.wraps(strfunc)
    386         def wrapped(pathobj, *args):
--> 387             return strfunc(str(pathobj), *args)
    388         return staticmethod(wrapped)
    389 

OSError: [Errno 107] Transport endpoint is not connected: '/content/gdrive/My Drive/Colab Notebooks/Stardist_v2/Stardist/Models/My_model/config.json'

Desktop (please complete the following information):

  • OS: windows 10 pro
  • Browser: chrome
  • Version: 81.0.4044.138
  • working with remote desktop (vpn)

Supporting interactive visualization and annotation via ImJoy

FYI: With a recent PR in ImJoy-RPC, it is now possible to run interactive widgets in Google Colab! I think this will be useful for some applications in Zero.

There are many possibilities, you can basically use any current and future ImJoy plugins in Colab, for example:

  1. visualizing massive multi-scale images in zarr format, try the vizarr example: Open In Colab
    (also see this PR)

  2. get user annotation for e.g. segmentation, see an example that uses Kaibu here: Open In Colab

Also see here: we may support itk-vtk-viewer soon as well.

Screenshot 2020-08-20 at 11 08 58

StarDist 2D notebook does not segment but outputs raw input

Hello,

First, thank you and your team for making such a nice platform for ease of use in microscopy deep-learning and research.

The StarDist 2D notebook that I have been using works well; however, when I want the network to analyse my own dataset, it outputs the raw input. Besides this, it also outputs a .rar with black images where the network tries to find a cell.

For this, I have performed no edits to the StarDist 2D notebook, and trained the network with 1000 epochs at standard settings. I have tried different bit versions of the same file, which was originally 16-bit and is now adapted to 32-bit.

Below I have included my system information and my data and its results, zipped. My data was input into the notebook as a .tiff file, not a .rar. I also included the original 16-bit .tiff file before it was converted.

https://drive.google.com/file/d/1WMlHVQOQOGWQiGH1NjtFnR9NUJUAtuoU/view?usp=sharing
Systeminfo.txt

Installation of fnet and dependencies

Hello all,
First-time user of ZeroCostDL4Mic, and of GitHub for that matter. Please accept my apologies in advance if this is the wrong forum for such a query.

I have been trying to use the fnet notebook, guided by your YouTube video. I just wanted to run through the example data to get a feel for the setup. I can access a GPU and mount my Google Drive without any problems. However, I am running into problems when trying to install fnet and its dependencies.

Specifically I receive the following error:

image

Desktop:

  • OS: Windows 10 Enterprise
  • Browser Chrome

Any help would be greatly appreciated.

Cheers,
Ian

U-Net - Prediction all black

Hello,

Before the last update I trained a U-Net with 48 images of my own data with their labels (tif - 8-bit, 640x640) and I got a reasonably good prediction.
Captura
I didn't save the model.

Now, after the last update, I tried to train U-Net again, but every time I get a black prediction. First I tried with new training samples, and I also ran it on my own computer. But eventually I went back to running exactly the same setup as before (or so I think), and I still do not get predictions.

Could it be that something changed? Or maybe I am missing something now that I did right before...

Captura-14

I don't have the loss plot from before, but it had some nice evolution. However now it flattens very rapidly...
índex

Some outputs:
Model: "model_3"


Layer (type) Output Shape Param # Connected to

input_3 (InputLayer) (None, 640, 640, 1) 0


conv2d_27 (Conv2D) (None, 640, 640, 64) 640 input_3[0][0]


conv2d_28 (Conv2D) (None, 640, 640, 64) 36928 conv2d_27[0][0]


max_pooling2d_5 (MaxPooling2D) (None, 320, 320, 64) 0 conv2d_28[0][0]


conv2d_29 (Conv2D) (None, 320, 320, 128 73856 max_pooling2d_5[0][0]


conv2d_30 (Conv2D) (None, 320, 320, 128 147584 conv2d_29[0][0]


max_pooling2d_6 (MaxPooling2D) (None, 160, 160, 128 0 conv2d_30[0][0]


conv2d_31 (Conv2D) (None, 160, 160, 256 295168 max_pooling2d_6[0][0]


conv2d_32 (Conv2D) (None, 160, 160, 256 590080 conv2d_31[0][0]


up_sampling2d_5 (UpSampling2D) (None, 320, 320, 256 0 conv2d_32[0][0]


conv2d_33 (Conv2D) (None, 320, 320, 128 131200 up_sampling2d_5[0][0]


concatenate_5 (Concatenate) (None, 320, 320, 256 0 conv2d_30[0][0]
conv2d_33[0][0]


conv2d_34 (Conv2D) (None, 320, 320, 128 295040 concatenate_5[0][0]


up_sampling2d_6 (UpSampling2D) (None, 640, 640, 128 0 conv2d_34[0][0]


conv2d_35 (Conv2D) (None, 640, 640, 64) 32832 up_sampling2d_6[0][0]


concatenate_6 (Concatenate) (None, 640, 640, 128 0 conv2d_28[0][0]
conv2d_35[0][0]


conv2d_36 (Conv2D) (None, 640, 640, 64) 73792 concatenate_6[0][0]


conv2d_37 (Conv2D) (None, 640, 640, 64) 36928 conv2d_36[0][0]


conv2d_38 (Conv2D) (None, 640, 640, 2) 1154 conv2d_37[0][0]


conv2d_39 (Conv2D) (None, 640, 640, 1) 3 conv2d_38[0][0]

Total params: 1,715,205
Trainable params: 1,715,205
Non-trainable params: 0


None
{'lr': 0.0003000000142492354, 'beta_1': 0.8999999761581421, 'beta_2': 0.9990000128746033, 'decay': 0.0, 'epsilon': 1e-07, 'amsgrad': False}
!! WARNING: Model folder already existed and has been removed !!
---------------------------- Main training parameters ----------------------------
Number of epochs: 200
Batch size: 4
Number of training dataset: 96
Number of training steps: 24
Number of validation steps: 3


I am using the augmentation options without vertical flip.

Thank you in advance,

Nuria

Hit a wall

I found a few issues with the wiki compared to Supp Video 1, and I am unable to move past Step 3 in trying to set the paths to use Noise2Void.

In section 1.2, it says "Click on "Files" site on the right. Refresh the site. Your Google Drive folder should now be available here as "drive"." I couldn't figure this out until I watched the video and saw that the mounted Google Drive files can be accessed on the left by leaving the table of contents and clicking on the file folder icon. It is just a typo, but the video is much clearer than the wiki.

At Step 4, I am stuck (see screenshots). My drive is mounted, and I think I've set the paths correctly, but I cannot move past it because of the error shown in the screenshot.

This is using Safari on macOS Catalina 10.15.3. I'll give it a shot on Chrome and see if that browser behaves.

Screen Shot 2020-03-21 at 12 01 08 AM

Screen Shot 2020-03-21 at 12 01 17 AM

Stardist 2D Notebook error

Hi,

I've been able to use your pretrained models in QuPath and they work great; thank you so much for all the hard work you have done! I have been trying to train a model on my own data for a particularly difficult DAB-Hematoxylin stained dataset, but I have been encountering some problems in the StarDist 2D Colab notebook (https://colab.research.google.com/github/HenriquesLab/ZeroCostDL4Mic/blob/master/Colab_notebooks/StarDist_2D_ZeroCostDL4Mic.ipynb).

Everything seems to work fine all the way through section 4.1, but as soon as I start the "4.2 Start Training" section, I get this error:

TypeError Traceback (most recent call last)
in ()
15 # 'input_epochs' and 'steps' refers to your input data in section 5.1
16 history = model.train(X_trn, Y_trn, validation_data=(X_val,Y_val), augmenter=augmenter,
---> 17 epochs=number_of_epochs, steps_per_epoch=number_of_steps)
18 None;
19

3 frames
/usr/local/lib/python3.6/dist-packages/csbdeep/utils/utils.py in _raise(e)
88
89 def _raise(e):
---> 90 raise e
91
92

TypeError: exceptions must derive from BaseException

I have very little Python experience, so I don't know exactly what is going on, but it won't train the model. The only other error I get is during the installation of Stardist and Dependencies that says:
Successfully built reikna pytools
ERROR: scikit-tensor-py3 0.4.1 has requirement numpy==1.16.*, but you'll have numpy 1.19.4 which is incompatible.
ERROR: scikit-tensor-py3 0.4.1 has requirement scipy==1.3.*, but you'll have scipy 1.4.1 which is incompatible.
Installing collected packages: appdirs, pytools, pyopencl, configparser, mako, funcsigs, reikna, scikit-tensor-py3, gputools
Successfully installed appdirs-1.4.4 configparser-5.0.1 funcsigs-1.0.2 gputools-0.2.9 mako-1.1.3 pyopencl-2020.3.1 pytools-2020.4.4 reikna-0.7.5 scikit-tensor-py3-0.4.1

Can you please help me figure out what might be happening here, I would really appreciate it!

Thanks,

Aaron

Fix notebook preview link

Right now the notebook preview app doesn't work properly due to the wrong value in the source field.

The way the "notebook preview" application works is to take the source field of the item and render a notebook preview page.

So we need to set the source field to the raw URL of the .ipynb file on GitHub.

In this line, for example, the source value should be changed to
https://raw.githubusercontent.com/HenriquesLab/ZeroCostDL4Mic/master/Colab_notebooks/U-net_2D_ZeroCostDL4Mic.ipynb

If you want to add a reference to the GitHub repo, set the git_repo field; I would add git_repo: https://github.com/HenriquesLab/ZeroCostDL4Mic

U-Net v1.10: "- loss: nan - val_loss: nan"; v1.5: ValueError: Tensor conversion requested dtype float32_ref

Hello,

ERROR 1: I am trying to train U-Net with the latest version, but I get "- loss: nan - val_loss: nan" (see below).

ERROR 2: I also tried an older version that gave me better results in the past (v1.5), but in this case I get the following error: "ValueError: Tensor conversion requested dtype float32_ref for Tensor with dtype float32: <tf.Tensor 'training_1/Adam/Adam/conv2d_1/kernel/m/Initializer/zeros:0' shape=(3, 3, 1, 64) dtype=float32>" (see below).

Any idea of what I am doing wrong?

ERROR 1:

Epoch 1/200
Found 21 images belonging to 1 classes.
Found 2 images belonging to 1 classes.
Found 2 images belonging to 1 classes.
Found 21 images belonging to 1 classes.
4/4 [==============================] - 3s 686ms/step - loss: 48540198518.8084 - val_loss: 10300715008.0000

Epoch 00001: val_loss improved from inf to 10300715008.00000, saving model to /content/gdrive/My Drive/Colab Notebooks/unet-master/Nuclei/640/reduced/re-model/re-model-22October/weights_best.hdf5
Epoch 2/200
4/4 [==============================] - 1s 338ms/step - loss: 5460510654.4727 - val_loss: 69490.6875

Epoch 00002: val_loss improved from 10300715008.00000 to 69490.68750, saving model to /content/gdrive/My Drive/Colab Notebooks/unet-master/Nuclei/640/reduced/re-model/re-model-22October/weights_best.hdf5
Epoch 3/200
4/4 [==============================] - 1s 330ms/step - loss: 182848.7949 - val_loss: 264032.6875

Epoch 00003: val_loss did not improve from 69490.68750
Epoch 4/200
4/4 [==============================] - 1s 330ms/step - loss: 238031.1406 - val_loss: 21348.3184

Epoch 00004: val_loss improved from 69490.68750 to 21348.31836, saving model to /content/gdrive/My Drive/Colab Notebooks/unet-master/Nuclei/640/reduced/re-model/re-model-22October/weights_best.hdf5
Epoch 5/200
4/4 [==============================] - 1s 329ms/step - loss: nan - val_loss: nan



ERROR 2:

ValueError Traceback (most recent call last)

in ()
3 start = time.time()
4 # history = model.fit_generator(train_datagen, steps_per_epoch = number_of_steps, epochs=epochs, callbacks=[model_checkpoint,csv_log], validation_data = validation_datagen, validation_steps = validation_steps, shuffle=True, verbose=1)
----> 5 history = model.fit_generator(train_datagen, steps_per_epoch = number_of_steps, epochs=epochs, callbacks=[model_checkpoint, reduce_lr], validation_data = validation_datagen, validation_steps = validation_steps, shuffle=True, verbose=1)
6
7 # Save the last model

17 frames

/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your ' + object_name + ' call to the ' +
90 'Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1656 use_multiprocessing=use_multiprocessing,
1657 shuffle=shuffle,
-> 1658 initial_epoch=initial_epoch)
1659
1660 @interfaces.legacy_generator_methods_support

/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py in fit_generator(model, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
40
41 do_validation = bool(validation_data)
---> 42 model._make_train_function()
43 if do_validation:
44 model._make_test_function()

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in _make_train_function(self)
510 training_updates = self.optimizer.get_updates(
511 params=self._collected_trainable_weights,
--> 512 loss=self.total_loss)
513 updates = (self.updates +
514 training_updates +

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py in get_updates(self, loss, params)
502 if g is not None and v.dtype != dtypes.resource
503 ])
--> 504 return [self.apply_gradients(grads_and_vars)]
505
506 def _set_hyper(self, name, value):

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py in apply_gradients(self, grads_and_vars, name)
431 _ = self.iterations
432 self._create_hypers()
--> 433 self._create_slots(var_list)
434
435 apply_state = self._prepare(var_list)

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/optimizer_v2/adam.py in _create_slots(self, var_list)
147 # Separate for-loops to respect the ordering of slot variables from v1.
148 for var in var_list:
--> 149 self.add_slot(var, 'm')
150 for var in var_list:
151 self.add_slot(var, 'v')

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/optimizer_v2/optimizer_v2.py in add_slot(self, var, slot_name, initializer)
583 dtype=var.dtype,
584 trainable=False,
--> 585 initial_value=initial_value)
586 backend.track_variable(weight)
587 slot_dict[slot_name] = weight

/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py in call(cls, *args, **kwargs)
258 return cls._variable_v1_call(*args, **kwargs)
259 elif cls is Variable:
--> 260 return cls._variable_v2_call(*args, **kwargs)
261 else:
262 return super(VariableMetaclass, cls).call(*args, **kwargs)

/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py in _variable_v2_call(cls, initial_value, trainable, validate_shape, caching_device, name, variable_def, dtype, import_scope, constraint, synchronization, aggregation, shape)
252 synchronization=synchronization,
253 aggregation=aggregation,
--> 254 shape=shape)
255
256 def call(cls, *args, **kwargs):

/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py in (**kws)
233 shape=None):
234 """Call on Variable class. Useful to force the signature."""
--> 235 previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
236 for _, getter in ops.get_default_graph()._variable_creator_stack: # pylint: disable=protected-access
237 previous_getter = _make_getter(getter, previous_getter)

/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variable_scope.py in default_variable_creator_v2(next_creator, **kwargs)
2550 synchronization=synchronization,
2551 aggregation=aggregation,
-> 2552 shape=shape)
2553
2554

/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/variables.py in call(cls, *args, **kwargs)
260 return cls._variable_v2_call(*args, **kwargs)
261 else:
--> 262 return super(VariableMetaclass, cls).call(*args, **kwargs)
263
264

/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py in init(self, initial_value, trainable, collections, validate_shape, caching_device, name, dtype, variable_def, import_scope, constraint, distribute_strategy, synchronization, aggregation, shape)
1404 aggregation=aggregation,
1405 shape=shape,
-> 1406 distribute_strategy=distribute_strategy)
1407
1408 def _init_from_args(self,

/tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/resource_variable_ops.py in _init_from_args(self, initial_value, trainable, collections, caching_device, name, dtype, constraint, synchronization, aggregation, distribute_strategy, shape)
1536 initial_value = ops.convert_to_tensor(
1537 initial_value() if init_from_fn else initial_value,
-> 1538 name="initial_value", dtype=dtype)
1539 if shape is not None:
1540 if not initial_value.shape.is_compatible_with(shape):

/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py in convert_to_tensor(value, dtype, name, preferred_dtype, dtype_hint)
1182 preferred_dtype = deprecation.deprecated_argument_lookup(
1183 "dtype_hint", dtype_hint, "preferred_dtype", preferred_dtype)
-> 1184 return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
1185
1186

/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py in convert_to_tensor_v2(value, dtype, dtype_hint, name)
1240 name=name,
1241 preferred_dtype=dtype_hint,
-> 1242 as_ref=False)
1243
1244

/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx, accepted_result_types)
1271 raise ValueError(
1272 "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
-> 1273 (dtype.name, value.dtype.name, value))
1274 return value
1275

ValueError: Tensor conversion requested dtype float32_ref for Tensor with dtype float32: <tf.Tensor 'training_1/Adam/Adam/conv2d_1/kernel/m/Initializer/zeros:0' shape=(3, 3, 1, 64) dtype=float32>


CellPose

Hello,

I just wanted to say that it would be super nice if there was an ImageJ-friendly CellPose notebook.

:)

Thanks a lot!

Problem connecting to Google Drive

Hi,

I'm having trouble in the step where we are supposed to connect to Google Drive. I do as it says and enter the authorization code, but then a time-out error appears.
image

I tried with 2 different accounts, 2 browsers and 2 computers and still have the same issue.
Is this happening to other people?

Thank you

U-net segmentation issue

I’m trying to use the U-net part to segment EM images. Whenever I try to do the training, I always get a “blurry” result image, different from the predicted output - images attached.
Most likely it is a problem with me (e.g. setting the wrong parameters). But unfortunately I’m a little lost in this.
I’m happy to share with you the folder, if that’s easier.
28_predict
predicted_0

Single plane (fnet) label-free prediction of live/dead cells

Hello,
I understand that fnet works only on 3D images. I would like to predict from 2D phase-contrast images whether a cell is dead or alive, by first training on fluorescence images. Will I be able to do this using fnet, or should I look for another solution?
Thank you,
Reinat

Error when Training 3D Stardist

Hello,

I'm running into this error when attempting to train a model in the 3D StarDist notebook. It occurs at the 4.2 Start Training step and I can't seem to figure out why. I'm using the default training parameters. I've tried readjusting the epochs and steps-per-epoch parameters, but the error persists. Turning off data augmentation and restarting the kernel also did not seem to help. The training set is about 5 pairs of image stacks (about 16 slices per stack).

image

Is there a known cause of this error? Thanks

Best,
Giona

YOLO V2 map_evaluation.py no such file or directory

Hi.

I am trying to use your pipeline to train on images in a local folder. I have slightly modified the code to find the correct path, but I face the same issue when using Google Drive. I think that a file is missing from the cloned repository.

I am getting this error :
FileNotFoundError: [Errno 2] No such file or directory: '/content/gdrive/My Drive/keras-yolo2/map_evalution.py'.

Indeed, the folder exists but the file is absent.
Screenshot 2020-08-17 at 18 21 07

Thank you very much for your help.
Regards, Richard

run notebooks with local resources

Hi guys!

First, congratulations on this awesome platform. I have been trying some notebooks, for example the N2V one, to denoise my noisy, non-paired images. In my case, the free resources provided by Google Colab (i.e. RAM) are not enough to completely run the algorithm. Thus, I have tried to use the computing power of my local machine after "connecting to a local execution environment". This option only produces problems and TensorFlow errors during the notebook execution...

Is there any tutorial or notebook available for working with both a local environment and Google Colab resources? This would be a huge improvement for those of us with light knowledge of deep learning but a strong interest in its applicability!

Thank you in advance! best wishes!
Pedro

U-Net training issue

I'm trying to use U-Net to segment IF images. I gave 16 images as the training source and the same 16 images, manually segmented, as the training target.
Unfortunately, I got this when I started the training:
image
I'm a total novice in Python. Do you know where I'm going wrong?
Thanks

U-Net data types

Hi, I was wondering what the best way is of getting images with the appropriate data types into the U-Net.
I have a set of grayscale images (uint32) and binary images as labels.
I think I've now gotten them into the U-Net properly by saving them as uint16 and uint8, respectively, using tifffile with the kwarg imagej=True.

I suspect my binary images get converted to all zeroes because in the training of the network the accuracy rapidly becomes 1 and the loss 0. The output I get is also all zero.

What is the correct datatype / format for the input images? I've added a zip with some of my input images.
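For reference, a small sketch of the conversion described above (an assumption, using tifffile and numpy, and assuming the uint32 values fit into 16 bits), writing the grayscale images as uint16 and the binary labels as uint8 with values 0 and 255:

import numpy as np
import tifffile

img = tifffile.imread("grayscale_uint32.tif")        # hypothetical file names
mask = tifffile.imread("binary_label.tif")

img16 = np.clip(img, 0, 65535).astype(np.uint16)     # assumes intensities fit in 16 bits
mask8 = (mask > 0).astype(np.uint8) * 255            # binary mask as 0 / 255

tifffile.imwrite("image_uint16.tif", img16, imagej=True)
tifffile.imwrite("label_uint8.tif", mask8, imagej=True)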

input.zip

Question about training/validation error curves

Hi,
Question about the training/validation error curves found in the tips and tricks page.
image
image
"By decreasing the number of patches significantly, we were able to prevent overfitting to the training dataset"
Does this mean that the same fragment of the image could be found in multiple patches, and that this is why it is overfitting?

Thank you

Problems running own data through Stardist Colab Notebook

Hi,

Thank you for creating these notebooks,

I know little about coding so they are very useful.

I've just had a couple of problems I was hoping you could help with please.

My laptop installs the most up-to-date versions of NumPy and SciPy, which do not work with the code, as shown in the screenshots below.
Numpy_Scipy_Issue

1st_Issue(Which_was_resolved)

I initially had a problem with my data not being read properly (also shown in the screenshots); however, you kindly offered guidance and I made sure that my images had been converted into the correct 8-bit tiff format.

After trying the notebook again, I unfortunately came across another problem, which I was hoping you could provide guidance on how to overcome. I have attached the problem below.

New_Issue

Thank you

StarDist 2D - Fiji not working on Fiji

Hi all,

I'm trying to use the notebook to train a StarDist model to use on Fiji. Training the model works and then I export it, save the zip file and use that in Fiji.

However, this fails every time and I'm getting

(Fiji Is Just) ImageJ 2.0.0-rc-69/1.52p; Java 1.8.0_172 [64-bit]; Windows 10 10.0; 1616MB of 185000MB (<1%)

 
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.NullPointerException
	at net.imagej.legacy.LegacyService.runLegacyCompatibleCommand(LegacyService.java:307)
	at net.imagej.legacy.DefaultLegacyHooks.interceptRunPlugIn(DefaultLegacyHooks.java:166)
	at ij.IJ.runPlugIn(IJ.java)
	at ij.Executer.runCommand(Executer.java:137)
	at ij.Executer.run(Executer.java:66)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.lang.NullPointerException
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:192)
	at net.imagej.legacy.LegacyService.runLegacyCompatibleCommand(LegacyService.java:303)
	... 5 more
Caused by: java.lang.NullPointerException
	at de.csbdresden.stardist.StarDist2D.splitPrediction(StarDist2D.java:338)
	at de.csbdresden.stardist.StarDist2D.run(StarDist2D.java:307)
	at org.scijava.command.CommandModule.run(CommandModule.java:199)
	at org.scijava.module.ModuleRunner.run(ModuleRunner.java:168)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:127)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:66)
	at org.scijava.thread.DefaultThreadService.lambda$wrap$2(DefaultThreadService.java:228)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more

Any ideas?

Thanks.

CARE 3D

Describe the bug
ResourceExhaustedError in step 6.1. of CARE 3D notebook.

First, thank you so much for making these notebooks available to use. The CARE 2D works fantastically on my images!

Right now I am trying to apply my trained CARE 3D model to 3 of my stacks. They are all tiff files with 217 slices. However, when running it, the following error comes up:
ResourceExhaustedError.
ResourceExhaustedError.pdf

  • OS: Windows10
  • Browser: Firefox

Fnet decompression issue

Hello, me again!

I have successfully used the U-Net, StarDist and fnet notebooks with the example training and test datasets. I have found them to be a really useful utility.

I have now moved on to using fnet for some of my own data. Specifically, I have phase contrast images of cells along with matched fluorescent staining of nuclei. I want to train a model such that I can detect nuclei and do cell counts on unstained images.

In Section 4.1 "Train a new model", I get the following error.

image

I presume this might be a result of the fact that the phase images are 8-bit grayscale TIFFs and the fluorescence images are 16-bit grayscale TIFFs, as these seem to be the only allowable export formats from my instrument (Incucyte Zoom). If so, I suppose I could look into converting?

Would you be able to comment?
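If converting does turn out to be necessary, a hedged sketch (assuming tifffile and numpy) that rescales the 8-bit phase images to 16-bit so both channels share a bit depth:

import numpy as np
import tifffile

phase8 = tifffile.imread("phase_8bit.tif")     # hypothetical file name
phase16 = phase8.astype(np.uint16) * 257       # map 0-255 onto 0-65535
tifffile.imwrite("phase_16bit.tif", phase16)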

Desktop:

  • OS: Windows 10 Enterprise
  • Browser chrome
  • Version 81.0.4044.122 (Official Build) (64-bit)

Thanks very much,

Ian

No hdf5 file produced for model in Unet

Describe the bug
When I try to generate the model with the training data downloaded from ISBI, no hdf5 file is produced in the model directory. The only item present in this directory is the Quality Control folder containing the csv file.

To Reproduce
Steps to reproduce the behavior:

  1. Click on 'Run this cell to check if you have GPU access'
  2. Click on 'Play the cell to connect your Google Drive to Colab'
  3. Click on 'Play to install U-net dependencies'
  4. Create folders 'Training_images' to store train_volume.tif and 'Training_masks' to store train_labels.tif
    ####Section 3.1####
  5. Paste in '/content/gdrive/My Drive/Colab Notebooks/Unet/Training_images' for training_source
  6. Paste in '/content/gdrive/My Drive/Colab Notebooks/Unet/Training_masks' for training_target
  7. 50 epochs
  8. Click on 'Augmentation options'
    ####Section 6.1####
  9. Create folders 'Test_data' to store test-volume.tif and 'Test_results'.
  10. Paste in '/content/gdrive/My Drive/Colab Notebooks/Unet/Test_data' for Data_folder
  11. Paste in '/content/gdrive/My Drive/Colab Notebooks/Unet/Test_results' for Results_folder
  12. Use the current trained model (ticked)
  13. Play the cell

Error message produced:

OSError Traceback (most recent call last)

in ()
54
55 # Load the model
---> 56 My_model = load_model(Prediction_model_path+'/'+Prediction_model_name+'/'+Prediction_model_name+'.hdf5')
57
58 layers=My_model.layers


/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py in make_fid(name, mode, userblock_size, fapl, fcpl, swmr)
171 if swmr and swmr_support:
172 flags |= h5f.ACC_SWMR_READ
--> 173 fid = h5f.open(name, flags, fapl=fapl)
174 elif mode == 'r+':
175 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/h5f.pyx in h5py.h5f.open()


OSError: Unable to open file (unable to open file: name = '/content/gdrive/My Drive/Colab Notebooks/Unet/Model/Unet_model_2020_5_9/Unet_model_2020_5_9.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

Screenshots
Unet_error

Desktop:

  • OS: Ubuntu 18.04.4 LTS; (gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04))
  • Browser: Firefox
  • Version: 76

U-net network training error

Hello!

As per my previous issues, I am now trying to use U-net to identify nuclei from brightfield microscopy images.

I made nuclear outlines using CellProfiler (data examples attached), converted them to .png and numbered them 0-15 (16 images; the png format and naming are meant to mirror the example data as closely as possible).

0 (1)
0

However, when trying to train the network I get the following error:


WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4479: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4267: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2239: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:793: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3657: The name tf.log is deprecated. Please use tf.math.log instead.

WARNING:tensorflow:From /tensorflow-1.15.2/python3.6/tensorflow_core/python/ops/nn_impl.py:183: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Model: "model_1"


Layer (type) Output Shape Param # Connected to

input_1 (InputLayer) (None, 1040, 1040, 1 0


conv2d_1 (Conv2D) (None, 1040, 1040, 6 640 input_1[0][0]


conv2d_2 (Conv2D) (None, 1040, 1040, 6 36928 conv2d_1[0][0]


max_pooling2d_1 (MaxPooling2D) (None, 520, 520, 64) 0 conv2d_2[0][0]


conv2d_3 (Conv2D) (None, 520, 520, 128 73856 max_pooling2d_1[0][0]


conv2d_4 (Conv2D) (None, 520, 520, 128 147584 conv2d_3[0][0]


max_pooling2d_2 (MaxPooling2D) (None, 260, 260, 128 0 conv2d_4[0][0]


conv2d_5 (Conv2D) (None, 260, 260, 256 295168 max_pooling2d_2[0][0]


conv2d_6 (Conv2D) (None, 260, 260, 256 590080 conv2d_5[0][0]


up_sampling2d_1 (UpSampling2D) (None, 520, 520, 256 0 conv2d_6[0][0]


conv2d_7 (Conv2D) (None, 520, 520, 128 131200 up_sampling2d_1[0][0]


concatenate_1 (Concatenate) (None, 520, 520, 256 0 conv2d_4[0][0]
conv2d_7[0][0]


conv2d_8 (Conv2D) (None, 520, 520, 128 295040 concatenate_1[0][0]


up_sampling2d_2 (UpSampling2D) (None, 1040, 1040, 1 0 conv2d_8[0][0]


conv2d_9 (Conv2D) (None, 1040, 1040, 6 32832 up_sampling2d_2[0][0]


concatenate_2 (Concatenate) (None, 1040, 1040, 1 0 conv2d_2[0][0]
conv2d_9[0][0]


conv2d_10 (Conv2D) (None, 1040, 1040, 6 73792 concatenate_2[0][0]


conv2d_11 (Conv2D) (None, 1040, 1040, 1 65 conv2d_10[0][0]

Total params: 1,677,185
Trainable params: 1,677,185
Non-trainable params: 0


None
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1033: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1020: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3005: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

Epoch 1/200
Found 15 images belonging to 1 classes.
Found 1 images belonging to 1 classes.
Found 1 images belonging to 1 classes.
Found 15 images belonging to 1 classes.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:197: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:207: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:216: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:223: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.


ResourceExhaustedError Traceback (most recent call last)
in ()
11
12 csv_log = CSVLogger(model_path+'/'+model_name+'/Quality Control/'+model_name+'_training.csv', separator=',', append=False)
---> 13 history = model.fit_generator(Generator,steps_per_epoch=steps,epochs=epochs, callbacks=[model_checkpoint,csv_log], validation_data=val_Generator, validation_steps=3, shuffle=True, verbose=1)
14
15

6 frames
/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py in call(self, *args, **kwargs)
1470 ret = tf_session.TF_SessionRunCallable(self._session._session,
1471 self._handle, args,
-> 1472 run_metadata_ptr)
1473 if run_metadata:
1474 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

ResourceExhaustedError: OOM when allocating tensor with shape[4,64,1040,1040] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training/Adam/gradients/conv2d_11/convolution_grad/Conv2DBackpropInput}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.


Apologies for the lack of screenshot, but the error would not entirely fit on my screen.

The training parameters are as below:
image

Desktop:

  • OS: Windows 10 Enterprise 64-bit
  • Browser: Chrome

Cheers,
Ian

Color segmentation masks in Stardist_2D

I have completed all steps in Stardist_2D notebook and got correct segmentation results on the two images in "Stardist/Test - Images". The results include two images in "Stardist/Test - Masks" with the same names. These images are four-plane TIFs, where the first three planes encode the mask color.

Then I repeated the whole process, but this time I added two of my own images in "Stardist/Test - Images". They were 16-bit and 8-bit versions of the same image of DAPI-stained nuclei. The resulting segmentation masks in "Stardist/Test - Masks" were the same for the original images, but for my images they had identical values in the first three planes. So, the segmentation image looked gray and the touching masks most often could not be distinguished.
original
masks

Please, advise.
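As a possible post-processing workaround (an assumption, using scikit-image rather than anything in the notebook, and assuming the mask is or can be reduced to an integer label image), a label mask can be recoloured so touching nuclei get distinct colours:

import tifffile
from skimage.color import label2rgb

labels = tifffile.imread("Test - Masks/my_image.tif")   # hypothetical path to a label image
rgb = label2rgb(labels, bg_label=0)                     # float RGB in [0, 1], one colour per label
tifffile.imwrite("my_image_coloured.tif", (rgb * 255).astype("uint8"))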

Out of memory (RAM) using StarDist 3D colab script (pro version)

Describe the bug

When I try to run the StarDist 3D prediction step in Google Colab on a small subset of my data (four timepoints, 10 z-slices each), the runtime crashes and informs me it ran out of RAM.

The kernel restarts, and I have to install the StarDist dependencies again.

I tried upgrading to Colab Pro, using a 'High RAM' runtime mode.

I also tried increasing the number of tiles. The problem does not go away when I test a range of tile settings, from automatic up to z=10, x=50, y=50 and beyond.

To Reproduce
Steps to reproduce the behavior:

Run the script on the data provided here (google drive link), using the 3D demo model weights.

https://drive.google.com/drive/folders/17q11-hAJjCs72YFbsVKoGve3e5nzXyJf?usp=sharing

Expected behavior

I would expect the 25 GB of RAM available to be enough for this, so it is strange.

Desktop (please complete the following information):

  • OS: Windows 10 home 64 bit
  • Browser: Chrome
  • Version: latest version of chrome

Additional context

This issue may be related to an as-yet-unsolved tif file problem, where my tifs are somehow different from yours. Please note that these are OME-TIFFs.

Thanks!

Michael

'Visual_validation_after_training' is not defined

Describe the bug
I get an error message when I start Step 4.

To Reproduce
Steps to reproduce the behavior:
1. I would like to create my own model for StarDist.
2. I used Labkit to create two image labellings, saved as TIF (32-bit).
3. I created one Google Drive folder; inside it there are two other folders, one named Training_source and the other Training_Target. These folders contain the tif files source_1.tif, source_2.tif, Target_1.tif and Target_2.tif.
4. I copied and pasted the path, and when I start step 4 I get this message:
Screenshots
image

Desktop (please complete the following information):

  • OS: Windows
  • Chrome
  • Version 80.0.3987.149 (Build officiel) (64 bits)

Thanks

U-net patch generation

Dear All,

I started to use the U-Net Colab notebook with 1024x1024 images and masks and received the following error message during patch generation. I used the default parameter settings. I cropped these images because the original size was 1360x1024, but the same error occurred. What else can I try to solve this issue?

Thank you!

errorUnet

Label-free prediction | Sizes of tensors dimensions error when training the network | fnet | on Google Colab

Describe the bug
When loading my own images into the Google Colab notebook and running "Train the model" to train it on my images, I kept receiving the following error:
Error message:
"....
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 32 and 27 in dimension 2 at /pytorch/aten/src/TH/generic/THTensor.cpp:612"

To Reproduce
I just loaded the two folders (one for brightfield and one for fluorescence) with my z-stack images. The images had, as I found out, the wrong dimensions: 20 slices per stack (32 or 16 expected).

Expected behavior
The solution to this problem:
I found out that the number of slices in the z-stack needs to be a power of 2 (2, 4, 8, 16, 32, ...) for the network to be able to process it. I tested it with 16 and 32 slices and it works for me in both cases. I think it is also important that all stacks have the same number of slices.
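A sketch of the padding this implies (an assumption, using tifffile and numpy): reflect-pad a 20-slice stack up to 32 slices so every stack has a power-of-two depth.

import numpy as np
import tifffile

stack = tifffile.imread("stack_20_slices.tif")    # (Z, Y, X), hypothetical file name
target_z = 32
pad = target_z - stack.shape[0]
if pad > 0:
    # reflect-pad along Z only
    stack = np.pad(stack, ((0, pad), (0, 0), (0, 0)), mode="reflect")
tifffile.imwrite("stack_32_slices.tif", stack)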

I hope this helps some of you in the future!

White .Tif saved

Hi,
I'd like to first thank everyone who has helped create this wonderful tool. This summer, I'm working with a lab that is trying to reduce noise in their fluorescence microscopy images (using macOS).

To learn more about how the notebooks work, I used the provided sample datasets with both the Noise2Void (2D) and CARE (2D) notebooks. I've come across two different trouble spots. I've tried to resolve the problem, but would like you team's opinion on it.

The first problem I encounter is when setting the training parameters (in cell 3.1). I've tried to set the patch size to 32 and 16, but I receive a value error when training the network (cell 4.2).
See screenshots below. It seems that I can train the network only when the patch size is set to 64.
Screen Shot 2020-06-05 at 12 51 29 PM
Screen Shot 2020-06-05 at 12 53 30 PM
Screen Shot 2020-06-05 at 12 54 24 PM

The other problem I encounter is when trying to save the prediction outputs. When I assess the outputs (in cell 6.2), the Colab notebook displays two beautiful before-and-after images. However, the predicted .tif files in my drive and on my computer show up as completely white. See screenshots below.
Screen Shot 2020-06-05 at 12 57 44 PM
Screen Shot 2020-06-05 at 12 58 51 PM

I would appreciate hearing back from this amazing team.
Once again, thank you
Ellen

Training model from example then test with another image (larger size) Resource exhausted: OOM when allocating tensor with shape[1,61,1440,1920,32]

Hi,

I am trying to use your notebook which is really great and easy to use for someone like me without any coding experience.
I did all the testing using the notebook and the examples, which were fine.

Now I want to test it on one of my own files, but I get some error messages:
My file is a DAPI staining (1 channel) .tif.
I renamed the file so that it starts with stack_channel1.
The only thing is perhaps the size, which is much larger than the test examples provided; could that be the problem?
Should I train the model with similarly large files?
How can I fix the problem?

Could I perhaps upload my file somewhere for you to test?

Thank you very much!
For info:

PC: 64 GB RAM
Graphics card: Quadro P2000, 8 GB RAM

MESSAGE BELOW :

The anthogoes3d network will be used.
Normalizing image channels independently.
Loading network weights from 'weights_best.h5'.
Loading thresholds from 'thresholds.json'.
Using default values: prob_thresh=0.662228, nms_thresh=0.3.

ResourceExhaustedError Traceback (most recent call last)
in ()
70 for i in range(lenght_of_X):
71 img = normalize(X[i], 1,99.8, axis=axis_norm)
---> 72 labels, polygons = model.predict_instances(img)
73
74 # Save the predicted mask in the result folder


6 frames


/usr/local/lib/python3.6/dist-packages/stardist/models/base.py in predict_instances(self, img, axes, normalizer, prob_thresh, nms_thresh, n_tiles, show_tile_progress, verbose, predict_kwargs, nms_kwargs, overlap_label)
407 _shape_inst = tuple(s for s,a in zip(_permute_axes(img).shape, _axes_net) if a != 'C')
408
--> 409 prob, dist = self.predict(img, axes=axes, normalizer=normalizer, n_tiles=n_tiles, show_tile_progress=show_tile_progress, **predict_kwargs)
410 return self._instances_from_prediction(_shape_inst, prob, dist, prob_thresh=prob_thresh, nms_thresh=nms_thresh, overlap_label = overlap_label, **nms_kwargs)
411

/usr/local/lib/python3.6/dist-packages/stardist/models/base.py in predict(self, img, axes, normalizer, n_tiles, show_tile_progress, **predict_kwargs)
338
339 else:
--> 340 prob, dist = predict_direct(x)
341
342 prob = resizer.after(prob, axes_net)

/usr/local/lib/python3.6/dist-packages/stardist/models/base.py in predict_direct(tile)
304 def predict_direct(tile):
305 sh = list(tile.shape); sh[channel] = 1; dummy = np.empty(sh,np.float32)
--> 306 prob, dist = self.keras_model.predict([tile[np.newaxis],dummy[np.newaxis]], **predict_kwargs)
307 return prob[0], dist[0]
308

/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
1460 verbose=verbose,
1461 steps=steps,
-> 1462 callbacks=callbacks)
1463
1464 def train_on_batch(self, x, y,

/usr/local/lib/python3.6/dist-packages/keras/engine/training_arrays.py in predict_loop(model, f, ins, batch_size, verbose, steps, callbacks)
322 batch_logs = {'batch': batch_index, 'size': len(batch_ids)}
323 callbacks._call_batch_hook('predict', 'begin', batch_index, batch_logs)
--> 324 batch_outs = f(ins_batch)
325 batch_outs = to_list(batch_outs)
326 if batch_index == 0:

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/backend.py in call(self, inputs)
3474
3475 fetched = self._callable_fn(*array_vals,
-> 3476 run_metadata=self.run_metadata)
3477 self._call_fetch_callbacks(fetched[-len(self._fetches):])
3478 output_structure = nest.pack_sequence_as(

/tensorflow-1.15.2/python3.6/tensorflow_core/python/client/session.py in call(self, *args, **kwargs)
1470 ret = tf_session.TF_SessionRunCallable(self._session._session,
1471 self._handle, args,
-> 1472 run_metadata_ptr)
1473 if run_metadata:
1474 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[1,61,1440,1920,32] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node conv3d_31/convolution}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

     [[dist_2/add/_665]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

(1) Resource exhausted: OOM when allocating tensor with shape[1,61,1440,1920,32] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node conv3d_31/convolution}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

0 successful operations.
0 derived errors ignored.
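One thing worth trying for this kind of OOM (a suggestion, not a confirmed fix): predict_instances accepts an n_tiles argument, so the large volume is processed in blocks instead of in one piece. In the cell shown in the traceback, the call could be changed to something like:

# Tile the prediction over (Z, Y, X); the exact tile counts would need tuning.
labels, polygons = model.predict_instances(img, n_tiles=(2, 8, 8))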

Name Output images

I have successfully used the U-Net Colab notebook to train and segment images. Unfortunately, the names of the predicted images differ from those of the input images. Would it be possible to keep the same names, or the same names with a prefix/suffix?
Thanks
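Until the notebook supports this, one workaround is to rename the predictions afterwards. A minimal sketch, assuming the predictions were written in the same sorted order as the source images (the folder paths and the "_predicted" suffix are placeholders, not notebook variables):

import os

source_folder = "/content/source"            # placeholder paths, adjust to your data
prediction_folder = "/content/predictions"

sources = sorted(os.listdir(source_folder))
predictions = sorted(os.listdir(prediction_folder))

# Pair source and prediction files by sorted order and reuse the source base name
for src_name, pred_name in zip(sources, predictions):
    base, _ = os.path.splitext(src_name)
    os.rename(os.path.join(prediction_folder, pred_name),
              os.path.join(prediction_folder, base + "_predicted.tif"))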

Make a small logo for BioImage.IO

Could you please make a simple and small logo image for displaying in BioImage.IO?

The current one is a bit too big to fit in the community partners bar, and it takes a long time to load on slow networks.

A square PNG image, 100x100 pixels in size, would be perfect.
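For what it's worth, a square 100x100 PNG could be generated from the current logo with Pillow (a minimal sketch; the file names are placeholders):

from PIL import Image

logo = Image.open("logo.png").convert("RGBA")
logo.thumbnail((100, 100))                               # fit within 100x100, keep aspect ratio
canvas = Image.new("RGBA", (100, 100), (0, 0, 0, 0))     # transparent square canvas
canvas.paste(logo, ((100 - logo.width) // 2, (100 - logo.height) // 2), logo)
canvas.save("logo_100x100.png")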

Noise2Void 2D notebook not found

Hiya

I'm trying to open the Noise2Void 2D notebook and I keep getting the error message attached below.
All other notebooks open without a problem.

cheers
leandro
Screenshot 2020-04-28 at 12 41 34

3D RCAN not training

Good morning,

First, thank you very much for the effort you have put into this great resource.
I'm trying to use the 3D RCAN model (on the ER dataset https://github.com/AiviaCommunity/3D-RCAN). The training step (4.2) fails with the output in the attached file:
3DRCAN_training_error.txt

Happy to run test if needed and thank you for your help

Bertrand


Bertrand Vernay, PhD
Head of the Photonic Microscopy Facility
http://quest.igbmc.fr/
http://www.igbmc.fr/technologies/5/team/55/
https://github.com/bvernay
Office:+33 (0)3 69 48 51 27
Office: CBI, E1009
ORCID 0000-0002-7843-3872

IGBMC - CNRS UMR 7104 - Inserm U 1258
1 rue Laurent Fries
BP 10142
67404 Illkirch CEDEX
France Tél +33 (0)3 88 65 32 00
Fax +33 (0)3 88 65 32 01

start training error

Hi,
I'm trying to run the U-Net training with the provided data, but at step 4.2 the following error appears:
image
Everything was fine up to this step.
Can you help me understand what's happening?

Thank you very much.

U-net ResourceExhaust and Infinity

Unfortunately I am running into problems using the U-Net notebook. I want to see whether I can train U-Net to the point where it recognises certain structures in EM images (similar to the provided test training dataset). As input I am using the grayscale EM images and corresponding binary masks (marking a roughly circular structure in the middle of each image). Either the training cell throws "ResourceExhausted" errors, or every epoch reports "val_loss did not improve from inf"; in both cases no model is generated.

I can get around the ResourceExhausted errors by reducing the number of training files, cropping, and reducing the batch_size, but then I am using very few training files as input (e.g. 4 images of ~900x900 px, which are already just crops of the original files). I thought maybe the input is not in the right format, so I have already tried 16-bit, 8-bit and RGB for the raw electron microscopy files and 8-bit binary masks (all as *.tif). Is 900x900 still too big, so that the 2048x2048 originals have to be split into files of ~512x512? The test files I was once given were in *.png format, so I tried that as well, again resulting in "val_loss did not improve from inf".

Does this mean the input data are not specific enough, i.e. the structure is not really distinguishable from the background, so the model is not able to identify these regions? At the moment I am quite stuck and do not know exactly what the problem is or how to improve.
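To test whether tile size is the culprit, the 2048x2048 originals and their masks can be cut into 512x512 tiles before uploading them to the notebook. A minimal sketch, assuming matching file names in the image and mask folders and the tifffile package (folder names are placeholders):

import os
import tifffile

image_dir, mask_dir = "train/images", "train/masks"        # placeholder folders
out_image_dir, out_mask_dir = "tiles/images", "tiles/masks"
os.makedirs(out_image_dir, exist_ok=True)
os.makedirs(out_mask_dir, exist_ok=True)
tile = 512

for name in sorted(os.listdir(image_dir)):
    img = tifffile.imread(os.path.join(image_dir, name))
    msk = tifffile.imread(os.path.join(mask_dir, name))
    base, _ = os.path.splitext(name)
    # Cut non-overlapping tiles; any remainder at the borders is discarded
    for y in range(0, img.shape[0] - tile + 1, tile):
        for x in range(0, img.shape[1] - tile + 1, tile):
            tifffile.imwrite(os.path.join(out_image_dir, f"{base}_{y}_{x}.tif"),
                             img[y:y + tile, x:x + tile])
            tifffile.imwrite(os.path.join(out_mask_dir, f"{base}_{y}_{x}.tif"),
                             msk[y:y + tile, x:x + tile])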

Out of RAM in N2V 2D

Hello,
I tried to train a new 2D model with N2V. For training I had 5,152 images, totalling 30.2 GB.
At step 3.1 (Setting main training parameters) I ran out of RAM.

I see here that I have 12.72 GB of RAM.
image

Does this mean that the training data cannot exceed roughly 12 GB?

Thank you very much.
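Since the N2V data generator builds the training patches in memory, a ~30 GB dataset will not fit in a ~12 GB Colab runtime; training on a subset of the files (or on fewer/smaller patches) is the usual workaround. A minimal sketch for copying a random subset into a new training folder (the paths and the subset size of 500 are placeholders):

import os
import random
import shutil

training_source = "/content/drive/MyDrive/n2v_training"   # placeholder paths
subset_folder = "/content/n2v_training_subset"
os.makedirs(subset_folder, exist_ok=True)

files = sorted(os.listdir(training_source))
for name in random.sample(files, 500):                     # e.g. keep 500 of the 5,152 images
    shutil.copy(os.path.join(training_source, name), subset_folder)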

3D stardist collab: request for preprocessing step: image sequence > separate z stack tiffs

Dear all

Thanks for this resource. We are planning to do a lot of different 3D segmentation, and have run into a simple issue.

Mostly we export our images from a proprietary microscope file format to either an image sequence or a 4D tiff stack. At the moment we use an ImageJ macro to split the tiff stack into separate tiff stacks for each time point, each containing a full z volume.

It would be great to integrate this step into the Colab notebook; I have tried, but I can't figure out what to do or where to put it.

Could this be included as a 'preprocessing' cell? It would make this very easy-to-use resource even easier.

Thanks,

Michael

p.s. happy to include example data if this request isn't clear
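In the meantime, a small preprocessing cell can do the split. A minimal sketch, assuming a TZYX-ordered hyperstack and the tifffile package (file and folder names are placeholders; adjust the axis order to match your export):

import os
import tifffile

input_stack = "timelapse_TZYX.tif"             # placeholder file name
output_folder = "per_timepoint_stacks"
os.makedirs(output_folder, exist_ok=True)

data = tifffile.imread(input_stack)            # expected shape: (T, Z, Y, X)
for t, volume in enumerate(data):
    # Write one full z volume per time point
    tifffile.imwrite(os.path.join(output_folder, f"t{t:03d}.tif"), volume)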

Learning behaviour when using pretrained model (n2v 3D)

Hello @ALL,

first of all, thank you very much for providing this tool!
I am currently trying to use the n2v 3D notebook to denoise fluorescence images of calcium signals in order to make them usable for quantitative analysis.
For training, I use a stack of dimensions 512x512x250, with data augmentation enabled. Usually, the model converges somewhere between 50-150 epochs. However, I found that further training might still improve the results, even though the validation loss can not be further decreased by a significant amount. To examine how well validation loss correlates with the actual quality of predictions, I have adapted the code in the notebook to include a simple loop that after N epochs of training creates a prediction, then again loads the model and continues training for another N iterations and so on. This way I get to see "intermediate results" to better understand the training process.
Pseudo-Code:

create training and validation dataset
do (num_of_epochs / N) times:
    load pretrained model
    train N epochs
    save model and evaluation file
    create prediction using current trained model
end

However, when using a pretrained model, the loss function behaves in an unexpected way: even though I load weights_last, the validation loss changes drastically between the last epoch before the model was saved and the first epoch after re-loading it. I have attached the log of a 200-epoch training, with intermediate results generated every 25 epochs: training_evaluation.pdf. Notice, for example, the abrupt increase in validation loss from epoch 75 to 76. Any ideas why this is happening? I have noticed the same behaviour when loading a pretrained model, e.g. to continue training after the 12-hour timeout of Colab, so it does not seem to be related to my custom loop. If, however, training runs straight for 200 epochs without saving and reloading the model in between, the validation loss behaves normally.

Any help in understanding and/or improving this behaviour would be greatly appreciated!

Thank you in advance!

(the log file is in .pdf since it is not possible to upload .csv to github. In case you need the .csv file, I have uploaded it here: https://drive.google.com/file/d/1kmPj6SZtw-miZOgduZlRKjfye-cbHTou/view?usp=sharing)
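For reference, the loop can be written against the n2v API roughly as below (a sketch, not the notebook's exact code: X, X_val, N, steps, num_of_epochs, model_name, model_path and example_stack are placeholders from the surrounding workflow, and it assumes weights_last.h5 holds the weights from the previous round):

from n2v.models import N2VConfig, N2V

config = N2VConfig(X, train_epochs=N, train_steps_per_epoch=steps)   # X: training patches
for i in range(num_of_epochs // N):
    model = N2V(config, model_name, basedir=model_path)
    if i > 0:
        model.load_weights('weights_last.h5')        # resume from the previous round
    model.train(X, X_val)                            # also writes weights and evaluation files
    prediction = model.predict(example_stack, axes='ZYX')

One thing worth checking (just a hypothesis): when the model object is re-created, the Adam optimizer state and the learning-rate schedule restart even though the weights are restored, which could explain the jump in validation loss right after reloading.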

Patch Size Value Error and 3D Mask Creation

Hello,

I am currently trying to use the notebook to count cells in a 3D image. I spent some time looking for a guide on creating a 3D mask similar to the one used for the 2D notebook, but couldn't find one. So I followed the provided tutorial for creating a 2D mask with the LOCI ROI Map plugin on each slice of a 4-slice image, then concatenated the ROI maps into one 4-slice tiff. I figured starting with this small stack would be best for learning the tool and working out how to create a larger training set, but I've been running into errors when using the notebook.

The first message that I saw was when I was setting the training parameters which read:
"Your chosen patch_size is not divisible by 8; therefore the patch_size chosen is now: 128
Parameters initiated."

When I continue with the notebook by using the data augmentation and creating the model, I get an error when trying to train the model which reads:

/usr/local/lib/python3.6/dist-packages/stardist/models/sample_patches.py in get_valid_inds(datas, patch_size, patch_filter)
38
39 if not all(( 0 < s <= d for s,d in zip(patch_size,datas[0].shape) )):
---> 40 raise ValueError("patch_size %s negative or larger than data shape %s along some dimensions" % (str(patch_size), str(datas[0].shape)))
41
42 if patch_filter is None:

ValueError: patch_size (4, 128, 128) negative or larger than data shape (143, 145, 4) along some dimensions

I've tried re-running everything without data augmentation, and I've tried manually adjusting the patch height and patch size to see if any values would work, but every combination has produced errors. I'm assuming the main issue is that my training set of a single 4-slice stack is too small, and that is what is causing the errors. The example training sets seem much larger than what I'm currently working with.

If this is what is causing the errors, how can I get around it? The method I'm using right now to create the 3D mask is pretty time consuming, as I'm selecting every cell manually on each slice of the image. Creating an ROI map for a full stack would take a very long time. Is there a better way to create the target set of masks for a 3D image? I doubt I've been doing it the best way, and if there is a much quicker approach I should be able to use a larger stack and avoid the errors I've been getting.

Thanks!
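One thing the error does show is that the patch size (4, 128, 128) is being compared against a stack of shape (143, 145, 4), i.e. the 4 slices appear to sit on the last axis, while StarDist 3D expects (Z, Y, X). It may be worth checking and fixing the axis order of the training stacks before adjusting the patch values (an illustrative sketch, not part of the notebook; the file name is a placeholder):

import numpy as np
import tifffile

img = tifffile.imread("training_stack.tif")    # placeholder file name
print(img.shape)                               # StarDist 3D expects (Z, Y, X)

# If the slice axis ended up last (e.g. (143, 145, 4)), move it to the front
if img.shape[-1] < img.shape[0]:
    img = np.moveaxis(img, -1, 0)
    tifffile.imwrite("training_stack_ZYX.tif", img)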

StarDist 2D Augmentation fails - Image has wrong mode

Describe the bug
Data augmentation fails for 16-bit/32-bit tiff images if operations other than rotation/flip are used and the multiplication factor is greater than 5.

Processing <PIL.Image.Image image mode=I;16B size=1024x1024 at 0x7F09FECC5898> [....]
-> ValueError: image has wrong mode

To Reproduce
Steps to reproduce the behavior:

  1. Load Sample Data for StarDist
  2. Follow the standard Workflow
  3. Augmentation by at least a factor of 4 or 5
  4. Set image shearing and skewing to some value above 0

Expected behavior
Data augmentation should save the sheared and skewed images in the "Augmented Folder".

Location of Bug

From side-package: Augmentor/Operations.py
/usr/local/lib/python3.6/dist-packages/Augmentor/Operations.py in do(image)
1290 image = image.crop((0, abs(shift_in_pixels), width, height))
1291
-> 1292 return image.resize((width, height), resample=Image.BICUBIC)
1293
1294 augmented_images = []

The problem comes from PIL version 7.0.0 and its BICUBIC interpolation when skewing images.
(also see: python-pillow/Pillow#4402)

System:
StarDist2 ZeroCostDL4Mic Notebook on Google Colab (happened for multiple GPUs)

Potential Bug fix (happy to debate)

  • Change every "Image.BICUBIC" in Operations.py of Augmentor to "Image.NEAREST" (maybe there's a better way to force Augmentor to use only NEAREST interpolation)
  • Force Pillow to install as version Pillow==6.2.1
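If the second option is preferred, the pin could simply be added as a cell near the top of the notebook, before the other installs and imports (then restart the runtime):

# Run before the other install/import cells, then restart the runtime
!pip install Pillow==6.2.1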

TensorFlow 2.x or PyTorch Implementation

Is your feature request related to a problem? Please describe.
I see that the Google Colab notebook (at least the one I'm using, Stardist_2D_ZeroCostDL4Mic.ipynb) uses TensorFlow 1.x.

Is there a plan to upgrade to TF 2.x or implement similar things in PyTorch?

Describe the solution you'd like
If there is a plan, great! Otherwise, I'm interested in contributing either a TF 2.x or a PyTorch implementation if that would be helpful or of any value.

Describe alternatives you've considered
None.

