n2v's Introduction

Noise2Void - Learning Denoising from Single Noisy Images

Alexander Krull¹,², Tim-Oliver Buchholz², Florian Jug
¹[email protected], ²Authors contributed equally

The field of image denoising is currently dominated by discriminative deep learning methods that are trained on pairs of noisy input and clean target images. Recently it has been shown that such methods can also be trained without clean targets. Instead, independent pairs of noisy images can be used, in an approach known as NOISE2NOISE (N2N). Here, we introduce NOISE2VOID (N2V), a training scheme that takes this idea one step further. It does not require noisy image pairs, nor clean target images. Consequently, N2V allows us to train directly on the body of data to be denoised and can therefore be applied when other methods cannot. Especially interesting is the application to biomedical image data, where the acquisition of training targets, clean or noisy, is frequently not possible. We compare the performance of N2V to approaches that have either clean target images and/or noisy image pairs available. Intuitively, N2V cannot be expected to outperform methods that have more information available during training. Still, we observe that the denoising performance of NOISE2VOID drops in moderation and compares favorably to training-free denoising methods.

Paper: https://arxiv.org/abs/1811.10980

Our implementation is based on CSBDeep (github).

N2V2 - Fixing Noise2Void Checkerboard Artifacts with Modified Sampling Strategies and a Tweaked Network Architecture

Eva Höck1,⚹, Tim-Oliver Buchholz2,⚹, Anselm Brachmann1,⚹, Florian Jug3,⁜, and Alexander Freytag1,⁜
1Carl Zeiss AG, Germany
2Facility for Advanced Imaging and Microscopy, Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
3Jug Group, Fondazione Human Technopole, Milano, Italy
⚹, ⁜Equal contribution

In recent years, neural network based image denoising approaches have revolutionized the analysis of biomedical microscopy data. Self-supervised methods, such as Noise2Void (N2V), are applicable to virtually all noisy datasets, even without dedicated training data being available. Arguably, this facilitated the fast and widespread adoption of N2V throughout the life sciences. Unfortunately, the blind-spot training underlying N2V can lead to rather visible checkerboard artifacts, thereby reducing the quality of final predictions considerably. In this work, we present two modifications to the vanilla N2V setup that both help to reduce the unwanted artifacts considerably. Firstly, we propose a modified network architecture, i.e. using BlurPool instead of MaxPool layers throughout the used UNet, rolling back the residual-UNet to a non-residual UNet, and eliminating the skip connections at the uppermost UNet level. Additionally, we propose new replacement strategies to determine the pixel intensity values that fill in the elected blind-spot pixels. We validate our modifications on a range of microscopy and natural image data. Based on added synthetic noise from multiple noise types and at varying amplitudes, we show that both proposed modifications push the current state-of-the-art for fully self-supervised image denoising.

OpenReview: https://openreview.net/forum?id=IZfQYb4lHVq

Installation

This implementation requires TensorFlow. We have tested Noise2Void using Python 3.9 and TensorFlow 2.7, 2.10, and 2.13.

⚠️ n2v is not compatible with TensorFlow 2.16 ⚠️

Note: If you want to use TensorFlow 1.15, you have to install N2V v0.2.1. N2V v0.3.* supports TensorFlow 2 only.
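
For example, with pip this corresponds to pinning the release (a sketch; pick the line matching your TensorFlow major version):

# TensorFlow 1.15 (legacy)
pip install "n2v==0.2.1"

# TensorFlow 2.x (< 2.16)
pip install "n2v>=0.3"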

If you start from scratch...

We recommend using miniconda. If you do not yet have a preferred environment manager, Miniconda is a good default.

After installing Miniconda, create a conda environment:

conda create -n 'n2v' python=3.9
conda activate n2v

Install TensorFlow < 2.16

n2v is not compatible with TensorFlow 2.16 and later, so we recommend installing one of the most recent versions it was tested with. The following instructions are copied from the TensorFlow website, retrieved via the Wayback Machine.

2.10 (tested) - Nov 22

Linux:

conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/
python3 -m pip install tensorflow

macOS:

# There is currently no official GPU support for macOS.
python3 -m pip install tensorflow

Windows

# Last TensorFlow version with native Windows GPU support (without WSL2)
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
python3 -m pip install tensorflow

2.13 (untested) - Sep 23

Linux:

conda install -c conda-forge cudatoolkit=11.8.0
python3 -m pip install nvidia-cudnn-cu11==8.6.0.163 tensorflow==2.13.*
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$CONDA_PREFIX/lib/:$LD_LIBRARY_PATH' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

macOS:

# There is currently no official GPU support for macOS.
python3 -m pip install tensorflow

Windows WSL2

conda install -c conda-forge cudatoolkit=11.8.0
python3 -m pip install nvidia-cudnn-cu11==8.6.0.163 tensorflow==2.13.*
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$CONDA_PREFIX/lib/:$LD_LIBRARY_PATH' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

2.15 (untested) - Nov 23

Linux:

python3 -m pip install tensorflow[and-cuda]

macOS:

# There is currently no official GPU support for macOS.
python3 -m pip install tensorflow

Windows WSL2

python3 -m pip install tensorflow[and-cuda]

Option 1: PIP (current stable release)

$ pip install n2v

Option 2: Git-Clone and install from sources (current master-branch version)

This option is ideal if you want to edit the code. Clone the repository:

$ git clone https://github.com/juglab/n2v.git

Change into its directory and install it:

$ cd n2v
$ pip install -e .

You are now ready to run Noise2Void.

How to use it?

Jupyter notebooks

Have a look at our Jupyter notebooks:

In order to run the notebooks, install jupyter in your conda environment:

pip install jupyter

Coming soon:

  • N2V2 example notebooks.

Note: You can use the N2V2 functionality by providing the following parameters to the N2VConfig object (see the sketch after the warnings below):

  • blurpool=True, by default set to False
  • skip_skipone=True, by default set to False
  • n2v_manipulator="median", by default set to "uniform_withCP"
  • unet_residual=False, by default set to False

Warning: Currently, N2V2 only supports 2D data.
Warning: We have not tested N2V2 together with struct-N2V.
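
As a minimal sketch (the N2V2 parameters come from the list above; the training patches X, the patch shape, and the model name are illustrative assumptions):

from n2v.models import N2VConfig, N2V

# X: training patches generated beforehand (placeholder here)
config = N2VConfig(
    X,
    blurpool=True,                # default: False
    skip_skipone=True,            # default: False
    n2v_manipulator="median",     # default: "uniform_withCP"
    unet_residual=False,          # default: False
    n2v_patch_shape=(64, 64),     # N2V2 currently supports 2D only
)
model = N2V(config, "n2v2_2D", basedir="models")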

napari

N2V, N2V2 and structN2V are available in napari!

How to cite:

@inproceedings{krull2019noise2void,
  title={Noise2void-learning denoising from single noisy images},
  author={Krull, Alexander and Buchholz, Tim-Oliver and Jug, Florian},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2129--2137},
  year={2019}
}
N2V2 citation coming soon.

See here for more info on StructN2V.

Note on functional tests

The functional "tests" are meant to be run as regular programs. They are there to make sure that the examples still run after code changes.

Note on GPU Memory Allocation

In some cases TensorFlow is unable to allocate GPU memory and fails. One possible solution is to set the following environment variable:

export TF_FORCE_GPU_ALLOW_GROWTH=true
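
Alternatively, memory growth can be enabled from within Python using TensorFlow's public API (a sketch; whether this resolves the allocation failure depends on your setup):

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all upfront.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)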

n2v's People

Contributors

alex-krull, anbn, cateek, colemanbroad, fjug, genevievebuckley, jdeschamps, lmanan, thawn, tibuch, turekg, veegalinova

n2v's Issues

Error Loading data with TZYX axes

Hello,

I am trying to update our protocol as per your new version but am unable to load data with
imgs = datagen.load_imgs_from_directory(directory = "Images", dims="TZYX")

I have stacks from Fiji with Slices and Timepoints (TZYX when imported with imread and read into a numpy array). When I run the above line I get

ValueError: repeated axis in `destination` argument

Debugging and looking at the values of print (move_axis_from, move_axis_to) I see that they are
(0, 1, 2, 3) (0, 0, 1, 2)

So something somewhere isn't happy when reordering the axes...

Best
Oli

Edit: line is at

img = np.moveaxis(img, move_axis_from, move_axis_to)

N2V_DataGenerator error in 3D making patches

Dear All,

I tried to use the N2V_DataGenerator to make patches out of this image:
(1, 30, 1024, 1024, 1)

print(imgs[0].shape)
X = datagen.generate_patches(imgs[0], shape=(15,512,512),augment = False)

However I get:
(1, 30, 1024, 1024, 1)
Generated patches: (1, 15, 512, 512, 1)

Instead of:
Generated patches: (8, 15, 512, 512, 1)

print(imgs[0].shape)
X = datagen.generate_patches(imgs[0], shape=(14,511,511),augment = False)
works as expected:
(1, 30, 1024, 1024, 1)
Generated patches: (8, 14, 511, 511, 1)

Thanks a lot & Kind regards

Tobias

'layout failed' Error message

Behaviour

An error message is displayed in the background (not in the jupyter notebook but in the terminal jupyter notebook was called from):
E tensorflow/core/grappler/optimizers/meta_optimizer.cc:502] layout failed: Invalid argument: Unsupported tensor size: 1
This does not seem to cause noticeable errors in the ongoing jupyter session.

Context and Steps to Reproduce:

Running the file n2v/examples/2D/denoising2D_SEM/01_training.ipynb, from a fresh clone and the conda environment with Tensorflow-gpu 1.14.0, Keras 2.2.4, n2v 0.1.9.
The issue happens during:
history = model.train(X, X_val)
Right after the first epoch is completed, and is visible in the terminal.

Alternatively, running the same code in a .py file, the issue can also be seen right after the first epoch.

Again, this does not prevent training from continuing, and the results of the resulting model look okay when running the file n2v/examples/2D/denoising2D_SEM/02_prediction.ipynb.

Additional Information:

The issue does not appear when running similar code using normal csbdeep models in the same environment, which is why I report it here.
I open this issue to confirm whether this could be affecting the results or performance of the library in some way.

I get some errors when I run 02_prediction.py: UnknownError

When I run "pred_train = model.predict(input_train, axes='YX', n_tiles=(2,1))",
I get the following error:

UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[{{node down_level_0_no_0/convolution}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](down_level_0_no_0/convolution-0-TransposeNHWCToNCHW-LayoutOptimizer, down_level_0_no_0/kernel/read)]] [[{{node activation_11/Identity/_457}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_472_activation_11/Identity", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

What can I do?
Thanks for your help!

TF-GPU in Fiji but CPU is maxed out

I am using N2V in Fiji. I installed TensorFlow properly, but when I run N2V the PC seems to be using the CPU and not the GPU: the CPU is maxed out. The Fiji console reports that Fiji is using GPU-TF and not CPU-TF (see below), so I do not understand why my CPU is maxed out. Am I missing something here?

Finally, why is TF 1.15.0 GPU not available in Edit > Options > TensorFlow? I had to use TF 1.14.0 GPU because I did not see an option for the 1.15 version. I am using a PC with Win10.

From Fiji-console:
[INFO] Load TensorFlow..
[INFO] Using native TensorFlow version: TF 1.14.0 GPU (CUDA 10.0, CuDNN >= 7.4.1)
Using 10% of training data for validation
[INFO] Tile training and validation data..
[INFO] Generated 200 tiles of shape [128, 128]
[INFO] Create session..
[INFO] Import graph..
[INFO] Normalizing..
[INFO] mean: 253.40125
[INFO] stdDev: 40.8419
[INFO] Augment tiles..
[INFO] Prepare training batches...
65 blind-spots will be generated per training patch of size [64, 64].
[INFO] Prepare validation batches..
65 blind-spots will be generated per training patch of size [64, 64].
[INFO] Start training..
[INFO] Epoch 1/300
1 / 200 [----------] - loss: 1.836171 mse: 1.836171 abs: 1.108038 lr: 0.000400
2 / 200 [----------] - loss: 1.304834 mse: 1.304835 abs: 0.941930 lr: 0.000400
3 / 200 [----------] - loss: 1.094565 mse: 1.094565 abs: 0.839286 lr: 0.000400
4 / 200 [----------] - loss: 1.069823 mse: 1.069823 abs: 0.795247 lr: 0.000400
...

N2V for TF2 ?

I saw that CSBDeep can now work with TF2. Does it mean I can use your N2V notebooks with TF2 now? If so, can you give a hint how? And which version of Keras should be used in this case. Thanks!

Can't open LZW compressed tiff files

tifffile does not support LZW compressed tiff files.

We should check if Pillow can read such files and if so, we should probably use Pillow to read the images.

One Net per Channel

Add a 'super-conservative' parameter which has one U-Net per channel. Essentially multiple U-Nets per model.

AttributeError when updating summary value during training

I generated a fresh N2V installation following the instructions in the readme, for TensorFlow 1.14 and Python 3.6.

When training, right after the first epoch, I would get an AttributeError ("'float' object has no attribute 'item'") from line 302 of n2v_standard.py. Changing that line to:

summary_value.simple_value = value

in a local copy of N2V fixed the issue. Not sure if this is due to my setup or something else but thought I'd report.

Please let me know if other info about my setup or N2V config would be helpful!

Hallucinations

Hi,
first of all thanks for the new, pip-installable version ... I had wanted to give n2v a spin for a while and this new version just made it so easy.

This is not so much an issue with the code-base but some issue with the method itself, so I'm not sure whether a github issue is the best place to discuss this. Would be happy to move the discussion to image.sc if you think that would be a better place.

The issue I see is hallucinations. They are clearly visible when running your 3D example notebook (flywing). The following is a result for running the notebooks as is, without any additional parameter tuning:

[screenshot: result of running the 3D flywing example notebook]

I assume that the region on the right is outside of the wing and therefore should not contain any structure. However, n2v hallucinates cell membrane-like structures (blue outlined area, red arrow) and bright spots (blue arrow) in the empty space.

I'm not too surprised, as the method learns to predict a pixel from its surroundings. If much of the training data contains such structures (cell membrane, honeycomb-like pattern), this is probably to be expected.
What would be a good strategy to reduce the occurrence of such artifacts? Include more patches with just background? That would probably reduce the predictive capability of the model.

I guess it comes down to the question under what scenario I can use n2v and whether I can trust data from subsequent image analysis when n2v has been used as a pre-processing step.

Train and test on ImageNet

The biggest problem with training and testing the model on the ImageNet dataset is that the resulting images lose most of their color. I see the paper showing black-and-white images; is there a problem with color generation in the model? Or am I not allowed to use the ImageNet dataset?

notebook won't open from GCP instance

Hi, I'm trying to run your example notebook on a google cloud platform server. I successfully installed the requirements and the conda env.

However, after git cloning the repo, when I run
jupyter notebook
in n2v dir, the URL link doesn't work after I substitute the generated IP address with the external IP address of the instance I'm on.

It seems to be an issue specific to N2V: I can run jupyter notebooks from https://github.com/vanvalenlab/deepcell-tf without an issue.

Thanks,
Noah

strange results with out-of-sample prediction

Hi all,
I'm running into some strange results when making predictions on images not used in training. Both images were produced from the same microscope setup, so the assumption is noise should be similar.

I've included the python files used to train and predict, based on the 2D RGB examples given in the n2v repo.

image 251 was cropped then used to train the u-net.
Then this model was used to make predictions for image 251, and another 'very noisy' image which is mostly out of focus.

The prediction on image 251 seems to have the effect of reducing some noise.
The prediction on the 'very noisy image' seems to change the image's entire color map. This is the main issue.

https://drive.google.com/drive/folders/1h2C4qWxS7g0NeaiM6-Guy69zEbue-qhr?usp=sharing

Generate patches from list

Generate patches from list with num_patches_per_img=n will extract n patches from each list item.

If one list item corresponds to 'SYXC' with (50, 1000, 1000, 1) it will only return 4 patches, but a user would expect 50*4 patches.

This has to be documented better or the behavior has to change.
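
For illustration, a hypothetical call matching the report above (datagen is assumed to be an N2V_DataGenerator instance and imgs the loaded list; the patch shape is an assumption):

# One list item of shape (50, 1000, 1000, 1) ('SYXC') currently yields
# num_patches_per_img patches in total, not num_patches_per_img per S-slice.
X = datagen.generate_patches_from_list(imgs, shape=(64, 64), num_patches_per_img=4)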

Channel last in Prediction

The channel-wise normalization expects the images to have channel last. This is not necessarily given during prediction. Moving the channel to the last position in n2v.predict() should fix this.
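
A minimal sketch of the suggested fix (assuming img and its axes string are available inside predict):

import numpy as np

# Move the channel axis to the last position before channel-wise normalization.
if 'C' in axes:
    img = np.moveaxis(img, axes.index('C'), -1)
    axes = axes.replace('C', '') + 'C'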

Error messages while using the 2D example SEM in google colab

When running the 2D SEM example of n2v in Google Colab I now receive several error messages. It was working fine yesterday.

The main error happens during training. One epoch is completed successfully, but then it crashes:

W0620 10:18:39.233062 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.

W0620 10:18:47.475789 140231502292864 lazy_loader.py:50]
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:

W0620 10:18:47.630948 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/csbdeep/utils/tf.py:240: The name tf.summary.image is deprecated. Please use tf.compat.v1.summary.image instead.

W0620 10:18:48.145319 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/csbdeep/utils/tf.py:268: The name tf.summary.merge is deprecated. Please use tf.compat.v1.summary.merge instead.

W0620 10:18:48.149436 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/csbdeep/utils/tf.py:275: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.

Epoch 1/10
20/20 [==============================] - 23s 1s/step - loss: 1.2110 - n2v_mse: 1.2110 - n2v_abs: 0.8920 - val_loss: 1.5108 - val_n2v_mse: 1.5108 - val_n2v_abs: 0.9683

InvalidArgumentError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1355 try:
-> 1356 return fn(*args)
1357 except errors.OpError as e:

12 frames
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,?,?,1]
[[{{node Placeholder}}]]

During handling of the above exception, another exception occurred:

InvalidArgumentError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1368 pass
1369 message = error_interpolation.interpolate(message, self._graph)
-> 1370 raise type(e)(node_def, op, message)
1371
1372 def _extend_graph(self):

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,?,?,1]
[[node Placeholder (defined at /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517) ]]

Original stack trace for 'Placeholder':
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/usr/local/lib/python3.6/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelapp.py", line 477, in start
ioloop.IOLoop.instance().start()
File "/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 450, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 432, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/zmqshell.py", line 533, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
history = model.train(X, X_val)
File "/usr/local/lib/python3.6/dist-packages/n2v/models/n2v_standard.py", line 218, in train
callbacks=self.callbacks, verbose=1)
File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py", line 94, in fit_generator
callbacks.set_model(callback_model)
File "/usr/local/lib/python3.6/dist-packages/keras/callbacks.py", line 54, in set_model
callback.set_model(model)
File "/usr/local/lib/python3.6/dist-packages/csbdeep/utils/tf.py", line 213, in set_model
self.gt_outputs = [K.placeholder(shape=_gt_shape(K.int_shape(x))) for x in self.model.outputs]
File "/usr/local/lib/python3.6/dist-packages/csbdeep/utils/tf.py", line 213, in
self.gt_outputs = [K.placeholder(shape=_gt_shape(K.int_shape(x))) for x in self.model.outputs]
File "/usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py", line 517, in placeholder
x = tf.placeholder(dtype, shape=shape, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 2143, in placeholder
return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 6262, in placeholder
"Placeholder", dtype=dtype, shape=shape, name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
op_def=op_def)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 2005, in init
self._traceback = tf_stack.extract_stack()

Another issue also happened earlier:

# a name used to identify the model
model_name = 'n2v_2D'

# the base directory in which our model will live
basedir = 'models'

# We are now creating our network model.
model = N2V(config, model_name, basedir=basedir)

Errors:

WARNING: Logging before flag parsing goes to stderr.
W0620 10:18:26.427209 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

W0620 10:18:26.472275 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.

W0620 10:18:26.503654 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

W0620 10:18:26.519927 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.

W0620 10:18:26.521225 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.

W0620 10:18:29.414467 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.

W0620 10:18:29.695360 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

W0620 10:18:30.005542 140231502292864 deprecation_wrapper.py:119] From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:2018: The name tf.image.resize_nearest_neighbor is deprecated. Please use tf.compat.v1.image.resize_nearest_neighbor instead.

Improve Jupyter-Notebook Documentation

The information about the number of epochs used for proper training should be highlighted (red and large font).

In the 3D example we should additionally explain that more training-epochs could significantly improve the result.

why does it work so well (theoretical arguments)?

After reading your paper and being impressed by the results I was curious of how you implemented the blind-spot training in keras, so thanks for sharing your code! From looking at it (and what you also describe in the paper) it seems that you create image patches, randomly select pixel values (that are sufficiently far apart from one another) whose values you change and train a standard U-net on mean square error to predict the original values of the changed pixels, disregarding all pixels that were not masked.
I am kind of puzzled on why this works so well. What exactly keeps the network from still learning the identity there?
Say, we select a pixel whose (noisy observed) value is 42 and we change its value to 30 through the masking procedure. Now I would assume the network should still learn to output a value very close to 42 in order to minimize the loss. Why is it not doing that and instead even comes up with a noise removed value?
Did I miss something in the code that takes care of that or is it some more fundamental part of the idea that I did not get right? I would be glad if you could help me out on that.
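
For reference, a rough sketch of the masked loss described in the question (not the repository's actual implementation; the mask marks the manipulated blind-spot pixels, and only those contribute to the loss):

import numpy as np

def masked_mse(original, prediction, mask):
    # mask == 1 at blind-spot pixels whose input values were replaced.
    # The network never sees the original value at those positions, so it
    # cannot copy the input there and must predict from the surrounding context.
    diff = (prediction - original) * mask
    return np.sum(diff ** 2) / np.maximum(np.sum(mask), 1)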

Model does not exist or cannot be loaded

I managed to do the training of the data. Nevertheless, every time I try to do the prediction, I get the same error (see below). Suggestions?

Note: I am trying this on my local PC using Fiji. I have not yet tried using a Jupyter notebook. I did the training in Fiji as well.

name=Thu Jun 25 16:43:12 PDT 2020 last checkpoint, description=null, cite=[{text=Krull, A. and Buchholz, T. and Jug, F. Noise2void - learning denoising from single noisy images.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019), doi=arXiv:1811.10980}], authors=null, documentation=null, tags=[denoising, unet2d], license=null, format_version=0.2.0-csbdeep, language=java, framework=tensorflow, source=n2v, test_input=testinput.tif, test_output=testoutput.tif, inputs=[{name=input, axes=byxc, data_type=float32, data_range=[-inf, inf], halo=[0, 22, 22, 0], shape={min=[1, 16, 16, 1], step=[1, 16, 16, 0]}}], outputs=[{name=activation_11/Identity, axes=byxc, data_type=float32, data_range=[-inf, inf], shape={reference_input=input, scale=[1.0, 1.0, 1.0, 1.0], offset=[0, 0, 0, 0]}}], training={source=de.csbdresden.n2v.train.N2VTraining, kwargs={batchSize=64, learningRate=4.0E-4, trainDimensions=2, neighborhoodRadius=5, numEpochs=300, numStepsPerEpoch=200, patchShape=64, stepsFinished=60000}}, prediction={weights={source=./variables/variables}, preprocess=[{spec=de.csbdresden.n2v.predict.N2VPrediction::preprocess, kwargs={mean=[193.19666], stdDev=[30.815493]}}], postprocess=[{spec=de.csbdresden.n2v.predict.N2VPrediction::postprocess, kwargs={mean=[193.19666], stdDev=[30.815493]}}], dependencies=./dependencies.yaml}}
N2V prediction mean : 193.19666
N2V prediction stdDev: 30.815493
[INFO] Using native TensorFlow version: TF 1.14.0 GPU (CUDA 10.0, CuDNN >= 7.4.1)
[INFO] Loading TensorFlow model Thu Jun 25 16:43:12 PDT 2020 last checkpoint 2020-06-25T23:43:12.563207Z from source file file:/C:/Users/GarzonCoral/Desktop/n2v-3850187345415030802.bioimage.io.zip
[INFO] Unpacking dependencies.yaml
java.io.FileNotFoundException: C:\Fiji.app\models\Thu Jun 25 16:43:12 PDT 2020 last checkpoint 2020-06-25T23:43:12.563207Z\dependencies.yaml (The filename, directory name, or volume label syntax is incorrect)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.(FileOutputStream.java:213)
at java.io.FileOutputStream.(FileOutputStream.java:162)
at net.imagej.tensorflow.util.UnpackUtil.unZip(UnpackUtil.java:149)
at net.imagej.tensorflow.util.UnpackUtil.unZip(UnpackUtil.java:131)
at net.imagej.tensorflow.DefaultTensorFlowService.downloadAndUnpackResource(DefaultTensorFlowService.java:407)
at net.imagej.tensorflow.DefaultTensorFlowService.modelDir(DefaultTensorFlowService.java:384)
at net.imagej.tensorflow.DefaultTensorFlowService.loadCachedModel(DefaultTensorFlowService.java:131)
at net.imagej.modelzoo.consumer.model.tensorflow.TensorFlowModel.loadModelFile(TensorFlowModel.java:145)
at net.imagej.modelzoo.consumer.model.tensorflow.TensorFlowModel.loadModel(TensorFlowModel.java:110)
at net.imagej.modelzoo.DefaultModelZooArchive.createModelInstance(DefaultModelZooArchive.java:101)
at net.imagej.modelzoo.consumer.DefaultModelZooPrediction.loadModel(DefaultModelZooPrediction.java:107)
at net.imagej.modelzoo.consumer.DefaultModelZooPrediction.run(DefaultModelZooPrediction.java:81)
at de.csbdresden.n2v.predict.N2VPrediction.run(N2VPrediction.java:89)
at de.csbdresden.n2v.predict.N2VPrediction.predictPadded(N2VPrediction.java:105)
at de.csbdresden.n2v.command.N2VPredictCommand.run(N2VPredictCommand.java:123)
at org.scijava.command.CommandModule.run(CommandModule.java:199)
at org.scijava.module.ModuleRunner.run(ModuleRunner.java:168)
at org.scijava.module.ModuleRunner.call(ModuleRunner.java:127)
at org.scijava.module.ModuleRunner.call(ModuleRunner.java:66)
at org.scijava.thread.DefaultThreadService.lambda$wrap$2(DefaultThreadService.java:228)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[ERROR] Model does not exist or cannot be loaded. Exiting.

n_tiles must be updated after line 379 to reflect new_axes

new_axes = axes
if 'C' in axes:
    new_axes = axes.replace('C', '') + 'C'
    normalized = self.__normalize__(np.moveaxis(img, axes.index('C'), -1), means, stds)
else:
    normalized = self.__normalize__(img[..., np.newaxis], means, stds)
    normalized = normalized[..., 0]
pred = self._predict_mean_and_scale(normalized, axes=new_axes, normalizer=None, resizer=resizer, n_tiles=n_tiles)[0]

N2V Patch Generation

It is not possible to create patches which have the same size as the input data.

It looks like the error is somewhere in here. Probably a +1 or >= is missing.

Issue with plotting

examples/3D/02_prediction.ipynb:

The subplot for pred shouldn't use percentiles from img for vmin and vmax.
It also doesn't work for 3-channel RGB images when pred is a float image with pixel values not in the range 0..1.

3D training error

I am trying to run the notebook examples/3D/01_training.ipynb.
The code runs fine in epoch 1, but it fails in epoch 2 and shows:

tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,?,?,?,1]
[[{{node Placeholder}} = Placeholderdtype=DT_FLOAT, shape=[?,?,?,?,1], _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
[[{{node lambda_2/clip_by_value/_809}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_451_lambda_2/clip_by_value", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

It seems to be an error in n2v_standard.py at line 216:

history = self.keras_model.fit_generator(generator=training_data, validation_data=(validation_X, validation_Y),epochs=epochs, steps_per_epoch=steps_per_epoch, callbacks=self.callbacks, verbose=1)

Can you help me with this?
BTW, my tensorflow version is 1.12.0
keras version is 2.2.1
numpy version is 1.16.3

Reproducing the paper results

Hello, I have an issue reproducing the quantitative results of the paper.
Even though I set up the same parameters as in the paper, I couldn't reproduce the paper's result for BSD68 with sigma=25 additive white Gaussian noise.
Could you share the original code used to obtain the paper result? I want to cite your paper, but I couldn't reproduce it exactly.
By original code I mean the training and testing code for BSD400 and BSD68, including the dataset.

Clipping is missing in noisy image synthesis

Hi N2V team!
At first, thanks for your amazing work and reproducible code.
I can reproduce your result and obtain PSNR from 27.62 -> 27.68 for BSD68 grey level denoising.

However, I find there is a mistake in your code.
When you synthesize the Gaussian noisy image, you should always clip and transform the values into uint8 (remember, it is still an image).
Unfortunately, you did not clip them. (If you look at the training or test data, their data range can be from -40 to 350.)

I have fixed the bug when reproducing your result. If we add clipping with np.clip(x, 0, 255.) in training and testing,
the final PSNR will be from 27.42 -> 27.49 (PSNR lower than before).
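
For example, a sketch of the suggested synthesis with clipping (sigma=25 as above; the clean image here is a random stand-in):

import numpy as np

clean = np.random.rand(64, 64) * 255.0                    # stand-in for a clean image in [0, 255]
noisy = clean + np.random.normal(0.0, 25.0, clean.shape)  # additive white Gaussian noise, sigma=25
noisy = np.clip(noisy, 0.0, 255.0)                        # keep values in the valid image range
noisy = noisy.astype(np.uint8)                            # quantize back to 8-bit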

However, this issue will not influence your major merit.
Please fix the mistake in your code, which will benefit the community and help the community always make a fair comparison.

Thank you!

A question about training with RGB images

Hi, nice work. I looked at the 2D example and found that you augment data of shape N x H x W x 1 with a mask, so that the final input size is N x H x W x 2. When applying this to RGB images, does that mean we should augment the input to N x H x W x 6? Thank you in advance.

Missing plt.show() in denoising2D_SEM/02_prediction.ipynb

Not sure whether this is the right place to report a problem, but I think there are plt.show() commands missing in the example script denoising2D_SEM/02_prediction.ipynb in sections 5 and 6.
I have very little experience with Python, but I could only get the script to show the plots when adding a plt.show() at the end of these two sections.

5D data is not loaded correctly

Trying to load 5D data, the dimensions are interpreted wrong.
The dimensions in my tif file are ordered: TCZYX
I run:
imgs = datagen.load_imgs_from_directory(directory = "E:/PROJECTS/Olivier/n2v images/Training", dims='TCZYX')

# Let's look at the shape of the image
print(imgs[0].shape)

and get (5, 2, 2048, 2048, 301), which corresponds to TCYXZ.
It should be TZYXC.

Thanks.

Add some more details and explanation to the sample notebooks

Especially for the section where we load data, we often see people struggling to understand which dimensions, in which order, to specify. Even if they use Fiji, they still need to know to invert the order.
Then, after loading, we show the shape of the loaded data, which is confusing because we shuffle dimensions around (and add one).

Suggestion: add additional cells and text that first work out which axis order to use, then some more text that explains why the shape changes after loading.

Thanks!

Problems in prediction script

We have the following problems in the prediction script:

  1. It does not work when you have multiple input files (This bug is fixed but not merged yet)
  2. The output files should be named according to the input files.

Output is the same as the input; is something wrong with the input format?

I got a result that is the same as the input, but noticed that the loss is decaying well. I have used the script below. Can you please help me figure out where I am going wrong? My input image is a TIFF, but a gray image; I am using a single channel only.

imgs = datagen.load_imgs_from_directory(directory = "data/input.tif", dims='YXC')
print(imgs[0].shape,imgs[1].shape)
imgs[0]=imgs[0][:,:,:, 0:1]
imgs[1]=imgs[1][:,:,:, 0:1]
print(imgs[0].shape)
print(imgs[1].shape)
the input dimensions are 
(1, 1024, 1024, 4) (1, 1024, 1024, 4)
(1, 1024, 1024, 1)
(1, 1024, 1024, 1)

X = datagen.generate_patches_from_list(imgs[:1], shape=(128,128))
Generated patches: (392, 128, 128, 1)

config = N2VConfig(X, unet_kern_size=3, 
                   train_steps_per_epoch=200, train_epochs=100, train_loss='mse', batch_norm=True, 
                   train_batch_size=128, n2v_perc_pix=0.198, n2v_patch_shape=(64, 64), 
                   unet_n_first = 96,
                   n2v_manipulator='uniform_withCP', n2v_neighborhood_radius=3)

###### Prediction######
input_train = imread('data/test_crop_1.tif')
input_train1=input_train[:,:,0:1]
pred_train1 = model.predict(input_train1, axes='YXC')

The result is the same as the input; is something wrong with the data format for both train and test?

RAM requirement for 4D dataset

Hi,

I am trying to train a model with a set of 4D movies. Each file is about 950MB big and the shape of the images are:
[211, 9, 512, 512, 1]
How much RAM do I need for a dataset like this? Also, will the size of patches affect the RAM usage?

Thanks!

Best,

Jack

Placeholder error

Hello, similar to the error reported yesterday, I am getting the following trace when running 3D data (including the provided example):

File "n2v_CL.py", line 31, in
history = model.train(X, X_val)
File "/home/shared/chris.law/n2v/n2v/models/n2v_standard.py", line 218, in train
callbacks=self.callbacks, verbose=1)
File "/home/shared/chris.law/.local/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/home/shared/chris.law/.local/lib/python3.6/site-packages/keras/engine/training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "/home/shared/chris.law/.local/lib/python3.6/site-packages/keras/engine/training_generator.py", line 94, in fit_generator
callbacks.set_model(callback_model)
File "/home/shared/chris.law/.local/lib/python3.6/site-packages/keras/callbacks.py", line 54, in set_model
callback.set_model(model)
File "/home/shared/chris.law/.local/lib/python3.6/site-packages/csbdeep/utils/tf.py", line 213, in set_model
self.gt_outputs = [K.placeholder(shape=_gt_shape(K.int_shape(x))) for x in self.model.outputs]
File "/home/shared/chris.law/.local/lib/python3.6/site-packages/csbdeep/utils/tf.py", line 213, in
self.gt_outputs = [K.placeholder(shape=_gt_shape(K.int_shape(x))) for x in self.model.outputs]
File "/home/shared/chris.law/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 517, in placeholder
x = tf.placeholder(dtype, shape=shape, name=name)
File "/home/shared/chris.law/.conda/envs/PythonGPU_CL/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 1747, in placeholder
return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
File "/home/shared/chris.law/.conda/envs/PythonGPU_CL/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 5206, in placeholder
"Placeholder", dtype=dtype, shape=shape, name=name)
File "/home/shared/chris.law/.conda/envs/PythonGPU_CL/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/shared/chris.law/.conda/envs/PythonGPU_CL/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/home/shared/chris.law/.conda/envs/PythonGPU_CL/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/home/shared/chris.law/.conda/envs/PythonGPU_CL/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in init
self._traceback = tf_stack.extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,?,?,?,1]
[[node Placeholder (defined at /home/shared/chris.law/.local/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:517) = Placeholderdtype=DT_FLOAT, shape=[?,?,?,?,1], _device="/job:localhost/replica:0/task:0/device:GPU:0"]]
[[{{node lambda_2/clip_by_value/_809}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_451_lambda_2/clip_by_value", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

I am using the following versions:
n2v = 0.1.4 (updated this 10am EST on 2019/07/16)
tensorflow GPU = 1.12.0
keras = 2.2.4
csbdeep = 0.4.0

I have successfully run the previous version of n2v (before the commits on June 17th, I think, though I'm not certain - it was necessary to put the data into separate numpy arrays for validation and training).

Modelzoo Export

Export models according to the modelzoo specifications.

  • modelzoo.yml

Input images have to be 32-bit

The input image to model.predict(img, axes='YX', n_tiles=(2,1)) has to be a 32-bit image.
Using 16-bit images results in hot pixels in the background with values close to the saturation value of 16-bit.
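
A simple workaround sketch (assuming img was loaded as a 16-bit array and model is a trained N2V model):

import numpy as np

img = img.astype(np.float32)                      # convert 16-bit input to 32-bit float
pred = model.predict(img, axes='YX', n_tiles=(2, 1))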

Random Channel Masking

Mask independently sampled pixels for each channel. Currently the same pixels in each channel get masked, as discussed in #3.
It seems to be safe to assume that noise is channel independent and signal is interdependent across channels. Therefore channel-wise masking could be beneficial.
