
deepplantphenomics's Introduction

DEPRECATED

Deep Plant Phenomics is no longer actively maintained. It is available here for historical purposes - however, it is provided as-is with no updates or bug fixes planned.

See this thread for discussion.

Deep Plant Phenomics

Deep Plant Phenomics (DPP) is a platform for plant phenotyping using deep learning. Think of it as Keras for plant scientists.

DPP integrates Tensorflow for learning. This means that it is able to run on both CPUs and GPUs, and scale easily across devices.

Read the documentation for tutorials, or see the included examples. You can also read the paper.

DPP is maintained at the Plant Phenotyping and Imaging Research Center (P2IRC) at the University of Saskatchewan. 🌾🇨🇦

What's Deep Learning?

Principally, DPP provides deep learning functionality for plant phenotyping and related applications. Deep learning is a category of techniques which encompasses many different types of neural networks. Deep learning techniques lead the state of the art in many image-based tasks, including image classification, object detection and localization, image segmentation, and others.

What Can I Do With This?

This package provides two things:

1. Useful tools built on pre-trained neural networks

For example, calling tools.predict_rosette_leaf_count(my_files) will use a pre-trained convolutional neural network to estimate the number of leaves on each rosette plant.

2. An easy way to train your own models

For example, using a few lines of code you can easily use your data to train a convolutional neural network to rate plants for biotic stress. See the tutorial for how the leaf counting model was built.

Example Usage

Train a simple regression model:

import deepplantphenomics as dpp

model = dpp.RegressionModel(debug=True)

# 3 channels for colour, 1 channel for greyscale
channels = 3

# Setup and hyperparameters
model.set_batch_size(64)
model.set_image_dimensions(256, 256, channels)
model.set_maximum_training_epochs(25)
model.set_test_split(0.2)
model.set_validation_split(0.0)

# Load dataset of images and ground-truth labels
model.load_multiple_labels_from_csv('./data/my_labels.csv')
model.load_images_with_ids_from_directory('./data')

# Use a predefined model
model.use_predefined_model('vgg-16')

# Train!
model.begin_training()

Installation

  1. git clone https://github.com/p2irc/deepplantphenomics.git
  2. pip install ./deepplantphenomics

Note: The package now requires Python 3.6 or greater. Python 2.7 is no longer supported.

deepplantphenomics's People

Contributors

donovanlavoie, jboles31, jiansu-usask, jordan12376, jubbens, jvanaret, logan-ncc, nicohiggs, travis-simmons


deepplantphenomics's Issues

Input arrays differing dimensions

Having issues running semantic_segmentation_tool.py. All file locations are correct; however, when running the script I get 'ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 3 dimension(s) and the array at index 1 has 4 dimension(s)'.

I checked the example _rgb.png and it looks like it only has the three channels.

Full Error:

(dl4cv) F:\deepplantphenomics\examples>nano semantic_segmentation_tool.py (dl4cv) F:\deepplantphenomics\examples>python semantic_segmentation_tool.py 2019-10-08 21:41:36.599464: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll Performing segmentation... WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\deepplantpheno.py:195: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. 2019-10-08 21:41:39.034906: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll 2019-10-08 21:41:39.182886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.62 pciBusID: 0000:03:00.0 2019-10-08 21:41:39.190076: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll 2019-10-08 21:41:39.203686: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll 2019-10-08 21:41:39.215906: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll 2019-10-08 21:41:39.227219: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll 2019-10-08 21:41:39.241056: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll 2019-10-08 21:41:39.253723: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll 2019-10-08 21:41:39.278110: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2019-10-08 21:41:39.284197: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 
0 2019-10-08 21:41:39.288513: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 2019-10-08 21:41:39.301213: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: name: GeForce RTX 2070 major: 7 minor: 5 memoryClockRate(GHz): 1.62 pciBusID: 0000:03:00.0 2019-10-08 21:41:39.312740: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll 2019-10-08 21:41:39.322045: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll 2019-10-08 21:41:39.326896: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_100.dll 2019-10-08 21:41:39.336340: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_100.dll 2019-10-08 21:41:39.341408: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_100.dll 2019-10-08 21:41:39.351860: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_100.dll 2019-10-08 21:41:39.361200: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2019-10-08 21:41:39.367000: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0 2019-10-08 21:41:40.089071: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix: 2019-10-08 21:41:40.094314: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] 0 2019-10-08 21:41:40.097433: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0: N 2019-10-08 21:41:40.105783: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 
6308 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2070, pci bus id: 0000:03:00.0, compute capability: 7.5) WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\deepplantpheno.py:2043: string_input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(string_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...). WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\tensorflow_core\python\training\input.py:277: input_producer (from tensorflow.python.training.input) is deprecated and will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensor_slices(input_tensor).shuffle(tf.shape(input_tensor, out_type=tf.int64)[0]).repeat(num_epochs). If shuffle=False, omit the .shuffle(...). WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\tensorflow_core\python\training\input.py:189: limit_epochs (from tensorflow.python.training.input) is deprecated and will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.from_tensors(tensor).repeat(num_epochs). WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\tensorflow_core\python\training\input.py:198: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the tf.datamodule. 
WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\tensorflow_core\python\training\input.py:198: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use thetf.datamodule. WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\deepplantpheno.py:2045: WholeFileReader.__init__ (from tensorflow.python.ops.io_ops) is deprecated and will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced bytf.data. Use tf.data.Dataset.map(tf.read_file). WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\deepplantpheno.py:2117: The name tf.image.resize_images is deprecated. Please use tf.image.resize instead. WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\tensorflow_core\python\ops\image_ops_impl.py:1518: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version. Instructions for updating: Deprecated in favor of operator or tf.math.divide. WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\semantic_segmentation_model.py:257: batch (from tensorflow.python.training.input) is deprecated and will be removed in a future version. Instructions for updating: Queue-based input pipelines have been replaced by tf.data. Use tf.data.Dataset.batch(batch_size)(orpadded_batch(...)ifdynamic_pad=True). WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\layers.py:36: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead. WARNING:tensorflow: The TensorFlow contrib module will not be included in TensorFlow 2.0. 
For more information, please see: * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md * https://github.com/tensorflow/addons * https://github.com/tensorflow/io (for I/O related ops) If you depend on functionality not listed there, please file an issue. WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\deepplantpheno.py:910: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead. WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\deepplantpheno.py:910: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead. WARNING:tensorflow:From C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\deepplantpheno.py:203: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the tf.data module. 2019-10-08 21:41:41.950395: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll 2019-10-08 21:41:43.603376: W tensorflow/stream_executor/cuda/redzone_allocator.cc:312] Internal: Invoking ptxas not supported on Windows Relying on driver to perform ptx compilation. This message will be only logged once. 
2019-10-08 21:41:43.660530: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll Traceback (most recent call last): File "semantic_segmentation_tool.py", line 20, in <module> y = dpp.tools.segment_vegetation(images) File "C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\tools.py", line 34, in segment_vegetation predictions = net.forward_pass(x) File "C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\networks.py", line 162, in forward_pass y = self.model.forward_pass_with_file_inputs(x) File "C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\semantic_segmentation_model.py", line 311, in forward_pass_with_file_inputs total_outputs = np.append(total_outputs, img, axis=0) File "<__array_function__ internals>", line 6, in append File "C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\numpy\lib\function_base.py", line 4700, in append return concatenate((arr, values), axis=axis) File "<__array_function__ internals>", line 6, in concatenate ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 3 dimension(s) and the array at index 1 has 4 dimension(s)
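The final ValueError can be reproduced in isolation with NumPy. A minimal sketch (the shapes here are assumptions chosen for illustration, not the actual shapes in deepplantpheno.py): np.append with an axis argument requires both arrays to have the same number of dimensions.

```python
import numpy as np

# 3-D accumulator (a single image) vs. a 4-D batch of one image
total_outputs = np.zeros((8, 8, 3))
img = np.zeros((1, 8, 8, 3))

try:
    np.append(total_outputs, img, axis=0)
except ValueError as err:
    print(err)  # same "must have same number of dimensions" error

# Giving the accumulator a matching leading batch dimension resolves it
fixed = np.append(total_outputs[np.newaxis, ...], img, axis=0)
print(fixed.shape)  # (2, 8, 8, 3)
```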

Here is the output of pip freeze:

absl-py==0.8.0 alabaster==0.7.12 astor==0.8.0 atomicwrites==1.3.0 attrs==19.2.0 Babel==2.7.0 backcall==0.1.0 beautifulsoup4==4.8.1 bleach==3.1.0 certifi==2019.9.11 chardet==3.0.4 cloudpickle==1.2.2 colorama==0.4.1 cycler==0.10.0 Cython==0.29.13 cytoolz==0.10.0 dask==2.5.2 decorator==4.4.0 deepplantphenomics==0.0.0 defusedxml==0.6.0 docutils==0.15.2 entrypoints==0.3 gast==0.2.2 google-pasta==0.1.7 grpcio==1.24.1 h5py==2.10.0 idna==2.8 imageio==2.6.0 imagesize==1.1.0 imgaug==0.3.0 importlib-metadata==0.23 imutils==0.5.3 ipykernel==5.1.2 ipyparallel==6.2.4 ipython==7.8.0 ipython-genutils==0.2.0 ipywidgets==7.5.1 jedi==0.15.1 Jinja2==2.10.3 joblib==0.14.0 jsonschema==3.0.2 jupyter-client==5.3.3 jupyter-core==4.5.0 Keras==2.2.5 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 kiwisolver==1.1.0 Markdown==3.1.1 MarkupSafe==1.1.1 matplotlib==3.1.1 mistune==0.8.4 mkl-service==2.3.0 mock==3.0.5 more-itertools==7.2.0 nbconvert==5.6.0 nbformat==4.4.0 networkx==2.3 nose==1.3.7 notebook==6.0.1 numpy==1.17.2 olefile==0.46 opencv-contrib-python==4.1.1.26 opencv-python==4.1.1.26 opencv-python-headless==4.1.1.26 opt-einsum==3.1.0 packaging==19.2 pandocfilters==1.4.2 parso==0.5.1 pickleshare==0.7.5 Pillow==6.2.0 pluggy==0.13.0 prometheus-client==0.7.1 prompt-toolkit==2.0.10 protobuf==3.10.0 py==1.8.0 pycocotools==2.0 Pygments==2.4.2 pyparsing==2.4.2 pyrsistent==0.15.4 pytest==5.2.1 python-dateutil==2.8.0 pytz==2019.3 PyWavelets==1.0.3 pywin32==225 pywinpty==0.5.5 PyYAML==5.1.2 pyzmq==18.1.0 qtconsole==4.5.5 requests==2.22.0 scikit-image==0.15.0 scikit-learn==0.21.3 scipy==1.3.1 Send2Trash==1.5.0 Shapely==1.6.4.post2 six==1.12.0 snowballstemmer==2.0.0 soupsieve==1.9.4 Sphinx==2.2.0 sphinxcontrib-applehelp==1.0.1 sphinxcontrib-devhelp==1.0.1 sphinxcontrib-htmlhelp==1.0.2 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.2 sphinxcontrib-serializinghtml==1.1.3 tb-nightly==2.1.0a20191007 tensorboard==1.15.0 tensorflow-estimator==1.15.1 tensorflow-gpu==1.15.0rc3 termcolor==1.1.0 
terminado==0.8.2 testpath==0.4.2 toolz==0.10.0 tornado==6.0.3 tqdm==4.36.1 traitlets==4.3.3 urllib3==1.25.6 wcwidth==0.1.7 webencodings==0.5.1 Werkzeug==0.16.0 widgetsnbextension==3.5.1 wincertstore==0.2 wrapt==1.11.2 zipp==0.6.0

An error occurred while running pre-trained networks

I tried the pre-trained examples from tools. However, I always get:

File "arabidopsis_strain_classifier_test.py", line 20, in
y = dpp.tools.classify_arabidopsis_strain(images)
...

File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1745, in restore
raise ValueError("Can't load save_path when it is None.")
ValueError: Can't load save_path when it is None.

My Python version is 3.6.5.

Can't run `predict_rosette_leaf_count` twice

Running deepplantphenomics.tools.predict_rosette_leaf_count twice gives the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/s180893/.local/share/virtualenvs/openag_new_cv-G6jS5ejj/lib/python3.6/site-packages/deepplantphenomics/tools.py", line 17, in predict_rosette_leaf_count
    net = networks.rosetteLeafRegressor(batch_size=batch_size)
  File "/Users/s180893/.local/share/virtualenvs/openag_new_cv-G6jS5ejj/lib/python3.6/site-packages/deepplantphenomics/networks.py", line 108, in __init__
    self.model.add_input_layer()
  File "/Users/s180893/.local/share/virtualenvs/openag_new_cv-G6jS5ejj/lib/python3.6/site-packages/deepplantphenomics/deepplantpheno.py", line 1481, in add_input_layer
    " The input layer need to be the first layer added to the model.")
RuntimeError: Trying to add an input layer to a model that already contains other layers.  The input layer need to be the first layer added to the model.
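A minimal illustration of one plausible cause (an assumption suggested by the traceback, not the actual DPP internals): layer or graph state persists between constructions of the pre-trained network, so the second call tries to add an input layer to a model that still holds the first call's layers.

```python
# Stands in for graph/layer state that outlives a single call
_layers = []

def add_input_layer():
    # Mirrors the guard seen in the traceback: an input layer must be
    # the first layer added to the model.
    if _layers:
        raise RuntimeError('Trying to add an input layer to a model that '
                           'already contains other layers.')
    _layers.append('input')

add_input_layer()      # first call succeeds
try:
    add_input_layer()  # second call hits the leftover state
except RuntimeError as err:
    print(err)
```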

where is the data?

I tried python cifar10_test.py, then got the following error:

No such file or directory: './data/cifar10/train/train.txt'

where is the data?

typo?

Hi!

I think there's a typo in set_loss_function() for regression.

elif self.__problem_type == definitions.ProblemType.REGRESSION and loss_fn not in self.__supported_loss_fns_cls

Thanks!

compute_full_test_accuracy fails for classification

Got this message after training the cifar10 example:

Traceback (most recent call last):
  File "/home/jordan/deepplantphenomics/cifar10_classification.py", line 46, in <module>
    model.begin_training()
  File "/home/jordan/deepplantphenomics/deepplantphenomics/deepplantpheno.py", line 1279, in begin_training
    final_test_loss = self.compute_full_test_accuracy()
  File "/home/jordan/deepplantphenomics/deepplantphenomics/deepplantpheno.py", line 1425, in compute_full_test_accuracy
    all_losses = all_losses[mask_extra, ...]
IndexError: boolean index did not match indexed array along dimension 0; dimension is 0 but corresponding boolean dimension is 16000
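The IndexError can be reproduced directly in NumPy. A minimal sketch (the shapes are assumptions matching the message): a boolean mask can only index an array whose first dimension equals the mask's length, and here no losses were accumulated at all.

```python
import numpy as np

all_losses = np.empty(0)                 # dimension 0 has size 0
mask_extra = np.ones(16000, dtype=bool)  # boolean dimension is 16000

try:
    all_losses[mask_extra, ...]
except IndexError as err:
    print(err)  # boolean index does not match the indexed array
```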

Parsing error in load_multiple_labels_from_csv?

When running load_multiple_labels_from_csv and then model.load_images_with_ids_from_directory, I get an error stating that it couldn't find the images. Looking at the error, it seems the name is not being parsed correctly, as the characters "" are not in the CSV file for the file names.

File "C:\Users\Dane Nguyen\Anaconda3\envs\dl4cv\lib\site-packages\deepplantphenomics\deepplantpheno.py", line 1867, in load_images_with_ids_from_directory
    assert len(path) == 1, 'Found no image or multiple images for %r' % image_id
AssertionError: Found no image or multiple images for 'C1_06-14-2019_EXP2_A1.png'

TypeError when using tools

Hi,

I'm having the same error whenever I try to use any of the tools from the tools class.

System information :

Code :

import deepplantphenomics as dpp
import numpy as np
from PIL import Image
import os

output_dir = 'D:\workspace\plant\images'

my_files = ["D:\workspace\plant\images\exemple1.jpg"]

y = dpp.tools.count_canola_flowers(my_files)

Error message :
Traceback (most recent call last):
File "D:\workspace\plant\test.py", line 10, in
y = dpp.tools.count_canola_flowers(my_files)
File "D:\workspace\plant\deepplantphenomics\tools.py", line 49, in count_canola_flowers
predictions = net.forward_pass(x)
File "D:\workspace\plant\deepplantphenomics\networks.py", line 253, in forward_pass
y = self.model.forward_pass_with_interpreted_outputs(x)
File "D:\workspace\plant\deepplantphenomics\countception_object_counter_model.py", line 254, in forward_pass_with_interpreted_outputs
xx = self.forward_pass_with_file_inputs(x)
File "D:\workspace\plant\deepplantphenomics\countception_object_counter_model.py", line 240, in forward_pass_with_file_inputs
x_pred = self.forward_pass(image_data, deterministic=True)
File "D:\workspace\plant\deepplantphenomics\deepplantpheno.py", line 1106, in forward_pass
x = layer.forward_pass(x, deterministic)
File "D:\workspace\plant\deepplantphenomics\layers.py", line 62, in forward_pass
padding=self.padding)
File "C:\Users\tbaye\Miniconda3\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1027, in conv2d
padding = _execute.make_str(padding, "padding")
File "C:\Users\tbaye\Miniconda3\lib\site-packages\tensorflow\python\eager\execute.py", line 110, in make_str
(arg_name, repr(v)))
TypeError: Expected string for argument 'padding' not [[0, 0], [32, 32], [32, 32], [0, 0]].

The last line varies depending on the tool I'm using:

  • [[0, 0], [2, 2], [2, 2], [0, 0]] for the leaf count
  • [[0, 0], [1, 1], [1, 1], [0, 0]] for the vegetation segmentation

Thanks in advance !

set_learning_rate_decay

Hi!

I'm having an error when doing model.set_learning_rate_decay() after model.load_multiple_labels_from_csv().

self.__total_training_samples is set to 0 and is only changed when we do model.begin_training().

Thank you!

Loading custom datasets

When loading labels and images using load_multiple_labels_from_csv and
load_images_with_ids_from_directory errors can occur if the image directory contains png files other than those you expect to train with.

For example, following the Leaf Counting Tutorial and using the CVPPP dataset with:

model.load_multiple_labels_from_csv('./CVPPP/A1/A1.csv', id_column=0)
model.load_images_with_ids_from_directory('./CVPPP/A1')

The CVPPP dataset (derived from the IPPN dataset) contains several png files per plant: the RGB image (_rgb.png), the foreground mask (_fg.png), the leaf instance masks (_mask.png), etc.
The error occurs because all png files (line 2537, deepplantpheno.py) are passed to split_raw_data (line 721, deepplantpheno.py), resulting in unexpected input shapes in the model.

The documentation for load_images_with_ids_from_directory says: "If you have specified a list of files (for example, using the ID column in load_multiple_labels_from_csv()), then you can use this function to load those images from a directory." However, it seems to load all png files.

I believe the functionality of load_images_with_ids_from_directory should be changed to reflect the description in the documentation and remove the dependency on a specific image type (png).
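The behaviour the documentation describes could be sketched as follows. This is an illustration only, not the DPP implementation; the function name and the exact-match rule on base names are assumptions.

```python
import os

def select_images_by_id(filenames, image_ids):
    """Keep only the files whose base name matches a labelled ID,
    instead of loading every png in the directory."""
    selected = []
    for image_id in image_ids:
        matches = [f for f in filenames
                   if os.path.splitext(os.path.basename(f))[0] == image_id]
        # One labelled image per ID; masks and other pngs are ignored
        assert len(matches) == 1, 'Found no image or multiple images for %r' % image_id
        selected.append(matches[0])
    return selected

# A CVPPP-style directory mixes RGB images with mask files
files = ['plant001_rgb.png', 'plant001_fg.png', 'plant001_mask.png',
         'plant002_rgb.png', 'plant002_fg.png']
print(select_images_by_id(files, ['plant001_rgb', 'plant002_rgb']))
# ['plant001_rgb.png', 'plant002_rgb.png']
```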

ValueError on Training the Leaf Counter tutorial

Hello! I'm excited to start using this tool, and I'm in the middle of setting up an environment for it on a cloud based platform. To make sure everything is working as intended, I'm working through the leaf counting tutorial here.

I would say the biggest difference between my code and the tutorial is this line:

model.load_ippn_leaf_count_dataset_from_directory('/repos/deepplantphenomics/deepplantphenomics/test_data/test_Ara2013_Canon')

I'm wondering if the dataset here is somehow different from when the tutorial was first written?

Everything executes fine, but once I get to model.begin_training() I get a value error. Here's my code:

import sys
sys.path.append('/repos/plantcv')
sys.path.append('/repos/deepplantphenomics')

from plantcv import plantcv as pcv
import cv2

import deepplantphenomics as dpp

model = dpp.DPPModel(debug=True, save_checkpoints=False, tensorboard_dir='/mnt/Setup and Troubleshooting/tensorlogs', report_rate=20)
# 3 channels for colour, 1 channel for greyscale
channels = 3

# Setup and hyperparameters
model.set_batch_size(4)
model.set_number_of_threads(8)
model.set_image_dimensions(128, 128, channels)
model.set_resize_images(True)

model.set_problem_type('regression')
model.set_num_regression_outputs(1)
model.set_train_test_split(0.8)
model.set_learning_rate(0.0001)
model.set_weight_initializer('xavier')
model.set_maximum_training_epochs(500)

# Augmentation options
model.set_augmentation_brightness_and_contrast(True)
model.set_augmentation_flip_horizontal(True)
model.set_augmentation_flip_vertical(True)
model.set_augmentation_crop(True)

# Load all data for IPPN leaf counting dataset
model.load_ippn_leaf_count_dataset_from_directory('/repos/deepplantphenomics/deepplantphenomics/test_data/test_Ara2013_Canon')

# Define a model architecture
model.add_input_layer()

model.add_convolutional_layer(filter_dimension=[5, 5, channels, 32], stride_length=1, activation_function='tanh')
model.add_pooling_layer(kernel_size=3, stride_length=2)

model.add_convolutional_layer(filter_dimension=[5, 5, 32, 64], stride_length=1, activation_function='tanh')
model.add_pooling_layer(kernel_size=3, stride_length=2)

model.add_convolutional_layer(filter_dimension=[3, 3, 64, 64], stride_length=1, activation_function='tanh')
model.add_pooling_layer(kernel_size=3, stride_length=2)

model.add_convolutional_layer(filter_dimension=[3, 3, 64, 64], stride_length=1, activation_function='tanh')
model.add_pooling_layer(kernel_size=3, stride_length=2)

model.add_output_layer()

# Begin training the regression model
model.begin_training()

I'm executing it in a Jupyter Notebook. Everything executes fine until I get to that final line, which throws me this error:

03:27PM: Parsing dataset...

InvalidArgumentError                      Traceback (most recent call last)
/usr/local/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1575   try:
-> 1576     c_op = c_api.TF_FinishOperation(op_desc)
   1577   except errors.InvalidArgumentError as e:

InvalidArgumentError: Dimensions must be equal, but are 8 and 0 for 'DynamicPartition' (op: 'DynamicPartition') with input shapes: [8], [0].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-9-7fb97535fd52> in <module>()
      1 # Begin training the regression model
----> 2 model.begin_training()

/repos/deepplantphenomics/deepplantphenomics/deepplantpheno.py in begin_training(self, return_test_loss)
    896 
    897         with self.__graph.as_default():
--> 898             self.__assemble_graph()
    899 
    900             # Either load the network parameters from a checkpoint file or start training

/repos/deepplantphenomics/deepplantphenomics/deepplantpheno.py in __assemble_graph(self)
    557                                            self.__validation_split, self.__all_moderation_features,
    558                                            self.__training_augmentation_images, self.__training_augmentation_labels,
--> 559                                            self.__split_labels)
    560 
    561                 # parse the images and set the appropriate environment variables

/repos/deepplantphenomics/deepplantphenomics/loaders.py in split_raw_data(images, labels, test_ratio, validation_ratio, moderation_features, augmentation_images, augmentation_labels, split_labels)
     41     # create partitions, we set train/validation to None if they're not being used
     42     if test_ratio != 0 and validation_ratio != 0:
---> 43         train_images, test_images, val_images = tf.dynamic_partition(images, mask, 3)
     44         train_labels, test_labels, val_labels = tf.dynamic_partition(labels, mask, 3)
     45     elif test_ratio != 0 and validation_ratio == 0:

/usr/local/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/gen_data_flow_ops.py in dynamic_partition(data, partitions, num_partitions, name)
    607     _, _, _op = _op_def_lib._apply_op_helper(
    608         "DynamicPartition", data=data, partitions=partitions,
--> 609         num_partitions=num_partitions, name=name)
    610     _result = _op.outputs[:]
    611     _inputs_flat = _op.inputs

/usr/local/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
    785         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    786                          input_types=input_types, attrs=attr_protos,
--> 787                          op_def=op_def)
    788       return output_structure, op_def.is_stateful, op
    789 

/usr/local/anaconda3/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    452                 'in a future version' if date is None else ('after %s' % date),
    453                 instructions)
--> 454       return func(*args, **kwargs)
    455     return tf_decorator.make_decorator(func, new_func, 'deprecated',
    456                                        _add_deprecated_arg_notice_to_docstring(

/usr/local/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in create_op(***failed resolving arguments***)
   3153           input_types=input_types,
   3154           original_op=self._default_original_op,
-> 3155           op_def=op_def)
   3156       self._create_op_helper(ret, compute_device=compute_device)
   3157     return ret

/usr/local/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
   1729           op_def, inputs, node_def.attr)
   1730       self._c_op = _create_c_op(self._graph, node_def, grouped_inputs,
-> 1731                                 control_input_ops)
   1732 
   1733     # Initialize self._outputs.

/usr/local/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1577   except errors.InvalidArgumentError as e:
   1578     # Convert to ValueError for backwards compatibility.
-> 1579     raise ValueError(str(e))
   1580 
   1581   return c_op

ValueError: Dimensions must be equal, but are 8 and 0 for 'DynamicPartition' (op: 'DynamicPartition') with input shapes: [8], [0].

Any help would be appreciated. Thanks for putting all this together!
