
pytorch_fnet's People

Contributors

calystay, counkomol, donovanr, fcollman, gregjohnso


pytorch_fnet's Issues

Specify dimension requirement

It looks like there is a requirement on the minimum number of z slices.

"ValueError: Input array must be at least length 32 in first dimension"

When batch-processing a large set of images, it would be great to skip the "bad" image and raise a flag, rather than failing outright.
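
A minimal pre-filtering sketch along those lines (filter_usable and MIN_Z are hypothetical, not part of pytorch_fnet; it assumes the inputs load as TIFF stacks with z as the first axis):

import warnings
import tifffile

MIN_Z = 32  # minimum length along the first (z) dimension, per the error message above

def filter_usable(paths):
    """Keep only images with enough z slices; warn about (and skip) the rest."""
    usable = []
    for path in paths:
        n_z = tifffile.imread(path).shape[0]
        if n_z < MIN_Z:
            warnings.warn('skipping %s: only %d z slices (< %d)' % (path, n_z, MIN_Z))
            continue
        usable.append(path)
    return usable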

"ValueError" in BufferedPatchDataset

Example error message:

Traceback (most recent call last):
  File "train_model.py", line 206, in <module>
    main()
  File "train_model.py", line 95, in main
    for data in ds_patch:
  File "/root/projects/pytorch_fnet/fnet/data/bufferedpatchdataset.py", line 61, in __getitem__
    return self.get_random_patch()
  File "/root/projects/pytorch_fnet/fnet/data/bufferedpatchdataset.py", line 81, in get_random_patch
    starts = np.array([np.random.randint(0, d-p) for d, p in zip(datum[0].size(), self.patch_size)])
  File "/root/projects/pytorch_fnet/fnet/data/bufferedpatchdataset.py", line 81, in <listcomp>
    starts = np.array([np.random.randint(0, d-p) for d, p in zip(datum[0].size(), self.patch_size)])
  File "mtrand.pyx", line 988, in mtrand.RandomState.randint
ValueError: low >= high
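
The error comes from np.random.randint(0, d - p) in get_random_patch: the call fails whenever an image dimension d is not strictly larger than the corresponding patch dimension p, because there is then no valid start index. A small illustration (patch_fits is a hypothetical helper, not part of the library):

def patch_fits(image_shape, patch_size):
    # np.random.randint(0, d - p) needs d - p >= 1, so every image dimension
    # must be strictly larger than the corresponding patch dimension.
    return all(d > p for d, p in zip(image_shape, patch_size))

print(patch_fits((16, 256, 256), (32, 64, 64)))  # False -> would raise "low >= high"
print(patch_fits((48, 256, 256), (32, 64, 64)))  # True  -> valid start indices exist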

pip install doesn't install submodules

pip install git+https://github.com/AllenCellModeling/pytorch_fnet.git installs the base package but does not install submodules such as fnet.nn_modules.

import fnet works and pulls in the module plus the names defined in its __init__.py.

import fnet.nn_modules fails; two changes are needed:

  1. setup.py needs to use packages=find_packages(exclude=['doc/*', 'docker/*', 'data/*', 'scripts/*', 'tests/*'])

  2. Create an __init__.py file for the nn_modules submodule (and for every submodule that should be included in the final package), structured like so:

from . import fnet_nn_2d
from . import fnet_nn_3d

Discovered while trying to implement something similar to the predict.py script in the base directory.
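
For reference, a minimal setup.py sketch along the lines of point 1 (the metadata values are placeholders, and note that find_packages exclude patterns match package names rather than file paths):

from setuptools import setup, find_packages

setup(
    name='fnet',      # placeholder metadata
    version='0.0.0',
    packages=find_packages(exclude=['doc*', 'docker*', 'data*', 'scripts*', 'tests*']),
)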

Make pre-prediction cropping, and post-prediction padding with zeros

Make a test-time dataset with cropping and padding options so that the prediction comes out the same size as the original image.

Test-time model-input modification options: pad/crop width and height parameters (negative to crop, positive to pad); the default is to crop down to the nearest multiple of 16.
At test time, reverse these modifications on the output.

Record these crop options/choices on the dataset object, and add a function to the dataset class that reverses the operation.

needs to be done before #62
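
A rough sketch of the crop-then-pad round trip (hypothetical helpers, not the project's implementation):

import numpy as np

def crop_to_multiple(img, multiple=16):
    """Crop each dimension down to the nearest multiple of `multiple`; return the crop and the original shape."""
    cropped_shape = [(d // multiple) * multiple for d in img.shape]
    slices = tuple(slice(0, n) for n in cropped_shape)
    return img[slices], img.shape

def pad_back(pred, original_shape):
    """Reverse operation: zero-pad a prediction back to the pre-crop shape."""
    pad = [(0, o - d) for o, d in zip(original_shape, pred.shape)]
    return np.pad(pred, pad, mode='constant', constant_values=0)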

Problem when running with another dataset

I was trying to use tiff-format images obtained after training and prediction (signal.tiff and target.tiff) to train the model, instead of czi-format images. I created a new csv file with path_signal and path_target columns and the corresponding paths.

But when I ran the training step, the following error appeared:
/home/xuecongf/.conda/envs/fnet/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
Using existing train/test split.
model loaded from: saved_models/dna/model.p
fnet_nn_3d | {} | iter: 50011
History loaded from: saved_models/dna/losses.csv
/home/.../.conda/envs/fnet/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
buffering images: 0%| | 0/30 [00:00<?, ?it/s]
<fnet.data.tiffdataset.TiffDataset object at 0x2b3ebbc2cf60>

Traceback (most recent call last):
  File "train_model.py", line 161, in <module>
    main()
  File "train_model.py", line 112, in main
    dataloader_train = get_dataloader(n_remaining_iterations, opts)
  File "train_model.py", line 31, in get_dataloader
    **opts.bpds_kwargs,
  File "/home/…/pytorch_fnet/fnet/data/bufferedpatchdataset.py", line 55, in __init__
    datum = dataset[datum_index]
  File "/home/…/pytorch_fnet/fnet/data/tiffdataset.py", line 49, in __getitem__
    im_out[0] = t(im_out[0])
  File "/home/…/pytorch_fnet/fnet/transforms.py", line 193, in __call__
    return scipy.ndimage.zoom(x, (self.factors), mode='nearest')
  File "/home/…/.conda/envs/fnet/lib/python3.6/site-packages/scipy/ndimage/interpolation.py", line 573, in zoom
    zoom = _ni_support._normalize_sequence(zoom, input.ndim)
  File "/home/…/.conda/envs/fnet/lib/python3.6/site-packages/scipy/ndimage/_ni_support.py", line 65, in _normalize_sequence
    raise RuntimeError(err)
RuntimeError: sequence argument must have length equal to input rank

csv&bashscript.zip
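
The RuntimeError at the bottom means the resize transform passes one zoom factor per expected 3D axis, while the loaded TIFF has a different number of axes (e.g. a 2D image); scipy.ndimage.zoom requires exactly one factor per array dimension. A standalone illustration (the factor values are placeholders):

import numpy as np
import scipy.ndimage

factors_3d = (1.0, 0.37, 0.37)   # three zoom factors, one per (z, y, x) axis
img_2d = np.zeros((512, 512))    # a 2D TIFF loads with only two axes

try:
    scipy.ndimage.zoom(img_2d, factors_3d, mode='nearest')
except RuntimeError as err:
    print(err)  # sequence argument must have length equal to input rank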

Syntax error in test script

Hello there, we are experiencing a syntax error while running the test script. Do you have any ideas? Thank you very much in advance.
File "scripts/train_model.py", line 3
DATASET = ${1:-dna}
^
SyntaxError: invalid syntax
---- script:
#!/bin/bash -x

DATASET=${1:-dna}
N_ITER=50000
RUN_DIR="saved_models/${DATASET}"
PATH_DATASET_ALL_CSV="data/csvs/${DATASET}.csv"
PATH_DATASET_TRAIN_CSV="data/csvs/${DATASET}/train.csv"
GPU_IDS=${2:-0}

Create cross modal registration code

Takes the csv from #64, attempts automated registration between the prediction and the MBP image, compares the result to the 'ground truth', and produces metrics of success for making figures.
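
One possible success metric, as a hedged sketch (the function name, grid, and matrix layout are illustrative, not from the project): apply both the estimated affine and the 'ground truth' affine from #64's csv to a grid of points and report the mean displacement between the two.

import numpy as np

def mean_displacement(est, gt, shape=(512, 512), n=10):
    """est, gt: dicts with keys M00, M10, M01, M11, B0, B1 (2x2 matrix plus offset)."""
    ys, xs = np.meshgrid(np.linspace(0, shape[0], n), np.linspace(0, shape[1], n))
    pts = np.stack([ys.ravel(), xs.ravel()])  # 2 x N grid points

    def apply(a, p):
        m = np.array([[a['M00'], a['M01']], [a['M10'], a['M11']]])
        b = np.array([[a['B0']], [a['B1']]])
        return m @ p + b

    return float(np.linalg.norm(apply(est, pts) - apply(gt, pts), axis=0).mean())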

error when I try to run ./train_model.sh dna 0

I loaded all the data into the /data/ folder and ran ./train_model.sh dna 0, but got the following error. Does anyone know the reason? Thanks.

Using existing train/test split.
DEBUG: Initializing new model!
*** Model ***
fnet.nn_modules.fnet_nn_3d.Net(**{})
iter: 0
gpu: [0]

0%| | 0/1 [00:00<?, ?it/s]
100%|██████████| 1/1 [00:18<00:00, 18.49s/it]
{'buffer_size': 1, 'npatches': 1200000}
Traceback (most recent call last):
  File "train_model.py", line 147, in <module>
    main()
  File "train_model.py", line 119, in main
    args, n_remaining_iterations, validation=True
  File "train_model.py", line 22, in get_dataloader
    ds = str_to_class(args.dataset_class)(**dataset_kwargs)
  File "/local_disk0/fnet/pytorch_fnet-master/fnet/data/czidataset.py", line 13, in __init__
    super().__init__(**kwargs)
  File "/local_disk0/fnet/pytorch_fnet-master/fnet/data/fnetdataset.py", line 45, in __init__
    self.df = pd.read_csv(self.path_csv)
  File "/databricks/python/lib/python3.6/site-packages/pandas/io/parsers.py", line 678, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/databricks/python/lib/python3.6/site-packages/pandas/io/parsers.py", line 424, in _read
    filepath_or_buffer, encoding, compression)
  File "/databricks/python/lib/python3.6/site-packages/pandas/io/common.py", line 218, in get_filepath_or_buffer
    raise ValueError(msg.format(_type=type(filepath_or_buffer)))
ValueError: Invalid file path or buffer object type: <class 'NoneType'>
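
The ValueError at the bottom means pandas was handed None as the csv path, i.e. the dataset csv (the train split in this case) was never resolved. A small illustrative guard, not part of the project, that fails earlier with a clearer message:

import os
import pandas as pd

def load_split_csv(path_csv):
    """Read the dataset csv, failing with a descriptive error if the path is unset or missing."""
    if path_csv is None or not os.path.exists(path_csv):
        raise FileNotFoundError(
            'dataset csv not found: %r (check the train/test split step and the '
            'PATH_DATASET_TRAIN_CSV used by train_model.sh)' % (path_csv,))
    return pd.read_csv(path_csv)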

Create a csv for registration testing

EM_image, EM_prediction_image, MBP_registration_target,M00,M10,M01,M11,B0,B1
Columns contain: path to the EM input image, path to the EM prediction image, path to the MBP raw IF image, and the 'ground truth' registration parameters (a 2x2 matrix M and offset B) describing how the EM prediction image should be transformed to match the MBP image.
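
A minimal sketch of writing such a csv with pandas (all paths and parameter values below are placeholders):

import pandas as pd

rows = [{
    'EM_image': 'data/em/img_000.tif',                  # placeholder paths
    'EM_prediction_image': 'results/em/img_000_pred.tif',
    'MBP_registration_target': 'data/mbp/img_000.tif',
    'M00': 1.0, 'M10': 0.0, 'M01': 0.0, 'M11': 1.0,     # 2x2 affine matrix entries
    'B0': 0.0, 'B1': 0.0,                               # translation offsets
}]
pd.DataFrame(rows).to_csv('data/csvs/registration_test.csv', index=False)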
