
rising's Issues

[FeatureRequest] Queue for GPU transforms

Description & Proposal
Introduce a queue for GPU transforms to enable optional asynchronous augmentation when training and augmentation are performed on different GPUs.
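
A minimal sketch of the idea, assuming a background thread that moves batches to a dedicated augmentation GPU and hands them over through a bounded queue (all names here are assumptions, not rising API):

import queue
import threading

def async_augment(loader, gpu_transforms, aug_device="cuda:1", depth=2):
    # Sketch only: augment on a dedicated GPU in a background thread and
    # hand finished batches to the training loop via a bounded queue
    q = queue.Queue(maxsize=depth)

    def worker():
        for batch in loader:
            batch = {k: v.to(aug_device, non_blocking=True) for k, v in batch.items()}
            q.put(gpu_transforms(**batch))
        q.put(None)  # sentinel: loader exhausted

    threading.Thread(target=worker, daemon=True).start()
    while (batch := q.get()) is not None:
        yield batch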

[FeatureRequest] Improve Pull Request Template

Description
This issue is intended to provide a forum for improving our PR template.

Proposals

Additional todos:

  • update codeowners.md
  • update changelog

Furthermore, I would like to propose structuring the todos a bit more. Something like:
Developer (the person who implements the PR), RisingMember (someone who has the rights to add labels and projects and to modify codeowners, changelog, ...)

We could also introduce a reviewer section where we add some points which every reviewer should check (probably with a link to our contribution guideline).

Do you have additional points or do not like any of the above? @haarburger @justusschock

[Bug] GPU transforms are not fed GPU data for keys other than 'data'

Description
If I supply GPU transforms that operate on multiple keys to a DataLoader, then only data for the data key is transferred to the GPU prior to feeding it to the transforms. For example, if I'm doing spatial transforms (such as flipping), I want to flip both the data and labels - and I want it to happen on the GPU for speed.

The error seems to happen at line 187 of loading/loader.py:

if gpu_transforms is not None:
    if device is None:
        device = torch.cuda.current_device()
    to_gpu_trafo = ToDevice(device=device, non_blocking=pin_memory)
    gpu_transforms = Compose(to_gpu_trafo, gpu_transforms)
    gpu_transforms = gpu_transforms.to(device)

No keys argument is given to ToDevice so it uses its default which is keys = ('data',), c.f. transforms/tensor.py:52.
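
Until this is fixed, a possible workaround (a sketch; it assumes ToDevice and Compose are importable from rising.transforms and that ToDevice accepts the keys argument mentioned above) is to prepend an explicit ToDevice that covers all keys, using the GpuChecker transform from the reproduction below:

from rising.transforms import Compose, ToDevice

# Workaround sketch: move all relevant keys to the GPU before the actual transforms run
gpu_transforms = Compose(
    ToDevice(device='cuda', keys=('data', 'label')),
    GpuChecker(('data', 'label')),
)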

Environment

  • OS: Windows 10
  • Python version: Python 3.8.5
  • rising version: 0.2.0post0

Reproduction

from rising.loading import DataLoader
from rising.transforms.abstract import BaseTransform


def check_on_gpu(x):
    assert x.is_cuda
    return x


class GpuChecker(BaseTransform):
    def __init__(self, keys=('data',)):
        super().__init__(augment_fn=check_on_gpu, keys=keys)


if __name__ == '__main__':
    # Data is definitely on CPU...
    data = [
        { 'data': 1, 'label': 1 },
        { 'data': 2, 'label': 2 },
        { 'data': 3, 'label': 3 }
    ]

    # This will work
    print('Only data')
    loader = DataLoader(data, gpu_transforms=GpuChecker())
    for x in loader:
        print(x)

    # This will crash
    print('Both data and labels')
    loader = DataLoader(data, gpu_transforms=GpuChecker(('data', 'label')))
    for x in loader:
        print(x)

[FeatureRequest] `Per Sample` option for transforms

Description
Introduce a Per Sample option to transforms.

Proposal
Spatial Transforms:

  1. Transforms which use a PyTorch function -> use a loop inside the functional (see the sketch at the end of this issue).
  2. If possible, introduce an affine equivalent which can be stacked with other affine transforms and will support per sample augmentation without any loop.

Cropping: Probably only possible with an internal loop

Affine Transforms: already support this option

Intensity/Channel Transforms: tbd
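
For the loop-based variant (item 1 above), a minimal functional sketch (name and signature are assumptions):

import torch

def per_sample(fn, batch, **kwargs):
    # Hypothetical helper: apply fn to each sample separately, then re-stack the batch
    return torch.stack([fn(sample[None], **kwargs)[0] for sample in batch])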

[Bug] Mirror transformation does not accept prob keyword parameter

Description
When rising.transforms.spatial.Mirror is called with the prob keyword argument, it is stored in **kwargs and forwarded by Mirror.__init__() to the parent's BaseTransform.__init__(), which in turn forwards it in BaseTransform.forward() to functional.mirror(), which does not take prob as a keyword argument. The prob argument does not seem to be handled at all. The documentation for rising.transforms.spatial.Mirror is probably just wrong and should drop the prob argument.
Environment

  • OS: linux/ubuntu
  • Python version: 3.7.7
  • rising version: master
  • How did you install rising?
git clone git@github.com:PhoenixDL/rising.git
cd rising
pip install -e .

Reproduction
import rising.transforms as rtr
from rising.random import DiscreteParameter

rtr.Mirror(dims=DiscreteParameter([0, 1]), keys=["data"], prob=0.5)

[Bug] Wrong point/image trafo

Description
@mibaumgartner
The point transformation has to be the inverse of the image transformation (grid_sample applies a backward mapping: the matrix maps output coordinates into the input image, so point coordinates have to be moved the other way).

Environment

  • OS:
  • Python version:
  • rising version 0.2.0.post0
  • How did you install rising? [ pip]

Reproduction

from rising.transforms.functional.affine import parametrize_matrix
# (to_homogeneus_matrix below is the reporter's helper for converting to a homogeneous matrix)

imgT = parametrize_matrix(rotation=30, scale=1, translation=0, image_transform=True, batchsize=1, ndim=2)
pointT = parametrize_matrix(rotation=30, scale=1, translation=0, image_transform=False, batchsize=1, ndim=2)
print("img Trafo A:")
print(to_homogeneus_matrix(imgT))
print("point Trafo B:")
print(to_homogeneus_matrix(pointT))
print("C should be equal to B:")
print(to_homogeneus_matrix(imgT).inverse())

The code works if the points are given as yx instead of xy, because
matrix_revert_coordinate_order(pointT) produces matrix C.

Moreover, the order of the sub-transformations is wrong, because it changes under the inverse operation.

[FeatureRequest] ApplyMask transform

Description
Add a transform that applies a binary mask to an image

Proposal
Given a mask key, apply the mask to the image to set all background voxels to a predefined value

Are you able/willing to implement the feature yourself (with some guidance from us)?
Yes.
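
A minimal functional sketch of the proposal (apply_mask is a hypothetical name):

import torch

def apply_mask(data: torch.Tensor, mask: torch.Tensor, background: float = 0.0) -> torch.Tensor:
    # Keep voxels where the mask is nonzero, set everything else to `background`
    return torch.where(mask.bool(), data, torch.full_like(data, background))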

Update docs of Resize

The docs of Resize wrongly state that the new size must include the batch size and channels; this holds for both the functional interface and the module interface.

[Question] Multi-gpu support?

Description
Hi, I am interested in doing GPU transforms on batched data, but I am wondering if there is support for multi-GPU. Right now I am using PyTorch Lightning and its data modules. In distributed training, each GPU gets its own process to run a data module, so I am thinking that, by virtue of torch.cuda.current_device(), GPU transforms will just run correctly on the right GPU. I will test this theory tomorrow, but advice is appreciated. Thanks!
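
For reference, the loader already falls back to torch.cuda.current_device() when no device is given (see the GPU-transforms bug above), so one way to make the intent explicit (a sketch; dataset and my_transforms are placeholders, and it assumes the device argument visible in loading/loader.py is exposed on DataLoader) would be:

import torch
from rising.loading import DataLoader

# In each DDP process, current_device() should be the process-local GPU
loader = DataLoader(dataset, gpu_transforms=my_transforms, device=torch.cuda.current_device())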

[Bug] Progressive resizing only works with one process

Description
The internal state of progressive resizing is not updated correctly when used with multiple processes; presumably each worker holds its own copy of the transform, so step updates made in one process are not visible to the others.

Environment

  • OS: MacOS, Ubuntu
  • Python version: 3.7
  • rising version: 0.0.a

Reproduction
An additional integration test for this transform needs to be created.

[Bug] CacheDataset needs a picklable load function if num_workers > 0

Description
A pickle error occurs when num_workers > 0 and

  • mode == "extend"

  • the load function cannot be pickled

  • tqdm can also be used with multiprocessing; this should be addressed as well

Environment

  • OS: MacOS, Ubuntu
  • Python version: 3.7
  • rising version 0.0.a

Reproduction
Just change up the test case

Solution
Could try something like 'pathos' or 'dill' https://stackoverflow.com/questions/8804830/python-multiprocessing-picklingerror-cant-pickle-type-function
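
A quick sketch of the underlying problem and of why dill helps (the lambda stands in for a typical non-picklable load function):

import pickle
import dill

load_fn = lambda path: path  # lambdas cannot be pickled by the stdlib

try:
    pickle.dumps(load_fn)
except Exception as err:
    print(type(err).__name__)  # e.g. PicklingError

restored = dill.loads(dill.dumps(load_fn))  # dill serializes lambdas fine
print(restored('x'))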

[FeatureRequest] Interface for random number

Benefits:

  • much easier testing
  • complete user control over the parameters of transformations

e.g.
Mirror(dims=choose(0, 1, 2)) would sample the mirror axis
Mirror(dims=(0,)) would always mirror the 0th dim

Implementation:

  • based on classes
  • enable sampling of multiple values (probably as a tensor) in one iteration
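
A rough sketch of such a class-based interface (the actual rising API may differ):

import torch

class DiscreteParameter:
    # Sketch: sample one or more values from a fixed population per iteration
    def __init__(self, population):
        self.population = list(population)

    def sample(self, n=1):
        idx = torch.randint(len(self.population), (n,)).tolist()
        return torch.tensor([self.population[i] for i in idx])

With this, Mirror(dims=DiscreteParameter([0, 1, 2])) would draw the mirror axes anew each iteration, while Mirror(dims=(0,)) stays deterministic.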

[Bug] Docstring for random_crop claims it returns the crop corner but it doesn't

Description
When calling rising.transforms.functional.crop.random_crop the docstring says it returns the crop corner. It doesn't.

Environment

  • OS: Windows 10
  • Python version: Python 3.8.5
  • rising version: 0.2.0post0
  • How did you install rising? pip

Reproduction

import torch
from rising.transforms.functional import random_crop

x = torch.zeros(1, 1, 10, 10)
print(random_crop(x, (3, 3)))  # Should have returned both crop and corner

[FeatureRequest] FollowUp on docs

  • On my local build the collapse on the right does not work as in the PyTorch docs (no clue why)
  • Integrate notebooks properly
  • For each file: write an introductory section describing what the file does. (They will be included automatically)

[FeatureRequest] Determine transform call inside compose function instead of Batchtransformer

Description
Currently the Batchtransformer decides how to call the transformation:

if self._transforms is not None:
    if isinstance(batch, Mapping):
        batch = self._transforms(**batch)
    elif isinstance(batch, Sequence):
        batch = self._transforms(*batch)
    else:
        batch = self._transforms(batch)

Proposal
I would propose to call the transforms with a single positional argument inside the Batchtransformer and to add a keyword argument to the respective compose functions. By default the keyword argument does the same thing as the Batchtransformer today and tries to identify the optimal call by checking the batch type, but the user has the option to influence this (see the sketch below).

Additional context
Instead of automatically unpacking the batch when passing it to the transforms, the user can control this behaviour (if needed).
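
A rough sketch of the proposed keyword (the name unpack and its values are assumptions; the real Compose has more logic):

from collections.abc import Mapping, Sequence

class Compose:
    # Sketch: the compose decides how to unpack; 'auto' reproduces today's behaviour
    def __init__(self, *transforms, unpack='auto'):
        self.transforms = transforms
        self.unpack = unpack  # 'auto' | 'mapping' | 'sequence' | 'none'

    def __call__(self, batch):
        for trafo in self.transforms:
            if self.unpack == 'mapping' or (self.unpack == 'auto' and isinstance(batch, Mapping)):
                batch = trafo(**batch)
            elif self.unpack == 'sequence' or (self.unpack == 'auto' and isinstance(batch, Sequence)):
                batch = trafo(*batch)
            else:
                batch = trafo(batch)
        return batch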

Thoughts @justusschock ?

Lightning segmentation missing visualisation

Hi, and thanks for the great module! I went through your segmentation example and it is very useful for my project. Unfortunately, you don't show how one can visualise the predictions of the network (i.e. show the network's predictions on the test/validation dataset alongside the ground truth and the images).

Let me know if I just missed that! :)

[Bug] Compose doesn't forward pytorch functions to internal transforms

Description
Compose uses a plain Python list to store the transformations, which leads to problems when specific torch.nn.Module functions should also be applied to the children (e.g. the to() method).

Quick fix: change the list to a torch.nn.ModuleList, which limits us to rising transforms that are subclasses of torch.nn.Module (sketched below).
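
Sketched, the quick fix could look like this (simplified; not the actual rising implementation):

from torch import nn

class Compose(nn.Module):
    # ModuleList registers the transforms as children, so .to() and friends propagate
    def __init__(self, *transforms):
        super().__init__()
        self.transforms = nn.ModuleList(transforms)

    def forward(self, **data):
        for trafo in self.transforms:
            data = trafo(**data)
        return data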

Other solutions: look at which functions we really need and overwrite them appropriately (should at least fix problems with to())

Status: looking for a better solution because that is not really satisfying...
Any ideas @justusschock @haarburger ?

[FeatureRequest] GPU transforms

Description
Currently, the data loader cannot execute transformations on the GPU due to multiprocessing and pickling issues. A workaround is to manually apply the transforms during training, before the network gets the data.
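
Sketched, the workaround looks like this (model and gpu_transforms are placeholders):

# Apply the transforms manually in the training loop, after the batch is on the GPU
for batch in loader:
    batch = {k: v.cuda(non_blocking=True) for k, v in batch.items()}
    batch = gpu_transforms(**batch)
    output = model(batch['data'])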

Proposal
Native support of GPU transformations for the data loader.

[FeatureRequest] Generalize affine and grid transforms to support keys with different spatial size

Description
The grid is only created for the first element in the batch. This behaviour should be generalised to support keys with different spatial sizes without introducing computation overhead (e.g. computing and augmenting a grid multiple times even though the keys have the same spatial size).

It should be safe to ignore the number of channels and to focus only on the spatial size of the grid, because PyTorch does not use the channel count anyway (even though affine_grid wants it): https://github.com/pytorch/pytorch/blob/74b65c32be68b15dc7c9e8bb62459efbfbde33d8/aten/src/ATen/native/AffineGridGenerator.cpp#L34-L62
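
A sketch of the proposed de-duplication (an assumption, not the current rising implementation): build one grid per distinct spatial size and reuse it across keys.

import torch
import torch.nn.functional as F

def augment_keys(data: dict, theta: torch.Tensor) -> dict:
    grids = {}
    for key, tensor in data.items():
        size = tuple(tensor.shape[2:])  # spatial size only; channels do not matter
        if size not in grids:
            # the channel count passed to affine_grid does not influence the grid values
            grids[size] = F.affine_grid(theta, [tensor.shape[0], 1, *size], align_corners=False)
        data[key] = F.grid_sample(tensor, grids[size], align_corners=False)
    return data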

[Bug] Stacking affines/grid transforms with different keys

Description
When stacking affine/grid transforms with different keys per transform, the whole transformation will be applied to all keys.

Proposal
Temporarily, check the keys when stacking transforms and raise an error if the keys differ. Additionally, open a feature request issue to support this special case (I do not think we should add support for this right away, because it would make things fairly complicated).

[Bug] Scale(...,adjust_size=True) does not result in image with all the content of original image

Description
Using the Scale transformation rt.Scale(scale=1.25, adjust_size=True) with input size (D, H, W) = (32, 192, 192) does not produce a scaled version of the input with all of the original content at resolution (D, H, W) * 1.25. Instead, it produces an image with resolution (D, H, W) / 1.25 and the same content as rt.Scale(scale=1.25, adjust_size=False), which is, as expected, effectively a center crop of 3/4 of the input size.

  • Input (screenshot)
  • Scale(adjust_size=False) (screenshot)
  • Scale(adjust_size=True) (screenshot; note same content as False but resolution (D,H,W) / 1.25)

Maybe this stems from parametrize_matrix, create_scale, and _check_new_img_size in transforms/functional/affine.py: create_scale is called with the default image_transform=True inside parametrize_matrix, which appears to invert the scale, so the scale parameter of rt.Scale() effectively refers to the image scale and not the GridSampler scale. This does not seem to be handled correctly in _check_new_img_size.
Environment

  • OS: Ubuntu/Mint
  • Python version: 3.7
  • rising version: 0.2.0.post0+3.g2a580e9
  • How did you install rising? cloned master

Reproduction
Use any volumetric input image and the above mentioned transformations.
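
A concrete sketch of the reproduction (shapes are approximate; the exact rounding may differ):

import torch
import rising.transforms as rt

batch = {'data': torch.rand(1, 1, 32, 192, 192)}
out = rt.Scale(scale=1.25, adjust_size=True)(**batch)
# expected roughly (1, 1, 40, 240, 240); observed roughly (1, 1, 25, 153, 153)
print(out['data'].shape)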
