
Comments (8)

Paddy-Xu commented on July 23, 2024

Thanks!

Actually, this small trick seems to work 😂. I will check whether it is better to crop first when I have non-90-degree rotations.


import torchio as tio


class CustomQueue(tio.Queue):
    """Queue that applies random k * 90-degree rotations to each extracted patch."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Build the augmentation once: one 90-degree rotation range per axis,
        # each applied independently with probability 0.6.
        self.augment = tio.Compose([
            tio.RandomAffine(scales=(1, 1), degrees=(90, 90, 0, 0, 0, 0), translation=0, p=0.6),
            tio.RandomAffine(scales=(1, 1), degrees=(0, 0, 90, 90, 0, 0), translation=0, p=0.6),
            tio.RandomAffine(scales=(1, 1), degrees=(0, 0, 0, 0, 90, 90), translation=0, p=0.6),
        ])

    def __getitem__(self, index):
        sample_patch = super().__getitem__(index)  # a tio.Subject holding one patch
        return self.augment(sample_patch)
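
For reference, here is a minimal sketch of how this subclass could be wired up. The subjects list, patch size, and queue settings below are placeholders rather than values from this thread:

import torch
import torchio as tio

# `subjects` is assumed to be a list of tio.Subject instances defined elsewhere
subjects_dataset = tio.SubjectsDataset(subjects)  # no volume-level transform here
sampler = tio.data.UniformSampler(patch_size=128)
queue = CustomQueue(
    subjects_dataset,
    max_length=300,
    samples_per_volume=10,
    sampler=sampler,
    num_workers=4,  # workers used by the Queue itself
)
# Keep num_workers=0 in the DataLoader; the Queue already uses its own workers
loader = torch.utils.data.DataLoader(queue, batch_size=4, num_workers=0)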


romainVala commented on July 23, 2024

No, this is not equivalent:

The transformations are much more realistic if you perform them on the whole volume. A rotation, for instance, adds padding values at the borders of the FOV, so performing a rotation on small patches may not be a good idea (it is definitely not the same as rotating the whole volume before taking a patch).

I never tested it, but I think you can already do it: if you take the torchio.Queue (without a transform), you can apply a transform to each item... no?

Maybe another alternative would be a torchio RandomCrop transform (with a fixed patch size), similar to what the torchio.Queue does, so that you can reduce your input size before applying other augmentations, as proposed by #847.
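
To make the whole-volume approach concrete, here is a minimal sketch of the usual TorchIO pipeline, in which the affine is applied to the full volume before patches are sampled; the file names and parameter values are placeholders:

import torch
import torchio as tio

subject = tio.Subject(
    image=tio.ScalarImage('image.nii.gz'),  # placeholder path
    label=tio.LabelMap('label.nii.gz'),     # placeholder path
)
transform = tio.RandomAffine(degrees=10)    # applied to the whole volume
dataset = tio.SubjectsDataset([subject], transform=transform)
queue = tio.Queue(
    dataset,
    max_length=100,
    samples_per_volume=8,
    sampler=tio.data.UniformSampler(patch_size=64),
)
loader = torch.utils.data.DataLoader(queue, batch_size=2, num_workers=0)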


Paddy-Xu commented on July 23, 2024


Hi,

Thanks a lot for your reply! OK, I understand the borders will not be equivalent, but apart from that the results should be quite similar, especially if I only do rotations of around k * 90 degrees. My input volume is really huge and I simply cannot do these operations on the whole volume on the fly; I would have to store the transformed volumes on disk beforehand.

However, the input of a transform can be a torchio.Subject, torchio.Image, numpy.ndarray, torch.Tensor, or SimpleITK.Image; it cannot take a tio.Queue. Maybe I will have to modify the __getitem__ method defined in tio.Queue, or define another custom dataset class that takes a tio.Queue as input and applies the transformations inside its own __getitem__ before passing it to the PyTorch DataLoader?
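
One way to do the latter without touching tio.Queue itself would be a thin wrapper dataset, sketched below under the assumption that the Queue's __getitem__ returns a tio.Subject; the class name and structure are illustrative only:

import torch
import torchio as tio

class TransformedQueue(torch.utils.data.Dataset):
    """Wrap an untransformed tio.Queue and augment each extracted patch."""

    def __init__(self, queue, transform):
        self.queue = queue          # a tio.Queue built without a transform
        self.transform = transform  # e.g. the Compose of RandomAffine above

    def __len__(self):
        return len(self.queue)

    def __getitem__(self, index):
        patch = self.queue[index]   # a tio.Subject holding one patch
        return self.transform(patch)

The wrapper can then be passed to torch.utils.data.DataLoader in place of the queue.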


romainVala commented on July 23, 2024

Yes, you are right, it is not that easy. I forgot that the torch DataLoader, when instantiated with a tio.Queue dataset, returns a dict and not a torchio.Subject. But all the necessary information to create a Subject is there...
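
As an illustration of that last point, one could rebuild Subject instances from the collated dict, assuming the image in each Subject is stored under the key 'image'; the helper below is a sketch, not a TorchIO API:

import torchio as tio

def subjects_from_batch(batch):
    """Rebuild one tio.Subject per element of a batch collated by the DataLoader."""
    tensors = batch['image'][tio.DATA]    # shape (B, C, W, H, D)
    affines = batch['image'][tio.AFFINE]  # shape (B, 4, 4)
    subjects = []
    for tensor, affine in zip(tensors, affines):
        image = tio.ScalarImage(tensor=tensor, affine=affine.numpy())
        subjects.append(tio.Subject(image=image))
    return subjects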

Another argument is speed: with the Queue you can take several patches per volume but perform the volume-level transform only once.

If the input image is really too big, I would go for a compromise: say you want a patch of 128^3, then I would still do the volume transformation but start with a random crop to a target shape of, for instance, 212^3 (that way you have a smaller volume and can proceed as we do with 3D MRI).
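
A rough sketch of this compromise is given below: a random 212^3 crop followed by a whole-volume affine, before the Queue samples 128^3 patches. The random-crop helper and all hyperparameters are illustrative (TorchIO has no built-in random crop at this point, see #847), and the sketch assumes every dimension of the volume is at least 212 and that a list of tio.Subject instances called subjects exists:

import numpy as np
import torch
import torchio as tio

def random_crop(subject, target_shape=(212, 212, 212)):
    """Crop a subject to target_shape around a uniformly chosen origin."""
    shape = subject.spatial_shape  # (W, H, D)
    ini = [int(np.random.randint(0, s - t + 1)) for s, t in zip(shape, target_shape)]
    fin = [int(s - t - i) for s, t, i in zip(shape, target_shape, ini)]
    bounds = (ini[0], fin[0], ini[1], fin[1], ini[2], fin[2])  # voxels removed per border
    return tio.Crop(bounds)(subject)

def volume_transform(subject):
    subject = random_crop(subject)                # shrink first, so the affine is cheaper
    return tio.RandomAffine(degrees=10)(subject)  # whole-(cropped-)volume augmentation

dataset = tio.SubjectsDataset(subjects, transform=volume_transform)
queue = tio.Queue(
    dataset,
    max_length=40,
    samples_per_volume=8,
    sampler=tio.data.UniformSampler(patch_size=128),
)
loader = torch.utils.data.DataLoader(queue, batch_size=2, num_workers=0)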

Out of curiosity, which modality are you working with?


romainVala commented on July 23, 2024

I did not test PR #847, but I think it should solve your issue.
Maybe this transform needs to be improved so that it takes a patch sampler as an input argument; that would be necessary if you need to weight the locations of the chosen patches. (I guess the current implementation uses a uniform random patch distribution.)


Paddy-Xu commented on July 23, 2024


Thanks! That is a good idea. Just to be sure: the patch is generated after the volume transformation, so the cropping size does need to be larger than the patch size, otherwise the patches would all come from the same location. But after each batch, the volume will be cropped around a different center?

I am working with a special kind of CT scan of around 800 × 800 × 800 voxels.


romainVala commented on July 23, 2024

Yes, the target_shape of the RandomCropOrPad needs to be larger if you use the Queue with several patches per volume (which is a good idea to gain speed), and it will also help with the "border effect" you may get with affine and elastic deformations.

But yes, after the queue has selected its samples_per_volume patches, a new volume will be taken and a new RandomCropOrPad will pick a different center.


fepegar commented on July 23, 2024

Thank you both for sharing!
