Comments (8)
Thanks!
Actually, this small trick seems to work:

```python
import torchio as tio

class CustomQueue(tio.Queue):
    def __getitem__(self, index):
        # Extract the patch as usual, then augment it
        sample_patch = super().__getitem__(index)
        augment = tio.Compose([
            tio.RandomAffine(scales=(1, 1), degrees=(90, 90, 0, 0, 0, 0), translation=0, p=0.6),
            tio.RandomAffine(scales=(1, 1), degrees=(0, 0, 90, 90, 0, 0), translation=0, p=0.6),
            tio.RandomAffine(scales=(1, 1), degrees=(0, 0, 0, 0, 90, 90), translation=0, p=0.6),
        ])
        return augment(sample_patch)
```
No, this is not equivalent: the transformations are much more realistic if you perform them on the whole volume. A rotation, for instance, adds padding values at the border of the field of view, so rotating small patches may not be a good idea (it is definitely not the same as rotating the whole volume before taking a patch).

I never tested it, but I think you can already do it: if you take the tio.Queue (without a transform), you can apply the transform to each item, no?

Maybe another alternative would be a torchio RandomCrop transform (with a fixed patch size), similar to what tio.Queue is doing, so that you can reduce your input size before applying other augmentations, as proposed by #847.
Hi,
Thanks a lot for your reply! OK, I understand the borders will not be equivalent, but apart from that the results should be quite similar, especially if I only do k * 90-degree rotations. My input volume is really huge and I simply cannot do these operations on the fly; I would have to store the transformed volumes to disk beforehand if doing a volume-wise transformation.

However, the input of a transform can be a torchio.Subject, torchio.Image, numpy.ndarray, torch.Tensor, or SimpleITK.Image; it cannot take a tio.Queue. Maybe I will have to manually modify the `__getitem__` method defined in tio.Queue, or define another custom dataset class that takes a tio.Queue as input and applies the transformations inside its own `__getitem__` before passing it to the PyTorch DataLoader?
Yes, you are right, it is not that easy. I forgot that the torch DataLoader, when instantiated with a tio.Queue dataset, returns dicts and not torchio.Subjects. But all the information needed to create a Subject is there...

Another argument is speed: with the Queue you can take several patches per volume, but you perform the transform only once.

If the input image is really too big, I would go for a compromise: say you want a patch of 128^3, then I would do the volume transformation but start with a random crop with a target shape of, for instance, 212^3. (That way you have a smaller volume and can proceed as we do with 3D MRI.)

Out of curiosity, which modality are you working with?
I did not test PR #847, but I think it should solve your issue.

Maybe this transform should be improved so that it takes a patch sampler as an input argument; that would be necessary if you need to weight the locations of the chosen patches. (I guess the current implementation samples patch locations uniformly at random.)
Thanks! That is a good idea. Just to be sure: the patch is generated after the volume transformation, so the cropping size does need to be larger than the patch size, otherwise the patches will all come from the same location. But after each batch, the volume will be cropped around a different center?

I am working on a special kind of CT scan with dimensions of around 800 × 800 × 800.
Yes, the target_shape of the RandomCropOrPad needs to be larger if you use the Queue with several patches per volume (which is a good idea to gain speed), and it will also help with the border effect you may get with affine and elastic deformations.

But yes, after the queue has selected samples_per_volume patches, a new volume will be taken and a new RandomCropOrPad will pick a different center.
Thank you both for sharing!