Comments (18)
I do not understand: aren't transforms supposed to modify the inputs?
from torchio.
Some of the transforms (most?) modify the input sample, which is probably not desirable. Ideally, if one does `sample_transformed = transform(sample)`, the variable `sample` should remain intact.
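The pitfall can be illustrated with a plain dict standing in for a torchio sample (`bad_transform` here is a hypothetical example, not a torchio transform):

```python
def bad_transform(sample):
    # Mutates the input in place: the caller's dict changes too.
    sample['data'] = [-x for x in sample['data']]
    return sample

sample = {'data': [1, 2, 3]}
transformed = bad_transform(sample)
print(sample['data'])          # [-1, -2, -3]: the original was clobbered
print(transformed is sample)   # True: output and input are the same object
```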
from torchio.
OK, it does seem like a good idea, and it is easy to implement.
Not sure if it's worth a PR; here are the changes I made for `RandomMotion`.

Change

```python
def apply_transform(self, sample):
```

to

```python
def apply_transform(self, sample_orig):
    from copy import deepcopy
    sample = deepcopy(sample_orig)
```

(Since the sample contains nested lists, a shallow `sample_orig.copy()` is not enough!)
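Why the shallow copy fails can be shown with a nested structure (plain dicts here, not real torchio samples):

```python
from copy import deepcopy

sample_orig = {'data': [[1, 2], [3, 4]]}

shallow = sample_orig.copy()        # dict is copied, inner lists are shared
shallow['data'][0][0] = 99
print(sample_orig['data'][0][0])    # 99: the shallow copy did not protect it

sample_orig = {'data': [[1, 2], [3, 4]]}
deep = deepcopy(sample_orig)        # nested lists are duplicated too
deep['data'][0][0] = 99
print(sample_orig['data'][0][0])    # 1: the original is intact
```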
from torchio.
I suppose that's the simplest solution for all the transforms. Are there any obvious disadvantages of deepcopying the whole sample every time a transform is applied? Time? Memory?
We would need a unit test that checks that everything that is in the original sample is untouched after the transform, so a PR is definitely worth it.
from torchio.
From the classical augmentation point of view, the data is transformed and there is no need to keep the original input (it is replaced anyway), so we should make this deepcopy only if it is really needed.
So maybe it should be an option in the init, something like `keep_original_sample_unchanged`, which is `False` by default.
I am not very used to unit tests, so I will let you do that part.
from torchio.
Is this a priority anyway? Does it affect you at the moment?
from torchio.
not yet
from torchio.
Closing for now, will reopen if it becomes a real issue.
from torchio.
Hi,
I do need it, because I would like to use this library for artefact correction.
I will use a transform to simulate a specific artefact, and in the model I need both the original and the transformed image.
To achieve this, I made a small modification to the `Transform` class:
I added an input argument `keep_original=False`,
and at the beginning of the `__call__` function I added this:

```python
if self.keep_original:
    for image_name in list(sample):
        image_dict = sample[image_name]
        if not is_image_dict(image_dict):
            continue
        if image_dict['type'] == INTENSITY:
            new_key = image_name + '_orig'
            if new_key not in sample:
                sample[new_key] = dict(data=image_dict['data'])
```

It does the job, if you are interested in adding it.
However, there may be a problem with compositions: the newly added key may be detected as an intensity image, and the next transform would be applied to it as well (this is why I copy only the `data` key, though I am not sure that is correct).
The only remaining modification is then to add the `keep_original` keyword to every transform you want.
from torchio.
Conceptually, I don't like the idea of the output containing the input. That's different from keeping the input as it was, which I guess would need to be done with `deepcopy`. In `b = transform(a)`, I'd like `a` to remain unchanged after applying the transform, not to be included in `b`.
from torchio.
I agree it is strange, but it is more 'logical' with the use of a dataset:

```python
dataset = ImagesDataset(suj, transform=t)
sample = dataset[0]
```

For my use case I need access to both versions in `sample`; if you just use `deepcopy`, then `sample` will still contain only the transformed data.
What I am asking is not really related to the meaning of this initial issue (sorry), it is another feature.
from torchio.
So your transform `t` is a composition of different torchio transforms, and for one of them you set `keep_original=True`? Are you using the queue?
from torchio.
You're right though, this should be a different issue.
from torchio.
Yes, that's the idea: you use `keep_original` for only one of the transforms (the last one).
And yes, I would like to use the queue, and then I will need access to the original and transformed versions of the same patch.
I have not tested it yet.
from torchio.
Regarding the original issue, I was quite surprised when I encountered this while generating some example transformed images.
Maybe the best solution is to add an `inplace` argument (default `False`)?
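One possible shape for that proposal, as a sketch only (the class structure and `Negate` example here are assumptions, not torchio's actual implementation):

```python
from copy import deepcopy

class Transform:
    """Sketch of a base class with an ``inplace`` flag."""
    def __init__(self, inplace=False):
        self.inplace = inplace

    def __call__(self, sample):
        if not self.inplace:
            # Defensive deep copy so the caller's sample stays untouched
            sample = deepcopy(sample)
        return self.apply_transform(sample)

    def apply_transform(self, sample):
        raise NotImplementedError

class Negate(Transform):
    def apply_transform(self, sample):
        sample['data'] = [-x for x in sample['data']]
        return sample

a = {'data': [1, 2, 3]}
b = Negate()(a)                # default inplace=False
print(a['data'], b['data'])    # [1, 2, 3] [-1, -2, -3]
```

With `inplace=True`, the copy is skipped and the input is mutated, which is cheaper inside a dataloader pipeline where the original is discarded anyway.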
from torchio.
Absolutely, I think this behavior should be suppressed by default ASAP. If it doesn't have a huge impact on time, I think the easiest is deep-copying the input as soon as it's received by the `Transform`.
from torchio.
As far as I can see in the commit, you chose `deepcopy` without an argument?
How will this impact the memory load?
If you have a composition of 3 transforms, does the memory requirement grow by a factor of 3?
For the use `b = transform(a)` we expect `inplace=False`,
but with a dataloader and a succession of transforms, I would say the best default is the opposite.
from torchio.
I think the memory doesn't grow, because the old variable goes out of scope after each transform is applied.
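The point can be illustrated with a toy composition (hypothetical code, not from the commit): each rebinding of `sample` drops the reference to the previous copy, so the old copy becomes garbage-collectable and only about one extra copy is alive at a time, regardless of how many transforms are chained.

```python
from copy import deepcopy

def make_transform(offset):
    def transform(sample):
        sample = deepcopy(sample)  # defensive copy of the input
        sample['data'] = [x + offset for x in sample['data']]
        return sample
    return transform

transforms = [make_transform(i) for i in (1, 10, 100)]

sample = {'data': [0, 0]}
for t in transforms:
    # Rebinding `sample` releases the previous copy for garbage
    # collection, so memory does not grow 3x for 3 transforms.
    sample = t(sample)
print(sample['data'])  # [111, 111]
```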
from torchio.