
Comments (2)

davidkvcs commented on August 29, 2024

Hi, I am not an expert, just a regular user of this -- but I was attempting something similar, so maybe this helps.

The script is adapted specifically to the BTCV dataset, so it differs from nnUNet.

I can recommend you check out the BTCV tutorial.

There it is clear that the train_transforms and val_transforms are adapted to the BTCV dataset.

That means you have to adapt these for your dataset specifically.

For example, I have a 3D dataset with 2 channels. I changed my nifti files to have the following dimensions (output of fslinfo):

dim1 2
dim2 512
dim3 512
dim4 175

I now remove the line
AddChanneld(keys=["image", "label"])
from
train_transforms = Compose( ...
since I already have the channels as my first dimension. After these changes, I got the expected tensor shapes while running the scripts on my data.
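To make the channel-ordering point concrete, here is a small NumPy sketch (a stand-in for the real NIfTI array, which in my case is 2 x 512 x 512 x 175); it shows why applying AddChanneld to an array that is already channel-first leaves you with one axis too many:

```python
import numpy as np

# Small stand-in for the real volume (actual data: 2 x 512 x 512 x 175),
# with the same axis ordering: channels first, then H, W, D.
vol = np.zeros((2, 8, 8, 4), dtype=np.float32)

# MONAI's dictionary transforms expect channel-first arrays (C, H, W, D).
# AddChanneld prepends a new axis, which is only correct for channel-less data:
with_extra = np.expand_dims(vol, axis=0)

print(vol.shape)         # (2, 8, 8, 4) - already channel-first
print(with_extra.shape)  # (1, 2, 8, 8, 4) - one axis too many
```

So if your data already carries the channel axis first, the transform is redundant and should be dropped.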

In my case I also did normalization beforehand and wrote my own script to check that all orientations are correct, so I removed those lines from the script as well. Of course, what you should do depends on your data. :)
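For reference, the offline normalization I did was a simple per-volume z-score (the same idea as MONAI's NormalizeIntensityd transform); a minimal sketch with synthetic data:

```python
import numpy as np

def zscore_normalize(vol: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-volume z-score: zero mean, unit variance (eps avoids division by zero)."""
    return (vol - vol.mean()) / (vol.std() + eps)

# Synthetic 2-channel volume with an arbitrary intensity range.
img = np.random.default_rng(0).normal(loc=100.0, scale=20.0, size=(2, 32, 32, 16))
norm = zscore_normalize(img)
print(norm.mean(), norm.std())  # approximately 0 and 1
```

Whether to normalize per volume, per channel, or with dataset-wide statistics again depends on your data.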

If you are in doubt about what dimensions to expect, I recommend downloading the BTCV dataset and printing the tensor shapes, e.g. under "Check data shape and visualize" in the tutorial. You should then be able to deduce the corresponding dimensions for your specific case.

Note that if you use the test.py script later on, it uses the transforms from get_loader in data_utils.
Any changes you make to your transforms during training should therefore also be mirrored in get_loader in data_utils, so the data keeps loading as expected.

from research-contributions.

tangy5 commented on August 29, 2024

Hi @davidkvcs, thanks for the question and for your interest in the work. The problem is likely related to the patch size. The original configuration used 96x96x96 sub-volumes for all experiments, and the SwinUNETR model contains several downsampling operations. I am not sure this is the best solution, but 64 in each dimension is the minimum requirement for training SwinUNETR. Hope this helps you redesign your data transformations.
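As a rough sanity check (my own sketch, assuming the network halves the spatial resolution five times overall, i.e. a factor of 2^5 = 32; the exact stage count depends on the SwinUNETR configuration), a candidate patch edge length can be validated like this:

```python
def valid_roi(size: int, num_downsamples: int = 5) -> bool:
    """Return True if `size` survives `num_downsamples` successive 2x downsamplings.

    The edge must divide evenly by 2**num_downsamples and stay non-trivial
    (at least 2 voxels) at the coarsest level.
    """
    factor = 2 ** num_downsamples  # 32 under the assumption above
    return size % factor == 0 and size >= 2 * factor

print([s for s in (32, 48, 64, 96, 128) if valid_roi(s)])  # [64, 96, 128]
```

Under these assumptions, 64 is the smallest edge that works, which is consistent with the minimum stated above.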

