Hi,
I am encountering an issue with the ImageToImageDataset
class in the MONAI library, specifically regarding the shape of loaded images.
- Modification of the `__len__` method:

First, I noticed that I had to modify the `__len__` method to:
```python
def __len__(self) -> int:
    return len(self.first_type_image_files)
```
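For context, here is a minimal stand-in showing how I understand the length should be derived (the class name and file lists below are placeholders, not the actual MONAI implementation; only the attribute name matches my snippet above):

```python
class PairedImageDataset:
    """Minimal sketch of a paired-image dataset (placeholder, not MONAI code)."""

    def __init__(self, first_type_image_files, second_type_image_files):
        # Both lists are expected to have the same length (one pair per index).
        self.first_type_image_files = first_type_image_files
        self.second_type_image_files = second_type_image_files

    def __len__(self) -> int:
        # The dataset length is the number of image pairs,
        # taken from the first file list.
        return len(self.first_type_image_files)


ds = PairedImageDataset(["mr_0.nii", "mr_1.nii"], ["ct_0.nii", "ct_1.nii"])
print(len(ds))  # -> 2
```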
- Issue with image loading:

Second, the main concern arises with the loading of images in the `__getitem__` method. The class uses the `LoadImage` transform to load the images, and I added print statements to check the shape of the loaded images. However, instead of the 3D tensor shape I expected for medical images (such as MRI or CT scans), the loaded images come back as 2D.
Here is the output I observed:
```
Loaded MR Image Shape: torch.Size([244, 204])
Loaded CT Image Shape: torch.Size([244, 204])
Loaded Mask Image Shape: torch.Size([244, 204])
```
I am trying to load and process 3D medical images, but the dataset seems to be returning 2D slices or incorrectly shaped tensors.
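One guess I had (I am not certain this is what happens inside `LoadImage`): if the stored volume has a singleton depth axis and singleton dimensions are squeezed on load, a shape like `(244, 204, 1)` would collapse to the 2D shape I am seeing. A quick stand-in in plain Python, operating on shape tuples only:

```python
def squeeze_shape(shape):
    """Drop singleton dimensions, mimicking what a squeeze on load would do."""
    return tuple(d for d in shape if d != 1)


# A single-slice "volume" loses its depth axis:
print(squeeze_shape((244, 204, 1)))    # -> (244, 204)
# A genuine 3D volume keeps all three axes:
print(squeeze_shape((244, 204, 120)))  # -> (244, 204, 120)
```

If this is the cause, the files themselves might contain only one slice each, rather than the loader returning the wrong shape.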
Could you please advise on how to resolve this issue? Is there a specific parameter or method I should use to ensure the images are correctly loaded as 3D tensors?
Thank you for your assistance.