This repository contains the official code implementation for the paper "Eigenvector Grouping for Point Cloud Vessel Labeling" from the Geometric Deep Learning in Medical Imaging (GeoMedIA) 2022 Workshop.
I have trained a PointNet++ with the eigenvector grouping strategy on a dataset of point clouds whose points belong to either pulmonary arteries or background. When I apply the trained model to the test set, all points of every patient are labelled as artery. Furthermore, all points of all patients get exactly the same prediction scores for artery versus background. So apparently the model has learnt nothing about distinguishing artery points from background points.
Do you have any suggestions as to how this could have happened?
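A quick way to narrow this down is to check two things on a single batch: whether the predictions merely all land on one class (which often points to class imbalance in an unweighted loss) or whether the raw logits are literally identical for every point (which suggests the network is ignoring its input, e.g. because of broken normalization or a degenerate grouping radius). The helper below is a generic sketch, not code from the repository; the function name and return keys are made up for illustration.

```python
import numpy as np

def diagnose_collapse(logits, labels):
    """Diagnostics for a point classifier that predicts one class everywhere.

    logits: (N_points, N_classes) raw model outputs for one patient/batch
    labels: (N_points,) ground-truth class indices
    """
    preds = logits.argmax(axis=1)
    n_classes = logits.shape[1]
    # Fraction of points predicted per class; a collapsed model puts ~1.0 on one class.
    pred_dist = np.bincount(preds, minlength=n_classes) / len(preds)
    # Ground-truth class balance; heavy imbalance can push an unweighted loss toward collapse.
    label_dist = np.bincount(labels, minlength=n_classes) / len(labels)
    # Max per-class std of the logits across points: exactly 0.0 means every
    # point gets identical outputs, i.e. the network is not using its input.
    logit_spread = logits.std(axis=0).max()
    return {"pred_dist": pred_dist, "label_dist": label_dist, "logit_spread": logit_spread}

# Toy example reproducing the symptom: identical logits for every point.
logits = np.tile([2.0, -1.0], (100, 1))
labels = np.array([0] * 90 + [1] * 10)
report = diagnose_collapse(logits, labels)
print(report["pred_dist"])     # -> [1. 0.]  all mass on class 0
print(report["logit_spread"])  # -> 0.0      identical outputs for every point
```

If `logit_spread` is zero (or tiny) the problem is upstream of the loss, e.g. input features being zeroed out; if the logits vary but the predicted class never does, re-weighting the loss for the artery/background imbalance is a common first fix.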
I have a dataset of manual (label) and automatic (U-Net) segmentations of human carotid vessels, and I have converted both the manual and the automatic segmentations to 3D point clouds. How should I structure the data to train PointNet++ on these point clouds? That is not clear to me from either your repository or the Pointnet2 repository. Could you give an example of how you structured your data as input?
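One plausible layout, pending confirmation from the authors, is one fixed-size array per patient with rows of `x, y, z, label`, coordinates normalized to a unit sphere. The sketch below converts a binary segmentation mask into such a cloud; the `(N, 4)` layout, the function name, and the per-patient `.npy` convention are my assumptions, not taken from this repository.

```python
import numpy as np

def mask_to_point_cloud(vessel_mask, spacing=(1.0, 1.0, 1.0),
                        n_points=2048, bg_ratio=1.0, seed=0):
    """Build a labeled point cloud from a binary vessel segmentation.

    vessel_mask: (D, H, W) binary array, 1 = vessel voxel, 0 = background.
    Returns (n_points, 4): normalized x, y, z plus a 0/1 label column.
    NOTE: the (N, 4) xyz+label layout is an assumed convention, not
    something verified against this repository's data loader.
    """
    rng = np.random.default_rng(seed)
    sp = np.asarray(spacing, dtype=np.float64)

    fg = np.argwhere(vessel_mask == 1).astype(np.float64) * sp  # vessel voxels -> mm
    bg = np.argwhere(vessel_mask == 0).astype(np.float64) * sp  # background voxels -> mm
    # Keep roughly bg_ratio background points per vessel point to limit imbalance.
    n_bg = min(len(bg), int(bg_ratio * len(fg)))
    bg = bg[rng.choice(len(bg), size=n_bg, replace=False)]

    pts = np.vstack([fg, bg])
    lbl = np.concatenate([np.ones(len(fg)), np.zeros(len(bg))])

    # Center and scale to the unit sphere; PointNet-style networks usually
    # assume some such normalization, and skipping it can stall training.
    pts -= pts.mean(axis=0)
    pts /= np.linalg.norm(pts, axis=1).max()

    # Resample to a fixed number of points per cloud (with replacement if small).
    choice = rng.choice(len(pts), size=n_points, replace=len(pts) < n_points)
    return np.hstack([pts[choice], lbl[choice, None]])

# Example: one patient -> one .npy file on disk (filename is illustrative).
mask = np.zeros((32, 32, 32), dtype=np.uint8)
mask[10:20, 14:18, 14:18] = 1                    # toy "vessel"
cloud = mask_to_point_cloud(mask, spacing=(0.7, 0.5, 0.5))
print(cloud.shape)                               # -> (2048, 4)
np.save("patient_001.npy", cloud)
```

The voxel `spacing` multiplication matters for anisotropic CT/MR volumes, since indexing alone distorts the geometry that the grouping radii operate on.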