aimagelab-zip / alveolar_canal
This repository contains the material from the paper "Improving Segmentation of the Inferior Alveolar Nerve through Deep Label Propagation"
There is no canal inference in this repo, so I can't test the pre-trained checkpoint
How can I test the pre-trained model on whole-volume data?
Hello, I'm trying to run inference by executing main.py with a configuration file based on gen-inference-unet in the configs folder. I'm pointing my dataset to a folder with the images I want to segment in .npy format, but I get the following error:
INFO:root:loading preprocessing
Traceback (most recent call last):
File "/home/renan/anaconda3/envs/alveolar_canal/lib/python3.9/site-packages/munch/__init__.py", line 103, in __getattr__
return object.__getattribute__(self, k)
AttributeError: 'Munch' object has no attribute 'preprocessing'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/renan/anaconda3/envs/alveolar_canal/lib/python3.9/site-packages/munch/__init__.py", line 106, in __getattr__
return self[k]
KeyError: 'preprocessing'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/renan/alveolar_canal/main.py", line 92, in <module>
if config.data_loader.preprocessing is None:
File "/home/renan/anaconda3/envs/alveolar_canal/lib/python3.9/site-packages/munch/__init__.py", line 108, in __getattr__
raise AttributeError(k)
AttributeError: preprocessing
Looking at the code, there seems to be a section that should handle a missing preprocessing field, but it doesn't seem to be working. Here is my yml:
# title of the experiment
title: canal_generator_train
# Where to output everything, in this path a folder with
# the same name as the title is created containing checkpoints,
# logs and a copy of the config used
project_dir: './results'
seed: 47
# which experiment to execute: Segmentation or Generation
experiment:
  name: Segmentation
data_loader:
  dataset: ./data/MG_scan_test.nii.gz
  # null to use training_set, generated to use the generated dataset
  training_set: null
  # which augmentations to use, see: augmentations.yaml
  augmentations: configs/augmentations.yaml
  background_suppression: 0
  batch_size: 2
  labels:
    BACKGROUND: 0
    INSIDE: 1
  mean: 0.08435
  num_workers: 8
  # shape of a single patch
  patch_shape:
    - 120
    - 120
    - 120
  # reshape of the whole volume before extracting the patches
  resize_shape:
    - 168
    - 280
    - 360
  sampler_type: grid
  grid_overlap: 0
  std: 0.17885
  volumes_max: 2100
  volumes_min: 0
  weights:
    - 0.000703
    - 0.999
# which network to use
model:
  name: PosPadUNet3D
loss:
  name: Jaccard
lr_scheduler:
  name: Plateau
optimizer:
  learning_rate: 0.1
  name: SGD
trainer:
  # Reload the last checkpoints?
  reload: False
  checkpoint: ./checkpoints/seg-checkpoint.pth
  # train the network
  do_train: False
  # do a single test of the network with the loaded checkpoints
  do_test: False
  # generate the synthetic dense dataset
  do_inference: True
  epochs: 100
Any help is appreciated, thanks.
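One possible workaround for the AttributeError above: Munch raises on a missing key instead of returning None, so the check `config.data_loader.preprocessing is None` in main.py never gets to run. A minimal sketch of a defensive lookup, using SimpleNamespace as a stand-in for the real Munch config (adding an explicit `preprocessing: null` entry to the YAML may also avoid the crash):

```python
from types import SimpleNamespace

# Stand-in for the config built from the YAML above (illustrative only);
# the real object is a Munch, which also raises AttributeError for
# attributes that were never set in the file.
data_loader = SimpleNamespace(dataset="./data/MG_scan_test.nii.gz")

# Guarding the lookup with getattr() returns None instead of raising,
# which is what the `is None` branch in main.py appears to expect.
preprocessing = getattr(data_loader, "preprocessing", None)
if preprocessing is None:
    print("no preprocessing configured, skipping")
```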
I wonder what ground truth is used to train the Deep Label Expansion stage? From my observation, it seems that labels generated from torchio.LabelMap are used. Thanks very much!
Hello author! I have an error in code reproduction that I would like to ask you about. When I run the command python main.py --configs config/seg-pretraining.yaml, I get a divide-by-zero error. The error comes from the experiments code, line 204, which runs a continue operation on the whole dataset. What causes this? torch.sum(gt_count) is always equal to 0. Also, does synthetic_loader only use the dense labels generated by gen-inference.yaml? Looking forward to your answer about the above error while trying to reproduce your work.
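A sketch of why torch.sum(gt_count) == 0 forces the loop to skip: a patch whose ground truth is all background has zero foreground voxels, and dividing by that count is the divide-by-zero described above. The helper below is an assumption for illustration, not the repo's exact code:

```python
import torch

def safe_partition_weights(gt):
    # Count foreground voxels in a ground-truth patch. If the patch is
    # all background the count is zero; dividing by it would be the
    # divide-by-zero from the issue, so return None and let the
    # training loop `continue` past this patch instead.
    gt_count = torch.sum(gt > 0)
    if gt_count.item() == 0:
        return None
    return 1.0 / gt_count.float()

empty_patch = torch.zeros(1, 8, 8, 8)
print(safe_partition_weights(empty_patch))  # None -> skip the patch
```

If every patch in the dataset hits this branch, it usually means the labels were never loaded correctly (e.g. the dense labels from gen-inference.yaml are missing), which matches the symptom in the issue.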
As stated in potpov/alveolar_canal#3, the checkpoints for the baseline network (PosPadUNet3D) seem to be broken and must be fixed.
Sorry for bothering you. I get NaN errors during the canal_pretrain process.
From another issue, https://github.com/AImageLab-zip/alveolar_canal/issues/7,
I found that my predictions become NaN during training. Could you help me find the problem?
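When predictions go NaN mid-training, it helps to fail fast at the first non-finite tensor rather than at the loss. A minimal debugging helper (an assumption for illustration, not part of the repo):

```python
import torch

def check_finite(name, tensor):
    # Raise at the first tensor containing NaN/Inf so the source of the
    # problem (e.g. a too-high learning rate, or an empty-label loss
    # denominator) can be located instead of surfacing later in the loss.
    if torch.isnan(tensor).any() or torch.isinf(tensor).any():
        raise RuntimeError(f"{name} contains NaN/Inf")

prediction = torch.tensor([0.2, 0.8])
check_finite("prediction", prediction)  # passes silently for finite values
```

Calling this on the network output (and on the loss) each step narrows down whether the NaN originates in the forward pass or in the loss computation.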
I am trying to run this code using the maxillo dataset, but it raises the error below.
Traceback (most recent call last):
File "/home/syu/Documents/maison/alveolar_canal/main.py", line 170, in <module>
val_iou, val_dice = experiment.test(phase="Validation")
File "/home/syu/Documents/maison/alveolar_canal/experiments/experiment.py", line 276, in test
loss = self.loss(output.unsqueeze(0), gt.unsqueeze(0), partition_weights)
File "/home/syu/Documents/maison/alveolar_canal/losses/LossFactory.py", line 60, in __call__
raise ValueError(f'Loss {loss_name} has some NaN')
ValueError: Loss DiceLoss has some NaN
I'm wondering whether this code and dataset work as described in the readme, or whether there are corrupted files in this dataset.
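One common source of a NaN Dice loss is an empty ground truth: with no foreground voxels, a plain Dice denominator is zero. A sketch of a Dice loss with a smoothing term, an assumed fix for illustration rather than the repo's LossFactory code:

```python
import torch

def smoothed_dice_loss(pred, gt, smooth=1.0):
    # Plain Dice is 1 - 2|P∩G| / (|P| + |G|); when both pred and gt are
    # empty the denominator is zero and the result is NaN. The `smooth`
    # term keeps the ratio finite (and equal to 0 loss for two empty masks).
    intersection = (pred * gt).sum()
    denom = pred.sum() + gt.sum()
    return 1.0 - (2.0 * intersection + smooth) / (denom + smooth)

empty = torch.zeros(4, 4)
loss = smoothed_dice_loss(empty, empty)
print(loss)  # finite, no NaN
```

If the checkpointed model predicts nothing on a validation volume whose label is also empty (or the labels failed to load), an unsmoothed Dice would produce exactly the "Loss DiceLoss has some NaN" error above.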
Big congratulations on your achievements at CVPR!
Hi,
I was trying to test your network and your weights on another database, but it fails each time...
Nevertheless, the size of each volume (364, 704, 704) is bigger than the one you provided. Do you have any idea why it fails each time?
You will find attached one example from the database that I am trying to work with. Thank you
Best Regards,
Hamid FSIAN
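The config shown earlier resizes every volume to (168, 280, 360) before patch extraction, so a 364x704x704 scan from another database likely needs the same resampling (and an intensity range compatible with volumes_min/volumes_max = 0..2100). A sketch of the resize step with torch's trilinear interpolation, using a downscaled stand-in tensor so the example stays light:

```python
import torch
import torch.nn.functional as F

# Stand-in volume; the scan in the issue is 364x704x704. The pipeline's
# config reshapes volumes to (168, 280, 360) before extracting patches,
# so an external scan should be brought to that shape first
# (illustrative preprocessing, not the repo's exact loader code).
volume = torch.rand(91, 176, 176)

resized = F.interpolate(
    volume[None, None],        # add batch and channel dims: (1, 1, D, H, W)
    size=(168, 280, 360),      # resize_shape from the config
    mode="trilinear",
    align_corners=False,
)[0, 0]                        # drop batch and channel dims again

print(resized.shape)  # torch.Size([168, 280, 360])
```

Skipping this step feeds the grid sampler a volume far larger than the network was trained on, which is one plausible reason inference fails on the bigger scans.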