mrgiovanni / AbdomenAtlas
[NeurIPS 2023] AbdomenAtlas 1.0 (5,195 CT volumes + 9 annotated classes)
Home Page: https://www.cs.jhu.edu/~alanlab/Pubs23/qu2023abdomenatlas.pdf
License: Other
I encountered a minor issue when running your code: the shape of the generated label does not match the original image. Here are the details:
I used the epoch_450.pth file you provided (it seems to be a beta version :>) and tested it on a private dataset.
The shape of my original image is [64, 368, 576], while the shape of the generated label is [127, 171, 267].
They appear like this in 3D Slicer:
I'm wondering whether this is caused by something in my setup, or whether it's a bug?
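A minimal sketch of one likely cause (an assumption, not confirmed from the repo's code): the inference pipeline resamples the image to a fixed voxel spacing and saves the prediction on that resampled grid, so the label keeps the resampled shape instead of the original one. Nearest-neighbor resampling back to the image grid preserves the class indices; the function name here is hypothetical.

```python
import numpy as np

def resample_label_to_image(label, image_shape):
    # Nearest-neighbor index mapping per axis (order-0 interpolation),
    # so integer class IDs are preserved rather than blended.
    idx = [
        np.minimum((np.arange(t) * s / t).astype(int), s - 1)
        for t, s in zip(image_shape, label.shape)
    ]
    return label[np.ix_(*idx)]

# Shapes from the issue above: resample the [127, 171, 267] label
# back onto the original [64, 368, 576] image grid.
label = np.zeros((127, 171, 267), dtype=np.uint8)
restored = resample_label_to_image(label, (64, 368, 576))
```

In practice you would also want to copy the original image's affine/spacing into the saved label file so 3D Slicer overlays the two volumes correctly.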
In addition, regarding lines 233-241 in test.py:
```python
# Load pre-trained weights
store_dict = model.state_dict()
checkpoint = torch.load(args.resume)
load_dict = checkpoint['net']
# args.epoch = checkpoint['epoch']
for key, value in load_dict.items():
    name = '.'.join(key.split('.')[1:])
    store_dict[name] = value
```
The keys in store_dict have an additional 'backbone.' prefix, while the keys in load_dict seem to have an additional 'module.' prefix (due to the use of nn.DataParallel during training) and no 'backbone.' prefix.
In this case, the code above may not work properly.
You can use the following instead:
```python
# Load pre-trained weights
store_dict = model.state_dict()
store_dict_keys = [key for key, value in store_dict.items()]
checkpoint = torch.load(args.resume)
load_dict = checkpoint['net']
load_dict_values = [value for key, value in load_dict.items()]
# args.epoch = checkpoint['epoch']
for i in range(len(store_dict)):
    store_dict[store_dict_keys[i]] = load_dict_values[i]
```
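Positional copying like the above assumes both dicts enumerate parameters in the same order. A sketch of an alternative that remaps by name instead (assuming, per the description above, that the only differences are a 'module.' prefix to strip and a 'backbone.' prefix to add; the function name is hypothetical):

```python
def remap_keys(load_dict, strip_prefix='module.', add_prefix='backbone.'):
    """Strip the DataParallel 'module.' prefix and prepend 'backbone.'
    so checkpoint keys match the model's state_dict keys."""
    remapped = {}
    for key, value in load_dict.items():
        if key.startswith(strip_prefix):
            key = key[len(strip_prefix):]
        remapped[add_prefix + key] = value
    return remapped

# e.g. 'module.swinViT.patch_embed.proj.weight'
#   -> 'backbone.swinViT.patch_embed.proj.weight'
```

With this, `model.load_state_dict(remap_keys(load_dict))` would fail loudly on any key that still does not match, instead of silently pairing mismatched tensors.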
Hi, thanks for sharing! I noticed that you provided two pre-trained models for download. Will a pre-trained model for nnU-Net be provided as well? I saw in your article that you used three models, including nnU-Net.
Nice work. When will you release the dataset?
Is this dataset a combination of existing public datasets? If so, could you provide a list of referenced datasets?
I am getting the following error while loading the checkpoint:
RuntimeError: Error(s) in loading state_dict for Universal_model:
Unexpected key(s) in state_dict: "swinViT.patch_embed.proj.weight", "swinViT.patch_embed.proj.bias", "swinViT.layers1.0.blocks.0.norm1.weight", "swinViT.layers1.0.blocks.0.norm1.bias", "swinViT.layers1.0.blocks.0.attn.relative_position_bias_table", "swinViT.layers1.0.blocks.0.attn.relative_position_index", "swinViT.layers1.0.blocks.0.attn.qkv.weight", "swinViT.layers1.0.blocks.0.attn.qkv.bias", "swinViT.layers1.0.blocks.0.attn.proj.weight", "swinViT.layers1.0.blocks.0.attn.proj.bias", "swinViT.layers1.0.blocks.0.norm2.weight", "swinViT.layers1.0.blocks.0.norm2.bias", "swinViT.layers1.0.blocks.0.mlp.linear1.weight", "swinViT.layers1.0.blocks.0.mlp.linear1.bias", "swinViT.layers1.0.blocks.0.mlp.linear2.weight", "swinViT.layers1.0.blocks.0.mlp.linear2.bias", "swinViT.layers1.0.blocks.1.norm1.weight", "swinViT.layers1.0.blocks.1.norm1.bias", "swinViT.layers1.0.blocks.1.attn.relative_position_bias_table", "swinViT.layers1.0.blocks.1.attn.relative_position_index", "swinViT.layers1.0.blocks.1.attn.qkv.weight", "swinViT.layers1.0.blocks.1.attn.qkv.bias", "swinViT.layers1.0.blocks.1.attn.proj.weight", "swinViT.layers1.0.blocks.1.attn.proj.bias", "swinViT.layers1.0.blocks.1.norm2.weight", "swinViT.layers1.0.blocks.1.norm2.bias", "swinViT.layers1.0.blocks.1.mlp.linear1.weight", "swinViT.layers1.0.blocks.1.mlp.linear1.bias", "swinViT.layers1.0.blocks.1.mlp.linear2.weight", "swinViT.layers1.0.blocks.1.mlp.linear2.bias", "swinViT.layers1.0.downsample.reduction.weight", "swinViT.layers1.0.downsample.norm.weight", "swinViT.layers1.0.downsample.norm.bias", "swinViT.layers2.0.blocks.0.norm1.weight", "swinViT.layers2.0.blocks.0.norm1.bias", "swinViT.layers2.0.blocks.0.attn.relative_position_bias_table", "swinViT.layers2.0.blocks.0.attn.relative_position_index", "swinViT.layers2.0.blocks.0.attn.qkv.weight", "swinViT.layers2.0.blocks.0.attn.qkv.bias", "swinViT.layers2.0.blocks.0.attn.proj.weight", "swinViT.layers2.0.blocks.0.attn.proj.bias", "swinViT.layers2.0.blocks.0.norm2.weight", 
"swinViT.layers2.0.blocks.0.norm2.bias", "swinViT.layers2.0.blocks.0.mlp.linear1.weight", "swinViT.layers2.0.blocks.0.mlp.linear1.bias", "swinViT.layers2.0.blocks.0.mlp.linear2.weight", "swinViT.layers2.0.blocks.0.mlp.linear2.bias", "swinViT.layers2.0.blocks.1.norm1.weight", "swinViT.layers2.0.blocks.1.norm1.bias", "swinViT.layers2.0.blocks.1.attn.relative_position_bias_table", "swinViT.layers2.0.blocks.1.attn.relative_position_index", "swinViT.layers2.0.blocks.1.attn.qkv.weight", "swinViT.layers2.0.blocks.1.attn.qkv.bias", "swinViT.layers2.0.blocks.1.attn.proj.weight", "swinViT.layers2.0.blocks.1.attn.proj.bias", "swinViT.layers2.0.blocks.1.norm2.weight", "swinViT.layers2.0.blocks.1.norm2.bias", "swinViT.layers2.0.blocks.1.mlp.linear1.weight", "swinViT.layers2.0.blocks.1.mlp.linear1.bias", "swinViT.layers2.0.blocks.1.mlp.linear2.weight", "swinViT.layers2.0.blocks.1.mlp.linear2.bias", "swinViT.layers2.0.downsample.reduction.weight", "swinViT.layers2.0.downsample.norm.weight", "swinViT.layers2.0.downsample.norm.bias", "swinViT.layers3.0.blocks.0.norm1.weight", "swinViT.layers3.0.blocks.0.norm1.bias", "swinViT.layers3.0.blocks.0.attn.relative_position_bias_table", "swinViT.layers3.0.blocks.0.attn.relative_position_index", "swinViT.layers3.0.blocks.0.attn.qkv.weight", "swinViT.layers3.0.blocks.0.attn.qkv.bias", "swinViT.layers3.0.blocks.0.attn.proj.weight", "swinViT.layers3.0.blocks.0.attn.proj.bias", "swinViT.layers3.0.blocks.0.norm2.weight", "swinViT.layers3.0.blocks.0.norm2.bias", "swinViT.layers3.0.blocks.0.mlp.linear1.weight", "swinViT.layers3.0.blocks.0.mlp.linear1.bias", "swinViT.layers3.0.blocks.0.mlp.linear2.weight", "swinViT.layers3.0.blocks.0.mlp.linear2.bias", "swinViT.layers3.0.blocks.1.norm1.weight", "swinViT.layers3.0.blocks.1.norm1.bias", "swinViT.layers3.0.blocks.1.attn.relative_position_bias_table", "swinViT.layers3.0.blocks.1.attn.relative_position_index", "swinViT.layers3.0.blocks.1.attn.qkv.weight", "swinViT.layers3.0.blocks.1.attn.qkv.bias", 
"swinViT.layers3.0.blocks.1.attn.proj.weight", "swinViT.layers3.0.blocks.1.attn.proj.bias", "swinViT.layers3.0.blocks.1.norm2.weight", "swinViT.layers3.0.blocks.1.norm2.bias", "swinViT.layers3.0.blocks.1.mlp.linear1.weight", "swinViT.layers3.0.blocks.1.mlp.linear1.bias", "swinViT.layers3.0.blocks.1.mlp.linear2.weight", "swinViT.layers3.0.blocks.1.mlp.linear2.bias", "swinViT.layers3.0.downsample.reduction.weight", "swinViT.layers3.0.downsample.norm.weight", "swinViT.layers3.0.downsample.norm.bias", "swinViT.layers4.0.blocks.0.norm1.weight", "swinViT.layers4.0.blocks.0.norm1.bias", "swinViT.layers4.0.blocks.0.attn.relative_position_bias_table", "swinViT.layers4.0.blocks.0.attn.relative_position_index", "swinViT.layers4.0.blocks.0.attn.qkv.weight", "swinViT.layers4.0.blocks.0.attn.qkv.bias", "swinViT.layers4.0.blocks.0.attn.proj.weight", "swinViT.layers4.0.blocks.0.attn.proj.bias", "swinViT.layers4.0.blocks.0.norm2.weight", "swinViT.layers4.0.blocks.0.norm2.bias", "swinViT.layers4.0.blocks.0.mlp.linear1.weight", "swinViT.layers4.0.blocks.0.mlp.linear1.bias", "swinViT.layers4.0.blocks.0.mlp.linear2.weight", "swinViT.layers4.0.blocks.0.mlp.linear2.bias", "swinViT.layers4.0.blocks.1.norm1.weight", "swinViT.layers4.0.blocks.1.norm1.bias", "swinViT.layers4.0.blocks.1.attn.relative_position_bias_table", "swinViT.layers4.0.blocks.1.attn.relative_position_index", "swinViT.layers4.0.blocks.1.attn.qkv.weight", "swinViT.layers4.0.blocks.1.attn.qkv.bias", "swinViT.layers4.0.blocks.1.attn.proj.weight", "swinViT.layers4.0.blocks.1.attn.proj.bias", "swinViT.layers4.0.blocks.1.norm2.weight", "swinViT.layers4.0.blocks.1.norm2.bias", "swinViT.layers4.0.blocks.1.mlp.linear1.weight", "swinViT.layers4.0.blocks.1.mlp.linear1.bias", "swinViT.layers4.0.blocks.1.mlp.linear2.weight", "swinViT.layers4.0.blocks.1.mlp.linear2.bias", "swinViT.layers4.0.downsample.reduction.weight", "swinViT.layers4.0.downsample.norm.weight", "swinViT.layers4.0.downsample.norm.bias", 
"encoder1.layer.conv1.conv.weight", "encoder1.layer.conv2.conv.weight", "encoder1.layer.conv3.conv.weight", "encoder2.layer.conv1.conv.weight", "encoder2.layer.conv2.conv.weight", "encoder3.layer.conv1.conv.weight", "encoder3.layer.conv2.conv.weight", "encoder4.layer.conv1.conv.weight", "encoder4.layer.conv2.conv.weight", "encoder10.layer.conv1.conv.weight", "encoder10.layer.conv2.conv.weight", "decoder5.transp_conv.conv.weight", "decoder5.conv_block.conv1.conv.weight", "decoder5.conv_block.conv2.conv.weight", "decoder5.conv_block.conv3.conv.weight", "decoder4.transp_conv.conv.weight", "decoder4.conv_block.conv1.conv.weight", "decoder4.conv_block.conv2.conv.weight", "decoder4.conv_block.conv3.conv.weight", "decoder3.transp_conv.conv.weight", "decoder3.conv_block.conv1.conv.weight", "decoder3.conv_block.conv2.conv.weight", "decoder3.conv_block.conv3.conv.weight", "decoder2.transp_conv.conv.weight", "decoder2.conv_block.conv1.conv.weight", "decoder2.conv_block.conv2.conv.weight", "decoder2.conv_block.conv3.conv.weight", "decoder1.transp_conv.conv.weight", "decoder1.conv_block.conv1.conv.weight", "decoder1.conv_block.conv2.conv.weight", "decoder1.conv_block.conv3.conv.weight".
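Every key in this "Unexpected key(s)" list looks like a model key with the 'backbone.' prefix missing, which matches the prefix mismatch discussed earlier in this thread. A small diagnostic sketch (the function name is hypothetical) for comparing the two key sets before calling load_state_dict:

```python
def diff_keys(model_keys, ckpt_keys):
    """Return (missing, unexpected) keys, mirroring what
    load_state_dict reports with strict=True."""
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    missing = sorted(model_keys - ckpt_keys)      # expected by the model, absent in checkpoint
    unexpected = sorted(ckpt_keys - model_keys)   # present in checkpoint, unknown to the model
    return missing, unexpected

missing, unexpected = diff_keys(
    ['backbone.swinViT.patch_embed.proj.weight'],
    ['swinViT.patch_embed.proj.weight'],
)
```

If every unexpected key equals a missing key minus the 'backbone.' prefix, prepending that prefix to the checkpoint keys before loading should resolve the error; `strict=False` would also suppress it, but then the mismatched weights are silently left uninitialized.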
Are you going to release the datasets, or do we have to run the code to obtain the annotations? I am a little confused. You mentioned in the FAQ that ~5,000 volume annotations will be released, but I don't see any link where I can download them.
Hi! Thank you for your awesome work. Could you please disclose the composition of the dataset? It intersects with many public datasets, and users may need to consider the source of the data when using it.