
fss-1000's People

Contributors

hkustcv


fss-1000's Issues

Labels not normalized

After going through the training.py script I found that the images are normalized but the labels are not. As a result, the model never converges. After normalizing the labels, the model converges well.
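For reference, a minimal sketch of what normalizing the labels could look like, assuming the masks are loaded with OpenCV as 0/255 grayscale PNGs (the helper name is illustrative, not taken from training.py):

    import cv2
    import numpy as np
    import torch

    def load_binary_label(path):
        # Read the mask as grayscale; the FSS-1000 masks are black/white PNGs.
        mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Map 0..255 pixel values to {0, 1} so the loss sees proper class labels.
        mask = (mask > 127).astype(np.float32)
        return torch.from_numpy(mask)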

The arrangement of the dataset (fewshot_data.zip)?

I have downloaded fewshot_data.zip and read the code of training.py, but I'm not sure about the arrangement of the folders. As you know, the dataset has 1000 classes; should I create one folder named 'image' for each class and put the 10 .jpg images in it, and then create one folder named 'label' for each class and put the 10 .png masks in it? When I run the code, there are also some size problems with the pictures.
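For what it's worth, judging from the "./fewshot/support/%s/image/%s" path template quoted in another issue, training.py seems to expect a layout along these lines (this is an inference, not an official description):

    fewshot/
      support/
        <class_name>/      # one folder per class
          image/           # 1.jpg ... 10.jpg
          label/           # 1.png ... 10.png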

How to label dataset?

Hi,

I would like to add my own support image and label.
What tool can I use to create the label? Are there any specific steps?

Regards,
Arvin
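Not the authors' workflow, but for reference: any polygon-annotation tool that can export a binary mask (e.g. labelme) should work. Below is a minimal Pillow sketch that rasterizes a hand-drawn polygon into a black/white PNG in the same 224x224 format as the dataset masks (the polygon coordinates and file name are made up for illustration):

    from PIL import Image, ImageDraw

    # Hypothetical polygon traced around the object, as (x, y) pixel coordinates.
    polygon = [(40, 60), (180, 55), (190, 170), (35, 160)]

    mask = Image.new("L", (224, 224), 0)             # all-background mask
    ImageDraw.Draw(mask).polygon(polygon, fill=255)  # fill the object region with white
    mask.save("1.png")                               # same naming style as the dataset masks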

evaluate code

Could you publish the evaluation code? I think this repo is a good pipeline for few-shot segmentation. Thanks! @HKUSTCV

Wrong pixel size of some instances

There are some categories in the dataset where an image or mask does not have the expected pixel size:
crt_screen
peregine_falcon
rocking_chair
rubber_eraser
ruler
screw
shower_curtain
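As a workaround (not an official fix), the offending masks can be resized back to 224x224 with nearest-neighbour interpolation so they stay binary, for example:

    import cv2

    # Two of the files reported elsewhere in the issues; extend the list as needed.
    bad_files = ["fewshot_data/crt_screen/6.png", "fewshot_data/screw/8.png"]

    for path in bad_files:
        mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if mask.shape != (224, 224):
            # INTER_NEAREST avoids introducing grey values into the binary mask.
            mask = cv2.resize(mask, (224, 224), interpolation=cv2.INTER_NEAREST)
            cv2.imwrite(path, mask)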

What is the training shot

What is the training shot? The function suggests it is one-shot. Also, are the training and test shots the same?

Problems in remote sensing imagery segmentation

Thank you so much for providing this amazing method. I tried your script and got good few-shot segmentation results in several cases. However, I want to apply the trained model directly to the remote-sensing field, such as building mapping (I know you provide the river sample in the paper), but cannot obtain ideal results.
I am pretty sure that I prepared the support and query sets as you suggested; the results are as follows:
Support samples: (two images attached, supp1 and supp3)
Query results: (two images attached, test10 and test11)

Would you kindly give me some suggestions? I wonder whether the issue comes from the difficulty of the building features. How can I solve the problem, and do I need to retrain the model?

Thanks in advance!

Dataset error

The classes in the attached PDF do not match the classes in the dataset.
In the dataset, peregine_falcon/8.png is an RGB color image and its size is not 224.
The code is also not written very cleanly.
How did this get into CVPR?

config: docker

Could you share the Docker configuration file? The required versions of cv2/torch are unclear. Thanks!

What is your Directory Structure

When I run python training.py, I don't know what "./fewshot/support/%s/image/%s" corresponds to in your computer's directory structure. Why do you use two %s there?
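For reference, the two %s markers are ordinary Python string placeholders that training.py fills in at runtime, presumably with the class folder and the file name (the concrete values below are only illustrative):

    path_template = "./fewshot/support/%s/image/%s"
    path = path_template % ("some_class", "1.jpg")
    print(path)  # ./fewshot/support/some_class/image/1.jpg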

Reproducibility

A good work, thank you for your codes.
I trained the network from scratch using the provided training.py, but got an output of '1' for every pixel. Any suggestions? Thank you in advance.

How to extend to C-way setting?

What does

a general C-way-K-shot segmentation Could be solved by a union of C Binary Segmentation tasks

in your paper mean?

And how can I implement it with your code?
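As I read it, the sentence means running the binary (1-way) model once per class, each time with that class's support set, and then merging the C foreground probability maps, e.g. by a per-pixel argmax. A minimal sketch, assuming a model(support_images, support_masks, query_image) call that returns an (H, W) foreground-probability map (this signature is illustrative, not the repo's actual API):

    import torch

    def c_way_segment(model, supports, query_image, threshold=0.5):
        """supports: list of (support_images, support_masks), one entry per class."""
        fg_probs = []
        for support_images, support_masks in supports:
            # One binary few-shot episode per class.
            fg_probs.append(model(support_images, support_masks, query_image))
        probs = torch.stack(fg_probs, dim=0)        # (C, H, W)
        best_prob, best_class = probs.max(dim=0)    # per-pixel winner among the C classes
        background = torch.zeros_like(best_class)   # label 0 = background
        return torch.where(best_prob > threshold, best_class + 1, background)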

Some problems in the dataset

Hi,

Thank you very much for preparing such a wonderful dataset. However, in the process of using it, I noticed a few problems.

Some extra files that are not annotation images:

  • beam_bridge: 10.jpeg
  • banana_boat: 1.jpeg
  • flying_geckos: 1.jpeg
  • har_gow: 4.jpeg
  • peregine_falcon: 8.png
  • pteropus: 9.jpeg
  • wandering_albatross: 6.jpeg

Some files that are not of the usual size (I do not know whether the dataset is intended to have a uniform size, so I just list these as anomalies):

  • rubber_eraser: 8.png (629, 622)
  • shower_curtain: 3.png (450, 450)
  • crt_screen: 6.png (500, 500)
  • rocking_chair: 9.png (640, 550)
  • screw: 8.png (562, 424)
  • ruler: 10.png (1024, 700)
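A small scan along these lines can reproduce the two lists above on the extracted fewshot_data folder (file-extension and size checks only, nothing repo-specific):

    import os
    from PIL import Image

    root = "fewshot_data"
    for class_name in sorted(os.listdir(root)):
        class_dir = os.path.join(root, class_name)
        if not os.path.isdir(class_dir):
            continue
        for fname in sorted(os.listdir(class_dir)):
            path = os.path.join(class_dir, fname)
            ext = os.path.splitext(fname)[1].lower()
            if ext not in (".jpg", ".png"):
                print("unexpected extension:", path)   # e.g. the stray .jpeg files
            elif ext == ".png" and Image.open(path).size != (224, 224):
                print("unexpected size:", path, Image.open(path).size)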

datadir in the training.py

Excuse me: the dataset directory used in training.py contains a support folder with image and label subfolders, but the directory layout in fewshot_data.zip is not like this. How can I modify the code in training.py so that training works? Or is there another version of the few-shot dataset that contains a support folder?
(screenshots attached)

I'm looking forward to your reply. Thanks!
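One way to bridge the gap, assuming the .jpg files are the photos and the .png files are the masks, is to copy the extracted fewshot_data folders into the support/<class>/image and support/<class>/label layout that training.py appears to expect (a sketch, not an official script):

    import os
    import shutil

    src_root = "fewshot_data"
    dst_root = "./fewshot/support"

    for class_name in os.listdir(src_root):
        class_dir = os.path.join(src_root, class_name)
        if not os.path.isdir(class_dir):
            continue
        for fname in os.listdir(class_dir):
            # .jpg -> image/, .png -> label/
            sub = "image" if fname.lower().endswith(".jpg") else "label"
            dst_dir = os.path.join(dst_root, class_name, sub)
            os.makedirs(dst_dir, exist_ok=True)
            shutil.copy(os.path.join(class_dir, fname), dst_dir)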

cannot load pretrained model weights

Hi,

The pretrained model does not seem to be the VGG-16 model described in the paper and source code.
The loaded state_dict keys are as follows:
odict_keys(['features.0.weight', 'features.0.bias', 'features.0.running_mean', 'features.0.running_var', 'features.0.num_batches_tracked', 'features.3.0.conv1.weight', 'features.3.0.bn1.weight', 'features.3.0.bn1.bias', 'features.3.0.bn1.running_mean', 'features.3.0.bn1.running_var', 'features.3.0.bn1.num_batches_tracked', 'features.3.0.conv2.weight', 'features.3.0.bn2.weight', 'features.3.0.bn2.bias', 'features.3.0.bn2.running_mean', 'features.3.0.bn2.running_var', 'features.3.0.bn2.num_batches_tracked', 'features.3.0.conv3.weight', 'features.3.0.bn3.weight', 'features.3.0.bn3.bias', 'features.3.0.bn3.running_mean', 'features.3.0.bn3.running_var', 'features.3.0.bn3.num_batches_tracked', 'features.3.0.downsample.0.weight', 'features.3.0.downsample.1.weight', 'features.3.0.downsample.1.bias', 'features.3.0.downsample.1.running_mean', 'features.3.0.downsample.1.running_var', 'features.3.0.downsample.1.num_batches_tracked', 'features.3.1.conv1.weight', 'features.3.1.bn1.weight', 'features.3.1.bn1.bias', 'features.3.1.bn1.running_mean', 'features.3.1.bn1.running_var', 'features.3.1.bn1.num_batches_tracked', 'features.3.1.conv2.weight', 'features.3.1.bn2.weight', 'features.3.1.bn2.bias', 'features.3.1.bn2.running_mean', 'features.3.1.bn2.running_var', 'features.3.1.bn2.num_batches_tracked', 'features.3.1.conv3.weight', 'features.3.1.bn3.weight', 'features.3.1.bn3.bias', 'features.3.1.bn3.running_mean', 'features.3.1.bn3.running_var', 'features.3.1.bn3.num_batches_tracked', 'features.3.2.conv1.weight', 'features.3.2.bn1.weight', 'features.3.2.bn1.bias', 'features.3.2.bn1.running_mean', 'features.3.2.bn1.running_var', 'features.3.2.bn1.num_batches_tracked', 'features.3.2.conv2.weight', 'features.3.2.bn2.weight', 'features.3.2.bn2.bias', 'features.3.2.bn2.running_mean', 'features.3.2.bn2.running_var', 'features.3.2.bn2.num_batches_tracked', 'features.3.2.conv3.weight', 'features.3.2.bn3.weight', 'features.3.2.bn3.bias', 'features.3.2.bn3.running_mean', 'features.3.2.bn3.running_var', 'features.3.2.bn3.num_batches_tracked', 'features.4.0.conv1.weight', 'features.4.0.bn1.weight', 'features.4.0.bn1.bias', 'features.4.0.bn1.running_mean', 'features.4.0.bn1.running_var', 'features.4.0.bn1.num_batches_tracked', 'features.4.0.conv2.weight', 'features.4.0.bn2.weight', 'features.4.0.bn2.bias', 'features.4.0.bn2.running_mean', 'features.4.0.bn2.running_var', 'features.4.0.bn2.num_batches_tracked', 'features.4.0.conv3.weight', 'features.4.0.bn3.weight', 'features.4.0.bn3.bias', 'features.4.0.bn3.running_mean', 'features.4.0.bn3.running_var', 'features.4.0.bn3.num_batches_tracked', 'features.4.0.downsample.0.weight', 'features.4.0.downsample.1.weight', 'features.4.0.downsample.1.bias', 'features.4.0.downsample.1.running_mean', 'features.4.0.downsample.1.running_var', 'features.4.0.downsample.1.num_batches_tracked', 'features.4.1.conv1.weight', 'features.4.1.bn1.weight', 'features.4.1.bn1.bias', 'features.4.1.bn1.running_mean', 'features.4.1.bn1.running_var', 'features.4.1.bn1.num_batches_tracked', 'features.4.1.conv2.weight', 'features.4.1.bn2.weight', 'features.4.1.bn2.bias', 'features.4.1.bn2.running_mean', 'features.4.1.bn2.running_var', 'features.4.1.bn2.num_batches_tracked', 'features.4.1.conv3.weight', 'features.4.1.bn3.weight', 'features.4.1.bn3.bias', 'features.4.1.bn3.running_mean', 'features.4.1.bn3.running_var', 'features.4.1.bn3.num_batches_tracked', 'features.4.2.conv1.weight', 'features.4.2.bn1.weight', 'features.4.2.bn1.bias', 'features.4.2.bn1.running_mean', 'features.4.2.bn1.running_var', 
'features.4.2.bn1.num_batches_tracked', 'features.4.2.conv2.weight', 'features.4.2.bn2.weight', 'features.4.2.bn2.bias', 'features.4.2.bn2.running_mean', 'features.4.2.bn2.running_var', 'features.4.2.bn2.num_batches_tracked', 'features.4.2.conv3.weight', 'features.4.2.bn3.weight', 'features.4.2.bn3.bias', 'features.4.2.bn3.running_mean', 'features.4.2.bn3.running_var', 'features.4.2.bn3.num_batches_tracked', 'features.4.3.conv1.weight', 'features.4.3.bn1.weight', 'features.4.3.bn1.bias', 'features.4.3.bn1.running_mean', 'features.4.3.bn1.running_var', 'features.4.3.bn1.num_batches_tracked', 'features.4.3.conv2.weight', 'features.4.3.bn2.weight', 'features.4.3.bn2.bias', 'features.4.3.bn2.running_mean', 'features.4.3.bn2.running_var', 'features.4.3.bn2.num_batches_tracked', 'features.4.3.conv3.weight', 'features.4.3.bn3.weight', 'features.4.3.bn3.bias', 'features.4.3.bn3.running_mean', 'features.4.3.bn3.running_var', 'features.4.3.bn3.num_batches_tracked', 'features.5.0.conv1.weight', 'features.5.0.bn1.weight', 'features.5.0.bn1.bias', 'features.5.0.bn1.running_mean', 'features.5.0.bn1.running_var', 'features.5.0.bn1.num_batches_tracked', 'features.5.0.conv2.weight', 'features.5.0.bn2.weight', 'features.5.0.bn2.bias', 'features.5.0.bn2.running_mean', 'features.5.0.bn2.running_var', 'features.5.0.bn2.num_batches_tracked', 'features.5.0.conv3.weight', 'features.5.0.bn3.weight', 'features.5.0.bn3.bias', 'features.5.0.bn3.running_mean', 'features.5.0.bn3.running_var', 'features.5.0.bn3.num_batches_tracked', 'features.5.0.downsample.0.weight', 'features.5.0.downsample.1.weight', 'features.5.0.downsample.1.bias', 'features.5.0.downsample.1.running_mean', 'features.5.0.downsample.1.running_var', 'features.5.0.downsample.1.num_batches_tracked', 'features.5.1.conv1.weight', 'features.5.1.bn1.weight', 'features.5.1.bn1.bias', 'features.5.1.bn1.running_mean', 'features.5.1.bn1.running_var', 'features.5.1.bn1.num_batches_tracked', 'features.5.1.conv2.weight', 'features.5.1.bn2.weight', 'features.5.1.bn2.bias', 'features.5.1.bn2.running_mean', 'features.5.1.bn2.running_var', 'features.5.1.bn2.num_batches_tracked', 'features.5.1.conv3.weight', 'features.5.1.bn3.weight', 'features.5.1.bn3.bias', 'features.5.1.bn3.running_mean', 'features.5.1.bn3.running_var', 'features.5.1.bn3.num_batches_tracked', 'features.5.2.conv1.weight', 'features.5.2.bn1.weight', 'features.5.2.bn1.bias', 'features.5.2.bn1.running_mean', 'features.5.2.bn1.running_var', 'features.5.2.bn1.num_batches_tracked', 'features.5.2.conv2.weight', 'features.5.2.bn2.weight', 'features.5.2.bn2.bias', 'features.5.2.bn2.running_mean', 'features.5.2.bn2.running_var', 'features.5.2.bn2.num_batches_tracked', 'features.5.2.conv3.weight', 'features.5.2.bn3.weight', 'features.5.2.bn3.bias', 'features.5.2.bn3.running_mean', 'features.5.2.bn3.running_var', 'features.5.2.bn3.num_batches_tracked', 'features.5.3.conv1.weight', 'features.5.3.bn1.weight', 'features.5.3.bn1.bias', 'features.5.3.bn1.running_mean', 'features.5.3.bn1.running_var', 'features.5.3.bn1.num_batches_tracked', 'features.5.3.conv2.weight', 'features.5.3.bn2.weight', 'features.5.3.bn2.bias', 'features.5.3.bn2.running_mean', 'features.5.3.bn2.running_var', 'features.5.3.bn2.num_batches_tracked', 'features.5.3.conv3.weight', 'features.5.3.bn3.weight', 'features.5.3.bn3.bias', 'features.5.3.bn3.running_mean', 'features.5.3.bn3.running_var', 'features.5.3.bn3.num_batches_tracked', 'features.5.4.conv1.weight', 'features.5.4.bn1.weight', 'features.5.4.bn1.bias', 'features.5.4.bn1.running_mean', 
'features.5.4.bn1.running_var', 'features.5.4.bn1.num_batches_tracked', 'features.5.4.conv2.weight', 'features.5.4.bn2.weight', 'features.5.4.bn2.bias', 'features.5.4.bn2.running_mean', 'features.5.4.bn2.running_var', 'features.5.4.bn2.num_batches_tracked', 'features.5.4.conv3.weight', 'features.5.4.bn3.weight', 'features.5.4.bn3.bias', 'features.5.4.bn3.running_mean', 'features.5.4.bn3.running_var', 'features.5.4.bn3.num_batches_tracked', 'features.5.5.conv1.weight', 'features.5.5.bn1.weight', 'features.5.5.bn1.bias', 'features.5.5.bn1.running_mean', 'features.5.5.bn1.running_var', 'features.5.5.bn1.num_batches_tracked', 'features.5.5.conv2.weight', 'features.5.5.bn2.weight', 'features.5.5.bn2.bias', 'features.5.5.bn2.running_mean', 'features.5.5.bn2.running_var', 'features.5.5.bn2.num_batches_tracked', 'features.5.5.conv3.weight', 'features.5.5.bn3.weight', 'features.5.5.bn3.bias', 'features.5.5.bn3.running_mean', 'features.5.5.bn3.running_var', 'features.5.5.bn3.num_batches_tracked', 'features.5.6.conv1.weight', 'features.5.6.bn1.weight', 'features.5.6.bn1.bias', 'features.5.6.bn1.running_mean', 'features.5.6.bn1.running_var', 'features.5.6.bn1.num_batches_tracked', 'features.5.6.conv2.weight', 'features.5.6.bn2.weight', 'features.5.6.bn2.bias', 'features.5.6.bn2.running_mean', 'features.5.6.bn2.running_var', 'features.5.6.bn2.num_batches_tracked', 'features.5.6.conv3.weight', 'features.5.6.bn3.weight', 'features.5.6.bn3.bias', 'features.5.6.bn3.running_mean', 'features.5.6.bn3.running_var', 'features.5.6.bn3.num_batches_tracked', 'features.5.7.conv1.weight', 'features.5.7.bn1.weight', 'features.5.7.bn1.bias', 'features.5.7.bn1.running_mean', 'features.5.7.bn1.running_var', 'features.5.7.bn1.num_batches_tracked', 'features.5.7.conv2.weight', 'features.5.7.bn2.weight', 'features.5.7.bn2.bias', 'features.5.7.bn2.running_mean', 'features.5.7.bn2.running_var', 'features.5.7.bn2.num_batches_tracked', 'features.5.7.conv3.weight', 'features.5.7.bn3.weight', 'features.5.7.bn3.bias', 'features.5.7.bn3.running_mean', 'features.5.7.bn3.running_var', 'features.5.7.bn3.num_batches_tracked', 'features.5.8.conv1.weight', 'features.5.8.bn1.weight', 'features.5.8.bn1.bias', 'features.5.8.bn1.running_mean', 'features.5.8.bn1.running_var', 'features.5.8.bn1.num_batches_tracked', 'features.5.8.conv2.weight', 'features.5.8.bn2.weight', 'features.5.8.bn2.bias', 'features.5.8.bn2.running_mean', 'features.5.8.bn2.running_var', 'features.5.8.bn2.num_batches_tracked', 'features.5.8.conv3.weight', 'features.5.8.bn3.weight', 'features.5.8.bn3.bias', 'features.5.8.bn3.running_mean', 'features.5.8.bn3.running_var', 'features.5.8.bn3.num_batches_tracked', 'features.5.9.conv1.weight', 'features.5.9.bn1.weight', 'features.5.9.bn1.bias', 'features.5.9.bn1.running_mean', 'features.5.9.bn1.running_var', 'features.5.9.bn1.num_batches_tracked', 'features.5.9.conv2.weight', 'features.5.9.bn2.weight', 'features.5.9.bn2.bias', 'features.5.9.bn2.running_mean', 'features.5.9.bn2.running_var', 'features.5.9.bn2.num_batches_tracked', 'features.5.9.conv3.weight', 'features.5.9.bn3.weight', 'features.5.9.bn3.bias', 'features.5.9.bn3.running_mean', 'features.5.9.bn3.running_var', 'features.5.9.bn3.num_batches_tracked', 'features.5.10.conv1.weight', 'features.5.10.bn1.weight', 'features.5.10.bn1.bias', 'features.5.10.bn1.running_mean', 'features.5.10.bn1.running_var', 'features.5.10.bn1.num_batches_tracked', 'features.5.10.conv2.weight', 'features.5.10.bn2.weight', 'features.5.10.bn2.bias', 'features.5.10.bn2.running_mean', 
'features.5.10.bn2.running_var', 'features.5.10.bn2.num_batches_tracked', 'features.5.10.conv3.weight', 'features.5.10.bn3.weight', 'features.5.10.bn3.bias', 'features.5.10.bn3.running_mean', 'features.5.10.bn3.running_var', 'features.5.10.bn3.num_batches_tracked', 'features.5.11.conv1.weight', 'features.5.11.bn1.weight', 'features.5.11.bn1.bias', 'features.5.11.bn1.running_mean', 'features.5.11.bn1.running_var', 'features.5.11.bn1.num_batches_tracked', 'features.5.11.conv2.weight', 'features.5.11.bn2.weight', 'features.5.11.bn2.bias', 'features.5.11.bn2.running_mean', 'features.5.11.bn2.running_var', 'features.5.11.bn2.num_batches_tracked', 'features.5.11.conv3.weight', 'features.5.11.bn3.weight', 'features.5.11.bn3.bias', 'features.5.11.bn3.running_mean', 'features.5.11.bn3.running_var', 'features.5.11.bn3.num_batches_tracked', 'features.5.12.conv1.weight', 'features.5.12.bn1.weight', 'features.5.12.bn1.bias', 'features.5.12.bn1.running_mean', 'features.5.12.bn1.running_var', 'features.5.12.bn1.num_batches_tracked', 'features.5.12.conv2.weight', 'features.5.12.bn2.weight', 'features.5.12.bn2.bias', 'features.5.12.bn2.running_mean', 'features.5.12.bn2.running_var', 'features.5.12.bn2.num_batches_tracked', 'features.5.12.conv3.weight', 'features.5.12.bn3.weight', 'features.5.12.bn3.bias', 'features.5.12.bn3.running_mean', 'features.5.12.bn3.running_var', 'features.5.12.bn3.num_batches_tracked', 'features.5.13.conv1.weight', 'features.5.13.bn1.weight', 'features.5.13.bn1.bias', 'features.5.13.bn1.running_mean', 'features.5.13.bn1.running_var', 'features.5.13.bn1.num_batches_tracked', 'features.5.13.conv2.weight', 'features.5.13.bn2.weight', 'features.5.13.bn2.bias', 'features.5.13.bn2.running_mean', 'features.5.13.bn2.running_var', 'features.5.13.bn2.num_batches_tracked', 'features.5.13.conv3.weight', 'features.5.13.bn3.weight', 'features.5.13.bn3.bias', 'features.5.13.bn3.running_mean', 'features.5.13.bn3.running_var', 'features.5.13.bn3.num_batches_tracked', 'features.5.14.conv1.weight', 'features.5.14.bn1.weight', 'features.5.14.bn1.bias', 'features.5.14.bn1.running_mean', 'features.5.14.bn1.running_var', 'features.5.14.bn1.num_batches_tracked', 'features.5.14.conv2.weight', 'features.5.14.bn2.weight', 'features.5.14.bn2.bias', 'features.5.14.bn2.running_mean', 'features.5.14.bn2.running_var', 'features.5.14.bn2.num_batches_tracked', 'features.5.14.conv3.weight', 'features.5.14.bn3.weight', 'features.5.14.bn3.bias', 'features.5.14.bn3.running_mean', 'features.5.14.bn3.running_var', 'features.5.14.bn3.num_batches_tracked', 'features.5.15.conv1.weight', 'features.5.15.bn1.weight', 'features.5.15.bn1.bias', 'features.5.15.bn1.running_mean', 'features.5.15.bn1.running_var', 'features.5.15.bn1.num_batches_tracked', 'features.5.15.conv2.weight', 'features.5.15.bn2.weight', 'features.5.15.bn2.bias', 'features.5.15.bn2.running_mean', 'features.5.15.bn2.running_var', 'features.5.15.bn2.num_batches_tracked', 'features.5.15.conv3.weight', 'features.5.15.bn3.weight', 'features.5.15.bn3.bias', 'features.5.15.bn3.running_mean', 'features.5.15.bn3.running_var', 'features.5.15.bn3.num_batches_tracked', 'features.5.16.conv1.weight', 'features.5.16.bn1.weight', 'features.5.16.bn1.bias', 'features.5.16.bn1.running_mean', 'features.5.16.bn1.running_var', 'features.5.16.bn1.num_batches_tracked', 'features.5.16.conv2.weight', 'features.5.16.bn2.weight', 'features.5.16.bn2.bias', 'features.5.16.bn2.running_mean', 'features.5.16.bn2.running_var', 'features.5.16.bn2.num_batches_tracked', 
'features.5.16.conv3.weight', 'features.5.16.bn3.weight', 'features.5.16.bn3.bias', 'features.5.16.bn3.running_mean', 'features.5.16.bn3.running_var', 'features.5.16.bn3.num_batches_tracked', 'features.5.17.conv1.weight', 'features.5.17.bn1.weight', 'features.5.17.bn1.bias', 'features.5.17.bn1.running_mean', 'features.5.17.bn1.running_var', 'features.5.17.bn1.num_batches_tracked', 'features.5.17.conv2.weight', 'features.5.17.bn2.weight', 'features.5.17.bn2.bias', 'features.5.17.bn2.running_mean', 'features.5.17.bn2.running_var', 'features.5.17.bn2.num_batches_tracked', 'features.5.17.conv3.weight', 'features.5.17.bn3.weight', 'features.5.17.bn3.bias', 'features.5.17.bn3.running_mean', 'features.5.17.bn3.running_var', 'features.5.17.bn3.num_batches_tracked', 'features.5.18.conv1.weight', 'features.5.18.bn1.weight', 'features.5.18.bn1.bias', 'features.5.18.bn1.running_mean', 'features.5.18.bn1.running_var', 'features.5.18.bn1.num_batches_tracked', 'features.5.18.conv2.weight', 'features.5.18.bn2.weight', 'features.5.18.bn2.bias', 'features.5.18.bn2.running_mean', 'features.5.18.bn2.running_var', 'features.5.18.bn2.num_batches_tracked', 'features.5.18.conv3.weight', 'features.5.18.bn3.weight', 'features.5.18.bn3.bias', 'features.5.18.bn3.running_mean', 'features.5.18.bn3.running_var', 'features.5.18.bn3.num_batches_tracked', 'features.5.19.conv1.weight', 'features.5.19.bn1.weight', 'features.5.19.bn1.bias', 'features.5.19.bn1.running_mean', 'features.5.19.bn1.running_var', 'features.5.19.bn1.num_batches_tracked', 'features.5.19.conv2.weight', 'features.5.19.bn2.weight', 'features.5.19.bn2.bias', 'features.5.19.bn2.running_mean', 'features.5.19.bn2.running_var', 'features.5.19.bn2.num_batches_tracked', 'features.5.19.conv3.weight', 'features.5.19.bn3.weight', 'features.5.19.bn3.bias', 'features.5.19.bn3.running_mean', 'features.5.19.bn3.running_var', 'features.5.19.bn3.num_batches_tracked', 'features.5.20.conv1.weight', 'features.5.20.bn1.weight', 'features.5.20.bn1.bias', 'features.5.20.bn1.running_mean', 'features.5.20.bn1.running_var', 'features.5.20.bn1.num_batches_tracked', 'features.5.20.conv2.weight', 'features.5.20.bn2.weight', 'features.5.20.bn2.bias', 'features.5.20.bn2.running_mean', 'features.5.20.bn2.running_var', 'features.5.20.bn2.num_batches_tracked', 'features.5.20.conv3.weight', 'features.5.20.bn3.weight', 'features.5.20.bn3.bias', 'features.5.20.bn3.running_mean', 'features.5.20.bn3.running_var', 'features.5.20.bn3.num_batches_tracked', 'features.5.21.conv1.weight', 'features.5.21.bn1.weight', 'features.5.21.bn1.bias', 'features.5.21.bn1.running_mean', 'features.5.21.bn1.running_var', 'features.5.21.bn1.num_batches_tracked', 'features.5.21.conv2.weight', 'features.5.21.bn2.weight', 'features.5.21.bn2.bias', 'features.5.21.bn2.running_mean', 'features.5.21.bn2.running_var', 'features.5.21.bn2.num_batches_tracked', 'features.5.21.conv3.weight', 'features.5.21.bn3.weight', 'features.5.21.bn3.bias', 'features.5.21.bn3.running_mean', 'features.5.21.bn3.running_var', 'features.5.21.bn3.num_batches_tracked', 'features.5.22.conv1.weight', 'features.5.22.bn1.weight', 'features.5.22.bn1.bias', 'features.5.22.bn1.running_mean', 'features.5.22.bn1.running_var', 'features.5.22.bn1.num_batches_tracked', 'features.5.22.conv2.weight', 'features.5.22.bn2.weight', 'features.5.22.bn2.bias', 'features.5.22.bn2.running_mean', 'features.5.22.bn2.running_var', 'features.5.22.bn2.num_batches_tracked', 'features.5.22.conv3.weight', 'features.5.22.bn3.weight', 'features.5.22.bn3.bias', 
'features.5.22.bn3.running_mean', 'features.5.22.bn3.running_var', 'features.5.22.bn3.num_batches_tracked', 'features.6.0.conv1.weight', 'features.6.0.bn1.weight', 'features.6.0.bn1.bias', 'features.6.0.bn1.running_mean', 'features.6.0.bn1.running_var', 'features.6.0.bn1.num_batches_tracked', 'features.6.0.conv2.weight', 'features.6.0.bn2.weight', 'features.6.0.bn2.bias', 'features.6.0.bn2.running_mean', 'features.6.0.bn2.running_var', 'features.6.0.bn2.num_batches_tracked', 'features.6.0.conv3.weight', 'features.6.0.bn3.weight', 'features.6.0.bn3.bias', 'features.6.0.bn3.running_mean', 'features.6.0.bn3.running_var', 'features.6.0.bn3.num_batches_tracked', 'features.6.0.downsample.0.weight', 'features.6.0.downsample.1.weight', 'features.6.0.downsample.1.bias', 'features.6.0.downsample.1.running_mean', 'features.6.0.downsample.1.running_var', 'features.6.0.downsample.1.num_batches_tracked', 'features.6.1.conv1.weight', 'features.6.1.bn1.weight', 'features.6.1.bn1.bias', 'features.6.1.bn1.running_mean', 'features.6.1.bn1.running_var', 'features.6.1.bn1.num_batches_tracked', 'features.6.1.conv2.weight', 'features.6.1.bn2.weight', 'features.6.1.bn2.bias', 'features.6.1.bn2.running_mean', 'features.6.1.bn2.running_var', 'features.6.1.bn2.num_batches_tracked', 'features.6.1.conv3.weight', 'features.6.1.bn3.weight', 'features.6.1.bn3.bias', 'features.6.1.bn3.running_mean', 'features.6.1.bn3.running_var', 'features.6.1.bn3.num_batches_tracked', 'features.6.2.conv1.weight', 'features.6.2.bn1.weight', 'features.6.2.bn1.bias', 'features.6.2.bn1.running_mean', 'features.6.2.bn1.running_var', 'features.6.2.bn1.num_batches_tracked', 'features.6.2.conv2.weight', 'features.6.2.bn2.weight', 'features.6.2.bn2.bias', 'features.6.2.bn2.running_mean', 'features.6.2.bn2.running_var', 'features.6.2.bn2.num_batches_tracked', 'features.6.2.conv3.weight', 'features.6.2.bn3.weight', 'features.6.2.bn3.bias', 'features.6.2.bn3.running_mean', 'features.6.2.bn3.running_var', 'features.6.2.bn3.num_batches_tracked', 'layer1.0.weight'])

I also printed the shapes of the conv layers in features.n.k.weight; there are blocks consisting of 1x1, 3x3, 1x1 conv layers and batch-norm layers. Could you share the network architecture, or upload the VGG-16 pretrained weights for testing?

Thank you.
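For anyone hitting the same problem, a quick way to check which architecture a checkpoint was saved from is to compare its keys against torchvision's VGG-16 (the checkpoint file name below is a placeholder):

    import torch
    from torchvision.models import vgg16

    state = torch.load("pretrained.pth", map_location="cpu")   # placeholder file name
    ckpt_keys = set(state.keys())
    vgg_keys = set(vgg16().state_dict().keys())

    print("keys only in checkpoint:", len(ckpt_keys - vgg_keys))
    print("keys only in vgg16:", len(vgg_keys - ckpt_keys))
    # Bottleneck-style keys such as 'features.5.0.conv3.weight' or any 'downsample'
    # entries point to a ResNet-style backbone rather than VGG-16.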

How to split dataset?

In the paper:

The train/validation/test split used in the experiments consists of 5,200/2,400/2,400 image and label pairs.

In the code, we only have 240 test classes.
So, when training, should we split the train/val sets randomly from the other 760 classes?

Hello,
The paper states:

The train/validation/test split used in the experiments consists of 5,200/2,400/2,400 image and label pairs.

The code only provides the test set:

fss_test_set.txt

So in the training stage, should we randomly select 520 train classes and 240 val classes from outside the test classes, and take the average test-set mIoU over multiple runs as the final result?
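In the absence of an official split file, one reproducible way to draw a 520/240 train/val split from the 760 non-test classes is something like the following (the fixed seed is an arbitrary choice, not the authors'):

    import os
    import random

    all_classes = sorted(os.listdir("fewshot_data"))
    with open("fss_test_set.txt") as f:
        test_classes = {line.strip() for line in f if line.strip()}

    remaining = [c for c in all_classes if c not in test_classes]
    random.Random(0).shuffle(remaining)            # fixed seed for reproducibility
    train_classes, val_classes = remaining[:520], remaining[520:760]
    print(len(train_classes), len(val_classes))    # expect 520 240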

question about supplementary material.

In your supplementary material there is a class named bloodhound, but it does not exist in the dataset. Besides this, there are other classes that cannot be found in the dataset. Why? Can you help me solve this problem? Thank you very much.

I'm having trouble sleeping and eating

Train/Validation/Test split

In the arXiv paper, a train/validation/test split consisting of 5,200/2,400/2,400 image and label pairs is used for the experiments.

However, I cannot find the split details in this repository.

Could you please upload the split details?

dataset mask error and image shape error

There are some images whose shape is not 224 x 224:
fewshot_data/crt_screen/6.png
fewshot_data/screw/8.png
fewshot_data/shower_curtain/3.png

There is a mask that is not a 0-1 matrix:
fewshot_data/bamboo_slip/7.png
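A quick check/fix for the non-binary mask is to threshold it back to a 0/255 image (the 127 threshold is my own choice, not from the repo):

    import numpy as np
    from PIL import Image

    path = "fewshot_data/bamboo_slip/7.png"
    mask = np.array(Image.open(path).convert("L"))
    print(np.unique(mask))                 # anything other than [0 255] means non-binary
    fixed = ((mask > 127).astype(np.uint8)) * 255
    Image.fromarray(fixed).save(path)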

instance annotation

Hi,
Thanks for sharing the data!

There is a problem that bothers me.
In Figure 4 of the paper, different instances of an object class are shown in different colors. However, the annotations are binary segmentation masks rather than instance segmentation masks.
How can I separate different instances using the annotations you provide?

Looking forward to your reply.

Regards
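The released masks only encode foreground/background, so the per-instance colours in Figure 4 are presumably just a visualisation. If the instances in an image do not touch, connected-component labelling on the binary mask is one way to separate them (a sketch, not the authors' pipeline; the file path is illustrative):

    import cv2

    mask = cv2.imread("fewshot_data/ab_wheel/1.png", cv2.IMREAD_GRAYSCALE)
    binary = (mask > 127).astype("uint8")
    num_labels, instance_map = cv2.connectedComponents(binary)
    # instance_map is 0 for background and 1..num_labels-1 for separate blobs;
    # touching or overlapping instances cannot be split this way.
    print("instances found:", num_labels - 1)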

Unseen classes

Hi,
thanks for sharing the code!

There is a problem that bothers me.
In Sec. 3.1 of the paper, "we fill in the other 486 by new classes unseen in any existing datasets".
How can I get more information about these "unseen classes"?

Looking forward to your reply.

Regards
