jmkim0309 / fewshot-egnn (License: MIT License)
Hi, your model is quite amazing. But since I use Windows, my root path is always invalid. What else am I supposed to change besides tt.arg.dataset_root in train.py? Changing only that to the absolute path of the file mini_imagenet_train.pickle still raises a FileNotFoundError.
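One common cause (a guess, since the exact loader code isn't shown here): the loader usually joins dataset_root with a sub-path before opening the pickle, so dataset_root must point at the folder that contains the file, not at the file itself. A minimal sketch using pathlib, which handles Windows backslashes; the directory layout below is an assumption:

```python
from pathlib import Path

# Point dataset_root at the FOLDER containing the pickle files, not at the
# pickle file itself. If the loader joins dataset_root with a sub-path,
# passing the file's absolute path produces an invalid joined path and
# raises FileNotFoundError.
dataset_root = Path(r"C:\datasets\mini-imagenet")  # raw string avoids escape issues

pickle_path = dataset_root / "mini_imagenet_train.pickle"  # assumed layout
print(pickle_path)  # separator handling is done by pathlib
```

Printing the final joined path from inside the loader is usually the fastest way to see which piece of the path is wrong.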
While going through your data loader, I noticed that in fewshot-egnn/data.py you randomly pick class labels with `task_class_list = random.sample(full_class_list, num_ways)`, but never actually pass these labels in your batch. Instead we have
https://github.com/khy0809/fewshot-egnn/blob/205fa80ec7cb12550f7b52a63f921171f92dac4c/data.py#L106
https://github.com/khy0809/fewshot-egnn/blob/205fa80ec7cb12550f7b52a63f921171f92dac4c/data.py#L111
which assign what look like the wrong class labels ([0, ..., num_ways-1]) to the picked data points. Shouldn't the support as well as the query labels be assigned task_class_list[c_idx]?
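For context: remapping the sampled global class IDs to local labels 0..num_ways-1 is the standard convention in episodic few-shot training, so this is very likely intentional rather than a bug. Every episode then shares the same label space, and the model learns to match queries to supports instead of memorizing global class IDs. A minimal sketch (function and variable names are illustrative, not the repo's):

```python
import random

def sample_episode(full_class_list, data_by_class, num_ways, num_shots):
    """Sample one episode, remapping global class IDs to local labels 0..num_ways-1."""
    task_class_list = random.sample(full_class_list, num_ways)
    support = []
    for c_idx, global_class in enumerate(task_class_list):
        for item in random.sample(data_by_class[global_class], num_shots):
            # the local label c_idx is used on purpose: every episode shares
            # the label space {0, ..., num_ways-1}, so the classifier head
            # never needs to know which global classes were drawn
            support.append((item, c_idx))
    return task_class_list, support

data = {c: [f"img_{c}_{i}" for i in range(20)] for c in range(64)}
classes, support = sample_episode(list(data), data, num_ways=5, num_shots=5)
print(sorted({lbl for _, lbl in support}))   # [0, 1, 2, 3, 4] in every episode
```

Assigning task_class_list[c_idx] instead would leak the global label space into the episode and break the N-way classification head.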
Hi, this is good work and very helpful. But I wonder how you generated the pickle files from the original images, because when I use my own pickles (generated from the original images via Resize(84) and CenterCrop(84) from torchvision.transforms), the performance decreases significantly, from 77% to 73%.
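One possible source of the gap (an assumption, since the released preprocessing script isn't quoted here): `Resize(84)` with an integer scales only the shorter side to 84 and keeps the aspect ratio, while `Resize((84, 84))` squashes the whole image to a square, and the two give different pixels even before cropping. Interpolation mode matters too. A pure-Python sketch of the geometry, so the two conventions can be compared without loading any images:

```python
def resize_shorter_side(w, h, size):
    # torchvision's Resize(int) matches the SHORTER side to `size`,
    # keeping the aspect ratio; Resize((size, size)) squashes to a square
    if w < h:
        return size, round(h * size / w)
    return round(w * size / h), size

def center_crop_box(w, h, size):
    # box CenterCrop(size) would cut: (left, top, right, bottom)
    left = (w - size) // 2
    top = (h - size) // 2
    return left, top, left + size, top + size

w, h = resize_shorter_side(500, 375, 84)   # e.g. a landscape ImageNet photo
print(w, h)                                # (112, 84): shorter side becomes 84
print(center_crop_box(w, h, 84))           # (14, 0, 98, 84)
```

It may be worth checking which convention (and which interpolation) the released pickles were built with before concluding the model itself is at fault.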
Hello:
In the 5-way 1-shot setting, the query-set labels of each batch during training and testing are [0,1,2,3,4], and no scrambling is performed. Will this make the network memorize this setting and inflate the accuracy?
In other few-shot learning papers I have read (GNN, Relation Network), the labels of the query set are shuffled, so I followed that idea and randomly scrambled only the query-set labels of each test batch in your source code, e.g. [1,4,2,0,3], [1,2,4,0,3], [2,0,4,3,1], and so on. init_edge is also built from the modified labels, and the generated matrix is still a 10x10 symmetric matrix, yet the accuracy is only about 43%, far from the 66.27% I get with your unmodified source code. I also scrambled during both the training and testing phases, and the result was still about 43%. My initial understanding of graph networks was that the order of the node labels should have no effect on accuracy, because the data is structured as a graph and everything is relative, so this huge accuracy gap confuses me. Did I set something up wrong?
Thank you very much!
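One pitfall worth ruling out (a guess, since the modification itself isn't shown): if a class permutation is applied to the query labels only, the query labels no longer match their own support classes, so both the ground-truth edge matrix and the loss targets become wrong, which would explain a drop toward chance. A relabeling must be applied to support and query labels together; then the induced edge matrix is unchanged, because edges only test label equality. A minimal sketch, assuming init_edge links same-labeled nodes:

```python
def init_edge(labels):
    # edge (i, j) = 1 iff nodes i and j carry the same class label;
    # symmetric by construction
    n = len(labels)
    return [[int(labels[i] == labels[j]) for j in range(n)] for i in range(n)]

support = [0, 1, 2, 3, 4]           # 5-way 1-shot support labels
query   = [0, 1, 2, 3, 4]           # one query per class, same class order

perm = [1, 4, 2, 0, 3]              # a class relabeling: 0->1, 1->4, 2->2, ...

# consistent scramble: relabel BOTH halves with the same permutation
both = [perm[c] for c in support] + [perm[c] for c in query]
# inconsistent scramble: relabel only the query half
only_q = support + [perm[c] for c in query]

assert init_edge(support + query) == init_edge(both)    # graph unchanged
assert init_edge(support + query) != init_edge(only_q)  # graph corrupted
```

If only_q-style labels were also fed to the loss, the network would be trained and evaluated against targets that contradict the node features, so ~43% would not be surprising.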
Hello, I found `full_edge_loss_layers = [self.edge_loss((1-full_logit_layer[:, 0]), (1-full_edge[:, 0])) for full_logit_layer in full_logit_layers]`.
Why is it `self.edge_loss((1-full_logit_layer[:, 0]), (1-full_edge[:, 0]))`
and not `self.edge_loss(full_logit_layer[:, 0], full_edge[:, 0])`?
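Worth noting: if `edge_loss` is a plain, unweighted binary cross-entropy, the two forms are algebraically identical, since replacing (p, y) with (1-p, 1-y) just swaps the two terms of -[y log p + (1-y) log(1-p)]. The flipped form only changes the result when per-edge weighting or masking treats the positive and negative classes differently, which I cannot verify from this line alone. A quick numeric check, with `bce` as a stand-in for an unweighted BCE:

```python
import math

def bce(p, y):
    # elementwise binary cross-entropy, averaged over the batch
    return -sum(t * math.log(q) + (1 - t) * math.log(1 - q)
                for q, t in zip(p, y)) / len(p)

p = [0.9, 0.2, 0.7, 0.4]          # predicted edge probabilities
y = [1.0, 0.0, 1.0, 1.0]          # ground-truth edge labels

flipped = bce([1 - q for q in p], [1 - t for t in y])
assert abs(bce(p, y) - flipped) < 1e-12   # identical for unweighted BCE
```

So the question reduces to whether the repo's `edge_loss` applies class-dependent weights; if it does, the flipped call emphasizes the "no-edge" class.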
Hi,
How can I get the images out of mini_imagenet_train.pickle? Thanks!
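The exact structure of mini_imagenet_train.pickle varies between releases; a common layout is a dict mapping class name to a list/array of images, but that is an assumption, so the safest first step is to inspect whatever the file actually contains. A self-contained sketch (it builds a tiny stand-in pickle so it runs anywhere):

```python
import pickle

# Tiny stand-in pickle so the sketch is self-contained; the real file's
# structure is an assumption, so inspect it before trying to extract images.
dummy = {"n01532829": [bytes(84 * 84 * 3)], "n01558993": [bytes(84 * 84 * 3)]}
with open("demo.pickle", "wb") as f:
    pickle.dump(dummy, f)

with open("demo.pickle", "rb") as f:
    data = pickle.load(f)

print(type(data))                             # find the container type first
if isinstance(data, dict):
    for key, value in list(data.items())[:3]:
        print(key, type(value), len(value))   # then the per-class layout
```

Once the layout is known, the per-class arrays can be written out as images, e.g. with PIL's Image.fromarray if they are uint8 HxWx3 arrays.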
I can't get the pickle file for the tiered-ImageNet dataset. Could you give me some help, please? Thanks!
Hi,
I did not find a link to the tiered-ImageNet dataset in README.md.
First, thanks for your great work!
I have trouble opening the log files to see the printed information.
For example: events.out.tfevents.1566895592.DA-DL-02
I have already searched on Google, but I can't find a solution.
Hi! Thanks for your amazing work. I was trying to load the features, but I was wondering what I should set the node and edge features to if I load my own features, which are extracted from skeleton data.
Hi! Your work is really good!
But could you please provide the .csv file and the link to download tiered-imagenet images?
Thanks.
I think you have done great work. Could you please remove the Google Drive download limit?
May I ask: if I want to test with my own image data, should I first convert it to the CSV file format and then to the pickle data format?
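For what it's worth, the mini-ImageNet-style CSVs are just filename/label lists; the CSV is only an intermediate index, and what the loader ultimately needs is the pickled, per-class grouping of (preprocessed) images. A hypothetical stdlib-only sketch of that pipeline, with placeholder filenames instead of real image loading:

```python
import csv
import pickle
from collections import defaultdict

# Hypothetical CSV in the mini-ImageNet style: a header, then filename,label rows
rows = [("img_0001.jpg", "classA"), ("img_0002.jpg", "classA"),
        ("img_0003.jpg", "classB")]
with open("my_split.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "label"])
    writer.writerows(rows)

# Group filenames by label; a real converter would open and resize each
# image here (e.g. to 84x84) and store the pixel array instead of the name
grouped = defaultdict(list)
with open("my_split.csv") as f:
    for row in csv.DictReader(f):
        grouped[row["label"]].append(row["filename"])

with open("my_split.pickle", "wb") as f:
    pickle.dump(dict(grouped), f)
```

So the CSV step is optional: if your data is already organized one-folder-per-class, you can build the grouped dict directly and pickle it.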
Hello,
your work is really good.
But when I try a 5-way 5-shot experiment on miniImageNet (transductive method), my result does not reach the 76.37% you reported. I ran the experiment with just the command from the README, 'python3 trainer.py --dataset mini --num_ways 5 --num_shots 5 --trainsductive True'. Since I don't have a file named 'trainer.py', I changed 'trainer.py' to 'train.py'. Did you make any other adjustments when running the experiments?
Thank you very much!
How can I use the wget command to download 'mini_imagenet_test.pickle'?
Hi, your work is really good, but I wonder why you don't use the DataLoader from PyTorch? My training time was about 14 hours on two 1080 Ti GPUs.
Hi, It is a nice work! But I have a question about the training and testing settings.
For training, each task has one query sample per class (5 * 1 = 5 queries per task). When testing, the performance of the model should not change much with the number of queries per task, but my experiment shows that the performance drops a lot when there is more than one query per class (more than 1 * 5 queries in a task) at test time. Specifically, the performance drops to 11% when each test episode is formed by sampling 15 queries for each of the 5 classes.
I am wondering if the model is overfitting for the specific setting: 1 query for each of 5 classes for a task.
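One way to separate "the model overfits the graph size" from "the model is broken" (an assumption about the cause, not a confirmed diagnosis): keep the graph the same size as in training and simply run more episodes, i.e. split the 15 queries per class into 15 independent 5-query episodes. If accuracy recovers, the model has specialized to the training-time graph topology rather than failed outright. A sketch of the splitting (names are illustrative):

```python
def episodes_from_queries(queries_per_class, num_ways=5, train_queries=1):
    """Split a large test episode into smaller episodes that match the
    training-time graph size (train_queries queries per class)."""
    # queries_per_class: one list of queries per class, assumed equal length
    n = len(queries_per_class[0])
    episodes = []
    for start in range(0, n, train_queries):
        episode = []
        for c_idx in range(num_ways):
            for q in queries_per_class[c_idx][start:start + train_queries]:
                episode.append((q, c_idx))
        episodes.append(episode)
    return episodes

qpc = [[f"c{c}_q{i}" for i in range(15)] for c in range(5)]
eps = episodes_from_queries(qpc)
print(len(eps), len(eps[0]))   # 15 episodes of 5 queries each
```

Since the edge network sees query-query edges, a model trained only on 10-node graphs has never observed the edge statistics of 80-node graphs, so some size sensitivity is plausible.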
Hello, I'm just a little confused about the evaluation setup.
In Section 4.2 of your paper, it is said that 'For evaluation, each test episode was formed by randomly sampling 15 queries for each of 5 classes, and the performance is averaged over 600 randomly generated episodes from the test set.' I think it means every test episode has (5 * 15 =) 75 queries and 75 graphs are formed from each test episode under the non-transductive setting.
However, according to your released code, it seems that every val/test episode only has (5 * 1 =) 5 queries. And you randomly sample 10,000 episodes for validating/testing.
I'm just wondering which evaluation setup you use when obtaining the results in your paper. And have you tried them both? If so, is there any difference between the results obtained by following these two evaluation setups? Thank you!
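Whichever setup is used, few-shot papers conventionally report the mean accuracy over the sampled episodes together with a 95% confidence interval, so the two setups are comparable as long as the interval is reported. A stdlib sketch of that aggregation, with simulated per-episode accuracies standing in for real test-loop outputs:

```python
import math
import random
import statistics

random.seed(0)
# stand-in per-episode accuracies; in practice these come from the test loop
episode_acc = [random.gauss(0.66, 0.05) for _ in range(600)]

mean = statistics.fmean(episode_acc)
# 95% CI half-width: 1.96 * stddev / sqrt(n) (normal approximation)
half = 1.96 * statistics.stdev(episode_acc) / math.sqrt(len(episode_acc))
print(f"{mean:.4f} +/- {half:.4f}")
```

Note the interval shrinks with sqrt(n), so 10,000 five-query episodes and 600 seventy-five-query episodes can yield similarly tight estimates despite the different per-episode variance.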
Hi, very nice work! I just have a little question. How can I train this model under semi-supervised learning? I cannot find the arg for semi-supervised learning in the files. Thx!
Why do you do the operation `edge_feat = edge_feat + force_edge_feat`?
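A guess at the intent, since only the single line is quoted: in EGNN-style models, edges between labeled support nodes have known ground truth, so their edge features can be fixed from the labels while query-incident edges start uncertain (e.g. 0.5); a "force" term is one plausible way to re-impose those known entries on top of the model's edge features. A minimal pure-Python sketch of that idea (not the repo's actual code):

```python
def init_edge_feat(labels, num_support):
    # edges among the first num_support (labeled) nodes are known from the
    # labels; edges touching a query node start at the uninformative 0.5
    n = len(labels)
    feat = [[0.5] * n for _ in range(n)]
    for i in range(num_support):
        for j in range(num_support):
            feat[i][j] = float(labels[i] == labels[j])
    return feat

# a "force" term can then re-impose the known support-support entries on top
# of predicted edge features, which is one plausible reading of the quoted
# `edge_feat = edge_feat + force_edge_feat`
feat = init_edge_feat([0, 1, 0, 1], num_support=2)
print(feat[0][1], feat[0][3])   # 0.0 (known no-edge), 0.5 (unknown)
```

Checking how `force_edge_feat` is constructed in the repo (which entries are nonzero) would confirm or refute this reading.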