egyptdj / stagin
STAGIN: Spatio-Temporal Attention Graph Isomorphism Network
Home Page: https://arxiv.org/abs/2105.13495
License: GNU General Public License v3.0
Can you upload the complete runnable code?
I am very interested in your research, and I would like to know about the hcp.csv file in your code's dataset. Could you please provide it?
I want to know the specific format and information contained in the CSV table, such as which subjects are involved and their corresponding information, and whether it is consistent with what I downloaded directly from the HCP.
My Email: [email protected]
It seems that the validation process and the testing process use the same data partitioning and are fed the same data. Maybe I'm reading this wrong, but shouldn't the validation set be split from the training set?
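For context, the usual pattern keeps the three sets disjoint, with validation carved out of the training portion. A generic sketch (the subject counts and split ratios here are placeholders, not the repository's actual setup):

```python
import numpy as np

# Disjoint train/validation/test split over placeholder subject indices.
rng = np.random.default_rng(0)
subjects = rng.permutation(100)

test = subjects[:20]    # held-out test set, never used for model selection
val = subjects[20:40]   # validation set, split from the remaining training pool
train = subjects[40:]

# no subject appears in more than one set
assert not set(val) & set(test)
assert not set(train) & (set(val) | set(test))
```

Reusing the test partition for validation would leak model-selection decisions into the reported test score, which is presumably the concern here.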
Just out of curiosity, how long does it take for the HCPRest dataset to be loaded as a timeseries dictionary?
(In dataset.py: lines 35-40)
I'm trying it on my own dataset and it seems quite slow. But I assume it is not a big problem, since the result is saved as a .pth file.
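As an aside, the caching pattern described above (parse once, then reuse a saved .pth on later runs) can be sketched like this; the cache path and the toy dict contents are placeholders, not the repository's actual loader:

```python
import os
import tempfile
import torch

# Placeholder cache location for the parsed timeseries dictionary.
CACHE = os.path.join(tempfile.gettempdir(), 'timeseries_dict.pth')

def load_timeseries_dict() -> dict:
    if os.path.isfile(CACHE):
        return torch.load(CACHE)  # fast path: skip the slow parsing entirely
    # slow path: a toy stand-in for reading the actual fMRI files
    timeseries = {f'subject_{i}': torch.randn(1200, 400) for i in range(3)}
    torch.save(timeseries, CACHE)
    return timeseries

d = load_timeseries_dict()
```

Only the first run pays the parsing cost; every later run hits the cached file, which is why the initial slowness is usually tolerable.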
Hello, can you explain the meaning of the resting-state cluster diagram? Why is the ratio greater than 1, and how are the DMN and the other networks determined? Honestly, I just don't understand Figure 3 and Figure 6.
Hello, I am very interested in your work. I wonder if you can provide me with the processed data.
Hi, I used the code you provided and the HCP dataset provided by ST-GCN (https://github.com/sgadgil6/cnslab_fmri) for resting-state fMRI classification, but found that the performance of STAGIN was much worse than ST-GCN, and the results reported in your paper could not be reproduced. It seems that on this dataset of about 1000 samples, the more complex STAGIN overfits very easily, while ST-GCN performs better.
Hi, great work, and thanks for your frequent replies before. As I'm new to the field of fMRI analysis, I'm still a little confused about the '7_400_coord.csv' you provided. It seems to be a coordinate table used for calibration, but I haven't found this file in the HCP dataset. Could you please explain how to obtain this file and share some description of its contents? A link to the relevant documentation would also be appreciated.
Hi, great work. I have tried your model on the ABIDE dataset and could not get validation accuracy above 60% (chance level is around 55%), whereas with an SVM I can reach the 70% level. I tuned and played around with the hyperparameters, but with no luck.
To be honest, I would love to hear your speculation on this situation. Do you think the multi-site nature of the data is the source of the problem, or is autism simply not very detectable in fMRI data? Thanks...
Hello, I have been reading your article recently and appreciate your ideas. I have a question: if the HCP data is prepared according to the instructions in the README, the storage consumption is very large. When you used it, did you download it all locally and then perform the subsequent operations?
Hi,
Would appreciate if someone could please provide a sample of the preprocessed data to run the code. Thank you.
How can I use multi GPU to train the data?
I tried to add torch.nn.DataParallel() in experiment.py as follows:

model = ModelSTAGIN(
    input_dim=dataset.num_nodes,
    hidden_dim=argv.hidden_dim,
    num_classes=dataset.num_classes,
    num_heads=argv.num_heads,
    num_layers=argv.num_layers,
    sparsity=argv.sparsity,
    dropout=argv.dropout,
    cls_token=argv.cls_token,
    readout=argv.readout)
model = torch.nn.DataParallel(model)
model.to(device)
but I get the following error:
Original Traceback (most recent call last):
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/stagin/model.py", line 159, in forward
h = torch.cat([v, time_encoding], dim=3)
RuntimeError: Sizes of tensors must match except in dimension 3. Expected size 4 but got size 15 for tensor number 1 in the list.
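In case it helps with debugging: nn.DataParallel scatters every forward input along dim 0 across the replicas, so a full-batch tensor that the model builds or holds outside that scatter no longer matches the scattered chunk. A minimal CPU-only sketch of this failure mode (the shapes here are made up and do not match STAGIN's actual tensors):

```python
import torch

batch, time, nodes, feat = 8, 15, 4, 16
v = torch.randn(batch, time, nodes, feat)
time_encoding = torch.randn(batch, time, nodes, feat)  # precomputed for the full batch

h = torch.cat([v, time_encoding], dim=3)  # full batch: shapes agree
assert h.shape == (batch, time, nodes, 2 * feat)

# Simulate DataParallel's scatter over 2 devices: v is chunked along dim 0,
# but the full-batch time_encoding is not.
v_chunk = torch.chunk(v, 2, dim=0)[0]
mismatched = False
try:
    torch.cat([v_chunk, time_encoding], dim=3)
except RuntimeError:
    mismatched = True  # sizes differ outside the concat dim, so cat() fails
assert mismatched
```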
Hi, again great work.
In your paper, you mention that for training, the task data is sampled with dynamic_length 150 and the resting-state data with dynamic_length 600. Although the code to do this exists for the task dataset class, it does not for the resting-state dataset class. I think this is reflected in the accuracy I get when I run the code for gender classification: ~83%, compared to the 87% mentioned in the paper.
Just wanted to mention it; maybe you could fix it. Thanks...
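For reference, the kind of sampling the paper describes (a random temporal crop to a fixed dynamic_length) can be sketched like this; the function name and array shapes are my own placeholders, not the repository's API:

```python
import torch

def random_temporal_crop(timeseries: torch.Tensor, dynamic_length: int) -> torch.Tensor:
    """Randomly crop a (time, num_rois) timeseries to dynamic_length frames."""
    total = timeseries.shape[0]
    if total <= dynamic_length:
        return timeseries  # runs shorter than the crop are kept whole
    start = torch.randint(0, total - dynamic_length + 1, (1,)).item()
    return timeseries[start:start + dynamic_length]

rest = torch.randn(1200, 400)              # e.g. a resting-state run with 400 ROIs
cropped = random_temporal_crop(rest, 600)  # -> shape (600, 400)
```

Applying the same crop in the resting-state dataset class would make its sampling consistent with the task class.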
Hi, great work. I wonder which .nii.gz file we should feed to the model: the LR or the RL phase-encoded recordings? Thanks a bunch...
Hello! How can I get the files in 'behavioral'? What does the task data look like? Also, where can the '7_400_coord.csv' mentioned in the article be downloaded? Thanks!