
stagin's People

Contributors

egyptdj


stagin's Issues

'behavioral' and 'hcp.csv' in dataset.py?

I am very interested in your research, and I would like to know about the hcp.csv file used by your dataset code. Could you please provide it?
I would like to know the specific format and information contained in the CSV table, such as which subjects are included and their corresponding information, and whether it is consistent with what I downloaded directly from the HCP.

My Email: [email protected]

Regarding Test Data and Validation Data

It seems that the validation and testing processes use the same data partitioning and are fed the same data. I'm not sure whether I'm misreading this, but shouldn't the validation set be split off from the training set?
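
For reference, the partitioning the issue suggests can be sketched like this (the fold logic and split ratios below are illustrative, not the repo's actual experiment.py):

```python
import numpy as np

# Illustrative k-fold split where each training fold donates a validation
# subset and the held-out test fold is left untouched.
rng = np.random.default_rng(0)
subject_ids = rng.permutation(100)           # e.g. 100 subjects
folds = np.array_split(subject_ids, 5)       # 5-fold cross-validation

test_ids = folds[0]                          # held-out test fold
trainval = np.concatenate(folds[1:])
n_val = len(trainval) // 10                  # e.g. 10% of training data for validation
val_ids, train_ids = trainval[:n_val], trainval[n_val:]

# the three splits are pairwise disjoint
assert not set(val_ids) & set(test_ids) and not set(train_ids) & set(test_ids)
print(len(train_ids), len(val_ids), len(test_ids))  # 72 8 20
```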

Loading data

Just out of curiosity, how long does it take for the HCPRest dataset to be loaded as a timeseries dictionary?
(In dataset.py: lines 35-40)
I'm trying this on my own dataset and it seems quite slow, though I assume it's not a big problem since the result is saved as a .pth file.
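
For anyone benchmarking this, the load-once-then-cache pattern the issue refers to can be sketched roughly as follows (the path, keys, and shapes are placeholders, not the repo's exact dataset.py code):

```python
import os
import tempfile
import torch

# Rough sketch: parse the raw timeseries once, cache the dict as a .pth
# file, and reload it on subsequent runs (layout is assumed, not exact).
def load_timeseries_cached(cache_path: str) -> dict:
    if os.path.isfile(cache_path):
        return torch.load(cache_path)            # fast path on later runs
    # slow path: stand-in for parsing the raw fMRI files
    timeseries = {f"subj{i}": torch.randn(1200, 400) for i in range(3)}
    torch.save(timeseries, cache_path)           # cache for next time
    return timeseries

cache = os.path.join(tempfile.mkdtemp(), "hcp_rest_demo.pth")
ts = load_timeseries_cached(cache)
print(sorted(ts), ts["subj0"].shape)  # ['subj0', 'subj1', 'subj2'] torch.Size([1200, 400])
```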

K-means clustering

Hello, can you explain the meaning of the resting-state cluster diagram? Why is the ratio greater than 1? How are the DMN and the other networks determined? To be honest, I don't understand Figure 3 and Figure 6.

Data

Hello, I am very interested in your work. I wonder if you can provide me with the processed data.

Reproducibility

Hi, I used the code you provided and the HCP dataset provided by ST-GCN (https://github.com/sgadgil6/cnslab_fmri) for resting-state fMRI classification, but found that STAGIN performed much worse than ST-GCN, and the results reported in your paper could not be reproduced. It seems that on this dataset of about 1,000 samples the more complex STAGIN overfits easily, while ST-GCN performs better.

Details about the 7_400_coord.csv

Hi, great work, and thanks for your prompt replies before. As I'm new to the field of fMRI analysis, I'm still a little confused about the '7_400_coord.csv' you provided. It seems to be a set of reference coordinates used for calibration, but I haven't found this file in the HCP dataset. Could you please explain how to obtain it and share a description of its contents? A link to related documentation would also be appreciated.

ABIDE Performance

Hi, great work. I tried your model on the ABIDE dataset and could not get validation accuracy above 60% (chance level is around 55%), whereas with an SVM I can reach around 70%. I optimized and played around with the hyperparameters, but with no luck.

To be honest, I would love to hear your speculations on this. Do you think the multi-site nature of the data is the source of the problem, or is autism simply not very detectable from fMRI data? Thanks...

About data storage

Hello, I have been reading your article recently and appreciate your ideas. I have a question: if the HCP data is prepared according to the instructions in the README, the storage consumption is large. When you used it, did you download it locally and then perform the subsequent operations?

NaNs during training

Hi, I am trying to replicate your work. During training, the loss becomes NaN for almost all folds (except the 4th). I believe I got the data preparation and configs right. Can you see, or point me to, any reason why this happens? Thanks.

(Attached: screenshots of the loss graph and the training-accuracy graph, 2021-12-06.)
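
For anyone hitting the same problem, a few generic PyTorch diagnostics can help localize the NaN (this is not code from the STAGIN repo; the model and optimizer below are placeholders):

```python
import torch

# Generic NaN-debugging sketch: anomaly detection pinpoints the op that
# produced the NaN, gradient clipping guards against explosions, and a
# finiteness check catches bad losses before the optimizer step.
torch.autograd.set_detect_anomaly(True)   # backward() raises at the offending op

model = torch.nn.Linear(4, 2)             # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(8, 4)
loss = model(x).pow(2).mean()
if not torch.isfinite(loss):
    raise RuntimeError("non-finite loss: check the timeseries for NaN/constant ROIs, or lower the LR")
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
print(torch.isfinite(loss).item())  # True
```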

Brain connectome sample data

Hi,
I would appreciate it if someone could please provide a sample of the preprocessed data to run the code. Thank you.

Multi-GPU support

How can I use multiple GPUs to train the model?
I tried adding torch.nn.DataParallel() in experiment.py as follows,

model = ModelSTAGIN(
    input_dim=dataset.num_nodes,
    hidden_dim=argv.hidden_dim,
    num_classes=dataset.num_classes,
    num_heads=argv.num_heads,
    num_layers=argv.num_layers,
    sparsity=argv.sparsity,
    dropout=argv.dropout,
    cls_token=argv.cls_token,
    readout=argv.readout)

model = torch.nn.DataParallel(model)
model.to(device)

but I get the following error:

Original Traceback (most recent call last):
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/user/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user/stagin/model.py", line 159, in forward
h = torch.cat([v, time_encoding], dim=3)
RuntimeError: Sizes of tensors must match except in dimension 3. Expected size 4 but got size 15 for tensor number 1 in the list.
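
The error is consistent with how nn.DataParallel works: it scatters every input tensor along dim 0 across the GPUs, so any tensor built inside forward() from the full sequence length no longer lines up with the scattered chunk. A minimal reconstruction of that mismatch (the shapes here are illustrative guesses, not STAGIN's real ones):

```python
import torch

# nn.DataParallel splits inputs along dim 0; a time encoding built from
# the full 15-step sequence then fails to concatenate with a chunk whose
# time dimension was reduced by the scatter.
v = torch.zeros(1, 4, 400, 64)               # per-GPU chunk: (batch, time, nodes, feat)
time_encoding = torch.zeros(1, 15, 400, 64)  # built from the full sequence length

try:
    h = torch.cat([v, time_encoding], dim=3)
except RuntimeError as e:
    print("cat failed:", e)  # sizes must match except in dimension 3
```

A common workaround is to keep the model on a single GPU, or to derive the time encoding from the chunk's own shape inside forward() so it matches whatever DataParallel hands each replica.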

Dynamic length missing from rest dataset code

Hi, again great work.

In your paper you mention sampling the task data with a dynamic_length of 150 and the resting-state data with a dynamic_length of 600 during training. The code to do this exists for the task dataset class, but not for the resting-state dataset class. I think this is reflected in the accuracy I get when running gender classification: ~83% compared to the 87% reported in the paper.

Just wanted to mention it; maybe you could fix it. Thanks...
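
For reference, the random temporal cropping described in the paper can be sketched like this (the function and variable names are assumptions, not the repo's actual API):

```python
import numpy as np

# Hedged sketch of dynamic-length sampling: take a random contiguous
# window of dynamic_length frames from a (T, num_nodes) timeseries.
def sample_dynamic(timeseries: np.ndarray, dynamic_length: int) -> np.ndarray:
    T = timeseries.shape[0]
    if T <= dynamic_length:
        return timeseries                      # too short: use the full run
    start = np.random.randint(0, T - dynamic_length + 1)
    return timeseries[start:start + dynamic_length]

rest = np.zeros((1200, 400))                   # e.g. an HCP resting-state run, 400 ROIs
crop = sample_dynamic(rest, 600)
print(crop.shape)  # (600, 400)
```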

LR or RL

Hi, great work. I wonder which .nii.gz files we should feed to the model: the LR or the RL phase-encoded recordings? Thanks a bunch...

About directory

Hello! How can I get the files in 'behavioral'? What does the task data look like? Also, where can the '7_400_coord.csv' mentioned in the article be downloaded? Thanks!
