prfmodel's People

Contributors

dlinhardt · garikoitz · kiminsub · wandell


prfmodel's Issues

[BUG] solve the overwriting problem for all the dockers

One solution is to add a timestamp, but that is not very BIDS-like. Instead, we should add a --force option to allow overwriting; otherwise the tool should store the files in a temp folder and throw a warning. We don't want it to simply fail, because the calculations can take a long time to finish.
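The intended behavior can be sketched in a few lines of Python (the helper name and the --force flag are illustrative, not existing PRFmodel code):

```python
import os
import tempfile
import warnings


def resolve_output_dir(out_dir, force=False):
    """Return a directory that is safe to write results into.

    If out_dir already has content and force is False, redirect the
    results to a temp folder and warn, instead of failing or silently
    overwriting (hypothetical helper, not part of the repo).
    """
    if force or not os.path.isdir(out_dir) or not os.listdir(out_dir):
        os.makedirs(out_dir, exist_ok=True)
        return out_dir
    tmp = tempfile.mkdtemp(prefix="prfanalyze-")
    warnings.warn(f"{out_dir} is not empty; writing results to {tmp} "
                  "instead (pass --force to overwrite)")
    return tmp
```

The key point is that a long computation never dies at the write step: it either overwrites on explicit request or lands the results somewhere recoverable.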

Edit default prfAnalyze-config-default files

Hi Noah,
we need to edit these files, because I have now added the tool-specific options for all tools.

It should look like this now:

{
    "subjectName": "test2",
    "sessionName": "sess3",
    "solver":      "vista",
    "options.vista": {
        "model":    "one gaussian",
        "grid":     false,
        "wSearch":  "coarse to fine",
        "detrend":  1,
        "keepAllPoints": false,
        "numberStimulusGridPoints": 50
    }
}

(the "options.vista" block is the new part)

This should be the same for options.afni and options.aprf. I don't know about popeye yet (I haven't checked), but it should be analogous, right?

Add relative contrast

If we set up a contrast and all voxels are independent, the contrast will be the same for every voxel. When we generate a set of voxels across the field of view, we want the results to make sense relative to each other. The simplest example: compare an RF that sits exactly in the middle of the stimulus with an RF in a corner that the stimulus never reaches. In the second case the matrix multiplication result is much smaller, because only one tail of the Gaussian overlaps the stimulus. We propose to match the maximum attainable value (RF at the center of the stimulus) to the set contrast, and then scale all other voxels linearly relative to it.
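The proposed linear scaling can be sketched as follows (function and argument names are hypothetical, not from the PRFmodel code):

```python
import numpy as np


def relative_contrast(responses, max_response, contrast):
    """Scale raw RF-stimulus responses so that the best-placed RF
    (the one centered on the stimulus) reaches the requested contrast,
    and all other voxels scale linearly relative to it.

    responses    : raw overlap values (RF x stimulus) per voxel
    max_response : the maximum attainable overlap (RF at stimulus center)
    contrast     : the contrast assigned in the config
    """
    return contrast * np.asarray(responses, dtype=float) / max_response
```

A voxel whose RF catches half of the maximum overlap would then synthesize at half the configured contrast.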

[ENH] make prfanalyze/base/run.py more flexible

This line, (testdat, x0, y0, th, sigmin, sigmaj, pred) = [np.squeeze(u) for u in dat], requires that the columns always come in the same order and that the same elements are always present.

For now I am going to edit pmModelFit to remove the other outputs and make sure the order is always the same, but this won't hold once we start using other options: dat will have multiple columns. The routine should instead read the field names and process them, regardless of their count and order.
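A minimal sketch of the field-name-driven alternative, assuming dat behaves like a mapping from field names to arrays (an assumption about how the loaded data is exposed, not the current run.py code):

```python
import numpy as np


def unpack_estimates(dat):
    """Read estimate fields by name instead of by position, so extra
    or reordered columns do not break the parsing."""
    return {name: np.squeeze(np.asarray(dat[name])) for name in dat}
```

Downstream code then asks for out["x0"], out["sigmamajor"], etc., and simply ignores fields it does not know about.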

Large, off-center RFs do not sum to 1?

Hi!
I am reconstructing my RFs with pmGaussian2d.m and noticed that the RF volume is relatively close to 1 when the pRF is centered on 0 and sigma < 6 deg. But for pRFs that are not centered at 0, or that are relatively large, the volume does not sum to 1.

I ran an example below for [x0,y0] = [7.8, 4.9] (the "off-center" Gaussian) and [x0,y0] = [0,0] (the "center" Gaussian), and found that with my sampling grid (101 by 101, ranging from -12 to 12 deg):

  1. I only get a volume of 1 for the off-center Gaussian when sigma = 1 deg. For the centered Gaussian, sigmas of 1-6 deg result in a volume of 1, but larger sigmas give a volume smaller than 1, especially for the offset Gaussian.

  2. When we divide the RF by 2pi*sigma^2, we assume a continuous function. But in my case (and presumably other cases), this assumption does not hold, and therefore the normalization does not result in a volume of one.

  3. The only way I can get a volume of 1 for different RF sizes is when I divide by the sum(RF(:)).

I wonder if we are better off dividing the RF by its volume (as in RF3) to get a volume of 1.

Any thoughts?

sigmas = 1:11; % deg
x0 = 7.8; % deg
y0 = 4.9; % deg
x = -12:0.24:12;  % deg (101 samples)
y = x;

[X,Y] = meshgrid(x,y);

for s = 1:length(sigmas)

    sigmaMajor = sigmas(s); % deg
    sigmaMinor = sigmaMajor;

    RF1 = pmGaussian2d(X,Y, sigmaMajor, sigmaMinor, [], x0, y0);
    sRF1(s) = sum(RF1(:));

    % or do a simple version ourselves (shift copies of the grid so
    % the original X,Y are not modified across loop iterations):
    Xc = X - x0;
    Yc = Y - y0;
    RF_tmp = exp(-0.5*((Yc./sigmaMajor).^2 + (Xc./sigmaMinor).^2));

    % Either with the normalization that assumes continuous sampling
    RF2 = RF_tmp ./ (2 .* pi .* sigmaMajor .* sigmaMinor);
    sRF2(s) = sum(RF2(:));

    % Or with the normalization that assumes discrete sampling
    RF3 = RF_tmp ./ sum(RF_tmp(:));
    sRF3(s) = sum(RF3(:));

end

figure; hold on;
plot(sigmas, sRF1, 'ro-', 'lineWidth',2);
plot(sigmas, sRF2, 'bo-', 'lineWidth',2);
plot(sigmas, sRF3, 'go-', 'lineWidth',2);
legend('RF1 - pmGaussian2d','RF2 - norm continuous','RF3 - norm discrete');
xlabel('pRF sigma (deg)')
ylabel('pRF volume (deg^2)')
title(sprintf('PRF at [x0,y0]=[%1.1f,%1.1f]',x0,y0))

[Figure: pRF volume as a function of sigma for the three normalizations]

[TEST] Run in human data

Per Jon's comment:
One issue to consider: I think it would be useful to run the 4 docker containers on human data. A good test case would be the HCP datasets (maybe the group-averaged surface data). This will raise another issue, which I see was discussed on GitHub, namely the NIfTI-1 limit of 32K per dimension; the HCP surface data exceed that, I believe. It might also raise a third issue, which is that the HCP data consist of multiple runs. I don't know whether this poses a problem for any of the docker containers. In any case, it will be a good test of the code to see if we can run an HCP dataset with 6 scans and more than 32K time series with all 4 dockers.
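One way around the per-dimension limit would be to split the data before writing, sketched here with plain NumPy (the helper is an assumption, not something the dockers do today):

```python
import numpy as np

# Largest extent a NIfTI-1 header can store per dimension
# (dim fields are signed 16-bit integers).
NIFTI1_DIM_MAX = 32767


def split_for_nifti1(data, axis=0):
    """Split a (vertices x time) array into chunks whose extent along
    `axis` fits in a NIfTI-1 header, e.g. for HCP surface data with
    more than 32K vertices."""
    n = data.shape[axis]
    n_chunks = -(-n // NIFTI1_DIM_MAX)  # ceiling division
    return np.array_split(data, n_chunks, axis=axis)
```

Each chunk can then be written as its own NIfTI-1 file and the per-vertex results concatenated back afterwards.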

Using prfanalyze-base to implement new analysis toolbox

Hello! Your pRF validation framework is a great step towards comparing multiple toolboxes designed for the same or similar purposes, and it enables us to validate the underlying implementations. For exactly this purpose I am trying to integrate our own pRF analysis toolbox into your validation framework, and I am running into a couple of problems. The following steps describe the present scenario:

  1. A synthetic dataset has been created with the prfsynth dockerimage, using the default settings.
  2. I duplicated the structure of the other prfanalyze toolboxes in order to create my own integration into the validation framework.
  3. I adjusted the Dockerfile to include the necessary scripts and to create a conda environment with python3 installed, so that our toolbox can run. This was initially a problem because python2 is installed in the base image; however, we managed to circumvent it.
  4. I implemented the call to our analysis script the same way as for the other toolboxes (i.e., using solve.sh, which in turn calls our analysis python script) and verified that the script runs and does what it is supposed to do (mainly, going from a BIDS data structure to a BIDS data structure, creating a range of output files).
  5. By downloading the actual code base for the PRFmodel and using the following command I managed to start the analysis. However, this does not achieve the direct integration into the validation framework:
    ./PRFmodel/gear/prfanalyze/run_prfanalyze.sh prfpy $basedir prfanalyze-prfpy/default_config.json

Given the above scenario the following problems arise:

  1. Using the proposed call from the wiki to create the default config file
    docker run --rm -it \
        -v $basedir/empty:/flywheel/v0/input:ro \
        -v $basedir:/flywheel/v0/output \
        garikoitz/prfanalyze-prfpy:latest
    the following output is produced:
    [garikoitz/prfanalyze] No config file found. Writing default JSON file and exiting.
    cp: cannot create regular file '/flywheel/v0/input/config.json': Read-only file system
    That is, the image tries to copy the config file (which is indeed contained in the built docker image, as can be verified by starting the image in debug mode) into the mounted input directory, which is read-only; it should instead be copied into the output directory. For the other toolboxes this problem does not arise, and looking at their Dockerfiles there are no extra steps needed to make this process work. Can you verify that this is indeed the case?

  2. From the above arises the next problem. While trying to understand the scripts responsible for setting up the analysis (run.py and run.sh from the prfanalyze-base image), I wanted to debug this by changing the code. However, when trying to build the prfanalyze-base image on my machine so I could run it with local changes, I ran into version conflicts (build_output.txt).

  3. Lastly, and unfortunately, I was not able to find comprehensive documentation about how the integration of new toolboxes into the validation framework is supposed to work. As this possibility is stated in your paper demonstrating the usefulness and results of the framework, it would be a great help to have some guidance on how to do this.

The repository for our code base can be found here.

Thank you in advance; I am very much looking forward to hearing back from you.

prfanalyze DEBUG not working as desired

Entering DEBUG mode generates a config.json file owned by root, and there is no way of removing it. If you remove it and try to exit the gear, it writes the config.json file back with root ownership.

As said before, I think we should implement something analogous to the other gears. Let's discuss it.

looking for the dockerfiles

Hello

Would it be possible to add to the repo the dockerfiles that were used to create the docker images?

I love docker for reproducibility 🚀 but I love it even more when I can check the recipes, to know how an image was built and whether adding an extra layer will "mess" things up.

Thank you in advance.

Create an output file with the command line used to run the tool

Maybe we can do it in the same file where we output the config file. Let's talk tomorrow about the content of the output file:

  • Date string of the analysis
  • Version of the docker container
  • Version of the tool installed (AFNI version...)
  • Config file used to invoke the Docker container
  • Environment variables
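A sketch of what writing such a provenance sidecar could look like (all field names are suggestions, not an existing PRFmodel format):

```python
import datetime
import json
import os
import sys


def write_provenance(path, container_version, tool_version, config):
    """Write a JSON sidecar recording how the container was invoked:
    date, versions, command line, config, and environment."""
    record = {
        "date": datetime.datetime.now().isoformat(),
        "container_version": container_version,   # e.g. docker tag
        "tool_version": tool_version,             # e.g. AFNI version
        "command_line": " ".join(sys.argv),
        "config": config,                         # parsed config dict
        "environment": dict(os.environ),
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
```

Dropping this next to the copied config file keeps every ingredient of a run in one place.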

check sigmaMinor is always < sigmaMajor

When synthesizing, the system creates a mesh regardless of the names sigmaMinor and sigmaMajor. The pmForwardModelTableCreate() function should filter the values and make sure that sigmaMajor >= sigmaMinor always. For example,
sigmaMajor = [1,2,3]
sigmaMinor = [1,2,3]
should generate the following combinations of values:
[sigmaMinor, sigmaMajor]
[1,1]
[1,2]
[1,3]
[2,2]
[2,3]
[3,3]
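This is exactly the set of combinations with replacement of the sorted values; a Python sketch of the filtering that pmForwardModelTableCreate() should perform:

```python
from itertools import combinations_with_replacement


def sigma_pairs(values):
    """All [sigmaMinor, sigmaMajor] pairs with sigmaMinor <= sigmaMajor,
    instead of the full mesh of the two value lists (sketch of the
    proposed filtering, not the MATLAB implementation)."""
    return [list(p) for p in combinations_with_replacement(sorted(values), 2)]
```

For n values this produces n*(n+1)/2 pairs (6 for the three values above) instead of the n^2 mesh entries.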

[ENH] prfreport: add SNR

I edited the prfsynth engine so that it adds the SNR to the list of parameters required to generate each time series. prfreport should be able to read this file, extract the SNR, and add it to every line of the results.
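The merging step amounts to joining the synthesized SNR onto each result row by voxel index; a Python illustration of the logic (hypothetical helper; prfreport itself is MATLAB):

```python
def add_snr_column(rows, snr_by_voxel):
    """Append the synthesized SNR to every result row, matching rows
    to voxels by position. `rows` is a list of per-voxel result dicts,
    `snr_by_voxel` the SNR list read from the prfsynth parameter file."""
    return [{**row, "SNR": snr_by_voxel[i]} for i, row in enumerate(rows)]
```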

Voxels with multiple receptive fields

Hey, thanks for making the great prf-synthesize!
I have a question: can prf-synthesize simulate voxels with more than one receptive field, as reported in "Carvalho, J., Invernizzi, A., Ahmadi, K., Hoffmann, M. B., Renken, R. J., & Cornelissen, F. W. (2020). Micro-probing enables fine-grained mapping of neuronal populations using fMRI. Neuroimage, 209, 116423. https://doi.org/10.1016/j.neuroimage.2019.116423"?
If it can, how could I do it?
Thanks!

[BUG] config.json returned as a folder

When running this command, Docker creates config.json as a folder instead of a file:

docker run --rm -it \
  -v $basedir:/flywheel/v0/input \
  -v empty:/flywheel/v0/input/config.json \
  -v $basedir:/flywheel/v0/output \
  garikoitz/prfanalyze-vista

[BUG] BIDS creates new files instead of overwriting or sending warning

This is what happens if we run the same analysis twice with the same config file:

sub-alexsynth_ses-stim5v02_task-prf_estimates.mat
sub-alexsynth_ses-stim5v02_task-prf_modelpred.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_results.mat
sub-alexsynth_ses-stim5v02_task-prf_sigmamajor.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sigmaminor.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_estimates.mat
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_modelpred.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_results.mat
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sigmamajor.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sigmaminor.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_testdata.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_theta.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_x0.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_y0.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_testdata.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_theta.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_x0.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_sub-alexsynth_ses-stim5v02_task-prf_y0.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_testdata.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_theta.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_x0.nii.gz
sub-alexsynth_ses-stim5v02_task-prf_y0.nii.gz
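The fix is presumably to build each output name from its BIDS entities instead of prepending a prefix to a previously produced name, so that re-running is idempotent (a sketch, not the actual PRFmodel code):

```python
def bids_output_name(sub, ses, task, suffix, ext):
    """Construct the output filename directly from its BIDS entities.
    Calling this twice with the same inputs yields the same name,
    unlike prefixing an existing filename on every rerun."""
    return f"sub-{sub}_ses-{ses}_task-{task}_{suffix}{ext}"
```

Combined with the --force/temp-folder behavior from the overwriting issue above, reruns would then either replace or divert the outputs rather than multiply them.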

Error running example

Hello! I am getting an error when trying to run the prfsynth image from the example. I am wondering if I need to install a MATLAB container image in order for the docker command to run properly.

Here is a snapshot of my terminal as I run prfsynth if it helps.

Setting up environment variables

LD_LIBRARY_PATH is .:/opt/mcr/v95//runtime/glnxa64:/opt/mcr/v95//bin/glnxa64:/opt/mcr/v95//sys/os/glnxa64:/opt/mcr/v95//sys/opengl/lib/glnxa64
Number of workers:

NumWorkers =

 8

Starting parallel pool (parpool) using the 'local' profile ...

Error using parpool (line 113)
Parallel pool failed to start with the following error.

Error in pmForwardModelCalculate (line 57)

Error in synthBOLDgenerator (line 184)

Caused by:
Error using parallel.internal.pool.InteractiveClient>iThrowWithCause (line 676)
Failed to initialize the interactive session.
Error using parallel.internal.pool.InteractiveClient>iThrowIfBadParallelJobStatus (line 790)
The interactive communicating job failed with no message.

parallel:cluster:PoolCreateFailed

real 3m1.451s
user 1m29.344s
sys 1m9.530s
[garikoitz/prfsynth] An error occurred during execution of the Matlab executable. Exiting!

I'm not sure if this is sufficient info to solve my problem but anything will help.

Popeye returns -y instead of y

Check, just in case, that x and y are correct; what is clear is that y needs flipping. I remember doing this in popeye-mini.

compatibility with multiple runs

Currently, the vistasoft docker operates on only one run (e.g., a bar scan, which might be the average of several acquisitions). However, one might have runs that cannot be averaged together, such as one set of runs with wedges and another with rings. The vistasoft MATLAB code can handle this, but the docker cannot. The winawerlab would like to be able to run the dockers on multiple runs. How can we help implement this in the docker?
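If the runs share TR and voxel count, one minimal approach is to concatenate runs and their stimuli along time and fit them jointly; a sketch of that idea (not the vistasoft docker's actual interface):

```python
import numpy as np


def concatenate_runs(bold_runs, stim_runs):
    """Concatenate non-averageable runs (e.g. wedges and rings) along
    the time axis so a single solver call can fit them jointly.
    Assumes equal TR and voxel count across runs.

    bold_runs : list of (voxels x time) arrays
    stim_runs : list of (rows x cols x time) stimulus apertures
    """
    bold = np.concatenate(bold_runs, axis=-1)
    stim = np.concatenate(stim_runs, axis=-1)
    return bold, stim
```

The solver then sees one long time series whose design matrix is the concatenated stimulus sequence; per-run detrending would still need to happen before the concatenation.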
