
FSL_SUB

About

Note: this repository is based on an old version of fsl_sub. Users should consider the more recent official fsl_sub release.

FSL is a popular tool for brain imaging. If you set it up on a laptop or desktop, many of the commands will take a long time to complete because they run on a single CPU at a time. Large data centers typically set up a Grid Engine (SGE) to run FSL tasks in parallel, but this is complicated to configure and not possible on all operating systems (e.g. macOS). By simply installing this tiny file, FSL will run in parallel on any desktop or laptop: any task that would have used SGE, were it available, is instead spread across all of your available CPUs. This solution is described in more detail on my optimizing FSL web page.

Note that only some stages of FSL are designed to run in parallel (FEAT, MELODIC, TBSS, BEDPOSTX, FSLVBM, POSSUM), so this script may not always be the optimal way to accelerate FSL. Specifically, if you are using FEAT for fMRI analyses, be aware that only FLAME will be accelerated by the fsl_sub script provided here. An alternative to accelerating the processing of each individual is to run several individuals in parallel. Further, while tools like bedpostx, eddy, and probtrackx may be able to take advantage of multiple CPUs, they will be dramatically faster using a CUDA-enabled graphics card.
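The core idea can be sketched as follows. This is an illustrative simplification, not the actual fsl_sub source: the real script also parses SGE-style options and handles logging. The run_parallel function and the one-command-per-line task file format below are hypothetical.

```shell
#!/bin/bash
# Sketch: run each line of a task file as a background job, never
# keeping more jobs active than there are CPUs, then wait for all
# of them to finish. (Illustrative only.)

run_parallel() {
  local taskfile="$1"
  local ncpus
  ncpus=$(getconf _NPROCESSORS_ONLN)   # portable CPU count (Linux/macOS)
  while IFS= read -r cmd; do
    bash -c "$cmd" &
    # Throttle: if we already have as many running jobs as CPUs,
    # wait for one to finish before launching the next.
    while [ "$(jobs -r | wc -l)" -ge "$ncpus" ]; do
      wait -n 2>/dev/null || wait   # wait -n needs bash >= 4.3; fall back to wait
    done
  done < "$taskfile"
  wait   # block until every task has completed
}
```

This is the same trick fsl_sub uses conceptually: background jobs plus wait replace the SGE queue on a single machine.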

Recent Versions

28-March-2020

  • Updated to correspond with FSL 6.0.3.

7-May-2017

  • Explicitly use the BASH shell, which is not the default on Debian. Note: if you use FSL it is probably a good idea to set BASH as your default shell, as several FSL scripts will fail with other shells (e.g. DASH).

29-Jan-2017

  • Example dataset included.

30-August-2016

  • Do not run "GPU" code in parallel.

15-May-2015

  • Not all versions of sh support "declare".

3-March-2015

  • Initial version.
Installation

Replace your previous version of fsl_sub with this code. Tasks will automatically run in parallel. SGE takes precedence over this code, so if you ever upgrade to an SGE cluster you will not have to modify this function.

cd $FSLDIR/bin
sudo cp fsl_sub fsl_sub_orig
sudo cp ~/Downloads/fsl_sub/fsl_sub fsl_sub
sudo chmod o+rx fsl_sub
If FSL is installed elsewhere, you can find the location of the original fsl_sub by typing "which fsl_sub" at the command line. Make sure that the new fsl_sub is executable (sudo chmod +x fsl_sub).
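To confirm that the new script is the one FSL will actually use, a quick check (the path printed is typical, yours may differ):

```shell
# Confirm which fsl_sub is first on your PATH and that it is executable.
FSL_SUB_PATH=$(which fsl_sub)
echo "$FSL_SUB_PATH"                  # e.g. /usr/local/fsl/bin/fsl_sub
test -x "$FSL_SUB_PATH" && echo "fsl_sub is executable"
```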

Tuning

After installation the code will automatically accelerate parallel processes. However, you can also control its behavior:

  1. To test the benefit, you can temporarily disable the function with the command “FSLPARALLEL=0; export FSLPARALLEL”
  2. To force it to use precisely 8 cores, use the command “FSLPARALLEL=8; export FSLPARALLEL”
  3. To restore automatic detection of the number of cores (the default behavior), use the command “FSLPARALLEL=1; export FSLPARALLEL”
  4. To make permanent changes, add the desired FSLPARALLEL setting to your profile.
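For example, to make automatic core detection permanent in a bash login shell (this assumes ~/.bash_profile is the profile your shell reads; Debian/Ubuntu users may want ~/.profile or ~/.bashrc instead):

```shell
# Append the setting to your profile so every new shell inherits it.
echo 'FSLPARALLEL=1; export FSLPARALLEL' >> ~/.bash_profile
```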
Test Dataset

If you want to test the effectiveness of parallel processing on your computer, un-compress the folder "bedpostTest.zip" and run the shell script "runme_quick.sh". This will run FSL's bedpostx twice: once with parallel processing off and once with it on. Note that FSLPARALLEL will accelerate any FSL program that is designed to use the Sun Grid Engine to run CPU tasks in parallel. In the specific case of bedpostx, serious users will want to use the GPU-based bedpostx_gpu instead of the CPU-based bedpostx (e.g. the included script runme_gpu.sh). Also, the sample dataset only includes an MRI scan with six low-resolution slices (with little benefit from more than six threads), whereas real-world datasets include dozens of high-resolution slices. The purpose of this test dataset is simply to verify the installation of the parallel fsl_sub.

./runme_quick.sh
fsl_sub running  with parallel processing disabled
...
real	19m59.990s
...
fsl_sub running  with parallel processing enabled
...
real	8m54.404s
License

This software uses the FMRIB Software Library license, which is embedded in the source code.


fsl_sub's Issues

flameo issue in FSL v6.0

Hi!

I have successfully used your script on my MacBook Pro and do like it!

I have now tried to use it on a linux workstation (LinuxMint 19). On this workstation I have 2 versions of FSL installed:
/usr/share/fsl/5.0 (FSL version 5.0.11)
/usr/share/fsl/6.0 (FSL version 6.0)
In both I've replaced fsl_sub by your script.
Running the example you provide works fine and shows a substantial decrease in duration, as expected.

However, if I run a feat analysis (which uses FLAME, and thus calls fsl_sub) using FSL 6.0, feat gets stuck when performing flameo. htop shows many flameo processes running in parallel. These processes keep running even when I kill feat. All CPUs run at 100%. The only way I can stop this is to kill all flameo processes of the respective user.
This does not happen using FSL 5.0.11.

Any ideas on this are highly appreciated! Thank you!
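For reference, one way to stop the runaway jobs described above is pkill, which matches processes by name; the -u flag restricts the match to one user's processes. This kills every flameo process owned by the current user, so use with care:

```shell
# Kill all flameo processes belonging to the current user.
pkill -u "$USER" flameo
```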

System information:

cat $FSLDIR/etc/fslversion
6.0.0

$ cat /etc/upstream-release/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04 LTS"

$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              32
On-line CPU(s) list: 0-31
Thread(s) per core:  2
Core(s) per socket:  8
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               63
Model name:          Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
Stepping:            2
CPU MHz:             1201.049
CPU max MHz:         3200,0000
CPU min MHz:         1200,0000
BogoMIPS:            4799.64
Virtualization:      VT-x
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            20480K
NUMA node0 CPU(s):   0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s):   1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts

Output differences when comparing randomise with randomise_parallel

This may not be the right platform for this, but just wanted to know if you or anyone had run a comparison between the randomise command and randomise_parallel that leverages your version of fsl_sub. I tried running it and found that the parallel outputs were different from the serial ones. Are you aware of any such differences? Thanks.

ReadMe has dead links

Hello,

I thought I should let you know that some of the links in the readme are dead, i.e.

http://www.mccauslandcenter.sc.edu/crnl/optimizing-spmfsl

and

http://godzilla.kennedykrieger.org/penguin/fsl.shtml.

Related to this, your link https://www.sc.edu/about/offices_and_divisions/division_of_information_technology/rci/research_profiles/crorden.php at https://crnl.readthedocs.io/optimizing_spm_fsl/index.html regarding the easy integration of GPUs in MatLab is also dead.

fsl_sub has to be used if using feat?

Apologies if this is the wrong place to ask - couldn't find the solution to this basic question -

I am just wondering whether, once the fsl_sub function has been replaced, I need to write something like:
fsl_sub feat pathto.fsf
or if it is enough to just:
feat pathto.fsf
or if the individual fsf processing isn't parallelised so I need to do something like
feat pathto1.fsf pathto2.fsf
or
feat pathto1.fsf
feat pathto2.fsf

I tried these options but I don't think I have found the right combination yet -

Thanks -

(Running this in a 4 core Ubuntu 17.10 VM, with the neurodebian FSL installation at the moment)

runme_quick.sh

Thank you very much for the modified fsl_sub. It is so useful!

I tried your test script, runme_quick.sh, and found that export FSLPARALLEL is missing, which meant FSLPARALLEL=1 remained set in the child processes even though I set FSLPARALLEL=0.
In addition, when I replaced #!/bin/sh with #!/bin/bash, the errors regarding pushd and popd disappeared.

Therefore, I modified the script as below and I really appreciated the effectiveness of your fsl_sub.

sed -i 's@#!/bin/sh@#!/bin/bash@' runme_quick.sh
sed -i 's/FSLPARALLEL=0/FSLPARALLEL=0; export FSLPARALLEL/' runme_quick.sh 
sed -i 's/FSLPARALLEL=1/FSLPARALLEL=1; export FSLPARALLEL/' runme_quick.sh 
sed -i 's/#time/time/' runme_quick.sh 

I dropped a note so that someone might benefit from this.

Kiyotaka

Illegal option -r

I used this fsl_sub to replace the original one, but it returns many "/usr/share/fsl/5.0/bin/fsl_sub: 1: jobs: Illegal option -r" messages when I run dual_regression.
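One likely cause, judging from the changelog's "Explicitly use BASH shell" entry: "jobs -r" is a bash feature, and the error appears when the script is interpreted by a POSIX sh (dash is /bin/sh on Debian/Ubuntu) instead of bash. A sketch of the check and fix (the path assumes the FSL 5.0 layout mentioned above):

```shell
# If the first line is #!/bin/sh, the script runs under dash on
# Debian/Ubuntu and "jobs -r" fails. Force the bash interpreter:
head -1 /usr/share/fsl/5.0/bin/fsl_sub
sudo sed -i '1s@^#!/bin/sh@#!/bin/bash@' /usr/share/fsl/5.0/bin/fsl_sub
```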

problem to use your code

Dear:

I use the FSL-provided CentOS system on a server with 12 cores. Following the instructions, I copied fsl_sub over the original fsl_sub file in the bin folder of $FSLDIR, but when I run feat it gives the error shown in the snapshot.

Any suggestion? thanks!

(screenshot attached: snip20180131_8)
