
SIRF-Exercises

This material is intended to get you going with SIRF, an open source framework for PET, SPECT and MR Image Reconstruction, including synergistic aspects.

This repository also contains basic information on CIL functionality, to show similarities with SIRF and how to use CIL's optimisation algorithms.

This software is distributed under an open source license, see LICENSE.txt for details.

Links to documentation

Full instructions on getting started are in our documentation for participants (or use this link to GitHub for nice formatting, but do check which version of the exercises you are using). Despite the name, this documentation is also appropriate if you are trying these exercises on your own.
Gentle request: If you are attending a course, please read this before the course.

Instructors should check our documentation for instructors.

Installation instructions when you do not use our cloud resources are in INSTALL.md, but read the above links first.

You can run the SIRF-Exercises in GitHub Codespaces; see the relevant section in the documentation, which includes information on which kernel to select (and more!).

Authors

  • Kris Thielemans (this document and PET exercises)
  • Christoph Kolbitsch (MR exercises and Introductory exercises)
  • Johannes Mayer (MR exercises)
  • David Atkinson (MR and geometry exercises)
  • Evgueni Ovtchinnikov (PET and MR exercises)
  • Edoardo Pasca (overall check and clean-up)
  • Richard Brown (PET and registration exercises)
  • Daniel Deidda and Palak Wadhwa (HKEM exercise)
  • Ashley Gillman (overall check, scripts and clean-up)
  • Imraj Singh (Deep Learning for PET exercise)
  • Daniel Deidda and Sam Porter (Synergistic SPECT/PET Reconstruction Exercises)

Contributors

anderbiguri, ashgillman, bathomas, casperdcl, ckolbptb, danajk, danieldeidda, evgueni-ovtchinnikov, gschramm, imraj-singh, kristhielemans, margaretduff, mastergari, nicolejurjew, nikefth, paskino, samdporter


Issues

MR-joint-TV demo needs fixing

At least running via Docker, this demo seems to be missing a directory:

OSErrorTraceback (most recent call last)
<ipython-input-2-1f29d02172b2> in <module>()
     14 #%% GO TO MR FOLDER
     15 os.chdir(examples_data_path('MR'))
---> 16 os.chdir('johannes')

OSError: [Errno 2] No such file or directory: 'johannes'
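Until the data ships with the Docker image, a defensive check would at least make the failure clearer (a sketch; sirf.Utilities.examples_data_path is assumed to be the import the notebook uses):

import os
from sirf.Utilities import examples_data_path

mr_path = examples_data_path('MR')
target = os.path.join(mr_path, 'johannes')
# fail with a clear message instead of a bare OSError
if not os.path.isdir(target):
    raise FileNotFoundError('expected demo data folder missing: ' + target)
os.chdir(target)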

examples_data_path on Jupyter in Docker

Adding a print statement to the MR notebook a_fully_sampled

#%% GO TO MR FOLDER
os.chdir(examples_data_path('MR'))
 
print(examples_data_path('MR'))

gives

/opt/SIRF-SuperBuild/INSTALL/share/SIRF-2.2/data/examples/MR

and later you get the error

'File ptb_resolutionphantom_fully_ismrmrd.h5 not found'

This file is here on my system
.../SIRF-SuperBuild/docker/devel/PTB_ACRPhantom_GRAPPA/ptb_resolutionphantom_fully_ismrmrd.h5

old MR notebooks have multiple problems

The older notebooks (not those in the interactive folder) were stripped too much and we get an error when opening them (see the process in #20).

They also need petmr_data_path(...) replaced with examples_data_path('MR').

However, I suspect that the notebooks in https://github.com/CCPPETMR/SIRF-Exercises/tree/master/notebooks/MR shouldn't really be there anymore and are superseded by those in interactive. @ckolbPTB @johannesmayer @DANAJK could you confirm that these are safe to delete?

I'm not sure what to do with those in tools. acqh5info might contain things that are now elsewhere. im5info uses h5py and seems interesting (but could do with some more info). @DANAJK, do we keep these?

README in each folder

I think it'd be nice to have a README.md in each folder, especially if Jupyter displays the README nicely. @ashgillman, I saw it doing that for you. A plugin?

SIRF VM: Error msg in reconstruct_measured_data.ipynb (PET)

In the notebook 'reconstruct_measured_data.ipynb' (PET notebooks) I get the error:

error: ??? "'Error opening file template.hs\n' exception caught at line 303 of /home/sirfuser/devel/buildVM/sources/SIRF/src/xSTIR/cSTIR/cstir.cpp; the reconstruction engine output may provide more information

after: lm2sino.set_up()

All the other notebooks went through without any problem.
Executed within VirtualBox 6.1.18 with the new .ova: SIRF_2.2.0_Python3.

download_MR_data.sh re-downloads PTB_ACRPhantom_GRAPPA.zip

running download_MR_data.sh twice results in

grappa2_6rep.h5: OK
File exists and its md5sum is ok
--2019-04-11 08:05:12--  https://zenodo.org/record/2633785/files/PTB_ACRPhantom_GRAPPA.zip
Resolving zenodo.org (zenodo.org)... 137.138.76.77
Connecting to zenodo.org (zenodo.org)|137.138.76.77|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6401739 (6.1M) [application/octet-stream]
Saving to: ‘PTB_ACRPhantom_GRAPPA.zip.1’

PTB_ACRPhantom_GRAP 100%[===================>]   6.10M  4.05MB/s    in 1.5s    

2019-04-11 08:05:14 (4.05 MB/s) - ‘PTB_ACRPhantom_GRAPPA.zip.1’ saved [6401739/6401739]

Unpacking PTB_ACRPhantom_GRAPPA.zip
Archive:  PTB_ACRPhantom_GRAPPA.zip
replace PTB_ACRPhantom_GRAPPA/ptb_resolutionphantom_fully_ismrmrd.h5? [y]es, [n]o, [A]ll, [N]one, [r]ename: 

The script should probably use the download function, like the other scripts do.
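For reference, a minimal sketch in Python of the intended guard (the real scripts are bash, and the function and argument names here are illustrative):

import hashlib
import os
import urllib.request

def download_if_needed(url, fname, md5):
    # skip the download when the file already exists with the expected md5,
    # as the grappa2_6rep.h5 branch of the script already does
    if os.path.exists(fname):
        with open(fname, 'rb') as f:
            if hashlib.md5(f.read()).hexdigest() == md5:
                print(fname + ': exists and its md5sum is ok, skipping')
                return
    urllib.request.urlretrieve(url, fname)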

Registration notebook "missing features"

Just writing up what I feel is missing from the registration notebook (this is just here so it's written down somewhere for the Fully3D training school prep).

Registration

Currently, while it is quite a detailed exercise, it only covers rigid transformation.
It has a comment that says # Set to NiftyF3dSym for non-rigid, but I fear that may not be enough as a "tutorial", if that is what we are looking for. In particular, set_parameters() is quite different (and not documented) and can be very important for non-rigid reconstruction. We should add either a list of parameters to set, or instructions telling the user where to find them (I think they are only discoverable by reading the source, as they have different names from the f3d docs).

Perhaps adding (as suggested in a meeting) non-rigid registration with tunable parameters, to see their effect on the result, would be a good idea.

The demo also omits the different ways you can obtain displacement/deformation fields, and the difference between the two.
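A hedged sketch of what a non-rigid variant could look like (class and accessor names follow SIRF's Reg module as used elsewhere in these exercises; the set_parameter name and value are illustrative, not verified against NiftyReg):

import sirf.Reg as reg

algo = reg.NiftyF3dSym()
algo.set_reference_image(ref_image)   # ref_image/flo_image as in the notebook
algo.set_floating_image(flo_image)
# parameter names are NiftyReg-internal and differ from the f3d docs:
algo.set_parameter('SetBendingEnergyWeight', '0.01')  # illustrative only
algo.process()
# deformation maps coordinates to their new positions; displacement is the
# deformation minus the identity -- worth demonstrating both:
deformation = algo.get_deformation_field_forward()
displacement = algo.get_displacement_field_forward()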

Resampling

Not much to say, but if we are teaching image reconstruction in general, we may want to clarify/explain:

  • the interpolation types available (it's there) and when to use which (e.g. don't use anything higher than order 1, because it can give you negative values)
  • resampling without transforms: getting an image from one domain to another (more or less equivalent in function to zoom_image).

profile location in image_creation_and_simulation is wrong

The profiles should go through a slice with activity. This would work better:

# indices chosen so the profiles pass through a slice with activity
profile_no_attn = acquired_data_no_attn_array[5,20,:]
profile_with_attn = acquired_data_with_attn_array[5,20,:]
profile_attn_factors = attn_factors.as_array()[5,20,:]

Can no longer install numba with python 2

The synergistic demos use brainweb, which starts by pip-installing numba. That now fails:

 running bdist_wheel
  /usr/bin/python /tmp/pip-install-sW2ewi/llvmlite/ffi/build.py
    File "/tmp/pip-install-sW2ewi/llvmlite/ffi/build.py", line 122
      raise ValueError(msg.format(_ver_check_skip)) from e
                                                       ^
  SyntaxError: invalid syntax

Also seen here https://stackoverflow.com/questions/61925676/can%c2%b4t-install-numba-for-python with a comment "try a new version of Python".

Is there a work-around? Do we need numba, or can we make it optional (without a lot of work)?

This is currently a problem, as our VM is still stuck on Python 2.
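One low-effort workaround could be a no-op fallback decorator, so the code runs with or without numba (a sketch):

try:
    from numba import jit
except ImportError:
    def jit(func=None, **kwargs):
        # no-op fallback so that both @jit and @jit(...) keep working
        if func is None:
            return lambda f: f
        return func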

MAPEM: adding different non-guided priors?

We discussed including different priors; I guess it would make sense to have them as part of this notebook. The quadratic prior is always a good starting point, and then we could have more interesting ones like the relative difference or log-cosh priors. We could even give the function and make the exercise to implement it.
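For instance, hand-written gradients of a quadratic and a log-cosh penalty over nearest-neighbour differences might look like this (a sketch with numpy; np.roll wrap-around at the boundaries is ignored for simplicity):

import numpy as np

def quadratic_prior_gradient(image):
    # gradient of 0.5 * sum over neighbour pairs of (x_j - x_k)^2
    grad = np.zeros_like(image)
    for axis in range(image.ndim):
        for shift in (-1, 1):
            grad += image - np.roll(image, shift, axis=axis)
    return grad

def logcosh_prior_gradient(image, delta=1.0):
    # gradient of sum over neighbour pairs of delta * log(cosh((x_j - x_k)/delta))
    grad = np.zeros_like(image)
    for axis in range(image.ndim):
        for shift in (-1, 1):
            grad += np.tanh((image - np.roll(image, shift, axis=axis)) / delta)
    return grad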

remaining issues with exercises_data_path

PET notebooks generally do the following

os.chdir(examples_data_path('PET'))
# Copy files to a working folder and change directory to where these files are.
# We do this to avoid cluttering your SIRF files. This way, you can delete 
# working_folder and start from scratch.
shutil.rmtree('working_folder/brain',True)
shutil.copytree('brain','working_folder/brain')
os.chdir('working_folder/brain')

However, this means they create working_folder inside examples_data_path, as opposed to exercises_data_path.
Best to change this to:

os.chdir(exercises_data_path('PET'))
# Copy files to a working folder and change directory to where these files are.
# We do this to avoid cluttering your SIRF files. This way, you can delete 
# working_folder and start from scratch.
shutil.rmtree('working_folder/brain',True)
shutil.copytree(os.path.join(examples_data_path('PET'),'brain'),'working_folder/brain')
os.chdir('working_folder/brain')

corrections to d_undersampled_Recon

  • "This looks like our thang!" misspelling
  • E^H formula should not divide by coil-sensitivity sum
  • conjugate gradient algorithm Hint 0 should be b=E^H y
  • misspelling "quanitity"
  • misspelling "intialize p" in the answer of "Initialize Iterative Reconstruction"

sirf_registration notebook example

1.) When run in Jupyter from Docker, it uses Python 2, and hence the print output is ugly (though readable).

2.) Clarity: does the 1 in the statement get_dimensions()[1] refer to the first dimension? I thought Python labelled the first dimension as 0? And Python throws in another bit of confusion by putting the slice in this dimension.
My point is that, to anyone not versed in the quirks of Python, this is quite hard to follow and needs some comments in the example (see the snippet after this list).

3.) Does GeometricalInfo have a print method? This could be handy.

4.) The example might be made clearer by splitting it into a few examples. First, two images that look similar apart from an in-plane displacement, and registering them. Second, two images that look like those in the example but with no real-world displacement, to show that the registration takes into account the spatial referencing provided by the geometrical info (the expected displacement field would be the identity). Third, the full thing, but stating at the outset how the images differ in their real-world position (at the moment it's not very clear whether they are a fixed head with incorrect geometrical info, or whether there has been some movement between them that we are trying to find by registration).
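Regarding point 2, a couple of comments in the notebook could already help (a sketch, using get_dimensions as the notebook does):

dims = image.get_dimensions()
# Python indexes from 0, so dims[1] is the *second* entry of the tuple;
# for these images that entry happens to be the slice dimension.
print(dims)
print(dims[1])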

overview of folder structure

We need an overview of where everything is, and where to start.

@DANAJK, you suggested having this in a notebook. I think I'd prefer to have it as part of the README (potentially splitting that up into several files). You can open those in the Jupyter interface with reasonable formatting (although sadly links etc. don't work). What do you think?

We could have both of course. Maybe update frequency is low enough that duplication doesn't matter.

If a notebook, I guess it would sit in the Introductory folder.

Cropping with SIRF <= v2.1.0

The following was in this notebook as a workaround for older versions of SIRF: https://github.com/SyneRBI/SIRF-Exercises/blob/master/notebooks/Synergistic/BrainWeb.ipynb

I don't really want it in that notebook, so I'll move it here, close the issue, and point that notebook here.

Cropping with SIRF <= v2.1.0

zoom_image and move_to_scanner_centre didn't exist prior to SIRF v2.1.0. If your version is older, you can use the STIR executables and a bit of bash. To be able to use the STIR executables, make sure they're enabled. This is system-dependent, but might look something like this:

cd ~/devel/buildVM
cmake . -DSTIR_BUILD_EXECUTABLES:BOOL=ON
make

Now you should be able to replace:

im = im.zoom_image(size=(-1,150,150),offset_in_mm=(0,25,25))
im = im.move_to_scanner_centre(templ_sino)
im.write(fname + "_small.hv")
return im

with:

small_fname = fname + "_small.hv"
!zoom_image {small_fname} {fname}.hv 150 1 25 25
!sed -r -i 's/.*first pixel offset \(mm\).*//' {small_fname}
return pet.ImageData(small_fname)

You'll have to do something similar when it comes to the misalignment: replace the following:

resampled = resampler.get_output()
misaligned_image = resampled.move_to_scanner_centre(templ_sino)
return misaligned_image

with the following:

resampled = resampler.get_output()
resampled.write("tmp_resampled")
!sed -i '/first pixel offset (mm)/d' tmp_resampled.hv
misaligned_image = pet.ImageData("tmp_resampled.hv")
!rm ./tmp_resampled.*
return misaligned_image

sed is system-dependent, so I make no promises that this will work. But give it a go if necessary!

jupyter notebook reconstruct_measured_data: missing file 20170809_NEMA_60min_UCL.l.hdr

In [2]: listing the data directory gives

list.l  list.l.hdr  mMR_template_span11.hs  mMR_template_span11.s  mMR_template_span11_small.hs  mMR_template_span11_small.s  mu_map.hv  mu_map.v  norm.n  norm.n.hdr  README.md

(note that 20170809_NEMA_60min_UCL.l.hdr is not among them)

In [7]:
error: ??? "'Error opening file 20170809_NEMA_60min_UCL.l.hdr\n' exception caught at line 261 of /home/sirfuser/devel/buildVM/sources/SIRF/src/xSTIR/cSTIR/cstir.cpp\nthe reconstruction engine output may provide more information"

HKEM: currently too long

At the moment, it focuses on how to investigate the effect of parameters. This means many reconstructions.
It is also difficult to modify in terms of coding, since we just call the KOSMAPOSL reconstructor.

To make it quicker:

  • remove all the KEM reconstructions and dedicate a cell to the differences between HKEM and KEM
  • reduce the number of different parameter choices
  • reduce the number of iterations
  • increase the number of subsets
  • do all the exercises at N=3 and look at the differences between the different values in the last cell
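Something along these lines, perhaps (a sketch: KOSMAPOSLReconstructor is what the notebook calls, but the setter names below are the generic SIRF iterative-reconstructor ones and should be double-checked):

recon = pet.KOSMAPOSLReconstructor()
recon.set_num_subsets(21)        # more subsets -> cheaper sub-iterations
recon.set_num_subiterations(42)  # fewer total updates to keep the runtime down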


registration demo question

Just tried with 2.2.0-rc.1 VM.

The registration demo works, but it seems a bit strange. Comparing the initial display of the images with the display of the registered images (screenshots omitted), it does seem to have rotated the image correctly. However, the matrix is

numpy.set_printoptions(precision=2,suppress=True)
TM = algo.get_transformation_matrix_forward()
print(TM.as_array())
[[   0.99    0.07   -0.11 -125.71]
 [  -0.06    0.99    0.09  383.12]
 [   0.11   -0.08    0.99  138.53]
 [   0.      0.      0.      1.  ]]

which seems to show no rotation but a huge translation.
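A quick sanity check (a sketch, reusing TM from above) suggests the off-diagonal terms do encode a rotation of roughly 10 degrees, so the matrix is not quite rotation-free; the large translation may also partly reflect the images' world-coordinate origins:

import numpy as np

R = TM.as_array()[:3, :3]
# for a rotation matrix, tr(R) = 1 + 2*cos(theta)
angle = np.degrees(np.arccos((np.trace(R) - 1) / 2))
print(angle)  # roughly 10 degrees for the matrix printed above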

should not extract downloaded data into SIRF/data/examples

Running download_*.sh puts symbolic links in SIRF/data/examples, which then results in git saying the data folder has changed (always tricky with submodules).

It would be better to just put this somewhere else, and then adapt the relevant demos.

BrainWeb: forward project data with tumour (more interesting to use)

The tumour insertion is an extra in this notebook. However, since all the synergistic reconstructions depend on this output, it would be more interesting to save the data and use it for the synergistic notebooks.

Something like this should do:

# Forward project the FDG image with tumour
umap_small = pet.ImageData('uMap_small.hv')
am = get_acquisition_model(umap_small, templ_sino)
sino_tumour_FDG = am.forward(pet_tumour)
sino_tumour_FDG.write("FDG_tumour_sino")
sino_tumour_FDG_noisy = add_noise(sino_tumour_FDG, 1000)
sino_tumour_FDG_noisy.write("FDG_tumour_sino_noisy")

missing modules

These need to be pip-installed (maybe add a requirements.txt; see the sketch after this list):

  • brainweb>=1.5.1 #48
  • nibabel
  • numba
  • tqdm #41
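A minimal requirements.txt covering just the packages above could be:

brainweb>=1.5.1
nibabel
numba
tqdm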

git merge conflicts due to python 2 vs 3

We use nbstripout to prevent conflicts due to notebook output, and to prevent output from being committed. Sadly, we still get conflicts when a notebook committed with one Python version is run with another. Example:

$ git diff
--- a/notebooks/MR/interactive/a_fully_sampled.ipynb
+++ b/notebooks/MR/interactive/a_fully_sampled.ipynb
@@ -340,21 +340,21 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 2",
    "language": "python",
-   "name": "python3"
+   "name": "python2"
   },
   "language_info": {
    "codemirror_mode": {
     "name": "ipython",
-    "version": 3
+    "version": 2
    },
    "file_extension": ".py",
    "mimetype": "text/x-python",
    "name": "python",
    "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.7.1"
+   "pygments_lexer": "ipython2",
+   "version": "2.7.15rc1"
   }
  },
  "nbformat": 4,

This looks harmless, but

$ git pull
Updating 2e88c7a..471f7f4
error: Your local changes to the following files would be overwritten by merge:
	notebooks/MR/interactive/a_fully_sampled.ipynb
Please commit your changes or stash them before you merge.
Aborting

Enable @jit?

Now that we're on Python 3, we could install numba and enable @jit by default.

Do we want this?
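For context, what enabling it would buy (illustrative only; any hot Python loop in the demos would be a candidate):

import numpy as np
from numba import jit

@jit(nopython=True)
def sum_of_squares(a):
    # numba compiles this loop to machine code on first call
    total = 0.0
    for x in a:
        total += x * x
    return total

print(sum_of_squares(np.arange(1e6)))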

Replacing symbolic links

Currently, download_MR_data.sh downloads the data to ~/data and then symbolically links the files into the locations expected by the examples.

This may cause a problem for the two GRAPPA files (grappa2_1rep.h5 and grappa2_6rep.h5), which during git pull are downloaded directly into the examples folder. This might mean these files have to replace the symbolic links in that folder.

change URL for NEMA data

The current Azure link will expire after the course, so we need to put the data elsewhere later on.

e_advanced_recon.ipynb

The data path in e_advanced_recon.ipynb is not correct. It requires:

from exercises_data_path import exercises_data_path
#%% GO TO MR FOLDER
os.chdir(os.path.join(exercises_data_path,'MR', 'PTB_ACRPhantom_GRAPPA'))

Also, this block is wrong because bwd_img_arr is 3D:

# PLOT THE RESULTS
grappa_img_arr = norm_array(grappa_images.as_array())
bwd_img_arr = norm_array(bwd_img.as_array())

fig = plt.figure(figsize=(9, 4))
plt.set_cmap('gray')

ax = fig.add_subplot(1,2,1)
ax.imshow(abs(grappa_img_arr), vmin=0, vmax=1)
ax.set_title('Result of GRAPPA Reconstruction')
ax.axis('off')

ax = fig.add_subplot(1,2,2)
ax.imshow(abs(bwd_img_arr), vmin=0, vmax=1)
ax.set_title('Result of AcquisitionModel.backward()')
ax.axis('off')

plt.tight_layout()

>> TypeError: Invalid shape (4, 256, 256) for image data
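A possible fix (a sketch): since bwd_img_arr has shape (4, 256, 256), display a single 2D slice; which index is the meaningful one depends on the acquisition:

ax.imshow(abs(bwd_img_arr[0]), vmin=0, vmax=1)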

reconstruct_measured_data: scatter is missing

This notebook shows how to read data from the mMR and perform correction estimation, which is very useful for anyone who wants to start with SIRF and reconstruct real data. However, scatter estimation is missing.

add requirements.txt

We need to preinstall some Python packages, see also #76 (comment):

  • add requirements.txt
  • modify the documentation
  • in our Docker files, check if it exists and use it
  • in the VM files, check if it exists and use it
