
holopy's Introduction

Holography and Light Scattering in Python


HoloPy is a Python-based tool for working with digital holograms and light scattering. HoloPy can:

  • Load images, associate them with experimental metadata, and visualize loaded or calculated images.
  • Reconstruct 3D volumes from digital holograms.
  • Do scattering calculations:
    • Compute holograms, electric fields, scattered intensity, cross sections, ...
    • From spheres, clusters of spheres, and arbitrary structures (using DDA).
  • Make precise measurements by fitting scattering models (based on the above structures) to experimental data.

HoloPy provides a powerful and user-friendly interface to scattering and optical propagation theories. It also provides a set of flexible objects that make it easy to describe and analyze data from complex experiments or simulations. HoloPy's optical propagation theories work for holograms of arbitrary objects; its current scattering calculations accurately describe scatterers tens of micrometers in size and smaller.

The easiest way to see what HoloPy is all about is to jump to the examples in our user guide.

HoloPy started as a project in the Manoharan Lab at Harvard University. If you use HoloPy, you may wish to cite one or more of our papers. We also encourage you to sign up for our User Mailing List to keep up to date on releases, answer questions, and benefit from other users' questions.

HoloPy is based upon work supported by the National Science Foundation under grant numbers CBET-0747625, DMR-0820484, DMR-1306410, and DMR-1420570.

holopy's People

Contributors

agoldfain, anna-wang, anniestephenson, barkls, briandleahy, carreau, charliegreboval, danhickstein, egehrels, faaborg, golovart, jeromefung, korigod, matthiasgi, ralex0, rbnvrw, rebeccawperry, rgov, tdimiduk, victoriahwang89, vnmanoharan, wsy2220


holopy's Issues

API improvements

Let's make the API more consistent and easier to use.
See issues:
#43
#32
#33

See whether we can simplify the MArray hierarchy (and come up with a better name?)

Using DDA on objects with more than one domain

Using the example from the tutorial (http://manoharan.seas.harvard.edu/holopy/users/dda_tutorial.html#dda-tutorial) fails:

import holopy as hp
from holopy.core import Optics, ImageSchema
from holopy.scattering.scatterer import Scatterer, Sphere
from holopy.scattering.theory import DDA
s1 = Sphere(r = .5, center = (0, -.4, 0))
s2 = Sphere(r = .5, center = (0, .4, 0))
schema = ImageSchema(100, .1, Optics(.66, 1.33, (1, 0)))
dumbbell = Scatterer(lambda point: s1.contains(point) or s2.contains(point),
                     1.59, (5, 5, 5))
holo = DDA.calc_holo(dumbbell, schema)

gives error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
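A likely workaround (untested against holopy itself) is to build the indicator with numpy's elementwise `logical_or` rather than Python's `or`, which cannot combine boolean arrays. The underlying behavior can be reproduced with plain numpy, no holopy needed:

```python
import numpy as np

# `contains` evaluated on many points yields a boolean array; Python's
# `or` tries to collapse each array to a single truth value and fails
# with exactly the ValueError reported above.
in_s1 = np.array([True, False, False])
in_s2 = np.array([False, True, False])

# in_s1 or in_s2          # would raise the ambiguous-truth ValueError
combined = np.logical_or(in_s1, in_s2)  # elementwise union works
```

So the tutorial's indicator could plausibly be written as `lambda point: np.logical_or(s1.contains(point), s2.contains(point))`.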

ScatteringTheory.calc_intensity assumes a detector facing along the z-axis

This bug is in scatteringtheory.py. The hard-coded "normal" vector assumes the detector is pointing along the z-axis, so it only uses the components of the electric field in the x and y directions to calculate the intensity. This creates problems when considering scattering in another direction, e.g. for calculations of dark-field scattering spectra.

hdf5

Switch to reading/writing images and arrays as hdf5 rather than dumping binary data into .yaml files.

We no longer want to create image-containing .yaml files, but we still need to be able to read them for the foreseeable future.

Data for tests is currently in .yaml files, and some should be converted to hdf5.
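As a sketch of the direction (assuming h5py; the actual implementation might equally use pytables or a higher-level library), image data would become an HDF5 dataset and the experimental metadata its attributes:

```python
import h5py
import numpy as np

def save_image_h5(path, data, **metadata):
    """Sketch: write image data as an HDF5 dataset, with metadata stored
    as attributes instead of binary blobs dumped into .yaml."""
    with h5py.File(path, 'w') as f:
        dset = f.create_dataset('image', data=data, compression='gzip')
        for key, value in metadata.items():
            dset.attrs[key] = value

def load_image_h5(path):
    """Read the dataset and its metadata back."""
    with h5py.File(path, 'r') as f:
        dset = f['image']
        return np.array(dset), dict(dset.attrs)
```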

Switch to lmfit-py for minimization

lmfit-py (https://github.com/lmfit/lmfit-py) is a python implementation of the Levenberg-Marquardt algorithm. It has support for bounds. Switching to lmfit from nmpfit would offer the following advantages:

  • we get uncertainty calculations for free, and in a transparent way (http://lmfit.github.io/lmfit-py/fitting.html#goodness-of-fit-and-estimated-uncertainty-and-correlations).
  • we could eliminate a lot of the code that is built around nmpfit; lmfit has its own Parameter class that supports bounds and constraints.
  • we could eliminate the dependency on nmpfit
  • lmfit has support for other minimizers supported by scipy, so users would have access to these without having to install OpenOpt.
  • potentially we could also parallelize the Jacobian calculation. This is the most expensive part of fitting, as it requires calculating holograms multiple times. See lmfit-py enhancement request (lmfit/lmfit-py#76) and sample code (http://stackoverflow.com/a/19679060)
  • lmfit is under active development, and the developers are quite responsive. So if there is a feature we need (such as tied parameters) we can modify the code and submit a pull request to have it incorporated upstream

We would probably still need to wrap lmfit-py to make it compatible with our existing API, but there is already work on writing a high-level Model class and fit() function (lmfit/lmfit-py#57)

ported over from a launchpad blueprint originally created by @vnmanoharan

Switch to pillow for image io

pillow (http://python-imaging.github.io/) is a fork of the Python Imaging Library that is actively developed (https://github.com/python-imaging/Pillow). It includes support for 12-bit TIFF files (python-pillow/Pillow#416). If we switch to pillow from PIL, we can remove the dependency on tifffile.py and potentially eliminate a lot of the code in holopy/core/io/image_file_io.py.

We will need to develop unit tests for loading the 12-bit TIFF files from the Photon Focus camera and other images that have been a problem to load. If they do not load properly using pillow, we can file an issue upstream.
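A first unit test could be as simple as a round trip through Pillow (the actual 12-bit Photon Focus files would still need to be added as test data; 12-bit camera data usually arrives in 16-bit containers):

```python
import os
import tempfile
import numpy as np
from PIL import Image

def load_image(path):
    """Load an image via Pillow into a numpy array; with Pillow's TIFF
    support this should let us drop both PIL and the bundled tifffile.py."""
    return np.asarray(Image.open(path))

# round-trip sanity check with a 16-bit grayscale TIFF
path = os.path.join(tempfile.mkdtemp(), 'roundtrip.tif')
arr = (np.arange(16, dtype=np.uint16) * 256).reshape(4, 4)
Image.fromarray(arr).save(path)
recovered = load_image(path)
```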

port of an issue by @vnmanoharan from launchpad

propagate does not expect images with both negative and positive pixels

Suppose img is an image with both negative and positive pixels.

Then

hp.show(hp.propagate(img, 0.01))

gives a distinctly different result than

hp.show(hp.propagate(img - img.min(), 0.01))

With autoscaling, the second result looks more similar to the un-propagated image, as desired for a propagation of a small finite distance.

Timeseries Interface

We need a standardized interface for analysing timeseries data, rather than having everyone write their own.

Coordinate system convention in Positions class

Dear holopy team,

The definition of theta in the Positions class (line 474 of metadata.py) appears to be consistent with conventional spherical coordinates. However, this means that theta = 0 corresponds to the positive z axis (backscattering) and theta = pi corresponds to the negative z axis (forward scattering), the reverse of conventional scattering coordinates. Is this correct?

Thank you,
Nick

Rethink saving to file

hp.save()
  • defaults to saving any HoloPy object as hdf5
  • option to save to yaml if preferred

hp.save_image()
  • saves an Image object to an image file
  • saves metadata if the format supports it
  • options to control contrast range and discretization

Test Data Repo

We currently have machinery to test against full saves of holograms, fields, etc., but no good way to distribute that test data. We don't want to check large binary files into the main holopy repository. I think it would work well to have a separate holopy-test-golds repo (or propose another name) where we check in that data, plus a script in the main holopy repository to fetch those files.

It might be worth adding an option to our run_nose script that fetches the data when needed and runs the tests against the full set:
python run_nose.py --fetch-full-golds
or something like that.

Implement and document cascaded free-space propagation for reconstruction

Commit cb3491a deleted the old cascaded free-space propagation code for reconstruction. This code was very old and was rarely used, perhaps because the feature was poorly documented. But I think it is an important feature for reconstruction. The idea (described in [1]) is as follows: say you want to reconstruct your object at a distance of 10 microns. Instead of calculating the transfer function at 10 microns, you calculate it at a smaller distance, say 5 microns, and then multiply by the square of the transfer function. This way you avoid artifacts in the reconstruction that are related to the limited window of the transfer function (the larger the distance, the smaller the window over which the transfer function is nonzero).

I think that holopy should figure out automatically if cascaded free-space propagation is necessary to avoid artifacts in the reconstruction. This would entail some code that checks to see if we are undersampling the transfer function. Implementing this technique would hopefully prevent some of the artifacts we see in reconstructions.

I don't think this is necessary for 3.0, but I do think it is important to implement for anyone doing reconstructions.

[1] Kreis, Thomas M. “Frequency Analysis of Digital Holography with Reconstruction by Convolution.” Optical Engineering 41, no. 8 (2002): 1829–39. doi:10.1117/1.1489678.
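A generic numpy sketch of the idea (illustrative names, not HoloPy's internal API): the kernel for the full distance is replaced by the better-sampled kernel for a fraction of the distance, applied repeatedly.

```python
import numpy as np

def transfer_function(shape, spacing, wavelen, d):
    """Angular-spectrum transfer function for free-space distance d;
    evanescent components (outside the nonzero window) are zeroed."""
    fx = np.fft.fftfreq(shape[0], spacing)
    fy = np.fft.fftfreq(shape[1], spacing)
    fx, fy = np.meshgrid(fx, fy, indexing='ij')
    arg = 1.0 / wavelen**2 - fx**2 - fy**2
    prop = arg > 0
    return np.exp(2j * np.pi * d * np.sqrt(np.where(prop, arg, 0))) * prop

def propagate_cascaded(field, spacing, wavelen, d, nsteps=1):
    """Propagate by d in nsteps equal hops. Mathematically
    H(d) = H(d/n)**n, but the smaller-distance kernel stays well
    sampled, which is the point of the cascaded scheme in [1]."""
    H = transfer_function(field.shape, spacing, wavelen, d / nsteps)
    F = np.fft.fft2(field)
    for _ in range(nsteps):
        F = F * H
    return np.fft.ifft2(F)
```

The automatic version would compare the phase change of H between adjacent frequency samples against pi to decide when to split the distance.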

Cannot reproduce recon example

I'm trying to reproduce the tutorial example on reconstruction and it is failing with the following error:

from numpy import linspace
import holopy as hp
from holopy.core import Optics
from holopy.propagation import propagate
from holopy.core.tests.common import get_example_data
from holopy.core import load

holo = get_example_data('image0001.yaml')
rec_vol = propagate(holo, linspace(4e-6, 10e-6, 7))
hp.show(rec_vol)

yields:

---------------------------------------------------------------------------
FutureWarning                             Traceback (most recent call last)
<ipython-input-10-96a8540cfb8d> in <module>()
      8 holo = get_example_data('image0001.yaml')
      9 rec_vol = propagate(holo, linspace(4e-6, 10e-6, 7))
---> 10 hp.show(rec_vol)

/Users/bmorris/anaconda/lib/python2.7/site-packages/holopy/vis/show.pyc in show(o, color)
     57         show_sphere_cluster(o,color)
     58     elif isinstance(o, (Image, np.ndarray, list, tuple)):
---> 59         show2d(o)
     60     elif isinstance(o, Scatterer):
     61         show_scatterer(o)

/Users/bmorris/anaconda/lib/python2.7/site-packages/holopy/vis/vis2d.pyc in show2d(im, i, t, phase)
    176         axis_names = ['x', 'z']
    177 
--> 178     im = squeeze(im)
    179 
    180 

/Users/bmorris/anaconda/lib/python2.7/site-packages/holopy/core/marray.pyc in squeeze(arr)
    678     """
    679     keep = [i for i, dim in enumerate(arr.shape) if dim != 1]
--> 680     if not hasattr(arr,'spacing') or arr.spacing == None:
    681         spacing = None
    682     else:

FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.

I'm using Python 2.7, Mac OS X 10.10.
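The warning points at the real bug: `arr.spacing == None` asks numpy for an elementwise comparison once `spacing` is an array. The idiomatic fix in `marray.squeeze` would be an identity check, as this minimal example shows:

```python
import numpy as np

spacing = np.array([0.1, 0.1, 0.1])

# spacing == None   -> elementwise comparison (FutureWarning, and an
#                      ambiguous truth value if used in an `if`)
# identity test is unambiguous for any type:
has_spacing = spacing is not None
```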

Installation Package

Ideally using conda-forge

Tom has experience with Nix and may also implement that.

Too many different exceptions

We should think about coarse-graining our exceptions. In general having exceptions for particular errors is useful only insofar as the user might want to catch that particular exception. But we don't want a different exception for anything that can go wrong, since we have to write all those exceptions and remember their names to import them. There are several cases in which we can coarse-grain to make it easier to write new functions and classes without significantly impacting the user. For example, the Multisphere code has three different exceptions:

class MultisphereFieldNaN(UnrealizableScatterer):
    def __str__(self):
        return "Fields computed with Multisphere are NaN, this probably "
        "represents a failure of the code to converge, check your scatterer."


class MultisphereExpansionNaN(Exception):
    def __str__(self):
        return ("Internal expansion for Multisphere coefficients contains "
                "NaN.  This probably means your scatterer is unphysical.")

class ConvergenceFailureMultisphere(Exception):
    def __str__(self):
        return ("Multisphere calculations failed to converge, this probably means "
"your scatterer is unphysical, or possibly just huge")

All of which tell the user the same thing -- that there is a problem with the scatterer. Is there a way to reduce these to one exception, and use the message to indicate where the point of failure was? Also, in general, can we reuse exception classes from the standard library (like ValueError) where we are doing standard tasks like validating input? All of this would make the code easier to read and maintain.
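One way the coarse-graining could look (illustrative names, not existing holopy code; incidentally, `MultisphereFieldNaN.__str__` as quoted returns only its first string, because the second string literal is a discarded expression on its own line):

```python
class MultisphereFailure(Exception):
    """A single exception for all Multisphere numerical failures;
    `reason` records the specific point of failure."""

    def __init__(self, reason):
        super().__init__(
            "Multisphere calculation failed ({}). This usually means the "
            "scatterer is unphysical or the series did not converge; "
            "check your scatterer.".format(reason))

# the three raise sites would then become, for example:
# raise MultisphereFailure("fields are NaN")
# raise MultisphereFailure("expansion coefficients are NaN")
# raise MultisphereFailure("iteration failed to converge")
```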

Make Scattering theories into simple functions

The scattering theories are currently classes with magic that lets you call their calculation methods as if they were plain functions.

We decided to make the scattering theories into objects because we thought it would be useful for having theory settings, but in practice this is almost never used.

The code that allows the current api is kind of excessively magical (uses inspect and a decorator that inserts functions into an object's dictionary) and is a bit brittle to things like pickling (needed for multiprocessing support). So it is probably more hassle and maintenance burden than it is worth.

HoloPy does not capture all of ADDA's output

ADDA computes a number of potentially interesting scattering quantities that HoloPy does not currently capture. It should be relatively easy to parse them out of the output files and make them available. Have to think about what the right way to present that data is.

Python 3

Probably cleaner to support only one version of Python, i.e., abandon Python 2.

Make Mie automatically determine if it needs to compute near field stuff

Using compute_escat_radial and full_radial_dependence in Mie calculations when they are not needed can cost almost a factor of 10 in run time in some cases. I think it should be possible to pick a conservative threshold beyond which they are definitely not needed, and have the code automatically turn those extra calculations off.
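A hypothetical heuristic for that threshold (the cutoff value and function name are placeholders, not holopy constants): the radial near-field terms fall off with kr, so once kr is comfortably large they can be skipped.

```python
import numpy as np

def needs_near_field(distance, wavelen, medium_index, threshold=20.0):
    """Hypothetical check: include radial near-field terms only when
    the detector distance is within a conservative number of reduced
    wavelengths of the scatterer (kr below `threshold`)."""
    k = 2 * np.pi * medium_index / wavelen
    return k * distance < threshold
```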

resampling with one value does not result in a square image

In the resampling examples given below, the second one doesn't work as expected. The expected output is that h would be [300x300] and the spacing would be [0.05466667 0.05466667]. Instead, the image is just shrunk along the first axis.

import holopy as hp
from numpy import zeros
from holopy.core import ImageSchema, Optics

optics = Optics(wavelen=.532, index=1.405, polarization=[1.0, 0.0])

resampling with both dimensions explicitly specified:

h = hp.core.marray.Image(zeros([400, 400]), spacing=.041, optics=optics)
h = h.resample([300, 300])
print(h.spacing)

resampling with just one number:

h = hp.core.marray.Image(zeros([400, 400]), spacing=.041, optics=optics)
h = h.resample(300)
print(h.spacing)

Copyright headers need to be updated

Copyright headers in each file should contain all years in which the code was modified. Also the author list should probably be updated. My recommendation is that we do the author list file-by-file, and that the policy (going forward) is that if someone commits non-trivial changes to a source file, they simply add their name to the end of the Copyright statement in the header. For the copyright years, the FSF recommends (https://www.gnu.org/prep/maintain/html_node/Copyright-Notices.html):

"To update the list of year numbers, add each year in which you have made nontrivial changes to the package...When you add the new year, it is not required to keep track of which files have seen significant changes in the new year and which have not. It is recommended and simpler to add the new year to all files in the package, and be done with it for the rest of the year."

Easier import api

Figure out to what degree it makes sense to provide shorter paths to commonly used functions and objects. Right now you need a bunch of boilerplate imports at the top of each script to do things. It might be nice to be able to do something like

import holopy.fitting.api as hp
or
import holopy.scattering.api as hp
or
from holopy.fitting.api import Sphere, fit, Optics, ...

and get all the stuff you want without having to do separate imports from lots of different places.

hp.vis.vis2d.show_scatterer_slices - not sure if this still works

I'm not sure hp.vis.vis2d.show_scatterer_slices still works, or whether there's a different way of calling it now. When I type the following I get a bunch of black slices:

import holopy as hp
from holopy.scattering.scatterer import Ellipsoid

test = Ellipsoid(n = 1.585, r = [.4,0.4,1.5], center = [10,10,20])
hp.vis.vis2d.show_scatterer_slices(test,0.1)

I also get an error the first time I try this:

colorbar.py:829: RuntimeWarning: invalid value encountered in divide
  z = np.take(y, i0) + (xn - np.take(b, i0)) * dy / db

(example slice attached as figure_1: rendered all black)

DDA does not agree with Mie for a sphere

sch = ImageSchema(shape=[100, 100], spacing=[0.1, 0.1],
                  optics=Optics(wavelen=0.66, index=1.33,
                                polarization=[1, 0], divergence=0.0),
                  origin=[0.0, 0.0, 0.0])
sph = Sphere(n=1.59, r=0.5, center=(5, 5, 5))
hm = Mie.calc_holo(sph, sch)
hd = DDA.calc_holo(sph, sch)
hp.show(hm - hd)

These do not agree as well as they probably should. It appears there is a systematic offset in the center position.

Check scattering calculations for large spheres

PR #37 changed how we compute the number of terms for field calculations. We do not currently have any tests of large spheres that this could have affected. We should add a unit test computing scattering from a large sphere and run it against old versions of the code. @anna-wang ran a quick check and it looks OK, but it would still be good to have a test.

Open PRs

Evaluate and merge/close all open pull requests.

Make scatterers and theory available from holopy.scattering?

Is it worth making theories (Mie, Multisphere, DDA, ...) and scatterers (Sphere, Spheres, ...) available from holopy.scattering instead of just in the theory and scatterer namespaces?

Example, currently:

from holopy.scattering.scatterer import Sphere, Spheres
from holopy.scattering.theory import Mie, Multisphere

If we pushed these namespaces down, it would then be possible to do

from holopy.scattering import Sphere, Spheres, Mie, Multisphere

though the old code would of course still work.

I don't think we have enough things here for this namespace to get too crowded, and it would save a line and a chunk of typing in some of the most common use cases of holopy.

Optical train modeling

We should model the optical train more explicitly. This may help solve alpha, and could allow us to correct for things like tilt, spherical aberration, and effects of that sort.

Rethink Backgrounding

We need to think through in more detail the best way to use backgrounds to minimize error.
This may be:
Final = (Raw - Darkfield)/Background
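A minimal numpy sketch of the correction as proposed (function name illustrative; note that some setups instead divide by (background - darkfield), depending on whether the background frame was itself dark-subtracted):

```python
import numpy as np

def background_correct(raw, darkfield, background):
    """Apply final = (raw - darkfield) / background, elementwise,
    following the formula proposed above."""
    return (raw - darkfield) / background
```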

Conversion Tools for old file formats

As discussed in #45 we are going to fork off readers for old formats (holopy yaml, older hdf5 files) into a separate repo.

I propose a name like holopy-legacy-reader, comment if you prefer something else.

My plan is that the legacy-reader will depend on modern HoloPy (as currently embodied in my xarray branch) for writing new style netcdf/hdf5 files but will contain the legacy code needed to read older files.

Remove old nose dependency

holopy/scattering/theory/dda.py, line 40:

from nose.plugins.skip import SkipTest

This uses old nose; we need an equivalent that works with new nose (or, better, no nose at all).
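Assuming the exception is only ever raised to skip tests, a drop-in option is the stdlib's unittest.SkipTest, which both nose-era and modern runners honor:

```python
try:
    from nose.plugins.skip import SkipTest  # old nose layout
except ImportError:
    # stdlib equivalent; removes the nose dependency entirely
    from unittest import SkipTest
```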

setup.py: fortran doesn't compile properly when multiple fortran compilers are available

In Ubuntu

python setup.py build_ext --inplace

gives the following error when both gcc/gfortran and an Intel Fortran compiler are installed:

RuntimeError: module compiled against API version 9 but this version of numpy is 6
/n/home08/annawang/temp/holopy/holopy/scattering/theory/__init__.py:43: UserWarning: 
Could not import scattering.  You will not be able to do scattering
calculations, but the rest of holopy sould remain usable.

This is probably due to your not having properly compiled versions holopy's
fortran bits.  
  fortran bits.  """)
/n/sw/python-2.7.1/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'package'
  warnings.warn(msg)

[tonnes more lines...]

Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using build_ext
running scons

Removing either gcc or the Intel compiler results in the code compiling properly.

Copyright headers are missing on some files

this is not an exhaustive list:
holopy/holopy/fitting/mcmc.py
holopy/holopy/fitting/noise_model.py
holopy/holopy/fitting/prior.py
holopy/holopy/fitting/timeseries.py
holopy/holopy/fitting/random_subset.py

Copyright headers should be added to each (see #56), and to any other (non-third-party) files without headers.

switch to xarray

Pandas is currently used in the Bayesian inference code. We might as well expand it to the rest of holopy where it makes sense, for example timeseries data.

multisphere fortran text output

While running, the multisphere code outputs a lot of text to the terminal as it iterates through calculating scattering terms.

For most users, this text output is annoying and not useful. Under what circumstances might this information be relevant? Can we suppress this output by default unless the user is concerned about convergence of their scattering calculation?
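One complication is that Fortran writes directly to file descriptor 1, so redirecting sys.stdout alone does not silence it. A sketch of an OS-level suppressor (illustrative name, not existing holopy code):

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def quiet_fd1():
    """Temporarily redirect file descriptor 1 to /dev/null, catching
    output from Fortran code that bypasses sys.stdout entirely."""
    sys.stdout.flush()
    saved = os.dup(1)
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, 1)
        yield
    finally:
        os.dup2(saved, 1)
        os.close(devnull)
        os.close(saved)
```

Usage would be something like `with quiet_fd1(): holo = Multisphere.calc_holo(...)`, enabled by default and switchable off for users who do want to watch convergence.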

Improve Scattering Theory API

@agoldfain has requested we make it easier to add new scattering theories to holopy. Ideally what we should do is clean up and standardize the scattering theory classes a bit and then document what you need to write to add a new one (it should just be a function or two).

Rethink fitting in holopy

@vnmanoharan said: Should we get rid of the OpenOpt code? I don't think it is widely used, and if we're going to work on the fitting infrastructure, I'd rather it be based on lmfit.

Bayesian Inference

Interface needs some thought. Docs need heavy updating to emphasize its existence.

import Image

In image_file_io.py, I had to change the line "import Image as PILImage" to "from PIL import Image as PILImage".
