manoharan-lab / holopy
Hologram processing and light scattering in python
License: GNU General Public License v3.0
hp.save()
- defaults to saving any HoloPy object as hdf5
- option to save to yaml if preferred
hp.save_image()
- saves Image object to an image file
- saves metadata if the format is compatible
- options to control contrast range, discretization
/holopy/scattering/theory/dda.py", line 40, in
from nose.plugins.skip import SkipTest
That import uses the old nose API; we need to find an equivalent way of doing this with the new nose.
sch = ImageSchema(shape=[100, 100], spacing=[0.10000000000000001, 0.10000000000000001], optics=Optics(wavelen=0.66, index=1.33, polarization=[1, 0], divergence=0.0), origin=[0.0, 0.0, 0.0])
sph = Sphere(n=1.59, r=0.5, center=(5, 5, 5))
hm = Mie.calc_holo(sph, sch)
hd = DDA.calc_holo(sph, sch)
hp.show(hm - hd)
These do not agree as well as they probably should. It appears there is a systematic offset in the center position.
@AnnieStephenson pointed out that there is currently an upper limit of 20 spheres (specified in scfodim.for file). Is there a physical or computational reason for this hard limit?
Dear holopy team,
The definition of theta in the Positions class (line 474 of metadata.py) appears to be consistent with conventional spherical coordinates. However, this means that theta = 0 corresponds to the positive z axis (backscattering) and theta = pi corresponds to the negative z axis (forward scattering), the reverse of conventional scattering coordinates. Is this correct?
Thank you,
Nick
push runtime nose dependence to tests (affects dda scattering exceptions).
@vnmanoharan said: Should we get rid of the OpenOpt code? I don't think it is widely used, and if we're going to work on the fitting infrastructure, I'd rather it be based on lmfit.
In image_file_io.py, I had to change the line "import Image as PILImage" to "from PIL import Image as PILImage"
lmfit-py (https://github.com/lmfit/lmfit-py) is a python implementation of the Levenberg-Marquardt algorithm. It has support for bounds. Switching to lmfit from nmpfit would offer several advantages.
We would probably still need to wrap lmfit-py to make it compatible with our existing API, but there is already work on writing a high-level Model class and fit() function (lmfit/lmfit-py#57)
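As a rough illustration of what bounded least-squares fitting buys us, here is a minimal sketch using scipy.optimize.least_squares as a stand-in (the toy model and parameter values are made up for the example; lmfit wraps the same kind of machinery with named Parameters):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy model: recover amplitude and decay rate of a noiseless exponential,
# with box bounds on both parameters.
x = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-1.3 * x)

def residuals(p):
    # residual vector for parameters p = [amplitude, decay rate]
    return p[0] * np.exp(-p[1] * x) - y

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([0.0, 0.0], [10.0, 10.0]))
```

With no noise the fit recovers the true parameters; the point is that the bounds are declared alongside the starting guesses rather than bolted on.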
ported over from a launchpad blueprint originally created by @vnmanoharan
@agoldfain has requested we make it easier to add new scattering theories to holopy. Ideally what we should do is clean up and standardize the scattering theory classes a bit and then document what you need to write to add a new one (it should just be a function or two).
PR #37 changed how we compute the number of terms for field calculations. We do not currently have any tests of large spheres, which this could have affected. We should add a unit test computing scattering from a large sphere and run it against old versions of the code. @anna-wang ran a quick check and it looks ok, but it would still be good to check.
Using the example from the tutorial (http://manoharan.seas.harvard.edu/holopy/users/dda_tutorial.html#dda-tutorial) fails:
import holopy as hp
from holopy.core import Optics, ImageSchema
from holopy.scattering.scatterer import Scatterer, Sphere
from holopy.scattering.theory import DDA
s1 = Sphere(r = .5, center = (0, -.4, 0))
s2 = Sphere(r = .5, center = (0, .4, 0))
schema = ImageSchema(100, .1, Optics(.66, 1.33, (1, 0)))
dumbbell = Scatterer(lambda point: s1.contains(point) or s2.contains(point),
                     1.59, (5, 5, 5))
holo = DDA.calc_holo(dumbbell, schema)
gives error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Using compute_escat_radial and full_radial_dependence in Mie calculations when you don't need to can cost almost a factor of 10 in run time in some cases. I think it should be possible to pick a conservative threshold where they are definitely not needed and have the code automatically turn those extra calculations off.
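A sketch of the proposed auto-switch; the function name and the threshold value are placeholders, since choosing a conservative cutoff is exactly the work the issue leaves to be done:

```python
def use_full_radial_dependence(kr, threshold=10.0):
    # Within roughly `threshold` in dimensionless distance kr of the particle,
    # keep the extra radial/near-field terms; beyond it, the far-field form
    # should suffice and the ~10x-slower calculation can be skipped.
    # `threshold` here is an illustrative guess, not a validated value.
    return kr < threshold
```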
Probably cleaner to support only one version of Python, i.e., abandon Python 2.
We should think about coarse-graining our exceptions. In general having exceptions for particular errors is useful only insofar as the user might want to catch that particular exception. But we don't want a different exception for anything that can go wrong, since we have to write all those exceptions and remember their names to import them. There are several cases in which we can coarse-grain to make it easier to write new functions and classes without significantly impacting the user. For example, the Multisphere code has three different exceptions:
class MultisphereFieldNaN(UnrealizableScatterer):
    def __str__(self):
        return ("Fields computed with Multisphere are NaN, this probably "
                "represents a failure of the code to converge, check your scatterer.")

class MultisphereExpansionNaN(Exception):
    def __str__(self):
        return ("Internal expansion for Multisphere coefficients contains "
                "NaN. This probably means your scatterer is unphysical.")

class ConvergenceFailureMultisphere(Exception):
    def __str__(self):
        return ("Multisphere calculations failed to converge, this probably means "
                "your scatterer is unphysical, or possibly just huge")
All of which tell the user the same thing -- that there is a problem with the scatterer. Is there a way to reduce these to one exception, and use the message to indicate where the point of failure was? Also, in general, can we reuse exception classes from the standard library (like ValueError) where we are doing standard tasks like validating input? All of this would make the code easier to read and maintain.
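One possible shape for the coarse-grained version, sketched here with illustrative names (not HoloPy's actual API): a single exception whose message records the point of failure.

```python
class MultisphereFailure(Exception):
    """Single exception covering all Multisphere failure modes."""
    def __init__(self, point_of_failure):
        # the message carries what used to be encoded in the class name
        super().__init__(
            "Multisphere calculation failed during {}. This probably means "
            "your scatterer is unphysical, or possibly just huge.".format(
                point_of_failure))

# The three old exception classes become three different messages:
nan_fields = MultisphereFailure("field calculation (NaN fields)")
nan_expansion = MultisphereFailure("internal coefficient expansion (NaN)")
no_convergence = MultisphereFailure("convergence")
```

A user who cared about the distinction could still inspect the message, while everyone else catches one class.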
Figure out to what degree it makes sense to provide shorter paths to commonly used functions and objects. Right now you need a bunch of boilerplate imports at the top of each script to do things. It might be nice to be able to do something like
import holopy.fitting.api as hp
or
import holopy.scattering.api as hp
or
from holopy.fitting.api import Sphere, fit, Optics, ...
and get all the stuff you want without having to do separate imports from lots of different places.
Commit cb3491a deleted the old cascaded free-space propagation code for reconstruction. This code was very old and was rarely used, perhaps because the feature was poorly documented. But I think it is an important feature for reconstruction. The idea (described in [1]) is as follows: say you want to reconstruct your object at a distance of 10 microns. Instead of calculating the transfer function at 10 microns, you calculate it at a smaller distance, say 5 microns, and then multiply by the square of the transfer function. This way you avoid artifacts in the reconstruction that are related to the limited window of the transfer function (the larger the distance, the smaller the window over which the transfer function is nonzero).
I think that holopy should figure out automatically if cascaded free-space propagation is necessary to avoid artifacts in the reconstruction. This would entail some code that checks to see if we are undersampling the transfer function. Implementing this technique would hopefully prevent some of the artifacts we see in reconstructions.
I don't think this is necessary for 3.0, but I do think it is important to implement for anyone doing reconstructions.
[1] Kreis, Thomas M. “Frequency Analysis of Digital Holography with Reconstruction by Convolution.” Optical Engineering 41, no. 8 (2002): 1829–39. doi:10.1117/1.1489678.
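A schematic numpy sketch of the cascaded idea under an angular-spectrum transfer function (the function names and the two-step split are illustrative, not HoloPy's implementation; the real win, per [1], comes from choosing steps small enough that the sampled transfer function is not undersampled):

```python
import numpy as np

def transfer_function(shape, spacing, wavelen, d):
    # Angular-spectrum free-space transfer function sampled on the image grid.
    fx = np.fft.fftfreq(shape[0], spacing)
    fy = np.fft.fftfreq(shape[1], spacing)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    # clip removes evanescent components rather than propagating them
    arg = np.clip(1 - (wavelen * FX) ** 2 - (wavelen * FY) ** 2, 0, None)
    return np.exp(1j * (2 * np.pi / wavelen) * np.sqrt(arg) * d)

def propagate_cascaded(field, spacing, wavelen, d, n_steps=2):
    # Evaluate the transfer function at d / n_steps and raise it to the
    # n_steps power, instead of evaluating it directly at distance d.
    H = transfer_function(field.shape, spacing, wavelen, d / n_steps) ** n_steps
    return np.fft.ifft2(np.fft.fft2(field) * H)
```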
The scattering theories are currently classes with magical functions that let you call calculation functions as if they were normal functions.
We decided to make the scattering theories into objects because we thought it would be useful for having theory settings, but in practice this is almost never used.
The code that allows the current api is kind of excessively magical (uses inspect and a decorator that inserts functions into an object's dictionary) and is a bit brittle to things like pickling (needed for multiprocessing support). So it is probably more hassle and maintenance burden than it is worth.
Switch to reading/writing images and arrays as hdf5 rather than dumping binary data into .yaml files.
We no longer want to create image-containing .yaml files, but we still need to be able to read them for the foreseeable future.
Data for tests is currently in .yaml files, and some should be converted to hdf5.
In Ubuntu
python setup.py build_ext --inplace
gives the following error when both gcc/gfortran and an Intel Fortran compiler are installed:
RuntimeError: module compiled against API version 9 but this version of numpy is 6
/n/home08/annawang/temp/holopy/holopy/scattering/theory/__init__.py:43: UserWarning:
Could not import scattering. You will not be able to do scattering
calculations, but the rest of holopy sould remain usable.
This is probably due to your not having properly compiled versions holopy's
fortran bits.
fortran bits. """)
/n/sw/python-2.7.1/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'package'
warnings.warn(msg)
[tonnes more lines...]
Found executable /usr/bin/g77
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler
gnu: no Fortran 90 compiler found
gnu: no Fortran 90 compiler found
customize GnuFCompiler using build_ext
running scons
Removing either gcc or the Intel compiler results in the code compiling properly.
We need to think through in more detail the best way to use backgrounds to minimize error.
This may be:
Final = (Raw - Darkfield)/Background
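In code the proposed correction is just the following (array names are hypothetical; whether the darkfield should also be subtracted from the background is part of what needs thinking through):

```python
import numpy as np

def correct(raw, darkfield, background):
    # Final = (Raw - Darkfield) / Background, elementwise
    return (raw - darkfield) / background
```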
Ideally using conda-forge
Tom has experience with Nix and may also implement that.
Installation of cairocffi doesn't resolve.
ADDA computes a number of potentially interesting scattering quantities that HoloPy does not currently capture. It should be relatively easy to parse them out of the output files and make them available. Have to think about what the right way to present that data is.
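A sketch of the kind of parsing involved, assuming ADDA's scalar outputs appear as `name = value` lines (the exact file format should be checked against the ADDA manual before relying on this):

```python
def parse_adda_scalars(text):
    # Collect numeric "name = value" lines into a dict; skip anything else.
    out = {}
    for line in text.splitlines():
        name, sep, value = line.partition("=")
        if sep:
            try:
                out[name.strip()] = float(value)
            except ValueError:
                pass  # non-numeric line, ignore
    return out
```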
SamplingResult should have an initial_condition property that returns initial walker positions. This needs to be preserved when saving with a specified burn-in.
Suppose
img
is an image with both negative and positive pixels.
Then
hp.show(hp.propagate(img, 0.01))
gives a distinctly different result than
hp.show(hp.propagate(img - img.min(), 0.01))
With autoscaling, the second result looks more similar to the un-propagated image, as desired for a propagation of a small finite distance.
I'm not sure hp.vis.vis2d.show_scatterer_slices still works, or whether there's a different way of calling it now. When I type the following I get a bunch of black slices:
import holopy as hp
from holopy.scattering.scatterer import Ellipsoid
test = Ellipsoid(n = 1.585, r = [.4,0.4,1.5], center = [10,10,20])
hp.vis.vis2d.show_scatterer_slices(test,0.1)
I also get an error the first time I try this:
colorbar.py:829: RuntimeWarning: invalid value encountered in divide
z = np.take(y, i0) + (xn - np.take(b, i0)) * dy / db
As discussed in #45 we are going to fork off readers for old formats (holopy yaml, older hdf5 files) into a separate repo.
I propose a name like holopy-legacy-reader, comment if you prefer something else.
My plan is that the legacy-reader will depend on modern HoloPy (as currently embodied in my xarray branch) for writing new style netcdf/hdf5 files but will contain the legacy code needed to read older files.
scattering.theory.mie_f.mie_specfuncs.log_der_1 can take > 10 percent of runtime in some Mie calculations. It is a short function that should be pretty easy to move to fortran for a noticeable speedup.
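For reference, the standard downward recurrence for the logarithmic derivative (Bohren & Huffman style) is compact enough that a Fortran port should be mechanical. A numpy version of the recurrence, as a sketch rather than HoloPy's actual implementation:

```python
import numpy as np

def log_der_downward(z, nmax, nextra=15):
    # D_n(z) = psi_n'(z) / psi_n(z) by downward recurrence, started from 0
    # at order nmax + nextra; nextra is a convergence heuristic.
    D = np.zeros(nmax + nextra + 1, dtype=complex)
    for n in range(nmax + nextra, 0, -1):
        D[n - 1] = n / z - 1.0 / (D[n] + n / z)
    return D[: nmax + 1]
```

Since psi_0(z) = sin z, the n = 0 entry should come out as cot z, which makes a convenient spot check.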
pillow (http://python-imaging.github.io/) is a fork of the Python Imaging Library that is actively developed (https://github.com/python-imaging/Pillow). It includes support for 12-bit TIFF files (python-pillow/Pillow#416). If we switch from PIL to pillow, we can remove the dependency on tifffile.py and potentially eliminate a lot of the code in holopy/core/io/image_file_io.py.
We will need to develop unit tests for loading the 12-bit TIFF files from the Photon Focus camera and other images that have been a problem to load. If they do not load properly using pillow, we can file an issue upstream.
port of an issue by @vnmanoharan from launchpad
While running, the multisphere code outputs a lot of text to the terminal as it iterates through calculating scattering terms.
For most users, this text output is annoying and not useful. Under what circumstances might this information be relevant? Can we suppress this output by default unless the user is concerned about convergence of their scattering calculation?
If Multisphere is passed a sphere scatterer, it should be able to wrap it in a Spheres() object.
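The coercion could be as simple as the following sketch, shown here with minimal stand-in classes rather than holopy's real Sphere and Spheres:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Sphere:       # minimal stand-in for holopy's Sphere
    n: float
    r: float
    center: Tuple[float, float, float]

@dataclass
class Spheres:      # minimal stand-in for holopy's Spheres
    scatterers: List[Sphere]

def ensure_spheres(scatterer):
    # Wrap a lone Sphere in a one-element Spheres, as the issue proposes;
    # pass anything else through unchanged.
    if isinstance(scatterer, Sphere):
        return Spheres([scatterer])
    return scatterer
```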
We currently have machinery to test against full saves of holograms, fields, etc., but no good way to distribute that test data. We don't want to check large binary files into the main holopy repository. I think it would work well to have a separate holopy-test-golds repo (or propose another name) where we check in that data, plus a script in the main holopy repository to fetch those files.
Might be worth having an option for our run_nose script that fetches that data when needed to test against the full golds:
python run_nose.py --fetch-full-golds
or something like that.
We should try to get at least decent test coverage of the new inference code.
Evaluate and merge/close all open pull requests.
We need a standardized interface for analysing timeseries data, rather than having everyone write their own.
Interface needs some thought. Docs need heavy updating to emphasize its existence.
Pandas is currently used in the Bayesian inference code. We might as well expand it to the rest of holopy where it makes sense, for example timeseries data.
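For timeseries, that could look something like a per-frame DataFrame of fit results (the column and index names here are purely illustrative):

```python
import pandas as pd

# Fitted particle positions for three frames, indexed by time in seconds.
results = pd.DataFrame(
    {"x": [4.9, 5.0, 5.1], "z": [5.2, 5.1, 5.0]},
    index=pd.Index([0.0, 0.1, 0.2], name="t"))

# Frame-to-frame displacement averaged over frames: a one-line drift estimate.
drift = results.diff().mean()
```

Operations like this (grouping, rolling windows, resampling) are exactly what pandas gives us for free once results live in a DataFrame.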
I'm trying to reproduce the tutorial example on reconstruction and it is failing with the following error:
from numpy import linspace
import holopy as hp
from holopy.core import Optics
from holopy.propagation import propagate
from holopy.core.tests.common import get_example_data
from holopy.core import load
holo = get_example_data('image0001.yaml')
rec_vol = propagate(holo, linspace(4e-6, 10e-6, 7))
hp.show(rec_vol)
yields:
---------------------------------------------------------------------------
FutureWarning Traceback (most recent call last)
<ipython-input-10-96a8540cfb8d> in <module>()
8 holo = get_example_data('image0001.yaml')
9 rec_vol = propagate(holo, linspace(4e-6, 10e-6, 7))
---> 10 hp.show(rec_vol)
/Users/bmorris/anaconda/lib/python2.7/site-packages/holopy/vis/show.pyc in show(o, color)
57 show_sphere_cluster(o,color)
58 elif isinstance(o, (Image, np.ndarray, list, tuple)):
---> 59 show2d(o)
60 elif isinstance(o, Scatterer):
61 show_scatterer(o)
/Users/bmorris/anaconda/lib/python2.7/site-packages/holopy/vis/vis2d.pyc in show2d(im, i, t, phase)
176 axis_names = ['x', 'z']
177
--> 178 im = squeeze(im)
179
180
/Users/bmorris/anaconda/lib/python2.7/site-packages/holopy/core/marray.pyc in squeeze(arr)
678 """
679 keep = [i for i, dim in enumerate(arr.shape) if dim != 1]
--> 680 if not hasattr(arr,'spacing') or arr.spacing == None:
681 spacing = None
682 else:
FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
I'm using Python 2.7, Mac OS X 10.10.
The docs (and possibly other things) point at launchpad. We should move these to point at github since we have moved development over here.
this is not an exhaustive list:
holopy/holopy/fitting/mcmc.py
holopy/holopy/fitting/noise_model.py
holopy/holopy/fitting/prior.py
holopy/holopy/fitting/timeseries.py
holopy/holopy/fitting/random_subset.py
Copyright headers should be added to each (see #56), and to any other (non-third-party) files without headers.
This bug is in scatteringtheory.py. The hard-coded "normal" vector assumes the detector points along the z-axis, so only the x and y components of the electric field are used to calculate the intensity. This creates problems when considering scattering in another direction, e.g. for calculations of dark-field scattering spectra.
We should have an automated way to test all the code in tutorial examples to make sure it always works.
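One low-tech way to do this, sketched here assuming the tutorials exist (or are extracted) as standalone .py scripts in some directory; the directory path and function name are illustrative:

```python
import pathlib
import subprocess
import sys

def run_tutorials(tutorial_dir="docs/tutorials"):
    # Run every tutorial script in a fresh interpreter; collect the names
    # of any that exit with a nonzero status.
    failures = []
    for script in sorted(pathlib.Path(tutorial_dir).glob("*.py")):
        result = subprocess.run([sys.executable, str(script)],
                                capture_output=True)
        if result.returncode != 0:
            failures.append(script.name)
    return failures
```

This could run in CI so a tutorial that breaks fails the build, rather than waiting for a user to report it.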
Is it worth making theories (Mie, Multisphere, DDA, ...) and scatterers (Sphere, Spheres, ...) available from holopy.scattering instead of just in the theory and scatterer namespaces?
Example, currently:
from holopy.scattering.scatterer import Sphere, Spheres
from holopy.scattering.theory import Mie, Multisphere
If we pushed these namespaces down, it would then be possible to do
from holopy.scattering import Sphere, Spheres, Mie, Multisphere
though the old code would of course still work.
I don't think we have enough things here for this namespace to get too crowded, and it would save a line and a chunk of typing in some of the most common use cases of holopy.
We should be modeling the optical train more explicitly. This may help solve for alpha, and could allow us to correct for things like tilt and spherical aberration.
In the resampling examples given below, the second one doesn't work as expected. The expected output is that h would be [300x300] and the spacing would be [ 0.05466667 0.05466667]. Instead, the image is just shrunk along the first axis.
import holopy as hp
from numpy import zeros
from holopy.core import ImageSchema, Optics
optics = hp.core.Optics(wavelen=.532, index=1.405,
polarization=[1.0, 0.0])
h = hp.core.marray.Image(zeros([400,400]), spacing = .041, optics = optics)
h = h.resample([300,300])
print h.spacing
h = hp.core.marray.Image(zeros([400,400]), spacing = .041, optics = optics)
h = h.resample(300)
print h.spacing
Copyright headers in each file should contain all years in which the code was modified. Also the author list should probably be updated. My recommendation is that we do the author list file-by-file, and that the policy (going forward) is that if someone commits non-trivial changes to a source file, they simply add their name to the end of the Copyright statement in the header. For the copyright years, the FSF recommends (https://www.gnu.org/prep/maintain/html_node/Copyright-Notices.html):
"To update the list of year numbers, add each year in which you have made nontrivial changes to the package...When you add the new year, it is not required to keep track of which files have seen significant changes in the new year and which have not. It is recommended and simpler to add the new year to all files in the package, and be done with it for the rest of the year."
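Updating all files each year is easy to script. A rough sketch; the regex assumes headers of the form "Copyright 2011-2016 Name ..." and is only illustrative:

```python
import re

def bump_copyright_year(text, year):
    # Append `year` to the first Copyright line, unless it is already listed.
    def repl(match):
        years = match.group(1)
        if str(year) in years:
            return match.group(0)
        return "Copyright {}, {}".format(years, year)
    # match a run of digits, commas, spaces, and hyphens after "Copyright "
    return re.sub(r"Copyright ([0-9][0-9, \-]*[0-9])", repl, text, count=1)
```

Running it a second time with the same year is a no-op, so the script can be applied blindly to the whole tree.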