
pyxem-demos's Introduction


pyxem is an open-source (GPL v3) Python library for multi-dimensional diffraction microscopy.

The package defines objects and functions for the analysis of numerous diffraction patterns. It has been developed primarily as a platform for hybrid diffraction microscopy based on 4D scanning diffraction microscopy (4D-STEM) data, in which a 2D diffraction pattern is recorded at every position in a 2D scan of a specimen.

pyxem is an extension of the HyperSpy library for multi-dimensional data analysis and defines diffraction-specific Signal classes.

Installation instructions and tutorial examples are available here.

Basic documentation is available here.

If analysis using pyxem forms a part of published work, please cite the DOI at the top of this page. In addition to citing the package, we would appreciate an additional citation to the relevant methods paper if you use the following capabilities:

Orientation Mapping

@article{pyxemorientationmapping2022,
    title={Free, flexible and fast: Orientation mapping using the multi-core and GPU-accelerated template matching capabilities in the python-based open source 4D-STEM analysis toolbox Pyxem},
    author={Cautaerts, Niels and Crout, Phillip and {\AA}nes, H{\aa}kon Wiik and Prestat, Eric and Jeong, Jiwon and Dehm, Gerhard and Liebscher, Christian H},
    journal={Ultramicroscopy},
    pages={113517},
    year={2022},
    publisher={Elsevier},
    doi={10.1016/j.ultramic.2022.113517}
}

Strain Mapping

Two-Dimensional Strain Mapping with Scanning Precession Electron Diffraction: An Investigation into Data Analysis Routines by Crout et al., which is freely available at https://arxiv.org/abs/2307.01071

pyxem-demos's People

Contributors

cssfrancis, din14970, dnjohnstone, ericpre, m0hsend, magnunor, pc494, phillipcrout, pre-commit-ci[bot], shogas, stefsmeets, tinabe, uellue

pyxem-demos's Issues

Demo 07 incompatible with pyxem 0.13: get_azimuthal_integral

Following the jupyter notebook until:
integration2d = dp.get_azimuthal_integral2d(npt_rad=100, unit="pyxem")

  1. npt_rad is now npt
  2. After that easy fix,
    integration2d = dp.get_azimuthal_integral2d(npt=100, unit="pyxem")
    leads to:
    ValueError: ai property is not currently set

Same problem for get_azimuthal_integral1d

I suspect it has something to do with the lazy input, but the problem persists after changing
dp.unit = "2th_deg"
or giving wavelength=2.5e-12 for dp.unit = "k_A^-1" (the unit in Demo 7).
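
For anyone hitting the same error: the fix in pyxem 0.13 appears to be to initialise the azimuthal integrator before integrating. A minimal sketch, assuming the set_ai() method implied by the error message and reusing the wavelength value quoted above; the exact signature may differ between releases and has not been verified against this notebook:

# dp is the ElectronDiffraction2D signal loaded earlier in the demo
dp.unit = "k_A^-1"                     # unit used in Demo 7
dp.set_ai(wavelength=2.5e-12)          # set the "ai" property the error complains about
integration2d = dp.get_azimuthal_integral2d(npt=100)   # npt replaces the old npt_rad
integration1d = dp.get_azimuthal_integral1d(npt=100)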

pyxem-demos 0.11.0

Tidying up in advance of a v0.11.0 release has highlighted some issues with the demos as they stand, which need to be fixed before release:

  • demo-02 : the sub-pixel refinement step is throwing an error
  • demo-02 : plotting the vector matching results is throwing an error
  • demo-06 : separate_learning_segments is throwing an error for using a > comparison between a float and a None-type object. (Odd, because I don't think we changed this part of the code...)
  • demo-08 : plotting the final PDF, i.e. pdf.plot(), is throwing a dimensions error
  • demo-09 : we need to put the data in the Google Drive (I don't have a copy of it)
  • demo-09 : update formatting for consistency with the other demos

@pc494 @tinabe @JoonatanL @CSSFrancis - it would be much appreciated if you could each have a look at the issue above that corresponds to code you contributed significantly to.

VDFGenerator has incorrect imports

To Reproduce
Steps to reproduce the behavior:

from pyxem.generators.vdf_generator import VDFGenerator

Throws:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-28-996234032340> in <module>
----> 1 from pyxem.generators.vdf_generator import VDFGenerator

~/miniconda3/envs/miniconda-1/lib/python3.8/site-packages/pyxem/generators/vdf_generator.py in <module>
     22 from hyperspy.api import roi
     23 
---> 24 from pyxem.signals.vdf_image import VDFImage
     25 from pyxem.utils.vdf_utils import normalize_vdf
     26 from pyxem.signals import transfer_signal_axes, transfer_navigation_axes_to_signal_axes

~/miniconda3/envs/miniconda-1/lib/python3.8/site-packages/pyxem/signals/vdf_image.py in <module>
     23 from pyxem.utils.segment_utils import separate_watershed
     24 from pyxem.signals.diffraction_vectors import DiffractionVectors
---> 25 from pyxem.signals import transfer_signal_axes
     26 from pyxem.signals.segments import VDFSegment
     27 

ImportError: cannot import name 'transfer_signal_axes' from 'pyxem.signals' (/home/phillip/miniconda3/envs/miniconda-1/lib/python3.8/site-packages/pyxem/signals/__init__.py)

I think this is because vdf_image is old code, and from pyxem.signals.vdf_image import VDFImage should be redirected to the correct location instead; hopefully that will resolve the issue.
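
Until the import is fixed, the virtual dark-field image that VDFGenerator produces can be computed by hand from the raw 4D array. The sketch below is a generic NumPy illustration of the idea (not the pyxem API): sum the intensity inside a circular reciprocal-space mask at every probe position.

import numpy as np

def virtual_dark_field(data, cx, cy, r):
    # data: ndarray of shape (scan_y, scan_x, det_y, det_x)
    # cx, cy, r: mask centre and radius in detector pixels
    det_y, det_x = data.shape[-2:]
    yy, xx = np.ogrid[:det_y, :det_x]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return (data * mask).sum(axis=(-2, -1))

# vdf = virtual_dark_field(dp.data, cx=72, cy=72, r=5)  # illustrative values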

Notebook 00

The example file path needs updating, and the wording for the second section needs changing since it refers to the wrong method in the text. If you could please copy the example data to your OneDrive, @dnjohnstone, I will submit a correction.

Link to data in Demo-03 is broken

Many thanks to the user who emailed us about this. The files can be found using the link in the README, and this issue will be fixed shortly.

mib to h5 conversion

Hi,

We have an issue with the mib_to_h5stack conversion function, which is throwing the following error:

>>> pxm.utils.io_utils.mib_to_h5stack(data_path, h5_path)
---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_26584/236218266.py in <module>
      1 h5_path = 'Z:\\user\\Pixelated detector\\21-09-09 First test\\STEM mode\\02-test\\20210909 155051\\default.h5'
      2 
----> 3 pxm.utils.io_utils.mib_to_h5stack(data_path, h5_path)

~\Anaconda3\envs\user4dstem\lib\site-packages\pyxem\utils\io_utils.py in mib_to_h5stack(fp, save_path, mmap_mode)
    222                     _stack_h5dump(data, hdr_info, save_path)
    223         elif hdr_info["raw"] == "MIB":
--> 224             _stack_h5dump(data, hdr_info, save_path)
    225     return
    226 

~\Anaconda3\envs\user4dstem\lib\site-packages\pyxem\utils\io_utils.py in _stack_h5dump(data, hdr_info, saving_path, raw_binary, stack_num)
    901                     )
    902                 else:
--> 903                     data_dump1 = _untangle_raw(
    904                         data_dump0, hdr_info, data_dump0.shape[0]
    905                     )

~\Anaconda3\envs\user4dstem\lib\site-packages\pyxem\utils\io_utils.py in _untangle_raw(data, hdr_info, stack_size)
   1029             (da.concatenate((det1, det3), 1), da.concatenate((det2, det4), 1)), 2
   1030         )
-> 1031     return untangled_data
   1032 
   1033 

UnboundLocalError: local variable 'untangled_data' referenced before assignment

Searching a little in the source code, it seems that there is an issue with the hdr associated with the mib file. When we run:

>>> _parse_hdr(data_path)
{'width': 256,
 'height': 256,
 'Assembly Size': '1x1',
 'offset': 384,
 'data-type': 'unsigned',
 'data-length': '8',
 'Counter Depth (number)': 6,
 'raw': 'MIB',
 'byte-order': 'dont-care',
 'record-by': 'image',
 'title': 'Z:\\user\\Pixelated detector\\21-09-09 First test\\STEM mode\\02-test\\20210909 155051\\default',
 'date': '20210909',
 'time': '15:50:57.107038',
 'data offset': 384}

It seems to us that the "Assembly Size": "1x1" case is not properly taken into account in the source code; only "Assembly Size": "2x2" appears to be handled. It is not clear to us whether this is the intended behaviour. How can we make the conversion to h5 in this case?
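
If that diagnosis is right, the fix would presumably be an extra branch in _untangle_raw for single-chip ('1x1') data. A hypothetical sketch of the kind of guard that might be needed (names follow the traceback and the parsed header above; the existing quad-chip handling is elided):

def _untangle_raw(data, hdr_info, stack_size):
    if hdr_info["Assembly Size"] == "1x1":
        # Single-chip Merlin data needs no detector re-tiling:
        # just reshape the stream into (stack_size, height, width).
        width, height = hdr_info["width"], hdr_info["height"]
        untangled_data = data.reshape(stack_size, height, width)
    else:
        ...  # existing 2x2 handling (concatenation of det1..det4)
    return untangled_data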

Reducing Data Size, Adding in Binder Links etc.

At some point we moved away from uploading the necessary datasets for the demos. I really like the idea of making Binder links for each of the files and automating building the documentation from the demos. GitHub has a pretty strict limit against uploading any file larger than 100 MB, but I think we should consider whether we can reduce the size of the example datasets by:

  • Reducing the size of the scan (x, y)
  • Binning the dataset (kx, ky)
  • Reducing the bit depth

The problem datasets are:

  • Demo 2 --> demo 2 already crops the dataset down to 12 MB.
  • Demo 6 --> Rebinning the data by a factor of 2 makes the dataset 15 MB instead of 250 MB (a factor-of-16 reduction). It also speeds up this notebook a fair amount. (@pc494 do you think that this would be okay?)
  • Demo 10 --> @magnunor I haven't looked at this dataset much, but is there any way to get it under 100 MB just as an example? We can still offer the full dataset for download, but for the sake of running it in Binder a smaller dataset might be more useful, just to give people the ability to run it in a web browser.

I'll probably just go ahead with this when I have time, but let me know if anyone has any concerns.
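
For reference, the reductions listed above only take a few lines with HyperSpy; a rough sketch using standard HyperSpy methods, with illustrative filenames and factors:

import hyperspy.api as hs

s = hs.load("big_demo_dataset.hspy", lazy=True)  # hypothetical filename
s = s.inav[:64, :64]                 # crop the scan (x, y)
s = s.rebin(scale=(1, 1, 2, 2))      # bin the diffraction patterns (kx, ky) by 2
s.change_dtype("uint8")              # reduce the bit depth (check the data range first)
s.save("small_demo_dataset.hspy", overwrite=True)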

Need to update to pyxem-0.6

The tutorials here will work (except for minor bugs) in pyxem-0.5.1, but we need to update them to work with pyxem-0.6, which is now the master version of the code on GitHub.

Passing a Signal2D as a 3x3 matrix to apply_affine_transformation()

Encountered a small bug in the strain mapping notebook:

x_l = []
for x in [0, 0, -0.01, 0.02]:
    x_s = np.eye(3)
    x_s[0, 0] += x
    x_l.append(x_s)

angles = hs.signals.Signal2D(np.asarray(x_l).reshape(2, 2, 3, 3))
dp = dp.apply_affine_transformation(D=angles, order=5, inplace=False)
dp.set_diffraction_calibration(1)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-44-24ce614d8f54> in <module>
      6 
      7 angles = hs.signals.Signal2D(np.asarray(x_l).reshape(2, 2, 3, 3))
----> 8 dp = dp.apply_affine_transformation(D=angles, order=5, inplace=False)
      9 dp.set_diffraction_calibration(1)

~/miniconda3/envs/pxm/lib/python3.7/site-packages/pyxem/signals/electron_diffraction.py in apply_affine_transformation(self, D, order, inplace, *args, **kwargs)
    287 
    288         # This defines the transform you want to perform
--> 289         distortion = tf.AffineTransform(matrix=D)
    290 
    291         # skimage transforms can be added like this, does matrix multiplication,

~/miniconda3/envs/pxm/lib/python3.7/site-packages/skimage/transform/_geometric.py in __init__(self, matrix, scale, rotation, shear, translation)
    761                              " the implicit parameters at the same time.")
    762         elif matrix is not None:
--> 763             if matrix.shape != (3, 3):
    764                 raise ValueError("Invalid shape of transformation matrix.")
    765             self.params = matrix

AttributeError: 'Signal2D' object has no attribute 'shape'
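
A workaround that avoids the error, assuming a single transformation for the whole scan is acceptable, is to pass a plain (3, 3) NumPy array rather than a Signal2D (position-dependent matrices appear to need a fix in the method itself):

import numpy as np

D = np.eye(3)
D[0, 0] += 0.02   # illustrative distortion
dp = dp.apply_affine_transformation(D=D, order=5, inplace=False)
dp.set_diffraction_calibration(1)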

03 Reference Standards - 2. Determine Lens Distortions

Hi,
I am trying to work through section 2 of example notebook 3 in version 0.13.2, but I am getting the AttributeError below. Could you please help me solve this?

Thanks!


In [9]: cal.get_elliptical_distortion(mask_radius=10, scale=100, amplitude=1000, asymmetry=0.9,spread=2)

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-13-27e3dbe10698> in <module>
----> 1 cal.get_elliptical_distortion(mask_radius=10,
      2                               scale=100, amplitude=1000,
      3                               asymmetry=0.9,spread=2)

D:\Python\Anaconda\envs\pyxem\lib\site-packages\pyxem\generators\calibration_generator.py in get_elliptical_distortion(self, mask_radius, scale, amplitude, spread, direct_beam_amplitude, asymmetry, rotation, center)
    167             )
    168         standard_dp = self.diffraction_pattern
--> 169         image_size = standard_dp.data.shape[0]
    170         if center is None:
    171             center = [(image_size - 1) / 2, (image_size - 1) / 2]

AttributeError: 'CalibrationDataLibrary' object has no attribute 'data'

Demo errors: Signal2D has no attribute

dp = pxm.load_hspy('./data/01/twinned_nanowire.hdf5')
dp.set_microscope_parameters(beam_energy=300.0,
                             camera_length=21.0,
                             scan_rotation=277.0,
                             convergence_angle=0.7,
                             exposure_time=10.0)


AttributeError Traceback (most recent call last)
in
----> 1 dp.set_microscope_parameters(beam_energy=300.0,
2 camera_length=21.0,
3 scan_rotation=277.0,
4 convergence_angle=0.7,
5 exposure_time=10.0)

AttributeError: 'Signal2D' object has no attribute 'set_microscope_parameters'

data = pxm.load_hspy("./data/09/PdNiP_test.hspy")
data.set_signal_type("electron_diffraction")
data.beam_energy=200
data.unit = "k_nm^-1"
rad = data.get_azimuthal_integral2d(npt_rad=100, center=(31.77734804, 31.23438963))


AttributeError Traceback (most recent call last)
in
2 data.beam_energy=200
3 data.unit = "k_nm^-1"
----> 4 rad = data.get_azimuthal_integral2d(npt_rad=100, center=(31.77734804, 31.23438963))

AttributeError: 'Signal2D' object has no attribute 'get_azimuthal_integral2d'

Setting %env OMP_NUM_THREADS=1 speeds up the routines in example 11 tremendously

When matching a larger dataset following example 11 but using lazy arrays, running this as the very first cell, before any numerical library is loaded, speeds up the calculation on a machine with 24 cores:

%env OMP_NUM_THREADS=1

When @sk1p profiled the system under load without this setting, it spent most of its time in sched_yield instead of doing useful work. With this setting enabled (no OpenMP multithreading) it was mostly doing useful work. I didn't benchmark the difference because I ran out of patience, but it is about a factor of 10.

Some routines in SciPy and NumPy are multi-threaded internally, for example via OpenBLAS. It seems that Dask's/pyxem's parallelism in combination with OpenMP/OpenBLAS threading leads to oversubscription of the CPU or some other kind of scheduling issue. Restricting to only one level of parallelism resolves this.

FYI we encountered a similar issue in LiberTEM. In order to avoid setting the environment variable and disabling threading altogether, we implemented a few context managers to set the thread count to 1 in code blocks that run in parallel: https://github.com/LiberTEM/LiberTEM/blob/master/src/libertem/common/threading.py

Maybe that could be useful in HyperSpy/pyxem? Perhaps this should actually be handled in Dask.
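
For comparison, a similar per-block effect can be had with the threadpoolctl package instead of a global environment variable; a sketch, assuming threadpoolctl is installed (it is not something the demos currently use):

from threadpoolctl import threadpool_limits

# Restrict OpenMP/OpenBLAS to one thread only while the Dask-parallel
# matching step runs; other code keeps its normal thread counts.
with threadpool_limits(limits=1):
    result = run_template_matching()  # placeholder for the demo's matching call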

Load syntax broken in some demos

pxm.load('filename.extension') does not work in 0.13.0.

I downgraded to 0.12.3 and it works fine.

Your notebook 01 GaAs Nanowire - Data Inspection - Preprocessing - Unsupervised Machine Learning.ipynb

does not run under 0.13.0 as a result.

Ian

01 GaAs Nanowire

Hi,

I always run into the same issue with this notebook =(

(I should mention that I am skipping over sections 3 and 4 to go directly to 5. In the peak finding I replace "dpc" with "dp"... so it should work?)

This is the error message after peaks.plot_diffraction_vectors_on_signal:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-18-a031ee756147> in <module>
----> 1 peaks.plot_diffraction_vectors_on_signal(dp, cmap = 'gray', vmax = 50)

~\.conda\envs\pyxem\lib\site-packages\pyxem\signals\diffraction_vectors.py in plot_diffraction_vectors_on_signal(self, signal, *args, **kwargs)
    329             Keyword arguments passed to signal.plot()
    330         """
--> 331         mmx, mmy = generate_marker_inputs_from_peaks(self)
    332         signal.plot(*args, **kwargs)
    333         for mx, my in zip(mmx, mmy):

~\.conda\envs\pyxem\lib\site-packages\pyxem\signals\diffraction_vectors.py in generate_marker_inputs_from_peaks(peaks)
    104 
    105     """
--> 106     max_peak_len = _find_max_length_peaks(peaks)
    107     pad = np.array(
    108         list(

~\.conda\envs\pyxem\lib\site-packages\pyxem\signals\diffraction_vectors.py in _find_max_length_peaks(peaks)
     73     """
     74     x_size, y_size = (
---> 75         peaks.axes_manager.navigation_shape[0],
     76         peaks.axes_manager.navigation_shape[1],
     77     )

IndexError: tuple index out of range

Examples no longer compatible with latest version

Thanks for making this nice package!

I was trying to run through the example files to learn to work with this, but I've been struggling a bit. At least in example 1, I couldn't get the file to load in the normal way:

ValueError: Invalid signal_type in saved data for pyxem, please use load_hspy for this data. 

I suppose this is because you have now created your own hdf5 format for storing the data, which is no longer compatible with the hyperspy format?

Loading with load_hspy of course returns a HyperSpy dataset that doesn't have the set_experimental_parameters method. I worked around this by explicitly creating a dataset:

dp = pxm.signals.electron_diffraction2d.ElectronDiffraction2D(dp)

The names of the arguments in set_experimental_parameters have also changed... Basically, without documentation and with non-working demos it's quite difficult to use. Could you revise the examples a bit to be compatible with the latest version?
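
For later readers: rather than constructing ElectronDiffraction2D directly, the set_signal_type cast used elsewhere in these demos should also work, assuming pyxem has been imported so that its signal types are registered:

import pyxem as pxm

dp = pxm.load_hspy("twinned_nanowire.hdf5")   # the demo 01 file
dp.set_signal_type("electron_diffraction")    # cast to the pyxem signal class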

05 GaAs Strain measurements

Hi,

After finally switching to calibrated units ^^ the SubpixelrefinementGenerator worked... The center-of-mass method, however, does not. Any suggestions as to where I made a mistake?

x_peak = [-0.08,0.27]
y_peak = [-0.45,0.0027]
spg = SubpixelrefinementGenerator(dp, np.asarray([x_peak,y_peak]))
Vs = spg.center_of_mass_method(4)


Output:
0%
0/3000 [00:00<?, ?it/s]
---------------------------------------------------------------------------
UFuncTypeError                            Traceback (most recent call last)
<ipython-input-39-2ca1685e808f> in <module>
      4 y_peak = [-0.45,0.0027]
      5 spg = SubpixelrefinementGenerator(dp, np.asarray([x_peak,y_peak]))
----> 6 Vs = spg.center_of_mass_method(4)

~\.conda\envs\pyxem\lib\site-packages\pyxem\generators\subpixelrefinement_generator.py in center_of_mass_method(self, square_size)
    366             return ((vectors + shifts) - center) * calibration
    367 
--> 368         self.vectors_out = self.dp.map(
    369             _center_of_mass_map,
    370             vectors=self.vector_pixels,

~\.conda\envs\pyxem\lib\site-packages\hyperspy\signal.py in map(self, function, show_progressbar, parallel, max_workers, inplace, ragged, output_signal_size, output_dtype, **kwargs)
   4591                 kwargs["output_dtype"] = output_dtype
   4592             # Iteration over coordinates.
-> 4593             res = self._map_iterate(function, iterating_kwargs=ndkwargs,
   4594                                     show_progressbar=show_progressbar,
   4595                                     parallel=parallel,

~\.conda\envs\pyxem\lib\site-packages\hyperspy\signal.py in _map_iterate(self, function, iterating_kwargs, show_progressbar, parallel, max_workers, ragged, inplace, **kwargs)
   4733 
   4734             with ThreadPoolExecutor(max_workers=max_workers) as executor:
-> 4735                 for ind, res in zip(
   4736                     range(res_data.size), executor.map(func, zip(*iterators))
   4737                 ):

~\.conda\envs\pyxem\lib\concurrent\futures\_base.py in result_iterator()
    606                     # Careful not to keep a reference to the popped future
    607                     if timeout is None:
--> 608                         yield fs.pop().result()
    609                     else:
    610                         yield fs.pop().result(end_time - time.monotonic())

~\.conda\envs\pyxem\lib\concurrent\futures\_base.py in result(self, timeout)
    436                     raise CancelledError()
    437                 elif self._state == FINISHED:
--> 438                     return self.__get_result()
    439 
    440                 self._condition.wait(timeout)

~\.conda\envs\pyxem\lib\concurrent\futures\_base.py in __get_result(self)
    388         if self._exception:
    389             try:
--> 390                 raise self._exception
    391             finally:
    392                 # Break a reference cycle with the exception in self._exception

~\.conda\envs\pyxem\lib\concurrent\futures\thread.py in run(self)
     50 
     51         try:
---> 52             result = self.fn(*self.args, **self.kwargs)
     53         except BaseException as exc:
     54             self.future.set_exception(exc)

~\.conda\envs\pyxem\lib\site-packages\hyperspy\misc\utils.py in func(*args)
   1167     def func(*args):
   1168         dat, these_kwargs = figure_out_kwargs(*args)
-> 1169         return function(dat, **these_kwargs)
   1170 
   1171     return func, iterators

~\.conda\envs\pyxem\lib\site-packages\pyxem\generators\subpixelrefinement_generator.py in _center_of_mass_map(dp, vectors, square_size, center, calibration)
    363             for i, vector in enumerate(vectors):
    364                 expt_disc = _com_experimental_square(dp, vector, square_size)
--> 365                 shifts[i] = [a - square_size / 2 for a in _center_of_mass_hs(expt_disc)]
    366             return ((vectors + shifts) - center) * calibration
    367 

~\.conda\envs\pyxem\lib\site-packages\pyxem\generators\subpixelrefinement_generator.py in _center_of_mass_hs(z)
    325             s = np.sum(z)
    326             if s != 0:
--> 327                 z *= 1 / s
    328             dx = np.sum(z, axis=0)
    329             dy = np.sum(z, axis=1)

UFuncTypeError: Cannot cast ufunc 'multiply' output from dtype('float64') to dtype('uint8') with casting rule 'same_kind'
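
The traceback suggests the diffraction patterns are stored as uint8, so the in-place normalisation in _center_of_mass_hs cannot write float results back into the data. A likely workaround (an assumption, not verified against this dataset) is to cast the signal to float before building the generator:

dp.change_dtype("float64")   # HyperSpy cast; avoids the uint8 in-place multiply

spg = SubpixelrefinementGenerator(dp, np.asarray([x_peak, y_peak]))
Vs = spg.center_of_mass_method(4)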

Standard Format

It would be good to establish a standard format for pyxem-demos. I propose the one used in the current GaAs nanowire example.

This means that some of the others would need updating.

Open to alternative suggestions.

Errors in tutorials

Hi! I am trying to follow the tutorials and I got several errors:
GaAs Nanowire
In Produce virtual diffraction contrast images for all diffraction vectors
AttributeError: 'DiffractionVectors' object has no attribute 'get_vdf_images'

SED Phase & Orientation Mapping
In [164]:
AttributeError: 'ElectronDiffraction' object has no attribute 'set_calibration'

SED Strain Mapping by Affine Transform
In cell [9], dp = hs.stack()
NameError: name 'hs' is not defined

pyxem demos 0.12.3 release

Housekeeping. I am going to run through all of these (hopefully this weekend) so that we can package for release. I will also add an nbviewer link, as we have in the orix-demos.

Data for latest demo releases

Unable to locate the data for demos 6, 7 & 8.
I have checked the Google Drive linked in the README. Are these data located elsewhere?
Many thanks.

02 - Vector Matching

I am trying to work through section 4.3 of example notebook 2 in version 0.13 and am having some trouble. In particular, it appears that get_crystallographic_map() is just making another vector object instead of a crystal_map-type object, so crystal_map doesn't have the associated get_phase_map() and get_orientation_map() functions, and I am having trouble figuring out how to access the data within crystal_map. Could you please give some guidance on how to use these features?

Thanks,

Edwin
