
cristalx's Introduction

CristalX


Identification of individual grains in microscopic images

CristalX is a Python package that helps in the analysis of polycrystalline microstructures. Its name originates from the French word 'cristal', corresponding to the English word 'crystal'.

Features

  • Image segmentation to identify the grains in a microstructure
  • Analysis tools for the segmented image
  • Explicit geometrical representation of the grains
  • Interacting with meshes created on the microstructure
  • Mapping fields between a mesh and the grid of DIC measurements
  • Simulation tools for the inverse problem arising from a combined numerical-experimental method (in progress ...)
  • Visualization and data exchange

Getting help

  1. Read the documentation.
  2. Check the existing issues. They may already provide an answer to your question.
  3. Open a new issue.

Contributing

Read the docs/source/contributing.md file.

Citing CristalX

We have an article freely available on SoftwareX, showing the background and the design of CristalX.

When using CristalX in scientific publications, please cite the following paper:

  • Csati, Z.; Witz, J.-F.; Magnier, V.; Bartali, A. E.; Limodin, N. & Najjar, D. CristalX: Facilitating simulations for experimentally obtained grain-based microstructures. SoftwareX, 2021, 14, 100669

BibTeX entry:

@Article{Csati2021,
  author    = {Zoltan Csati and Jean-Fran{\c{c}}ois Witz and Vincent Magnier and Ahmed El Bartali and Nathalie Limodin and Denis Najjar},
  journal   = {{SoftwareX}},
  title     = {{CristalX}: {F}acilitating simulations for experimentally obtained grain-based microstructures},
  year      = {2021},
  month     = jun,
  pages     = {100669},
  volume    = {14},
  doi       = {10.1016/j.softx.2021.100669},
}

cristalx's People

Contributors

csatizoltan

cristalx's Issues

Generate cohesive elements for Abaqus

Motivation

By allowing degradation along the interfaces, one can better capture the localization, which is often intergranular in polycrystalline materials. The cohesive zone model is simple to implement for non-moving interfaces, once we have access to the 1D elements on the interfaces.

Useful links

Cohesive zone modelling in Abaqus

Here are the links I bookmarked...

Workflow

The following tasks need to be done:

  • Fork Phon and make it work with Python 3.
    • Issue with non-mutable OrderedDict: KristofferC/Phon#9 (comment)
    • Use list(dictionary) instead of dictionary.keys() to loop through and modify dictionary.
    • Support 2D nodes (maybe it's only needed during the .inp file writing)
    • Support CPS3 element types for Abaqus.
  • Prepare the med module to be able to extract edges and edge groups from a .med file.
  • Create a function/class in the simulation module (not in the geometry module to keep it uncoupled) that will manage the whole cohesive zone insertion.
  • Document the workflow by
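The dictionary-iteration fix in the bullets above can be illustrated with a minimal sketch (the function name is made up for the example): mutating a dictionary while looping over `dictionary.keys()` raises a `RuntimeError` in Python 3, whereas looping over a snapshot created with `list(dictionary)` is safe.

```python
def drop_empty_values(d):
    """Remove keys whose value is empty, mutating ``d`` in place."""
    for key in list(d):  # snapshot of the keys; safe to mutate d in the loop
        if not d[key]:
            del d[key]
    return d

# Iterating over d.keys() directly while deleting would raise RuntimeError
print(drop_empty_values({'a': [1], 'b': [], 'c': []}))  # {'a': [1]}
```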

Support DIC data for multiple time steps

Currently, the DIC class only knows about the displacement field at a single time step. On the other hand, one is often interested in the evolution of the displacement field. Loading the displacement measurement data for every time step at once is clearly not viable on personal computers in general1. Moreover, the member functions of the class would become more verbose because the user would need to specify the time step at which a given operation should be applied. Since the class cannot know in advance how the series of displacement fields is stored, it is the user's responsibility to pass the data to DIC. The DIC class should have a method that loads the new field.


1E.g. in my current dataset, the DIC data is given as an HDF5 file with the size of 38 GB.
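A minimal sketch of the intended interface (the class layout and method name here are assumptions, not the actual CristalX API): the class stores a single time step, and the user loads the field of another step through a dedicated method.

```python
import numpy as np

class DIC:
    """Sketch: holds the displacement field of a single time step."""
    def __init__(self, u, v):
        self.u = np.asarray(u)  # first displacement component on the DIC grid
        self.v = np.asarray(v)  # second displacement component on the DIC grid

    def set_displacement_field(self, u, v):
        """Replace the stored field with that of another time step.

        How the series of fields is stored (HDF5, .mat, ...) is the user's
        responsibility; only the already-loaded arrays are passed in.
        """
        self.u = np.asarray(u)
        self.v = np.asarray(v)

# Usage: the user iterates over the time steps of their own data set
dic = DIC(np.zeros((2, 2)), np.zeros((2, 2)))
dic.set_displacement_field(np.ones((2, 2)), np.ones((2, 2)))
```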

Export meshed microstructure to Abaqus

Create an Abaqus class with the following desired capabilities:

  • do not rely on the Salome API so that it can be factored out later to a standalone module (see also #28)
  • additional functionalities required for running an Abaqus simulation (materials, section definitions, etc.).
    This might seem unimportant at first sight (why do it here, when these can be set from within Abaqus?); however, setting a different material property for each grain in a microstructure by hand is tedious.

Also, create a script that shows the use of these.

Factor out utility functions

  • FixedDict in meshing.py
  • (maybe) plot_prop in analysis.py
  • save_image in segmentation.py
  • common methods in the Material and Geometry classes in abaqus.py
  • simulation.py is not intended to be run, factor out the runnable components to a script

Explode mesh

While asking for something else, I got an answer showing how to visualize a mesh with its partitions exploded. It is easy to implement and helps to visually distinguish small partitions, which would otherwise remain indistinguishable next to their larger neighbors.
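A possible implementation sketch in NumPy (the function name and signature are made up): translate the nodes of each partition along the direction from the global centroid to the partition's own centroid.

```python
import numpy as np

def explode_mesh(nodes, partitions, factor=0.2):
    """Translate the nodes of each partition away from the global centroid.

    nodes      : (n, 2) array of node coordinates
    partitions : list of index arrays, one per partition (nodes on shared
                 boundaries may be duplicated so the partitions separate)
    factor     : how far each partition moves along the direction from the
                 global centroid to the partition centroid
    """
    nodes = np.asarray(nodes, dtype=float)
    center = nodes.mean(axis=0)
    exploded = []
    for part in partitions:
        part_nodes = nodes[part]
        offset = factor * (part_nodes.mean(axis=0) - center)
        exploded.append(part_nodes + offset)
    return exploded
```

Each returned array can then be plotted separately, giving the exploded view.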

Display grain number and grain properties on grains

Similarly as done for matplotlib: https://github.com/CsatiZoltan/Polycrystalline-microstructures/blob/89cdfed050a268a5904ab126dc42efbf258e1fea/grains/analysis.py#L154

The center of the grain can be found by computing the center of mass, assuming that the grain is homogeneous (refer to core_shape_properties.py):

from OCC.GProp import GProp_GProps
from OCC.BRepGProp import brepgprop_SurfaceProperties
...
# Compute inertia properties
props = GProp_GProps()
brepgprop_SurfaceProperties(region, props)  # region is a *TopoDS_Face* object
area = props.Mass()  # assuming that the region is homogeneous
cog = props.CentreOfMass()
cog_x, cog_y, cog_z = cog.Coord()

Class for labeled images

More and more functions work on labeled images in the codebase. Currently, these are

  • show_label_image
  • label_image_skeleton
  • thicken_skeleton
  • label_image_apply_mask
  • some methods of the Analysis class could also come here

in the analysis module and

  • Segmentation.save_image
  • Segmentation.save_array

in the segmentation module.


It is time to collect them in a class. We will call this class LabeledImage1 and it should have, at first thought, the following methods:

  • __init__(self, label_image)
  • find_skeleton
  • thicken_skeleton
  • apply_mask(self, mask, value)
  • change_label(self, old_labels, new_labels)
  • show(self, color='random', labels=False) 2
  • save_png(self, filename)
  • save_numpy(self, filename)
  • _validate(label_image)
  • __str__(self)

and with the following data members:

  • n_label

Once the class has been created, deprecate the old functions and also mention this class creation principle as an example under the "Do not overuse classes" bullet point of the program design.


1 MATLAB calls it "label image". However, the word "label" is also a verb and we want to avoid the misunderstanding that we perform labeling as done in computer vision. We also prefer "labeled" to "labelled" as American English is used in the rest of CristalX.
2 Other values for the color parameter:

  • 'seeded_random', see #45
  • 'optimal', see #9
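A minimal skeleton of the proposed class (only a few of the listed methods are sketched, all details are tentative, and n_label is realized as a property here):

```python
import numpy as np

class LabeledImage:
    """Sketch of the proposed class wrapping a labeled image."""
    def __init__(self, label_image):
        self.label_image = self._validate(label_image)

    @staticmethod
    def _validate(label_image):
        label_image = np.asarray(label_image)
        if label_image.ndim != 2:
            raise ValueError('2D label image expected.')
        return label_image

    @property
    def n_label(self):
        """Number of distinct labels in the image."""
        return len(np.unique(self.label_image))

    def change_label(self, old_labels, new_labels):
        """Replace each old label with the corresponding new one."""
        for old, new in zip(old_labels, new_labels):
            self.label_image[self.label_image == old] = new

    def __str__(self):
        return '{}x{} label image with {} labels'.format(
            *self.label_image.shape, self.n_label)
```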

Random colors for grains, but with a given seed

When plotting the same grain configuration twice, the displayed colors are different because colors are randomly allocated to each grain. This is not only annoying when making comparisons, but it can also deceive people who do not know about the random color allocation.
The solution is to use a given seed for the random number generator. Then the same colors are used for subsequent plotting.

from numpy.random import default_rng
rng = default_rng(seed)  # uses the PCG64 generator
vals = rng.random((m, n))

See also #9.

Implement it in the analysis module as show_label_image.
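Wrapped as a function, the snippet above could look like this (the function name random_colors is illustrative; in show_label_image the seed would become an optional parameter):

```python
import numpy as np
from numpy.random import default_rng

def random_colors(n_label, seed=None):
    """One random RGB color per label; the same seed gives the same colors.

    With seed=None, the colors change from call to call (the current
    behavior); passing an integer seed makes the plots reproducible.
    """
    rng = default_rng(seed)  # uses the PCG64 generator
    return rng.random((n_label, 3))
```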

Duck typing

Thorough checking for input types clutters the code and is not Pythonic. The user is expected to read the docstring of the parameters and pass the proper type.
Later, the type checking could be enforced by using static analyzers, such as mypy.
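For example, a function can document its expected types with annotations, which mypy verifies statically without any runtime checks (the function below is a made-up illustration):

```python
import numpy as np

def grain_area(label_image: np.ndarray, label: int) -> int:
    """Number of pixels carrying the given label.

    No isinstance checks: the annotations state the contract, and a static
    analyzer such as mypy can verify the call sites.
    """
    return int(np.count_nonzero(label_image == label))
```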

Generalize pixel neighbor search

Depending on the input image, different strategies may work better. Implement the strategies seen in this talk (source code here) and extend them with the one currently being the most promising in my case.
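For reference, the two classical pixel neighborhoods on a regular grid can be encoded as offset lists (a minimal sketch; which connectivity works better depends on the image):

```python
import numpy as np

# 4-connectivity (von Neumann) and 8-connectivity (Moore) neighborhoods
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def neighbors(image, row, col, offsets=N8):
    """Values of the in-bounds neighbors of pixel (row, col)."""
    n_row, n_col = image.shape
    return [image[row + dr, col + dc] for dr, dc in offsets
            if 0 <= row + dr < n_row and 0 <= col + dc < n_col]
```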

Display text in PythonOCC scenes

Useful utilities for positioning the text can be found in core_geometry_utils.py of the PythonOCC-examples project for version 0.18 and in ShapeFactory.py in the new versions.
The text itself can be displayed using the DisplayMessage method, see core_topology_glue.py.

Faster interpolation

A recurring task in this project is to interpolate scattered data on a grid. The function griddata returns nan for locations outside the convex hull of the input data, for all interpolants but the nearest-neighbor one. A brute-force method to obtain interpolated values at those external points too is to perform two interpolations: one with the requested interpolant (linear, cubic, etc.) and one with the nearest neighbor. Then use the nearest-neighbor interpolated values at the points where the requested interpolant returns nan. A possibly faster way would be to perform the nearest-neighbor interpolation only at the external points. Two approaches in mind:
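The brute-force two-pass approach can be sketched with SciPy's griddata as follows (the wrapper function name is made up; the faster variant would evaluate the nearest-neighbor interpolant only at the external points instead of everywhere):

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_with_nearest_fill(points, values, xi, method='linear'):
    """Interpolate scattered data at the points xi; positions outside the
    convex hull (where ``method`` gives nan) are filled with the
    nearest-neighbor value."""
    interpolated = griddata(points, values, xi, method=method)
    nan_mask = np.isnan(interpolated)
    if np.any(nan_mask):
        # Brute force: nearest-neighbor pass over all of xi, used only
        # where the requested interpolant returned nan
        nearest = griddata(points, values, xi, method='nearest')
        interpolated[nan_mask] = nearest[nan_mask]
    return interpolated
```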

Distinguishable colors for nearby labels in a label image

Currently, random colors are generated when saving a label image.
https://github.com/CsatiZoltan/Polycrystalline-microstructures/blob/20e3eb63df753251911fbe1c2b8ae410cb0c8c7d/grains/segmentation.py#L265
An ongoing issue deals with it: scikit-image/scikit-image#4507
Ideally, colors should be chosen so that nearby labeled regions are easily distinguishable. See the above issue for details.
This functionality is already implemented in ImageJ: https://imagej.net/Glasbey

Citing

If we publish this work, show a citing request on the main page of the documentation, such as this.
See also here.

Rewrite the Abaqus module

Processing an Abaqus input file (.inp) is more complicated than it seemed. The current implementation uses nested dictionaries to extract data from a .inp file. Deep nesting hampers readability, and it is difficult to insert a new level somewhere in the middle. Another problem is that many Abaqus commands are similar but still need slightly different inputs. The many edge cases render the current approach inflexible to extend.

The .inp files follow a set of syntax rules and keywords, which makes them a domain-specific language. Hence, an AST can be used to analyze them. Recognizing this gave the idea to represent the Abaqus module hierarchy as a tree. E.g. if the .inp file contains

*Material, name=mat-1
** possible comment line
*Elastic
** possible comment line
data line
*Plastic
data line 1
** possible comment
data line 2
*Material, name=mat-2
...

it could be represented as

Root
|--- ...
|--- Material (object containing the name: 'mat-1')
      |--- Elastic (object containing the elastic properties)
      |--- Plastic (object containing the plastic properties)
|--- Material (object containing the name: 'mat-2')
...

Being a tree data structure, including new commands or changing the hierarchy is easy. E.g. if we want the Material command to be part of the Property Abaqus module, we insert a Property object under the Root and set the Material objects to be children of the Property (singleton) object.

Once we have the tree, we can fill it in with data coming from the input file. We define parent-children relationships so that the added nodes (commands) are inserted to the correct position of the tree. For each command, create a class

  • which serves as a container holding the data read from the .inp file
  • checks for admissibility
  • the object of which is a node of the tree

Manipulations (modifying element connectivities, adding materials, etc.) are easy to carry out in this intermediate representation, no need to deal with the text representation. Writing the (possibly modified) state into file is straightforward by traversing the tree.

anytree, written in pure Python, is a promising package, in which custom objects can form the nodes of the tree.
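The proposed tree could be prototyped with plain objects as follows (anytree's Node class offers the same and more; the keyword names mirror the .inp example above, and the material data are placeholders):

```python
class KeywordNode:
    """Minimal stand-in for a tree node (anytree provides this and more)."""
    def __init__(self, keyword, parent=None, **options):
        self.keyword = keyword   # e.g. 'Material', 'Elastic'
        self.options = options   # e.g. name='mat-1'
        self.data = []           # data lines read from the .inp file
        self.children = []
        if parent is not None:
            parent.children.append(self)

# Build the tree of the .inp example above
root = KeywordNode('Root')
mat1 = KeywordNode('Material', parent=root, name='mat-1')
elastic = KeywordNode('Elastic', parent=mat1)
elastic.data.append((210000.0, 0.3))      # placeholder elastic constants
plastic = KeywordNode('Plastic', parent=mat1)
plastic.data.append((300.0, 0.0))         # placeholder plastic data
mat2 = KeywordNode('Material', parent=root, name='mat-2')
```

Writing the tree back to a .inp file then amounts to a depth-first traversal, emitting one keyword line per node followed by its data lines.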

Avoid MEDCoupling dependency using HDF5

A MED file (.med) is an HDF5 file in disguise. Processing a MED file to extract mesh information requires the MEDCoupling module. I couldn't compile a standalone MEDCoupling interface. What I currently do is

  1. Download Salome.
  2. Set up my IDE to use Salome's built-in Python interpreter.
  3. Use that interpreter when working with MED mesh processing.

Comments for the workflow above:

  1. This is not a problem because I anyway use Salome to repair the geometry and to export it to .med.
  2. It has its quirks, but I created a short script for it.
  3. This point is inconvenient. The reason is that when I use the Python interpreter of Salome
    • I cannot install packages that need compilation because Python links with the wrong dynamically linked libraries
    • even if I could install packages to Salome's Python, those packages can crash the original environment (happened to me)
    • I am restricted to the preinstalled versions of numpy and matplotlib

So the goal is to load and process the .med file using h5py, which has very few dependencies. As a side effect, users of the code no longer need the several-GB Salome installation just to handle the .med file. The structure of a .med file can easily be visualized with ViTables.
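As a first step, h5py can walk the HDF5 structure of a .med file without any Salome dependency (a generic sketch; interpreting the MED-specific layout of the groups is still up to the caller):

```python
import h5py

def list_datasets(med_file):
    """Collect the paths of all datasets in a MED (HDF5) file.

    Since a .med file is an HDF5 file in disguise, h5py can traverse it;
    the MED-specific meaning of the groups (meshes, families, ...) must be
    identified separately, e.g. by inspecting the file with ViTables.
    """
    paths = []
    with h5py.File(med_file, 'r') as f:
        f.visititems(lambda name, obj: paths.append(name)
                     if isinstance(obj, h5py.Dataset) else None)
    return paths
```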

More general implementation of non-simply connected domains

When a large grain contains one or more grains, the large grain has to be modified to exclude the small grains that act as holes. This makes the large grain a non-simply connected domain.
This modification can be carried out as

  • Boolean operation (set difference)
  • boundary representation

Currently, I plan to support the Boolean solution for the splinegon representation only, as that's what I actively work with. However, a general implementation would be nice, which does not need to have information about the actual geometrical representation. I.e. it would work for the polygon representation too. That would however require several changes in the code:

  • The cad module would import the geometry module for the sake of polygon-based operations.
  • The region_branches variable will no longer be a simple list, but a list of lists in order to incorporate the holes
  • The geometry.Polygon class must be modified so that holes can be represented, which needs
    • updating the methods for calculating the area, the equivalent diameter, etc.
    • the data structure will not remain a simple nx2 NumPy array, but a list of NumPy arrays, that describe the outer boundary and the internal boundaries as well
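For the polygon representation, the area of a region with holes can then be computed from the proposed boundary list (outer boundary first, then the internal boundaries), e.g. with the shoelace formula; a sketch:

```python
import numpy as np

def shoelace_area(boundary):
    """Area of a simple closed polygon given as an nx2 array of vertices."""
    x, y = np.asarray(boundary, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def polygon_area(boundaries):
    """Area of a non-simply connected polygon.

    boundaries: list of NumPy arrays, the first one describing the outer
    boundary and the rest the internal boundaries (holes).
    """
    outer, *holes = boundaries
    return shoelace_area(outer) - sum(shoelace_area(hole) for hole in holes)
```

The equivalent diameter and the other derived quantities would be updated analogously, working on the hole-corrected area.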

Allow inserting custom OOF command

Different images often require different meshing techniques. After experimenting in the OOF2 GUI, one would like to save the corresponding OOF2 commands. Create a method add_command in class OOF2.
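A minimal sketch of the requested method (the class internals shown here are assumptions; the real OOF2 class also drives the OOF2 script generation):

```python
class OOF2:
    """Sketch: only the parts relevant to custom commands are shown."""
    def __init__(self):
        self.script = []  # OOF2 commands, later written to a script file

    def add_command(self, command):
        """Append a raw OOF2 command, e.g. one saved from the OOF2 GUI."""
        self.script.append(command)

oof = OOF2()
oof.add_command('first custom command')   # placeholder command strings
oof.add_command('second custom command')
```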

Intergranular or intragranular deformation dominates?

Physics

From the modeling viewpoint, it is important to know whether the strain localizes at the grain boundaries (also called interfaces) or whether it is also dominant within the grains. Indeed, the cohesive zone/band model can greatly speed up the computations in the first case. To decide whether the localization is intergranular or intragranular in a given microstructure, we can make use of experimental data (if available).
From now on, we will assume that thanks to full-field measurement (e.g. digital image correlation - DIC), we have access to the strain field at every point1 in the microstructure.

Algorithm

Things to do:

  1. Define a band on the interfaces. The thickness of the band is a parameter of the model. ce6c499

  2. Be able to compute

    • a strain measure (tensor) from the displacement field 7eaa0b5
    • an equivalent strain from the strain tensor 2b9b0be

    on the whole domain (where the displacement field exists)

  3. Take the grain microstructure and the (equivalent) strain field and

    • localize the strain field on the band a74f954
    • possibly return a histogram of the magnitude of the equivalent strain in the band a74f954
    • determine what portion of the large strain values lie in the band, compared to the rest of the domain (i.e. the internal regions of the grains) a74f954
  4. The ratio of inter/intragranular deformation should be

    • executed for different time steps, as the deformation evolves 13576f7; and
    • the data above should be visualized somehow 13576f7
  5. Once the methodology works properly, create a Jupyter notebook from it.
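Step 3 could be sketched as follows (the function name and the band-mask representation are assumptions): given the equivalent strain field and a boolean mask marking the interface band, compute what portion of the large strain values falls into the band.

```python
import numpy as np

def strain_in_band_ratio(equivalent_strain, band_mask, threshold):
    """Fraction of the 'large' strain values (above threshold) lying in the
    interface band, relative to all large values in the domain.

    equivalent_strain : 2D array of the equivalent strain on the DIC grid
    band_mask         : boolean array of the same shape; True in the band
    threshold         : strain level above which a value counts as 'large'
    """
    large = equivalent_strain > threshold
    n_large = np.count_nonzero(large)
    if n_large == 0:
        return 0.0
    return np.count_nonzero(large & band_mask) / n_large
```

A ratio close to 1 at every time step would indicate dominantly intergranular localization.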

Expected results

For many microstructures, the strain localization is dominantly intergranular. If that is the case,

  1. Devise a model to allow simplifications (e.g. no need to account for plastic deformation in the grain interiors)
  2. Allow extracting the generated mesh on the interfaces (they will be a set of 1D elements)

Footnotes

1 I.e. in every pixel. Use subpixel interpolation if you need a value at an arbitrary position.

Collect deprecated functions

Commit dc50d41 introduced a deprecation life-cycle to CristalX.
Keeping track of deprecated functionalities, namely in which release each was marked as deprecated and in which future release it is planned to be removed, becomes cumbersome as CristalX grows.
Therefore, create a function in the grains package (in __init__.py) to fetch these data. I found three main possibilities:

  1. Analyze the source code with the inspect built-in Python module
    "Method 2" in https://stackoverflow.com/a/5910893/4892892
  2. Create an additional decorator to decorate the decorator you want to track
    "Method 3" in https://stackoverflow.com/a/5910893/4892892. In my case, I would need to decorate the deprecated decorator from the deprecation package.
  3. Analyze the AST
    Provided by https://stackoverflow.com/a/9580006/4892892, this is also a source parsing method, as the first item.
