leon-vv / traceon
Electromagnetic solver and electron tracer
Home Page: https://traceon.org/
License: GNU Affero General Public License v3.0
We know that the Maxwell equations are linear. This means that if we solve for every electrode separately (with unity excitation) we can use superposition to find the fields for any excitation. In terms of computation this means we run the solver N times, where N is the number of electrodes.
Now, when we use a direct solver, according to the [NumPy documentation](https://numpy.org/doc/stable/reference/generated/numpy.linalg.solve.html), np.linalg.solve uses a LAPACK routine behind the scenes which computes the LU decomposition. It would therefore be faster to store this LU decomposition after solving for the first electrode and reuse it for the other electrodes. In other words, we solve the system with multiple right-hand sides, which is computationally cheaper.
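A minimal sketch of this reuse, assuming SciPy is available as a dependency; the matrix and the electrode-to-element grouping below are made up for illustration:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Stand-in for the dense BEM matrix; the real matrix comes from the solver.
rng = np.random.default_rng(0)
N = 50
A = rng.standard_normal((N, N)) + N * np.eye(N)   # diagonally dominant, well conditioned

# One right-hand side per electrode: unity excitation on that electrode's
# elements, zero elsewhere (the element grouping here is hypothetical).
electrodes = [slice(0, 10), slice(10, 25), slice(25, 50)]
B = np.zeros((N, len(electrodes)))
for i, el in enumerate(electrodes):
    B[el, i] = 1.0

# Factor once (O(N^3)), then back-substitute per electrode (O(N^2) each),
# instead of calling np.linalg.solve once per electrode.
lu, piv = lu_factor(A)
unit_solutions = lu_solve((lu, piv), B)   # one column of charges per unit excitation

# Superposition: the solution at arbitrary voltages is a linear combination
# of the unit solutions.
voltages = np.array([1000.0, -500.0, 0.0])
charges = unit_solutions @ voltages
```

Note that `lu_solve` accepts all right-hand sides at once, so the N solves collapse into a single call after the factorization.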
This might have a big impact on the exposed API. We have to think through how to expose the superposition of fields in the API in a neat way. We need to be able to rescale the fields after a solution step.
Ideas to improve the method:
Dielectric materials can readily be added to the simulations if we allow for Neumann boundary conditions, in other words conditions on the electric field instead of the potential. If I understand correctly, the electric field at the boundary of a dielectric and the bound surface charge induced there are related by the standard relation σ_b = ε₀(ε_r − 1) E·n̂. We therefore need to be able to generate normal vectors at the boundaries. This should be possible with the GMSH getNormal function.
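For the 2D meshes the normals can also be computed directly from the line elements without going through GMSH; a minimal sketch (the endpoint array layout and the orientation convention are assumptions, not traceon's actual data structure):

```python
import numpy as np

def line_element_normals(p0, p1):
    """Return unit normals for 2D line elements with endpoints p0, p1 (shape (N, 2)).

    The normal is the tangent rotated by -90 degrees; which side counts as
    'outward' depends on the orientation convention of the mesh.
    """
    t = p1 - p0                                  # tangent vectors
    n = np.stack([t[:, 1], -t[:, 0]], axis=1)    # rotate tangent by -90 degrees
    return n / np.linalg.norm(n, axis=1, keepdims=True)

# Two hypothetical elements: one along +r, one along +z.
p0 = np.array([[0.0, 0.0], [1.0, 0.0]])
p1 = np.array([[1.0, 0.0], [1.0, 1.0]])
normals = line_element_normals(p0, p1)
```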
We use STEP_MIN and STEP_MAX to guide the particle tracing. However, given the adaptive step size control, do we really need these values? Some more research is needed.
The special geometries included in geometry.py should be in their own benchmark files, not in the geometry module.
Could be hosted in the meantime on my personal webpage.
We need some tests/examples showing how to use Geometry.MEMSStack.
The interpolation technique used to speed up ray tracing can be improved
It seems like the matplotlib 3D renderer is a bit unreliable. Switching to something like Mayavi, which has a proper 3D renderer, might be beneficial.
We have to get rid of the Nmax variable in tracing.py. The particle should be traced indefinitely.
We should clearly specify which symmetries are supported and for which excitation (electrostatic/magnetostatic).
| Symmetry | Electrostatic | Magnetostatic |
|---|---|---|
| Axial symmetric | x | |
| Planar symmetric (finite length) | | |
| Planar symmetric (odd, infinite length) | x | |
| 3D | | |
For finite length with planar symmetry we have to neglect edge effects, as the problem would otherwise become solvable only in 3D. One could imagine not implementing the infinite length case, as it can be approximated easily by increasing the finite length to a large value (I believe this is the approach Comsol takes).
Full support for a symmetry implies the following:
When simulating an entire microscope, we need to be able to trace an electron through multiple elements. The most obvious approach would be to define a new SystemTracer which accepts multiple fields. Translations and rotations of the fields inside the 'global' coordinate system should be supported.
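A possible shape for this, sketched under the assumption that a field is a callable from position to field vector; the class and attribute names (PlacedField, SystemTracer, field_at) are hypothetical, not traceon's actual API:

```python
import numpy as np

class PlacedField:
    """A field together with its placement in the global coordinate system."""
    def __init__(self, field, translation=(0., 0., 0.), rotation=None):
        self.field = field                       # callable: local position -> local E field
        self.translation = np.asarray(translation, dtype=float)
        # Rotation matrix mapping the element's local frame to the global frame.
        self.rotation = np.eye(3) if rotation is None else np.asarray(rotation, dtype=float)

    def field_at(self, p_global):
        # Transform the point into the local frame, evaluate, rotate the field back.
        p_local = self.rotation.T @ (np.asarray(p_global, dtype=float) - self.translation)
        return self.rotation @ self.field(p_local)

class SystemTracer:
    def __init__(self, placed_fields):
        self.placed_fields = placed_fields

    def total_field(self, p_global):
        # Fields superpose linearly, so the total is the sum over all elements.
        return sum(pf.field_at(p_global) for pf in self.placed_fields)

# Usage with a hypothetical uniform field and two placed elements.
uniform = lambda p: np.array([0.0, 0.0, 1.0])
tracer = SystemTracer([PlacedField(uniform),
                       PlacedField(uniform, translation=(0, 0, 5))])
total = tracer.total_field(np.zeros(3))
```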
We need to update the scripts to the new API
Currently the only tests of the software suite are the validations in /validation.
We need a bigger test suite that clearly distinguishes between the following classes of tests:
Each class of test should use the same function to display the results.
Currently we use the Simpson integration rule (see solver/util.py). It might be beneficial to instead use Gaussian quadrature. We need to explore the performance/accuracy trade-offs, ideally using the better tests of issue #4.
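A quick way to get a feel for the trade-off is to compare both rules on a smooth integrand at the same number of function evaluations; the integrand below is a made-up stand-in resembling a BEM kernel, not traceon's actual one:

```python
import numpy as np

# Hypothetical smooth integrand with a known antiderivative:
# integral of 1/sqrt(x^2 + 0.5) is arcsinh(x / sqrt(0.5)).
f = lambda x: 1.0 / np.sqrt(x**2 + 0.5)
exact = np.arcsinh(np.sqrt(2.0))        # integral over [0, 1]

def simpson(f, a, b, n):
    """Composite Simpson's rule with n subintervals (n must be even)."""
    x = np.linspace(a, b, n + 1)
    w = np.ones(n + 1)
    w[1:-1:2] = 4
    w[2:-1:2] = 2
    return (b - a) / (3 * n) * np.sum(w * f(x))

def gauss(f, a, b, n):
    """n-point Gauss-Legendre quadrature, nodes mapped from [-1, 1] to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    xm = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * f(xm))

g8 = gauss(f, 0.0, 1.0, 8)
s8 = simpson(f, 0.0, 1.0, 8)
```

For smooth integrands Gauss-Legendre converges much faster per evaluation; the singular self-interaction integrals would need separate treatment.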
Currently the numerical charge values are expressed as
I remember reading somewhere (forgot the source) that the accuracy of the BEM is optimal when the charges on the elements are roughly equal. This means that once we find the solution for a given mesh size, we can refine the mesh in an intelligent way by considering which line elements have the most charge.
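The refinement idea can be sketched as follows, assuming line elements stored as endpoint pairs and one charge value per element (both layout and the refinement fraction are assumptions for illustration):

```python
import numpy as np

def refine_by_charge(elements, charges, fraction=0.2):
    """Split the `fraction` of elements carrying the largest |charge| in half.

    elements: array of shape (N, 2, 2), each row the two (r, z) endpoints.
    charges:  array of shape (N,), total charge per element.
    """
    n_refine = max(1, int(fraction * len(elements)))
    worst = np.argsort(-np.abs(charges))[:n_refine]   # most-charged elements
    keep = np.delete(elements, worst, axis=0)
    split = elements[worst]
    mid = 0.5 * (split[:, 0] + split[:, 1])           # element midpoints
    halves = np.concatenate([
        np.stack([split[:, 0], mid], axis=1),
        np.stack([mid, split[:, 1]], axis=1)])
    return np.concatenate([keep, halves])

# Four hypothetical collinear elements; the first carries most of the charge.
elements = np.array([[[0., 0.], [1., 0.]], [[1., 0.], [2., 0.]],
                     [[2., 0.], [3., 0.]], [[3., 0.], [4., 0.]]])
charges = np.array([5.0, 0.1, 0.2, 0.1])
refined = refine_by_charge(elements, charges, fraction=0.25)
```

Iterating solve → refine → solve until the charges are roughly equal would give the intelligent refinement loop described above.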
It looks like using multiple threads doesn't always speed up the program (observed especially in 3D). We have to investigate what causes these performance issues.
For axial symmetry we have an elegant interpolation technique relying on high order axial derivatives of the potential. However, the computation time can quickly be dominated by the computation of the derivatives. Also, for the 3D and planar symmetric cases it's not obvious the computation of the high order derivatives will be equally elegant.
For this reason we need a new, reliable interpolation technique which is not as cumbersome to set up. Hermite interpolation seems the most obvious.
The Numba JIT compile times are getting out of hand. Also, passing a function to a Numba-compiled function (like we do for line_integral) is not properly supported in Numba with regard to caching. This means that those functions need to be recompiled often. A proper C, Cython, or Rust module would probably fix these issues.
For large 3D problems the direct solver can take a long time. It might be a good idea to support an iterative solver.
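A sketch of what this could look like with SciPy's GMRES (the matrix here is a well-conditioned stand-in; a real BEM matrix would likely need preconditioning). An iterative solver only needs the matrix-vector product, which also opens the door to matrix-free acceleration later:

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# Well-conditioned stand-in for a dense BEM system.
rng = np.random.default_rng(1)
N = 200
A = np.eye(N) + 0.01 * rng.standard_normal((N, N))
b = rng.standard_normal(N)

# Only the matvec is exposed; A itself never needs to be factored.
op = LinearOperator((N, N), matvec=lambda v: A @ v)
x, info = gmres(op, b)      # info == 0 signals convergence
```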
We currently use an axial Taylor expansion to quickly calculate the necessary fields. We need a script that, for any geometry, can plot the accuracy of the interpolation along the axis (for different distances from the axis).
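The accuracy check can be prototyped on a potential known in closed form. For axial symmetry the off-axis potential follows from the even axial derivatives via φ(r, z) = Σₖ (−1)ᵏ/(k!)² (r/2)²ᵏ φ⁽²ᵏ⁾(z); the example below verifies this against a unit point charge at the origin, whose on-axis derivatives are known analytically (the geometry is illustrative only):

```python
import numpy as np
from math import factorial

def phi_series(r, z, order):
    """Axial expansion of the potential up to the given order.

    Uses the closed-form axial derivatives of phi(0, z) = 1/z for a unit
    point charge at the origin: phi^(2k)(z) = (2k)! / z^(2k+1).
    """
    total = 0.0
    for k in range(order + 1):
        deriv_2k = factorial(2 * k) / z**(2 * k + 1)
        total += (-1)**k / factorial(k)**2 * (r / 2)**(2 * k) * deriv_2k
    return total

# Off-axis point at moderate distance from the axis.
r, z = 0.3, 2.0
exact = 1.0 / np.hypot(r, z)          # closed-form point-charge potential
approx = phi_series(r, z, order=4)
```

Sweeping `r` at fixed `order` and plotting `|approx - exact|` would give exactly the accuracy-versus-distance plot the script should produce.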
Quoting from Peter W. Hawkes. Principles of Electron Optics, Basic Geometrical Optics. 2017.
Section 10.2.1:
"Another advantage is the possibility of using more than the minimum number N of trial functions and variables Q_k. A favourable choice is, for instance, the use of function values Y(s_n) and derivatives Y'(s_n) at the nodes, hence M = 2N. In each boundary element the function Y(s) can then be approximated by a cubic Hermite polynomial, which is fairly accurate. Each node has two degrees of freedom, which gives a better result than the simple one-degree approximation."
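The cubic Hermite idea from the quote can be tried directly with SciPy, which accepts function values and derivatives at the nodes (the boundary function below is a made-up test case, not an actual BEM quantity):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Hypothetical nodes along a boundary, with values Y(s_n) and derivatives Y'(s_n):
# two degrees of freedom per node, M = 2N as in the quote.
s = np.linspace(0.0, np.pi, 6)
y = np.sin(s)
dy = np.cos(s)

spline = CubicHermiteSpline(s, y, dy)

# Evaluate the interpolant between the nodes and compare to the true function.
fine = np.linspace(0.0, np.pi, 100)
max_err = np.max(np.abs(spline(fine) - np.sin(fine)))
```

Even with only six nodes the cubic Hermite interpolant tracks the function to sub-millipoint accuracy, which illustrates why the two-degrees-per-node scheme beats the one-degree approximation.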
After solving for the field, an ugly tuple is currently returned containing all the relevant data. It would be better to return a Field class containing a reference to the excitation and the geometry.
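A minimal sketch of such a class; the attribute and method names are assumptions, not traceon's actual API, and a rescale method would tie in naturally with the superposition idea from the LU-reuse issue:

```python
from dataclasses import dataclass
from typing import Any
import numpy as np

@dataclass
class Field:
    """Hypothetical result object replacing the bare tuple."""
    geometry: Any        # mesh/geometry the solution was computed on
    excitation: Any      # excitation (e.g. electrode voltages) that was solved for
    charges: np.ndarray  # solved charge per element

    def rescale(self, factor):
        # Superposition: scaling the excitation scales the charges linearly.
        return Field(self.geometry, self.excitation, self.charges * factor)

field = Field(geometry=None, excitation=None, charges=np.array([1.0, 2.0]))
doubled = field.rescale(2.0)
```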
Now sometimes we oversample by a lot; for example, validation/preikszas.py spends way too much time calculating the required derivatives. A good first guess might be to take the sampling equal to the mesh size.
Currently we only take the analytical derivative up to order 4 in the get_axial_derivatives function in solver/__init__.py. After using the assumption r_0 = 0 to simplify the formulas (see #3) we should be able to calculate the derivatives of higher order analytically.
See also formula 13.16a in Peter W. Hawkes. Principles of Electron Optics, Basic Geometrical Optics. 2017.
See here: https://about.zenodo.org/
We need to have a validation for this case. Convergence to the infinite case when the finite length grows large would be acceptable for a first validation.
The relevant formula for the potential seems to be given by this page.
Don't only return the position of the particles, but also the corresponding time steps
Many function docstrings are now obsolete
We currently use N as input to approximate the number of mesh elements. However, the approximation is really bad. It would be better to simply take the mesh size as input.
Currently in traceon/solver/radial_symmetry.py we have not used the assumption that r_0 = 0. Using this assumption greatly simplifies the formulas and therefore likely speeds up the calculation of the derivatives.
The particle mass and charge should be adjustable.
The validations act as examples too, but are not very approachable. We need better example files, for example a simple einzel lens and a 3D deflector.
Currently we compute the self voltage (the voltage produced at an element by the charge on the same element) by making sure the integration doesn't step into the singularity at the center. It would be better to figure out a more accurate procedure to calculate the self voltage. I recall that an exact formula was claimed for the triangle case in:
Computational Charged Particle Optics. Thomas Verduin. 2013.