gdtk-uq / gdtk
The Gas Dynamics Toolkit (GDTk) is a set of software tools for simulating high-speed fluid flow, maintained at The University of Queensland and the University of Southern Queensland, Australia.

Home Page: https://gdtk.uqcloud.net/

License: Other

Topics: computational-fluid-dynamics, scientific-computing, parallel-computing

gdtk's Introduction

GDTk -- Gas Dynamics Toolkit

This repository hosts our collection of tools, written mainly in the D programming language, for gas dynamic simulations. Our focus is on open source development to give a simple access point for doing gas dynamics in research and teaching.

Contents

  • Toolkit Overview
  • (Relatively) Quick Start
  • Eilmer -- A compressible flow solver
    • Features
  • Documentation
  • License
  • Contributors
  • Chief Gardeners

Toolkit Overview

The Gas Dynamics Toolkit is a collection of programs and functions for computing the properties of high-speed gas flows. Since the computational tools have been developed within the University of Queensland's Centre for Hypersonics, there is a clear bias toward predicting chemically-reacting gas flows, as would be found in shock-tunnel and expansion-tube experiments.

The computational tools range from large-scale programs for the simulation of gas flows in whole experimental facilities to more limited functions for the evaluation of simple state-to-state transitions, such as the jump in flow conditions across a simple shock wave.
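
For the simplest of those state-to-state transitions, the normal-shock jump for a calorically perfect gas can be written down directly. The following standalone Lua sketch illustrates the calculation (the toolkit provides library functions for this; the script below just evaluates the standard relations for an example upstream Mach number):

-- Normal-shock jump relations for a calorically perfect gas.
local g, M1 = 1.4, 2.0                               -- ratio of specific heats, upstream Mach number
local p2p1 = 1 + 2*g/(g+1)*(M1^2 - 1)                -- static pressure ratio
local r2r1 = ((g+1)*M1^2) / ((g-1)*M1^2 + 2)         -- density ratio
local T2T1 = p2p1 / r2r1                             -- temperature ratio (ideal gas law)
local M2 = math.sqrt(((g-1)*M1^2 + 2) / (2*g*M1^2 - (g-1)))  -- downstream Mach number
print(string.format("p2/p1=%.3f rho2/rho1=%.3f T2/T1=%.4f M2=%.3f",
                    p2p1, r2r1, T2T1, M2))

For M1 = 2.0 this prints p2/p1=4.500, rho2/rho1=2.667, T2/T1=1.6875 and M2=0.577, matching the standard tables.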

Who we are

The principal developers of these tools are Rowan Gollan, Kyle Damm, Nick Gibbons, and Peter Jacobs. Substantial support has been given to the project by Anand Veeraragavan, Ingo Jahn, Vince Wheatley, and Fabian Zander. There have been many more contributors to the project over the years, including colleagues at other universities, students and visitors.

Tools

The tools in the kit are:

  • Eilmer, a simulation program for 2- and 3-dimensional gas flows.
  • L1d, a program for the end-to-end simulation of free-piston-driven shock tunnels and expansion tubes.
  • Pitot, a program using state-to-state calculations to estimate test flow conditions in impulse facilities.
  • ESTCN, a state-to-state calculation program for estimating flow conditions in reflected-shock tunnels.
  • NENZF1d, a program for estimating flow conditions in reflected-shock tunnels, when the test gas reaches temperatures high enough for chemical reactions to occur and when nonequilibrium chemistry effects are expected to be important.
  • ceq, a calculator of thermochemical equilibrium gas compositions.
  • Loadable Python libraries

Prerequisite knowledge

This section is about you, the user of the Gas Dynamics Toolkit. We assume that your mathematics, science or engineering background adequately prepares you for computational fluid dynamics (CFD) analysis. In particular, we assume that you have a working knowledge of geometry, calculus, mechanics, and thermo-fluid-dynamics, at least to a second- or third-year university level. With the Toolkit code, we try to make the analysis of compressible, reacting flow accessible and reliable; we cannot make it trivial.

Quick Start

This quick start guide introduces you to Eilmer and takes you through to running a simple simulation of supersonic flow over a sharp cone. It should take less than half an hour to do the full set-up and run.

Note: The section on installing prerequisites assumes a working knowledge of Linux system configuration. If you find this section too quick or too terse, you could try the gentler chapter on Installation.

Install prerequisites

The main requirement is a D language compiler. We recommend using the latest stable release of the LLVM D compiler.

To build Eilmer and other programs in the toolkit, you will require:

  • D compiler
  • A C compiler
    • GNU compiler is a good option and comes standard on most systems.
  • The gfortran compiler (and 32-bit libraries)
    • gfortran and gfortran-multilib on Debian/Ubuntu/Mint
    • gcc-gfortran on RedHat/CentOS/Fedora
  • git (to clone the repository)
  • readline development package:
    • libreadline-dev on Debian/Ubuntu/Mint
    • readline-devel on RedHat/CentOS/Fedora
  • ncurses development package:
    • libncurses5-dev on Debian/Ubuntu/Mint
    • ncurses-devel on RedHat/CentOS/Fedora
  • openmpi development package:
    • libopenmpi-dev on Debian/Ubuntu/Mint
    • openmpi-devel on RedHat/CentOS/Fedora (after install on RedHat-family systems, load with module load mpi/openmpi-x86_64, and you might like to place that in your .bashrc file so that it's loaded every time you start a session)
  • plotutils development package:
    • libplot-dev on Debian/Ubuntu/Mint
    • plotutils-devel on RedHat/CentOS/Fedora (for CentOS 8.x, enable PowerTools repo)
  • foreign-function interface packages for Python and Ruby:
    • python3-cffi on Debian/Ubuntu/Mint and RedHat/CentOS/Fedora
    • ruby-ffi on Debian/Ubuntu/Mint

Additionally, if you want to run the test suite, you will require:

  • Ruby package
  • TCL package
  • the Python sympy package

For viewing and plotting results, we recommend:

  • Paraview
  • Gnuplot

The source code of the Lua interpreter is included in the source code repository.

Getting the source code

The full source code for the toolkit programs, including a set of examples, can be found in a public repository on GitHub. To get your own copy, use the git revision control client to clone the repository with the following command:

git clone https://github.com/gdtk-uq/gdtk.git gdtk 

and within a couple of minutes, depending on the speed of your network connection, you should have your own copy of the full source tree and the complete repository history.

Installing Eilmer

The default installation directory is $HOME/gdtkinst. To compile and install Eilmer, move into the eilmer source area and use make to coordinate the compiling and installing:

cd gdtk/src/eilmer
make install

If you are on a Mac, you'll need to give the make command an extra hint:

make PLATFORM=macosx install

Configure environment

We'll assume you are happy using the default install area $HOME/gdtkinst. The next step is to configure your environment to use Eilmer. You will need to set the DGD variable to point to the top of the installation tree, and the DGD_REPO variable to point to the top of the repository tree. Note that the installation tree and repository tree are separate. You then also need to set $PATH, $DGD_LUA_PATH and $DGD_LUA_CPATH to point to the appropriate places. Some example lines from a .bashrc file are:

export DGD=$HOME/gdtkinst
export DGD_REPO=$HOME/gdtk
export PATH=$PATH:$DGD/bin
export DGD_LUA_PATH=$DGD/lib/?.lua
export DGD_LUA_CPATH=$DGD/lib/?.so

Remember to refresh your current shell (or log out and log back in) so that your newly configured environment is available.

Running your first Eilmer simulation

To test that everything has worked, you can exercise the flow solver to simulate the supersonic flow over a 20-deg cone.

cd ~
cd gdtk/examples/eilmer/2D/sharp-cone-20-degrees/sg
prep-gas ideal-air.inp ideal-air-gas-model.lua
e4shared --prep --job=cone20
e4shared --run --job=cone20
e4shared --post --job=cone20 --vtk-xml

If all of that worked successfully, you may view the final flow field using Paraview:

paraview plot/cone20.pvd

Eilmer -- A compressible flow solver

Presently, the principal code in this collection is the Eilmer simulation code for 2D and 3D gas dynamic flows that may involve chemical reactions. It is a research/education code and, with its built-in grid generation capabilities, is suitable for the exploration of flows where the bounding geometry is not too complex.

Features:

  • Eulerian/Lagrangian description of the flow (finite-volume, 2D axisymmetric or 3D).
  • Transient, time-accurate, optionally implicit updates for steady flow.
  • Shock capturing plus shock fitting boundary.
  • Multiple block, structured and unstructured grids.
  • Parallel computation in a shared-memory context.
  • High-temperature nonequilibrium thermochemistry.
  • GPU acceleration of the finite-rate chemistry.
  • A selection of thermochemical models.
  • Rotating frame of reference for turbomachine modelling.
  • Turbulence models: Spalart-Allmaras and k-omega
  • Conjugate heat transfer to solid surfaces and heat flow within solid objects.
  • Adjoint solver for design optimisation.
  • MHD simulation for a single-fluid plasma (a work in progress).
  • Import of GridPro structured grids and SU2 unstructured grids for complex flow geometries.

We have structured the code as a programmable program, with a user-supplied input script (written in Lua) providing the configuration for any particular simulation exercise. Our target audience is the advanced student of gas dynamics, possibly an undergraduate student of engineering but, more likely, a postgraduate student or academic colleague wanting to simulate gas flows as part of their study.
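
For a flavour of what such an input script looks like, here is a minimal sketch, loosely patterned on the cone20 quick-start example; the grid resolution, flow values and block layout are illustrative only, so consult the user guide for the authoritative forms:

-- sketch.lua: minimal Eilmer input-script sketch (illustrative values).
config.title = "Supersonic flow over a simple patch (sketch)"
setGasModel('ideal-air-gas-model.lua')  -- gas-model file made earlier with prep-gas
inflow = FlowState:new{p=95.84e3, T=1103.0, velx=1000.0}
-- A single quadrilateral patch and a structured grid on it.
a = Vector3:new{x=0.0, y=0.0}; b = Vector3:new{x=1.0, y=0.0}
c = Vector3:new{x=0.0, y=1.0}; d = Vector3:new{x=1.0, y=1.0}
patch = CoonsPatch:new{p00=a, p10=b, p01=c, p11=d}
grid = StructuredGrid:new{psurface=patch, niv=41, njv=41}
-- One fluid block with supersonic inflow on the west face.
blk = FluidBlock:new{grid=grid, initialState=inflow,
                     bcList={west=InFlowBC_Supersonic:new{flowState=inflow},
                             east=OutFlowBC_Simple:new{}}}
config.max_time = 5.0e-3  -- seconds
config.max_step = 3000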

Documentation

The documentation for users of the code is in a set of PDF reports at the GDTk web site, https://gdtk.uqcloud.net. Presently there are user guides for the main simulation code, the geometry package and the gas model package. More documents will appear as they are completed.

For those brave souls prepared to dive into the use and extension of the code, there are examples provided as well as the source code itself. For the short term, with the code in a period of rapid development, we expect that users of the code will be mainly our students and academic colleagues who can talk to us directly.

License

For the source code, we use the GNU General Public License 3. Please see the file gpl.txt. For the documentation, we use the Creative Commons Attribution-ShareAlike 4.0 International License.

Contributors

This code is the product of many people. Please see the file contributors.txt for some details of who has written what part of the code. The commit history is the place to go for further details.

Chief Gardeners

Peter Jacobs and Rowan Gollan, 2015-08-31 -- 2022-02-16

gdtk's People

Contributors

alex-muirhead, amirmit, brad1992, caiyu-xie, chris-m-james, christine-mittler, darylbond, fabszan, flynnh-github, ingojahn, jamieborder, jbho-rbd, jiashengliang, jimmyjohnh, kieran-mackle, kyleadm, mragank02, nwjctst, pajacobs-ghub, pmariotto, reeceotto, rjgollan-on-github, toniatop, uqngibbo, wattrg, whyborn, yuuuliu


gdtk's Issues

GridPro structured grid fails to produce meaningful flow?

Hi,

I have a simple blunt body 2D axisymmetric structured mesh that I generated in Pointwise, then exported in a Gridpro .dat format. It is a simple 4 sided domain. The mesh successfully imports using the following command:

-- IMPORT GRIDPRO GRID
gproGrid = "powerlaw_mesh_pw.dat"
grid = importGridproGrid(gproGrid, 1)

I then define an FBArray:

-- Define the flow-solution blocks.
blk = FBArray:new{grid=grid[1],
                  initialState=initial,
                  bcList={west=InFlowBC_Supersonic:new{flowState=inflow},
                          north=OutFlowBC_Simple:new{},
                          east=WallBC_WithSlip:new{},
                          south=WallBC_WithSlip:new{}},
                  njb=nblocks/2, nib=2}

I can run the simulation fine; however, when reviewing the results in Paraview, the flow is unchanged from the initialState. I have attached my mesh and script files. I am unsure if this is a fault on my end, perhaps with the boundary conditions, but no combination has worked thus far.
powerlaw_body.zip

Issue: Finest grid turbulent flow simulations take more computational time with e4-nk-dist solver

Hi,
I am trying to run a shock-wave/boundary-layer interaction simulation using the e4-nk-dist solver. My computational mesh is hybrid (a layer of quad cells near the wall and triangular cells in the rest of the domain) and consists of 1.3 lakh (130,000) cells in total. I am running the simulations in parallel with 24 processors, at a Mach number of 5 with the k-omega turbulence model. The smallest cell size is 1e-6. Due to this small cell size, the pseudo time step comes out at around 1e-8, and because of this the simulation takes a large amount of computational time.
Is this the case for any simulation with very fine grids? Is there any way to speed up the simulations apart from parallelization? Kindly let me know.
PS: I am attaching pictures of the computational grid and terminal output.


Thanks.
Sakthi

shared library missing from eilmer deployment of lua

When trying to use external libraries with lua that use the C API, liblua.so is required. This isn't currently built with the eilmer deployment of lua. It may be useful to include this in the lua external-library directory. I've managed to cobble something together that gets gdtk to create liblua.so, but it leads to failures when compiling the rest, e.g. eilmer. For now I've just copied liblua.so in, and it runs fine for my purposes, but it could be a nice feature if it isn't too painful to implement.

3D Grid Import

To Whom It May Concern,

In importing a 3D test grid (in this case just a box, with a mesh generated in GMSH and exported in .su2 format), I experience issues with the prep command, which references gdtk/build/lib/blk_conn.lua with "attempt to index a nil value (field 'p')". I am trying to trace back the error. Is this a grid import or formatting error? I feel as though the error is in my process, but this information may be useful for others in the future. I have attached the grid and geo file for reference.

I have successfully taken a 3D geometry into GMSH and performed a slicing operation on it; that 2D slice mesh file reads into Eilmer without issue. I am attempting a similar procedure in this case, the main difference being that I am leaving the full 3D mesh in place.

Thank you for your attention to this issue. If I can provide more information about the method I have been trying that will assist in resolving this please let me know.

BoxTest2.zip

Boundary load problem

Hello,

This is my first time using this amazing software, but I am facing some problems. I am currently using the extract-boundary-loads method with the two-temperature model. It would be great if it could extract q_rot and q_tr separately; is there any method to do so?

Thanks for your help,
Jianshu Wu

Config option diffuse_wall_bcs_on_init can cause problems with turbulent simulations

I've discovered a weird problem where diffuse_wall_bcs_on_init can put zeros in the flowfield for the Spalart Allmaras field variable nuhat. Zero nuhat is technically an invalid flow state, and although I've tried to make the solver resistant to it, the Preconditioner in particular likes to choke if it's given nuhat=0, and the simulation crashes instantly.

This issue only appeared because of the recent change in 41140df, which removes the presteps that used to smooth out the zeros before they could cause problems.
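
For anyone hitting this with a Spalart-Allmaras case before a proper fix lands, the obvious workaround (a suggestion, not a confirmed fix) is to leave the option off in the input script:

-- Suggested workaround: avoid seeding nuhat=0 into the flowfield at initialisation.
config.diffuse_wall_bcs_on_init = false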

Boundary conditions for T_modes

Hello,
I'm having trouble applying a condition to the T_modes.

I'm using the boundary condition WallBC_NoSlip_UserDefinedT:new{Twall="BC_temp.lua"},
where BC_temp.lua contains:

function interface(args)
   elem = sampleFluidFace('u', blkId, args.i, 0, 0)  -- unstructured 2D grid
   Twall = 300
   inter = {}
   inter.T = Twall
   inter.T_modes = {Twall}
   inter.massf = elem.massf
   return inter
end

I read in other test cases that you only need to define inter.T and inter.T_modes, but without also providing the mass fractions the simulation does not start, because of NaNs in the concentrations.

Thank you for the help.

Eilmer flow data : conversion from old format to new

I have a few data sets after running cases in Eilmer using the old format. I would need to restart them and begin to collect time averages. It looks like this option only exists with the new format. So, I was wondering if there is an option to convert the data files from one format to another.

Potentially misleading warning about Extrema Clipping

Import of issue #12 from the old repository, originally reported by me.

Nick Gibbons created an issue 2022-02-11
The steady-state solver currently prints the following warning when running with config.extrema_clipping = true (the default)

WARNING:
extrema_clipping is set to true.
This is not recommended when using the steady-state solver.
Its use will likely delay or stall convergence.
Continuing with simulation anyway.
END WARNING.

I have found that turning off extrema_clipping sometimes causes problems, and I can usually get good convergence even with it on.
Should we remove this warning?

problem about "Flow of nitrogen over a cylinder of finite length"

Dear all,

I have some issues when I run the example code in the folder gdtk/examples/eilmer/3D/cylinder-troy-x2.

1. When I execute this command:
$ e4shared --prep --job=cyl
an error occurs:
(base) wxk@wxk-virtual-machine:~/gdtk/examples/eilmer/3D/cylinder-troy-x2$ e4shared --prep --job=cyl
Eilmer 4.0 compressible-flow simulation code.
Revision-id: fe3e854
Revision-date: Sat Jul 15 14:07:15 2023 +1000
Compiler-name: ldc2
Build-date: Tue 18 Jul 2023 14:19:42 CST
Build-flavour: debug
Profiling: omitted
Capabilities: multi-species-gas multi-temperature-gas MHD turbulence.
Parallelism: Shared-memory
Begin preparation stage for a simulation.
Loading prep.lua...
Done loading prep.lua
Cylinder L/D=2 in N2 at u=10000 m/s.
GasModel set nsp= 2 nmodes= 0
T= 3000.0 density= 0.002246145593667 sound speed= 1071.5671268405
M_inf= 9.3321265178087
-----a= Vector3(-0.0075, 0, 0)
There was a problem in the user-supplied input lua script: cyl.lua
globalconfig.FlowSolverException@main_with_rev_string.d(480): cannot open billig.lua: No such file or directory

main_with_rev_string.d:480 _Dmain [0x56050a77ad3a]

I think this error is related to the function call, so I modified billig.lua as follows:
[screenshot of modified billig.lua]
I also modified the call command for billig.lua in cyl.lua:
[screenshot of modified cyl.lua]

Then I successfully execute the command:
(base) wxk@wxk-virtual-machine:~/gdtk/examples/eilmer/3D/cylinder-troy-x2$ e4shared --prep --job=cyl
Eilmer 4.0 compressible-flow simulation code.
Revision-id: fe3e854
Revision-date: Sat Jul 15 14:07:15 2023 +1000
Compiler-name: ldc2
Build-date: Tue 18 Jul 2023 14:19:42 CST
Build-flavour: debug
Profiling: omitted
Capabilities: multi-species-gas multi-temperature-gas MHD turbulence.
Parallelism: Shared-memory
Begin preparation stage for a simulation.
Loading prep.lua...
Done loading prep.lua
Cylinder L/D=2 in N2 at u=10000 m/s.
GasModel set nsp= 2 nmodes= 0
T= 3000.0 density= 0.002246145593667 sound speed= 1071.5671268405
M_inf= 9.3321265178087
Points on Billig's correlation.
x= 0.01056853277522 y= 0.0
x= 0.010106030042078 y= 0.00375
x= 0.0087195860834211 y= 0.0075
x= 0.0064123814146518 y= 0.01125
x= 0.0031896765302903 y= 0.015
x= -0.00094124752002355 y= 0.01875
front of grid: d[1]= Vector3(-0.00919462, 0, 0)
max_time= 2.25e-05
Build config files for job: cyl
Done building config files.
NOTE: shock detector is on.
NOTE: shock detector is on.
Done building grid and flow files.
Done.

2. When I execute the next command, e4shared --run --job=cyl --verbosity=1 --max-cpus=4, a new error appears, as shown below.
I have tried adjusting the mesh and solver, but the issue has not been resolved; the changes only have a small impact on fabs(massf_sum - 1.0).
I don't know whether the cause of this problem is physical or numerical; it may also be related to my modification of billig.lua.

I'd really appreciate some help with this!

Thanks!
Cheers,
Xinke Wang from NSSC

(base) wxk@wxk-virtual-machine:~/gdtk/examples/eilmer/3D/cylinder-troy-x2$ e4shared --run --job=cyl --verbosity=1 --max-cpus=4
Eilmer 4.0 compressible-flow simulation code.
Revision-id: fe3e854
Revision-date: Sat Jul 15 14:07:15 2023 +1000
Compiler-name: ldc2
Build-date: Tue 18 Jul 2023 14:19:42 CST
Build-flavour: debug
Profiling: omitted
Capabilities: multi-species-gas multi-temperature-gas MHD turbulence.
Parallelism: Shared-memory
Begin simulation with command-line arguments.
jobName: cyl
tindxStart: 0
maxWallClock: 432000
verbosityLevel: 1
maxCPUs: 4 for shared memory-parallelism
Begin init_simulation()...
NOTE: shock detector is on.
Single process running with 4 threads.
Heap memory used: 532 MB, unused: 119 MB, total: 651 MB (651-651 MB per task)
Done init_simulation() at wall-clock(WC)= 9.4 sec
starting simulation time= 0
Integrate in time.
Exception thrown in phase 05a of stage 1 of explicit update: Sum of species mass fractions far from 1.0: fabs(massf_sum - 1.0) = 0.118622
assert_error_tolerance = 0.1
tolerance = 0
massf = [ 1.118622e+00 0.000000e+00]

Exception thrown in phase 05a of stage 1 of explicit update: Sum of species mass fractions far from 1.0: fabs(massf_sum - 1.0) = 0.118622
assert_error_tolerance = 0.1
tolerance = 0
massf = [ 1.118622e+00 0.000000e+00]

Exception thrown in phase 05a of stage 1 of explicit update: Sum of species mass fractions far from 1.0: fabs(massf_sum - 1.0) = 0.118622
assert_error_tolerance = 0.1
tolerance = 0
massf = [ 1.118622e+00 0.000000e+00]

Exception caught while trying to take step 1.
----- Begin exception message -----
globalconfig.FlowSolverException@simcore_gasdynamic_step.d(1542): Explicit update failed after 3 attempts; giving up.

simcore_gasdynamic_step.d:1542 void simcore_gasdynamic_step.gasdynamic_explicit_increment_with_fixed_grid() [0x561c679b8b2e]
simcore.d:1068 int simcore.integrate_in_time(double) [0x561c679cfe3f]
main_with_rev_string.d:747 _Dmain [0x561c679d7a40]
----- End exception message -----
Integration stopped: An exception was caught during the time step.
Done integrate_in_time().
Note that integrate_in_time failed.
Finalize the simulation.
Write flow solution.
Step= 1 final-t= 1e-10
Done.

Prepartitioned grid import for massively parallel job - very slow time integration

Hello,

We have been testing Eilmer's capabilities in a step-by-step manner, to assess its suitability for our requirements.
We have previously conducted 2D supersonic internal flow simulations using Eilmer successfully. The grid as well as block partitioning was done within Eilmer for the simple geometry.

Now, we have created a block-structured grid for a 3D scramjet geometry. The grid size is approximately 150 million cells, divided into approximately 3200 blocks. The grid is generated and also partitioned into blocks externally using GridPro. All blocks have an almost uniform number of grid points (~50,000 cells each) to ensure uniform load on all processors.

Grid and connectivity information is exported out of GridPro in the SU2 format, and the grid is then imported into Eilmer using the grids = importGridproGrid(gproGrid, scale) function.

While the simulation begins and runs successfully, the progress is prohibitively slow. We were able to run barely 600 time steps in 4 hours of runtime, on 3192 cores!

The code output seems to suggest that MPI threads have been initialized. So I wonder why the progress is so slow. Has the code been used for massively parallel jobs before? My previous experiences have been using Eilmer's internal geometry creation and grid partitioning algorithms. I don't know if I am missing something when I import the grid and block connectivity information from an external source.

I have attached my input deck and the solver on-screen output here for reference.

Thanks
config_details.zip

Parallelism

gdtk/eilmer - I have requested more than 2 nodes worth of processors to run a case, but it seems as though in each run gdtk only uses 2 nodes worth of CPUs. For instance, I have requested 6 nodes, each with 40 cores, resulting in 240 cores total, but Eilmer only subscribes 80.

What is causing this? I have had the same issue on multiple clusters. I use something similar to the first PBS example on the gdtk website to set up the run.

Missing data from `.vtu` files from `lmr`

Description

When opening an exported .pvtu file from any of the lmr examples, Paraview will throw the error "Piece cell data array vel.vector not found".

( 681.898s) [paraview        ]  vtkXMLPDataReader.cxx:377    ERR| vtkXMLPUnstructuredGridReader (0x1ede0260): Piece cell data array vel.vector not found
( 681.916s) [paraview        ]  vtkXMLPDataReader.cxx:377    ERR| vtkXMLPUnstructuredGridReader (0x1ede0260): Piece cell data array vel.vector not found

The source of this error is that lmr snapshot2vtk currently writes

  <PDataArray Name="vel.vector" type="Float32" NumberOfComponents="3"/>

to the header of the .pvtu files; however, it does not write any data with this name to the corresponding .vtu files.

Recreation Steps

  1. Build the most recent version of lmr using make install
  2. Run one of the lmr 2D examples (I used sharp-cone-20-degrees/sg-minimal)
  3. Run lmr snapshot2vtk post processing
  4. Open the exported flowXXXX.pvtu file using Paraview

Details

OS: Pop!_OS 22.04
Paraview: version 5.11.1
lmr Revision-id: a797383
lmr Compiler-name: ldc2

High Memory usage in simulation

When running e4mpi, I have very large memory issues that arise, causing seg-fault failures. I am looking for reasons why this may be occurring. The case I am running is a simple 2D case with 315,000 cells, which is by no means a large grid. The RAM usage on this case is nevertheless totaling ~944 GB, which is significant even for large-memory compute nodes.

This seems very odd; can someone assist with debugging?

Thank you.

L1D piston brake completely prevents recoil

Hi all,

I am simulating a shock tunnel (HEG) using L1D. For this specific condition the piston used is equipped with brakes that are applied automatically when the piston is moving backwards.

I have experimental data showing the piston does have some recoil even with the brakes equipped.

However, in the L1D simulation, the piston is stopped completely at the moment of the would-be recoil. Changing the brakes friction force has no effect on this behavior.

I'd really appreciate some help with this!

Thanks!
Cheers,
Balazs from DLR Göttingen

InFlowBC_FromStagnation() with Steady-State Solver

Import of issue #8 from the old repository, originally reported by Fabian Zander:

Fabian Zander created an issue 2021-11-25
Simulations utilising the steady state solver with the inflow from stagnation b.c. don’t appear to be running (for me or my test case - available on request)

Import mesh issue

Hello,

I'm using Gmsh to generate a mesh and then importing it into Eilmer 4, but it's giving me the following error:
[screenshot of error]
The mesh file and input file are here:
mesh and input file.zip

I'd appreciate it if someone could point me in the right direction.
Thanks,
Jianshu Wu

Error running across multiple nodes

Hello,

I ran into this issue while running cases on Notre Dame's HPC system. I can run cases just fine when only running on one node however when running across multiple nodes e4mpi keeps outputting errors. Attached is the output file with the errors I am getting.

Notre Dame support, after looking at the problem, says this is not an issue with their system. I recompiled but that didn't seem to resolve the issue.

ZL

Dist.o293344.txt

Newton-Krylov Solver attempts to run with divergence gradient calculation but shouldn't.

Import of issue #9 from the old repository, originally reported by Lachlan Whyborn.

Lachlan Whyborn created an issue 2021-12-04

The Newton-Krylov solver requires the least-squares gradient calculation for the construction of the numerical Jacobian, but will still attempt to run if ‘divergence’ is selected for the spatial_deriv_calc. It should probably throw a more descriptive error during initialisation rather than an uninitialised least-squares workspace error.
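
Until such a guard exists, the safe choice for Newton-Krylov runs is to select the least-squares gradients explicitly in the input script (option name as used above; the exact accepted string values are my assumption):

-- The Newton-Krylov solver needs least-squares gradients for its numerical Jacobian.
config.spatial_deriv_calc = "least_squares"  -- not "divergence"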

Megathread: Ghost cell free wall boundary issues

Import of issue #9 from the old repository, originally reported by me.

Nick Gibbons created an issue 2021-12-03

Commit b802b66 swapped to the experimental “type 1” versions of the wall boundary conditions: WallBC_WithSlip, WallBC_NoSlipFixedT, and WallBC_NoSlip_Adiabatic, which do not use ghost cells to implement wall behaviour. This thread is for collecting specific simulations where the ghost-cell free treatment seems to cause problems.

install problems

Dear all,

I have an issue when I install the code on Ubuntu.
I've installed ldc and configured environment variables in .bashrc.
But when I run make install in gdtk/src/eilmer, the following error occurs. I tried it on another computer but got the same error.

I'd really appreciate some help with this!

Thanks!
Cheers,
Xinke Wang from NSSC

~/gdtk/src/eilmer$ make install
sed -e 's/PUT_REVISION_STRING_HERE/fe3e854e/'
-e 's/PUT_REVISION_DATE_HERE/Sat Jul 15 14:07:15 2023 +1000/'
-e 's/PUT_COMPILER_NAME_HERE/ldc2/'
-e 's/PUT_BUILD_DATE_HERE/Mon 17 Jul 2023 11:24:18 CST/'
main.d > main_with_rev_string.d
ldc2 -w -g -link-defaultlib-debug -L-export-dynamic -d-debug -d-version=flavour_debug -dip1008 -preview=in -I.. -I../nm -I../util -I../geom -I../grid_utils -I../extern/gzip -d-version=multi_species_gas -d-version=multi_T_gas -d-version=MHD -d-version=turbulence -of=e4shared
-d-version=tecplot_unavailable
main_with_rev_string.d
simcore.d simcore_gasdynamic_step.d simcore_solid_step.d simcore_exchange.d simcore_io.d fileutil.d fvcell.d fvvertex.d fvinterface.d fluxcalc.d onedinterp.d flowgradients.d lsqinterp.d conservedquantities.d flowstate.d globalconfig.d globaldata.d block.d fluidblock.d sfluidblock.d ufluidblock.d fluidblockarray.d fluidblockio_old.d fluidblockio_new.d gas_solid_interface.d grid_motion.d grid_motion_udf.d grid_motion_shock_fitting.d history.d loads.d special_block_init.d mass_diffusion.d shockdetectors.d turbulence.d jacobian.d fluidblockio.d celldata.d lua_helper.d luaflowsolution.d luaflowstate.d user_defined_source_terms.d postprocess.d flowsolution.d vtk_writer.d tecplot_writer_classic.d tecplot_writer.d
bc/package.d bc/boundary_condition.d bc/ghost_cell_effect/package.d bc/ghost_cell_effect/ghost_cell.d bc/ghost_cell_effect/internal_copy_then_reflect.d bc/ghost_cell_effect/flow_state_copy.d bc/ghost_cell_effect/flow_state_copy_from_profile.d bc/ghost_cell_effect/flow_state_copy_from_history.d bc/ghost_cell_effect/synthesise_flow_state.d bc/ghost_cell_effect/extrapolate_copy.d bc/ghost_cell_effect/from_upwind.d bc/ghost_cell_effect/fixed_p.d bc/ghost_cell_effect/fixed_pt.d bc/ghost_cell_effect/from_stagnation.d bc/ghost_cell_effect/full_face_copy.d bc/ghost_cell_effect/mapped_cell_copy.d bc/ghost_cell_effect/gas_solid_full_face_copy.d bc/user_defined_effects.d bc/boundary_flux_effect.d bc/boundary_cell_effect.d bc/boundary_interface_effect.d solid/solidbc.d solid/solidblock.d solid/solid_ghost_cell.d solid/solid_boundary_flux_effect.d solid/solid_boundary_interface_effect.d solid/solid_gas_full_face_copy.d solid/solid_full_face_copy.d solid/ssolidblock.d solid/solidfvcell.d solid/solidfvinterface.d solid/solidfvvertex.d solid/solidprops.d solid/solidsolution.d solid/solid_udf_source_terms.d solid/luasolidprops.d field/field.d field/fieldgmres.d field/fieldconductivity.d field/fieldexchange.d field/fieldderivatives.d field/fieldbc.d ../geom/package.d ../geom/geometry_exception.d ../geom/elements/package.d ../geom/elements/nomenclature.d ../geom/elements/vector3.d ../geom/elements/properties.d ../geom/elements/projection.d ../geom/gpath/package.d ../geom/gpath/path.d ../geom/gpath/line.d ../geom/gpath/arc.d ../geom/gpath/helix.d ../geom/gpath/bezier.d ../geom/gpath/nurbs.d ../geom/gpath/polynomial.d ../geom/gpath/polyline.d ../geom/gpath/xspline.d ../geom/gpath/xsplinelsq.d ../geom/gpath/svgpath.d ../geom/gpath/modifiedpath.d ../geom/gpath/gpath_utils.d ../geom/surface/package.d ../geom/surface/parametricsurface.d ../geom/surface/coonspatch.d ../geom/surface/aopatch.d ../geom/surface/gmopatch.d ../geom/surface/channelpatch.d ../geom/surface/ruledsurface.d ../geom/surface/nozzleexpansionpatch.d ../geom/surface/sweptpathpatch.d ../geom/surface/meshpatch.d ../geom/surface/subrangedsurface.d ../geom/surface/bezierpatch.d ../geom/surface/beziertrianglepatch.d ../geom/surface/nurbssurface.d ../geom/surface/spherepatch.d ../geom/surface/cubepatch.d ../geom/surface/controlpointpatch.d ../geom/volume/package.d ../geom/volume/parametricvolume.d ../geom/volume/tfivolume.d ../geom/volume/slabvolume.d ../geom/volume/wedgevolume.d ../geom/volume/sweptsurfacevolume.d ../geom/volume/twosurfacevolume.d ../geom/volume/meshvolume.d ../geom/volume/subrangedvolume.d ../geom/volume/nurbsvolume.d ../geom/grid/package.d ../geom/grid/grid.d ../geom/grid/sgrid.d ../geom/grid/usgrid.d ../geom/grid/paver.d ../geom/grid/paver2d.d ../geom/misc/package.d ../geom/misc/univariatefunctions.d ../geom/misc/svg.d ../geom/misc/kdtree.d ../geom/misc/sketch.d ../geom/misc/nurbs_utils.d ../geom/luawrap/package.d ../geom/luawrap/luaunifunction.d ../geom/luawrap/luageom.d ../geom/luawrap/luanomenclature.d ../geom/luawrap/luagpath.d ../geom/luawrap/luagpath_utils.d ../geom/luawrap/luasurface.d ../geom/luawrap/luavolume.d ../geom/luawrap/luagrid.d ../geom/luawrap/luasgrid.d ../geom/luawrap/luausgrid.d ../geom/luawrap/luasketch.d ../grid_utils/grid_deform.d
../gas/package.d ../gas/composite_gas.d ../gas/gas_model.d ../gas/gas_state.d ../gas/init_gas_model.d ../gas/ideal_gas.d ../gas/ideal_helium.d ../gas/cubic_gas.d ../gas/cea_gas.d ../gas/physical_constants.d ../gas/therm_perf_gas.d ../gas/therm_perf_gas_equil.d ../gas/very_viscous_air.d ../gas/uniform_lut.d ../gas/uniform_lut_plus_ideal.d ../gas/adaptive_lut_CEA.d ../gas/ideal_air_proxy.d ../gas/ideal_gas_ab.d ../gas/two_temperature_reacting_argon.d ../gas/two_temperature_argon_plus_ideal.d ../gas/ideal_dissociating_gas.d ../gas/two_temperature_air.d ../gas/two_temperature_nitrogen.d ../gas/two_temperature_dissociating_nitrogen.d ../gas/vib_specific_nitrogen.d ../gas/vib_specific_co.d ../gas/fuel_air_mix.d ../gas/equilibrium_gas.d ../gas/pseudo_species_gas.d ../gas/pseudo_species.d ../gas/electronically_specific_gas.d ../gas/electronic_species.d ../gas/two_temperature_gasgiant.d ../gas/thermo/cea_thermo_curves.d ../gas/thermo/evt_eos.d ../gas/thermo/perf_gas_mix_eos.d ../gas/thermo/pvt_eos.d ../gas/thermo/therm_perf_gas_mix_eos.d ../gas/thermo/thermo_model.d ../gas/thermo/therm_perf_gas_mix.d ../gas/thermo/two_temperature_gas.d ../gas/thermo/three_temperature_gas.d ../gas/thermo/multi_temperature_gas.d ../gas/thermo/energy_modes.d ../gas/diffusion/cea_therm_cond.d ../gas/diffusion/cea_viscosity.d ../gas/diffusion/chemkin_therm_cond.d ../gas/diffusion/chemkin_viscosity.d ../gas/diffusion/gas_mixtures.d ../gas/diffusion/sutherland_therm_cond.d ../gas/diffusion/sutherland_viscosity.d ../gas/diffusion/therm_cond.d ../gas/diffusion/transport_properties_model.d ../gas/diffusion/two_temperature_trans_props.d ../gas/diffusion/multi_temperature_trans_props.d ../gas/diffusion/three_temperature_trans_props.d ../gas/diffusion/viscosity.d ../gas/diffusion/wilke_mixing_therm_cond.d ../gas/diffusion/wilke_mixing_viscosity.d ../gas/diffusion/gasgiant_transport_properties.d ../gas/diffusion/binary_diffusion_coefficients.d ../gas/diffusion/rps_diffusion_coefficients.d ../extern/ceq/source/ceq.d ../extern/gzip/gzip.d
../util/lua.d ../util/json_helper.d ../util/lua_service.d ../util/msg_service.d ../util/time_utils.d ../util/zip.d ../nm/package.d ../nm/nm_exception.d ../nm/number.d ../nm/complex.d ../nm/bbla.d ../nm/bdfLU.d ../nm/bracketing.d ../nm/brent.d ../nm/secant.d ../nm/gaussquad.d ../nm/linesearch.d ../nm/nelmin.d ../nm/newton.d ../nm/newtoncotes.d ../nm/ridder.d ../nm/rungekutta.d ../nm/rsla.d ../nm/schedule.d ../nm/smla.d ../nm/stmatrix.d ../nm/tree_patch.d ../nm/univariate_lut.d ../nm/limiters.d ../nm/spline.d ../nm/splinelsq.d
../kinetics/package.d ../kinetics/thermochemical_reactor.d ../kinetics/init_thermochemical_reactor.d ../kinetics/chemistry_update.d ../kinetics/energy_exchange_mechanism.d ../kinetics/energy_exchange_system.d ../kinetics/equilibrium_update.d ../kinetics/electronic_update.d ../kinetics/electronically_specific_kinetics.d ../kinetics/ideal_dissociating_gas_kinetics.d ../kinetics/fuel_air_mix_kinetics.d ../kinetics/powers_aslam_kinetics.d ../kinetics/yee_kotov_kinetics.d ../kinetics/rate_constant.d ../kinetics/reaction.d ../kinetics/reaction_mechanism.d ../kinetics/relaxation_time.d ../kinetics/exchange_cross_section.d ../kinetics/exchange_chemistry_coupling.d ../kinetics/multi_temperature_thermochemical_reactor.d ../kinetics/two_temperature_air_kinetics.d ../kinetics/two_temperature_argon_kinetics.d ../kinetics/two_temperature_argon_with_ideal_gas.d ../kinetics/two_temperature_nitrogen_kinetics.d ../kinetics/two_temperature_dissociating_nitrogen_kinetics.d ../kinetics/vib_specific_nitrogen_kinetics.d ../kinetics/vib_specific_co_kinetics.d ../kinetics/two_temperature_gasgiant_kinetics.d ../gas/luagas_model.d ../kinetics/luathermochemical_reactor.d ../kinetics/luachemistry_update.d ../kinetics/luaequilibrium_calculator.d ../kinetics/luaelectronically_specific_kinetics.d ../kinetics/luareaction_mechanism.d ../kinetics/luatwo_temperature_air_kinetics.d ../kinetics/luavib_specific_nitrogen_kinetics.d
../gasdyn/gasflowexception.d ../gasdyn/idealgasflow.d ../gasdyn/gasflow.d ../gasdyn/luaidealgasflow.d ../gasdyn/luagasflow.d ../nm/luabbla.d
../extern/ceq/source/libceq.a ../gas/libgasf.a ../../extern/lua-5.4.3/install/lib/liblua.a
-L-ldl
Error: Preview in is invalid
make: *** [makefile:592: e4shared] Error 1

Gridding and Spline Issue

Hello all,

I was trying to set up a grid and ran into this gridding issue when using the spline2 function.

[image: grid structure showing the spline looping back on itself]

It looks like the generated spline loops back on itself, causing the gridding to mess up. When using the same spline control points in MATLAB, which if I understand correctly uses the same spline-generation algorithm, the generated spline did not have this issue.

I got around the issue by generating the spline in MATLAB and grabbing more points to force the correct spline using the spline2 function. This might have been just an issue with my control points, but if it is a bug, I just wanted you to be aware. I've attached the original control points.

Nozzle_Contour_Points.txt

Typo in User Guide, related to modal energy source terms

Alexis Lefevre has discovered that the User Guide section on user defined source terms does not match the code.

Specifically the guide tells users to add the following keys to the table that gets passed upstairs to the D code:

  • renergies table of nmodes energy fluxes.
  • romega rate of addition of ω for the k − ω turbulence model
  • rtke rate of addition of turbulent kinetic energy

In getUDFSourceTermsForCell in user_defined_source_terms.d, however, the D code looks for keys without the "r" prefix. Commit 40b9c0c4046baaed4421512bcfdb13e90839a130 to the gdtk-docs repo changes the text to the following:

  • energies table of nmodes energy fluxes.
  • omega rate of addition of ω for the k − ω turbulence model
  • tke rate of addition of turbulent kinetic energy
  • nuhat rate of addition of Spalart-Allmaras field variable
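
Putting the corrected keys together, a user-defined source-terms file would look something like the sketch below; the sourceTerms signature follows the pattern in the user guide, and any field not in the list above is an assumption:

-- udf-source-terms.lua: sketch using the corrected (un-prefixed) keys.
function sourceTerms(t, cell)
   local src = {}
   src.mass = 0.0
   src.momentum_x = 0.0
   src.momentum_y = 0.0
   src.total_energy = 0.0
   src.energies = {0.0}  -- table of nmodes energy source terms
   src.tke = 0.0         -- rate of addition of turbulent kinetic energy
   src.omega = 0.0       -- rate of addition of omega for the k-omega model
   src.nuhat = 0.0       -- rate of addition of the Spalart-Allmaras variable
   return src
end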

Import su2 mesh issue

Hi again,

Sorry for this, but I did not expect this to come up and I cannot find how to fix it.
With the import file:
[screenshot of import commands]

but:
[screenshot of resulting error]

And here is the mesh and input file:
mesh_and_input.zip

And I have a question: can Eilmer4 import an su2 mesh containing a structured mesh?

Thanks very much for everything
Jianshu Wu

Parallel Scaling test case for steady state NK-DIST exits with error

I was trying out the parallel scaling test case given in gdtk/examples/eilmer/3D/parallel-scaling/nk/ by Nick Gibbons (@uqngibbo) to benchmark Eilmer on our local cluster. I did read the README.rst file in the parallel-scaling folder, which says the following...

To run with a larger number of cores, perhaps for a supercomputer, you may want to try editing the gengrid.lua file to add more cells to the problem. You should be able to do this using the N_REFINE parameter, although you may also have to change a_factor on line 165 to keep the cluster functions happy.

So, I changed N_REFINE from 0.5 to 2.0 to get a denser grid, and I set number_of_processors=32,64,128,256 in the parameters.txt file. For the case of -np 32, as given in the following run command:

mpirun -np 32 e4-nk-dist --job=bc --verbosity=1 > LOGFILE 2> ERRFILE

the case gives the following output in LOGFILE (attached below):
LOGFILE.txt

I don't understand why it is crashing for -np 32, because the same case runs fine for -np 64, 128, and 256.

I have also tried changing a_factor from 0.005 to 0.001 as mentioned in the README.rst but wasn't successful.

Simulating Shock Tunnel with 2-D Contour Nozzle (not conical nozzle) in L1d3

Hi,

I am doing experimental research on shock tube and shock tunnel systems. To get an estimate of the expected results, I want to simulate the flow in our shock tunnel, which uses a 2-D contour nozzle (see images attached). The nozzle includes a convergent section, throat, divergent section and then a constant-area section. The 2-D drawing shows these parts with their dimensions (in cm). The 3-D drawing illustrates the divergent part and throat, and the 3-D assembly drawing shows the nozzle configuration, which will be fitted behind the shock tube to make it a shock tunnel. The area ratio (exit area/throat area) of this contour nozzle is equal to 0.08/0.012 = 6.667. The throat height is 0.012 meters, the exit height is 0.08 meters, and the thickness of the nozzle (perpendicular to the plane of the paper) is 0.04 meters. One thing to note is that the convergent section is a circular arc, hence it can be treated as a conical convergent section. So that was the basic information about the nozzle.

Now, the question is: how can I simulate it in L1d3? To the best of my knowledge, when we add two breakpoints in L1d, it uses linear interpolation to join them. This method works for a 3-D conical nozzle because, in the 2-D projection, the throat and the exit appear to be joined by a straight line; also, the throat and exit of a conical nozzle are actually circles, so they can be completely defined by the diameter value given in the add_breakpoint command. However, this is not the case for a 2-D contour nozzle. Its geometry is different in the 2-D plane, and the throat and the exit are not joined by a straight line. Also, the throat and exit are rectangular in shape, so if we give their height values in place of a diameter in the add_breakpoint command, the area ratio will come out completely different (L1d would treat them as circular sections with those height values as diameters). So, how can I simulate this 2-D contour nozzle in L1d? The most important requirement is that the area ratio should not change while defining the geometry.

Looking forward to suggestions.

Attached drawings: 2d, 3d_nozzle, nozzle_assembly
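
One possible workaround (a suggestion on my part, not a documented L1d feature) is to give add_breakpoint equivalent-area diameters, D = sqrt(4A/pi), computed from the rectangular cross-sections; this preserves the area ratio exactly, which is the stated requirement. A quick Lua check:

-- Equivalent-area diameters for the rectangular throat and exit.
local w = 0.04                            -- nozzle depth, m
local A_throat, A_exit = 0.012*w, 0.08*w  -- rectangular areas, m^2
local function eqD(A) return math.sqrt(4*A/math.pi) end
print(string.format("D_throat=%.5f m  D_exit=%.5f m  area ratio=%.3f",
                    eqD(A_throat), eqD(A_exit), A_exit/A_throat))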

compilation problems

Dear folks,

we have an issue compiling the code on Ubuntu 20.04.
We are using the gcc 9.3 compiler and receive the error below. We tried compiling on several machines; the issue is always the same. We would appreciate some help.

Thanks,
Cheers,
Alexander from DLR Göttingen

"
.../gas/therm_perf_gas.d(268): Error: @nogc function gas.therm_perf_gas.ThermallyPerfectGas.update_thermo_from_ps cannot call non-@nogc function gas.therm_perf_gas.ThermallyPerfectGas.update_thermo_from_ps.bracket!(zeroFun, double).bracket
../gas/therm_perf_gas.d(319): Error: @nogc function gas.therm_perf_gas.ThermallyPerfectGas.update_thermo_from_hs cannot call non-@nogc function gas.therm_perf_gas.ThermallyPerfectGas.update_thermo_from_hs.bracket!(zeroFun, double).bracket
../gas/therm_perf_gas.d(363): Error: @nogc function gas.therm_perf_gas.ThermallyPerfectGas.update_thermo_from_hs cannot call non-@nogc function gas.therm_perf_gas.ThermallyPerfectGas.update_thermo_from_hs.bracket!(zeroFun2, double).bracket
../gas/uniform_lut_plus_ideal.d(96): Error: @nogc function gas.uniform_lut_plus_ideal.UniformLUTPlusIdealGas.update_thermo_from_pT cannot call non-@nogc function gas.uniform_lut_plus_ideal.UniformLUTPlusIdealGas.update_thermo_from_pT.bracket!(p_error, double).bracket
../gas/uniform_lut_plus_ideal.d(142): Error: @nogc function gas.uniform_lut_plus_ideal.UniformLUTPlusIdealGas.update_thermo_from_rhou cannot call non-@nogc function gas.uniform_lut_plus_ideal.UniformLUTPlusIdealGas.update_thermo_from_rhou.bracket!(u_error, double).bracket
../gas/uniform_lut_plus_ideal.d(214): Error: @nogc function gas.uniform_lut_plus_ideal.UniformLUTPlusIdealGas.update_thermo_from_rhop cannot call non-@nogc function gas.uniform_lut_plus_ideal.UniformLUTPlusIdealGas.update_thermo_from_rhop.bracket!(p_error, double).bracket
../gas/two_temperature_argon_plus_ideal.d(116): Error: @nogc function gas.two_temperature_argon_plus_ideal.TwoTemperatureArgonPlusIdealGas.update_thermo_from_pT cannot call non-@nogc function gas.two_temperature_argon_plus_ideal.TwoTemperatureArgonPlusIdealGas.update_thermo_from_pT.bracket!(p_error, double).bracket
../gas/two_temperature_argon_plus_ideal.d(175): Error: @nogc function gas.two_temperature_argon_plus_ideal.TwoTemperatureArgonPlusIdealGas.update_thermo_from_rhou cannot call non-@nogc function gas.two_temperature_argon_plus_ideal.TwoTemperatureArgonPlusIdealGas.update_thermo_from_rhou.bracket!(u_error, double).bracket
../gas/two_temperature_argon_plus_ideal.d(273): Error: @nogc function gas.two_temperature_argon_plus_ideal.TwoTemperatureArgonPlusIdealGas.update_thermo_from_rhop cannot call non-@nogc function gas.two_temperature_argon_plus_ideal.TwoTemperatureArgonPlusIdealGas.update_thermo_from_rhop.bracket!(p_error, double).bracket
../gas/thermo/therm_perf_gas_mix.d(89): Error: @nogc function gas.thermo.therm_perf_gas_mix.ThermPerfGasMixture.updateFromPS cannot call non-@nogc function gas.thermo.therm_perf_gas_mix.ThermPerfGasMixture.updateFromPS.bracket!(zeroFun, double).bracket
../gas/thermo/therm_perf_gas_mix.d(142): Error: @nogc function gas.thermo.therm_perf_gas_mix.ThermPerfGasMixture.updateFromHS cannot call non-@nogc function gas.thermo.therm_perf_gas_mix.ThermPerfGasMixture.updateFromHS.bracket!(zeroFun, double).bracket
../gas/thermo/therm_perf_gas_mix.d(186): Error: @nogc function gas.thermo.therm_perf_gas_mix.ThermPerfGasMixture.updateFromHS cannot call non-@nogc function gas.thermo.therm_perf_gas_mix.ThermPerfGasMixture.updateFromHS.bracket!(zeroFun2, double).bracket
../nm/number.d(24): Error: undefined identifier isClose
../nm/number.d(25): Error: undefined identifier isClose
../nm/number.d(33): Error: undefined identifier isClose
../nm/number.d(34): Error: undefined identifier isClose
../nm/number.d(43): Error: undefined identifier isClose
../nm/number.d(51): Error: undefined identifier isClose
make: *** [makefile:541: e4shared] Error 1
"

Turbulence models

Is the Menter SST turbulence model available for use? Is it part of the config.turbulence=k-omega model, but invoked in a specific way? What I mean is: is the model presented in Menter, F. R., "Two-Equation Eddy-Viscosity Turbulence Models for Engineering Applications," available?

It appears that the model based on "Improved Two-Equation k-omega Turbulence Models for Aerodynamic Flows", F. R. Menter, NASA TM 103975 (1992) might be available; could someone provide an example of how it might be invoked?

header lines not written when config.write_loads is false

Volker Hanneman from DLR has found that running the steady-state solver with:

config.write_loads = false
write_loads_count = 100

produces a malformed "times" file in the loads directory, which breaks some scripts for reading loads. Specifically, this occurs because the header line is written during init_simulation, which honours the write_loads flag, but the loads files are written in steadystate_core, which does not.

Question on staged prep option

I am wondering how to invoke the staged prep options properly in Eilmer, as they would be extremely beneficial for my current runs. I have used the example in sharp-cone-20-degrees, specifically the usg-su2 staged-prep case, but I have a few challenges when extending this to my case: I encounter a mapped-cell file issue.

So in the present case, I have the ugrid-partitioned su2 files, and I use the prep-grid and prep-flow format, but I encounter this issue:

"globalconfig.FlowSolverException@bc/ghost_cell_effect/mapped_cell_copy.dglobalconfig.FlowSolverException@bc/ghost_cell_effect/mapped_cell_copy.d(233): Did not find mapped blocks section for block id=175"

Is this because of a setting I need to adjust? I have noticed that when I run the e4shared --prep-grid --job="" command, it outputs the import values and at the end states:

#connections: 0
#grids: 1280
#gridArrays: 0

So it appears that maybe it is not identifying the connections. The grid is a Pointwise-generated su2 grid, not a GridPro grid; would that cause this?

High RAM usage during solution on a moderate mesh

Dear Eilmer Team,

I have been learning Eilmer recently on a relatively simple case. I am only interested in the steady-state behaviour, which is why I am running my case with backward_Euler. My current mesh is 10M cells (imported from a .su2 file) and the RAM usage while solving on 30 cores is 124 GB. Is there a way I can decrease the RAM usage? 124 GB is manageable at this stage, but it means we might run into trouble on a significant number of our boxes if we go to 20 or 30M cells.

Regards,
Hilbert

Turbulence and non-equilibrium

I have tried using a turbulence model with a non-equilibrium (5-species air) case. In doing so, I receive an error that the mass fractions do not sum to 1. Is this a solvable error, or is it inherent in this type of calculation? The error, exactly as reported, is:

object.Error@(0): The given mass fraction values do not sum to 1.0.
The sum value is: 0.000000e+00
The error in the sum is larger than tolerance= 1.000000e-06

Any thoughts on what we could try here?

Nightmare bug in complex pow when given negative exponent.

The pow overload in complex.d currently uses a for loop when called with an integer exponent, which causes it to return immediately, with the wrong result, when given a negative exponent:

number x = 10.0;
number T = pow(x, -2);
writefln("T %e", T);
// prints 1.000000e+01, but should be 1.000000e-02

This was causing all kinds of trouble in the old 2T argon model, and might be a problem elsewhere.

RMS quantities on Eilmer

After the code has run and the post-processing step has completed writing out the data, I do not see any RMS quantities in the .vtk file. I did not find any option to add these as extra variables using the "add-vars" command either. A quick browse through the source code seems to suggest that RMS quantities are not being tracked. Am I correct? Or is there a way to make the code write out RMS quantities, such as u', v', w', T', P'?

Physicality Check Allows Sketchy Temperatures

Stage two of the JFNK's physicality check reduces the relaxation factor until the decode call successfully recovers the primitive variables with no exceptions being thrown. In a couple of simulations, I have noticed a pattern where this sometimes causes problems: specifically, the physicality check can let through extremely low (single-digit) temperatures, which are technically "physical" but cause the simulation to crash on subsequent steps.

It may be that we need a relaxation factor keyed off the change in temperature that behaves similarly to the density one.

Creating 3D blocks using the TFI constructor fails with open corner errors

Hans Hornung has found a regression where 3D blocks created in Eilmer fail to link up their constituent surfaces properly.

object.Error@(0): TFIVolume open at corner 0 p= Vector3(0, 0, -0.0099) p_alt1= Vector3(0, 0.0108566, -0.0893) p_alt2= Vector3(0, 0, -0.0893)

This seems to have something to do with commit a8a91f6, which changed the ordering of the geometry nomenclature.

High memory usage during prep of large grids

Kyle and I have observed the prep program using >120 GB of RAM when processing his 32M-cell BOLT-II grid, and failing due to memory limits on some machines. This seems like it could be improved with some attention to how the heap is used during prep.

LDC compiler gives garbage instead of stack trace when exceptions are thrown

By default the LDC compiler doesn't include the symbols needed for a nice stack trace, even when compiling with -g

Example:

core.exception.RangeError@sfluidblock.d(2689): Range violation
----------------
??:? [0x5567cb998a25]
??:? [0x5567cb9c257e]
??:? [0x5567cb9a3ddd]
??:? [0x5567cb9980f8]
??:? [0x5567cb9987e7]
??:? [0x5567cb712f8e]
??:? [0x5567cb7169e0]
??:? [0x5567cb914532]
??:? [0x5567cb7bac66]
??:? [0x5567cb7967ae]
??:? [0x5567cb735cc5]
??:? [0x5567cb901fde]
??:? [0x5567cb92617f]
??:? [0x5567cb9a3a5f]
??:? [0x5567cb9a3952]
??:? [0x5567cb9a37ad]
??:? [0x5567cb926684]
??:? __libc_start_main [0x7f829e299082]
??:? [0x5567cb25c5ad]

This LDC issue (ldc-developers/ldc#863) explains that adding -L-export-dynamic to the compile flags restores the expected output, which it does:

core.exception.RangeError@sfluidblock.d(2689): Range violation
----------------
??:? @nogc void sfluidblock.SFluidBlock.first_order_flux_calc(ulong) [0x5604b1957b53]
??:? @nogc void sfluidblock.SFluidBlock.convective_flux_phase0(bool, ulong) [0x5604b195bbe0]
??:? int steadystate_core.evalRHS(double, int).__foreachbody5(ref fluidblock.FluidBlock) [0x5604b1bce812]
??:? int std.parallelism.doSizeZeroCase!(fluidblock.FluidBlock[], int delegate(ref fluidblock.FluidBlock)).doSizeZeroCase(ref std.parallelism.ParallelForeach!(fluidblock.FluidBlock[]).ParallelForeach, int delegate(ref fluidblock.FluidBlock)) [0x5604b1a36ba6]
??:? int std.parallelism.ParallelForeach!(fluidblock.FluidBlock[]).ParallelForeach.opApply(scope int delegate(ref fluidblock.FluidBlock)) [0x5604b19ec69e]
??:? void steadystate_core.evalRHS(double, int) [0x5604b197b015]
??:? void steadystate_core.iterate_to_steady_state(int, int, int, bool, bool) [0x5604b1bbc15e]
??:? _Dmain [0x5604b1be2a4f]

The stated reason for this is that the option increases the size of the executable (it's off by default), but I have found that it only changes the size of e4-nk-shared from 18 MB to 25 MB. If there's no performance penalty, I propose including -L-export-dynamic in the debug flavour of the code.

Extract Surfaces Via Group Label

I was wondering whether there is any simple way to extract surfaces via a group label or similar during post-processing?

For larger sims I'm having to do things similar to the below, which gets quite tedious.

--surface-list="0,5;3,5;6,5;9,5;12,5;15,5;18,5;21,5;24,5;81,5;84,5;87,5;90,5;93,5;96,5;99,5;102,5;105,5;108,5;111,5;114,5 ;117,5;120,5;123,5;126,5;129,5;132,5;29,4;32,4;35,4;38,4;41,4;44,4;47,4;50,4;53,4;56,4;59,4;62,4;65,4;68,4;71,4;74,4;77,4; 80,4;137,4;140,4;143,4;146,4;149,4;152,4;155,4;158,4;161,4"

If there were some way to specify a group name rather than each surface, similar to --compute-loads-on-group, it would help a lot. I had a look at the post-processing source code, but couldn't figure out a simple way to implement it.
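
In the meantime, a tiny helper script can at least assemble the string from lists of block ids (a sketch; the ids below are just the first few from the example above):

-- build-surface-list.lua: assemble a --surface-list string from block ids and a face index.
local function surfaceList(blockIds, face)
   local parts = {}
   for _, b in ipairs(blockIds) do
      table.insert(parts, string.format("%d,%d", b, face))
   end
   return table.concat(parts, ";")
end
print(surfaceList({0, 3, 6, 9, 12, 15, 18, 21, 24}, 5))
-- prints: 0,5;3,5;6,5;9,5;12,5;15,5;18,5;21,5;24,5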

Import mesh from auto meshing program

Hi all,

I am interested in running virtual wind-tunnel cases. What I mean by that is that I want to import an STL, define a box around it, and put flow through the box. Because I would like this to be a semi-automated process, I have been working with auto-meshing tools like Fluent Meshing and snappyHexMesh (the OpenFOAM mesher).

From what I have seen of Eilmer, the only programs that natively export a mesh for use in Eilmer are Gmsh, Pointwise and GridPro.
From what I have read, the only one of those with decent automatic unstructured meshing capability is the latest version of Pointwise, and I am not sure how mature that capability is.

Because of the above, I have two questions I hope to get some input on:
1. Does anyone have any experience with the latest version of Pointwise, building a script that imports an STL, defines a box around it, and meshes that?
2. The most promising route I can currently see is converting an OpenFOAM mesh to the appropriate format. There is the following thread https://www.cfd-online.com/Forums/su2/127445-tool-convert-openfoam-mesh-su2-mesh-2d-3d.html but I have never been able to get the Python script in there to work. Does anybody have any experience converting an OpenFOAM mesh to a format that Eilmer will accept?
