mflowcode / mfc
Exascale simulation of multiphase/physics fluid dynamics
Home Page: https://mflowcode.github.io
License: MIT License
The build system needs its own documentation; I think just using Sphinx would be sufficient for this. We should also have an FAQ section somewhere.
The post-processed Silo files don't have correct primitive variables. This is likely an issue with the subroutine s_convert_conservative_to_primitive in common/m_variables_conversion.fpp.
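For context, a minimal single-fluid sketch of what a conservative-to-primitive conversion does (ideal-gas EOS assumed here; the real s_convert_conservative_to_primitive handles multi-fluid stiffened-gas mixtures):

```python
def cons_to_prim(rho, mom, E, gamma=1.4):
    """Convert 1D conservative variables (rho, rho*u, E) to primitive
    variables (rho, u, p) for an ideal gas. A toy sketch only; MFC's
    actual routine handles multi-fluid stiffened-gas mixtures."""
    u = mom / rho                                 # velocity from momentum
    p = (gamma - 1.0) * (E - 0.5 * rho * u * u)   # ideal-gas pressure
    return rho, u, p

# Round trip: rho = 1, u = 2, p = 1 gives E = p/(gamma-1) + rho*u^2/2 = 4.5
rho, u, p = cons_to_prim(1.0, 2.0, 4.5)
```

A bug in this conversion would show up exactly as wrong primitive variables in the post-processed output while the conservative fields look fine.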
Hi,
I tried to reproduce the case of S. Sembian (2016), but the icfl condition in run_time.inf became infinity, and all the physical quantities, including alpha, pressure, and velocity, became infinite as well. The infinity first appears in a few grid cells and then spreads across the whole grid.
Here is my input file:
input.zip
What am I doing wrong? Could you advise, please?
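For reference, the inviscid CFL number reported in run_time.inf is, per cell, on the order of (|u| + c) * dt / dx; once any cell's state goes non-finite, icfl does too, and the blow-up spreads. A small sketch (helper names hypothetical) of how one might locate where it starts:

```python
import math

def icfl(u, c, dt, dx):
    """Toy per-cell inviscid CFL number: (|u| + c) * dt / dx."""
    return (abs(u) + c) * dt / dx

def first_bad_cell(values):
    """Return the index of the first non-finite value, or None.
    Useful for finding where a blow-up starts before it spreads."""
    for i, v in enumerate(values):
        if not math.isfinite(v):
            return i
    return None

# A blow-up usually starts in a few cells before filling the grid:
cells = [0.3, 0.4, float("inf"), 0.2]
```

Dumping the first few non-finite cell indices at the first bad time step usually points at the offending patch or boundary condition in the input file.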
Hypoelasticity on GPUs
There appear to be some problems with testing using release-gpu on Expanse. I'm not sure what the correct build and test procedure is, but I've tried all the different -b options and ensured there are enough GPUs on hand. With srun it hangs, and with mpirun and mpiexec it fails.
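One way to triage this (a sketch, not MFC's actual logic) is to time-box a trivial job under each launcher and see which one hangs versus fails:

```python
import subprocess

def check_launcher(cmd, timeout=10):
    """Run a trivial command under a given launcher and report whether it
    exits cleanly within the timeout. A hang shows up as a timeout; a
    misconfigured launcher as a missing binary or non-zero exit code."""
    try:
        result = subprocess.run(cmd, timeout=timeout,
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False

# e.g. check_launcher(["srun", "-n", "1", "hostname"]) vs.
#      check_launcher(["mpirun", "-np", "1", "hostname"])
```

Comparing the three launchers on a `hostname` job separates scheduler/allocation problems from MFC build problems.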
Documenting this here. The Wingtip runner fails to build MFC because it hangs here:
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(510): function(_CUDAToolkit_find_root_dir )
Called from: [2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(563): function(_CUDAToolkit_find_version_file result_variable )
Called from: [2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(577): if(CMAKE_CUDA_COMPILER_LOADED AND NOT CUDAToolkit_BIN_DIR AND CMAKE_CUDA_COMPILER_ID STREQUAL NVIDIA )
Called from: [2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(586): if(NOT CUDAToolkit_ROOT_DIR AND CUDAToolkit_ROOT )
Called from: [2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(589): if(NOT CUDAToolkit_ROOT_DIR )
Called from: [2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(590): _CUDAToolkit_find_root_dir(FIND_FLAGS PATHS ENV CUDA_PATH PATH_SUFFIXES bin )
Called from: [2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(511): cmake_parse_arguments(arg SEARCH_PATHS;FIND_FLAGS ${ARGN} )
Called from: [3] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(513): if(NOT CUDAToolkit_BIN_DIR )
Called from: [3] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(514): if(NOT CUDAToolkit_SENTINEL_FILE )
Called from: [3] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(515): find_program(CUDAToolkit_NVCC_EXECUTABLE NAMES nvcc nvcc.exe PATHS ${arg_SEARCH_PATHS} ${arg_FIND_FLAGS} )
Called from: [3] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(522): if(NOT CUDAToolkit_NVCC_EXECUTABLE )
Called from: [3] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(530): if(EXISTS ${CUDAToolkit_NVCC_EXECUTABLE} )
Called from: [3] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
/nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake(534): execute_process(COMMAND ${CUDAToolkit_NVCC_EXECUTABLE} -v __cmake_determine_cuda OUTPUT_VARIABLE _CUDA_NVCC_OUT ERROR_VARIABLE _CUDA_NVCC_OUT )
Called from: [3] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[2] /nethome/sbryngelson3/MFC/build/cmake/share/cmake-3.24/Modules/FindCUDAToolkit.cmake
[1] /nethome/sbryngelson3/MFC/CMakeLists.txt
^CTerminated
Two things:
1. ping or something similar could check whether a connection is actually working.
2. When using cmake from a binary release, MFC still issues a warning,
exp-1-57: project-bryngelsonPI/MFC $ ./mfc.sh
./mfc.sh: line 82: cmake: command not found
even though it actually works in the end.
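A sketch of both suggestions (helper names hypothetical): check that a binary actually resolves on PATH before warning about it, and probe connectivity before attempting a download:

```python
import shutil
import socket

def has_command(name):
    """True if `name` resolves on PATH -- a cmake installed from a binary
    release should suppress the 'cmake: command not found' warning."""
    return shutil.which(name) is not None

def can_connect(host, port=443, timeout=3):
    """Cheap connectivity probe (a ping substitute) before downloading."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. only attempt a download when cmake is missing AND github.com is reachable
```

The PATH check alone would remove the spurious warning; the connectivity probe gives a clearer error than a failed download.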
[I]shb-m1pro: Downloads/MFC $ ./mfc.sh test -j 8
[mfc.sh]: Entering the Python virtual environment (venv).
___ ___ ___
/__/\ / /\ / /\ [email protected] [Darwin]
| |::\ / /:/_ / /:/ ---------------------------------------
| |:|:\ / /:/ /\ / /:/
__|__|:|\:\ / /:/ /:/ / /:/ ___
/__/::::| \:\ /__/:/ /:/ /__/:/ / /\ --jobs: 8
\ \:\~~\__\/ \ \:\/:/ \ \:\ / /:/ --mode: release-cpu
\ \:\ \ \::/ \ \:\ /:/
\ \:\ \ \:\ \ \:\/:/
\ \:\ \ \:\ \ \::/
\__\/ \__\/ \__\/ $ ./mfc.sh [build, run, test, clean] --help
Building pre_process:
$ cmake --build /Users/spencer/Downloads/MFC/build/pre_process --target pre_process -j 8 --config Release
[0/2] Re-checking globbed directories...
ninja: no work to do.
$ cmake --install /Users/spencer/Downloads/MFC/build/pre_process
-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/pre_process
Building simulation:
$ cmake --build /Users/spencer/Downloads/MFC/build/simulation --target simulation -j 8 --config Release
[0/2] Re-checking globbed directories...
ninja: no work to do.
$ cmake --install /Users/spencer/Downloads/MFC/build/simulation
-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/simulation
Test | from D79C3E6F to BDD3411B (142 tests)
tests/UUID Summary
3AE495F4 1D -> bc=-5
C5B79059 1D -> bc=-9
70DAE9E8 1D -> bc=-4
D79C3E6F 1D -> bc=-1
48CCE072 1D -> bc=-7
5EC236F2 1D -> bc=-6
AED93D34 1D -> bc=-8
8A59E8E6 1D -> bc=-2
727F72ED 1D -> bc=-10
A60691E7 1D -> bc=-11
3FC6FC4A 1D -> bc=-12
2AB32975 1D -> bc=-3
B3C85904 1D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
7077C99F 1D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
84017671 1D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
F5890628 1D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
34580912 1D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
5527832F 1D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
4AEF478A 1D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
32D0F235 1D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
18BDCBC8 1D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err
F97573DB 1D -> 1 Fluid(s) -> riemann_solver=2 -> avg_state=1
F4F6AC27 1D -> 1 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
2F35A1FE 1D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3
1E738705 1D -> 2 Fluid(s) -> riemann_solver=1 -> mixture_err
0879E062 1D -> 2 Fluid(s) -> riemann_solver=1 -> avg_state=1
83EFC30C 1D -> 2 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
1CCA82F5 1D -> 2 Fluid(s) -> riemann_solver=1 -> mpp_lim
3A8359F6 1D -> 2 Fluid(s) -> riemann_solver=2 -> mixture_err
6D24B115 1D -> 2 Fluid(s) -> riemann_solver=2 -> avg_state=1
461DCB09 1D -> 2 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
FD891191 1D -> 2 Fluid(s) -> riemann_solver=2 -> model_eqns=3
9DAC4DDC 1D -> 2 Fluid(s) -> riemann_solver=2 -> alt_soundspeed
C4907722 1D -> 2 Fluid(s) -> riemann_solver=2 -> mpp_lim
C79E1D3C 1D -> 2 Fluid(s) -> Viscous
CD9D3050 1D -> 2 Fluid(s) -> Viscous -> weno_Re_flux
0FCCE9F1 1D -> 2 MPI Ranks
EF54219C 1D -> Bubbles -> Monopole -> Polytropic -> bubble_model=3
7FC6826B 1D -> Bubbles -> Monopole -> Polytropic -> bubble_model=2
6B22A317 1D -> Bubbles -> Monopole -> bubble_model=2
59D05DE9 1D -> Bubbles -> Monopole -> nb=1
9EB947DB 1D -> Hypoelasticity -> 1 Fluid(s)
AF0BCEE4 1D -> Hypoelasticity -> 2 Fluid(s)
AF46C382 1D -> Bubbles -> Monopole -> QBMM -> bubble_model=3
55533234 2D -> bc=-1
EAA53889 2D -> bc=-2
46AA7AF8 1D -> Bubbles -> Monopole -> QBMM
20AE0551 2D -> bc=-4
A6E65782 2D -> bc=-5
4129A23A 2D -> bc=-6
E84967E7 2D -> bc=-7
5F877BC9 2D -> bc=-8
16C03D8E 2D -> bc=-9
B96AC58F 2D -> bc=-10
8FDEE23A 2D -> bc=-11
BF46F657 2D -> bc=-12
D972BA0F 2D -> bc=-3
E4EFEDB2 2D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
CD3D9660 2D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
3974AC7B 2D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
C04741B4 2D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
E76D41CE 2D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
7374E266 2D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
3BFEAC19 2D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
FBF808BE 2D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
[mfc.sh]: Entering the Python virtual environment (venv).
___ ___ ___
/__/\ / /\ / /\ [email protected] [Darwin]
| |::\ / /:/_ / /:/ ---------------------------------------
| |:|:\ / /:/ /\ / /:/
__|__|:|\:\ / /:/ /:/ / /:/ ___
/__/::::| \:\ /__/:/ /:/ /__/:/ / /\ --jobs: 1
\ \:\~~\__\/ \ \:\/:/ \ \:\ / /:/ --mode: release-cpu
\ \:\ \ \::/ \ \:\ /:/ --targets: pre_process and simulation
\ \:\ \ \:\ \ \:\/:/
\ \:\ \ \:\ \ \::/
\__\/ \__\/ \__\/ $ ./mfc.sh --help
Run
Acquiring /Users/spencer/Downloads/MFC/tests/043B535A/case.py...
Configuration:
Input /Users/spencer/Downloads/MFC/tests/043B535A/case.py
Job Name (-#) unnamed
Engine (-e) interactive
Nodes (-N) 1
CPUs (/node) (-n) 1
GPUs (/node) (-g) 0
MPI Binary (-b) mpirun
Running pre_process:
Ensuring the Interactive Engine works (10s timeout):
$ mpirun -np 1 hostname
Error: The Interactive Engine appears to hang or exit with a non-zero status code. This may indicate that the wrong MPI binary is being used to launch parallel jobs. You can specify the correct one for your system using the <-b,--binary> option. For example:
- ./mfc.sh run <myfile.py> -b mpirun
- ./mfc.sh run <myfile.py> -b srun
Reason: Exit code.
Terminated: 15
[mfc.sh]: Exiting the Python virtual environment.
/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224:
UserWarning: resource_tracker: There appear to be 3 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
3B414AF0 2D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err
Error: Test tests/043B535A: 2D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3: Failed to execute MFC. You can find the run's output in
/Users/spencer/Downloads/MFC/tests/043B535A/out.txt, and the case dictionary in /Users/spencer/Downloads/MFC/tests/043B535A/case.py.
Terminated: 15
[mfc.sh]: Exiting the Python virtual environment.
[I]shb-m1pro: Downloads/MFC $
[I]shb-m1pro: Downloads/MFC $ mpif90
gfortran: fatal error: no input files
compilation terminated.
[I]shb-m1pro: Downloads/MFC $ mpif90 --version
GNU Fortran (Homebrew GCC 12.2.0) 12.2.0
Copyright (C) 2022 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
The above fails, but the below works:
[I]shb-m1pro: Downloads/MFC $ ./mfc.sh test -j 4
[mfc.sh]: Entering the Python virtual environment (venv).
___ ___ ___
/__/\ / /\ / /\ [email protected] [Darwin]
| |::\ / /:/_ / /:/ ---------------------------------------
| |:|:\ / /:/ /\ / /:/
__|__|:|\:\ / /:/ /:/ / /:/ ___
/__/::::| \:\ /__/:/ /:/ /__/:/ / /\ --jobs: 4
\ \:\~~\__\/ \ \:\/:/ \ \:\ / /:/ --mode: release-cpu
\ \:\ \ \::/ \ \:\ /:/
\ \:\ \ \:\ \ \:\/:/
\ \:\ \ \:\ \ \::/
\__\/ \__\/ \__\/ $ ./mfc.sh [build, run, test, clean] --help
Building pre_process:
$ cmake --build /Users/spencer/Downloads/MFC/build/pre_process --target pre_process -j 4 --config Release
[0/2] Re-checking globbed directories...
ninja: no work to do.
$ cmake --install /Users/spencer/Downloads/MFC/build/pre_process
-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/pre_process
Building simulation:
$ cmake --build /Users/spencer/Downloads/MFC/build/simulation --target simulation -j 4 --config Release
[0/2] Re-checking globbed directories...
ninja: no work to do.
$ cmake --install /Users/spencer/Downloads/MFC/build/simulation
-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/simulation
Test | from D79C3E6F to BDD3411B (142 tests)
tests/UUID Summary
3AE495F4 1D -> bc=-5
70DAE9E8 1D -> bc=-4
8A59E8E6 1D -> bc=-2
D79C3E6F 1D -> bc=-1
5EC236F2 1D -> bc=-6
48CCE072 1D -> bc=-7
AED93D34 1D -> bc=-8
C5B79059 1D -> bc=-9
727F72ED 1D -> bc=-10
A60691E7 1D -> bc=-11
3FC6FC4A 1D -> bc=-12
2AB32975 1D -> bc=-3
B3C85904 1D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
7077C99F 1D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
84017671 1D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
F5890628 1D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
34580912 1D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
5527832F 1D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
4AEF478A 1D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
32D0F235 1D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
18BDCBC8 1D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err
F97573DB 1D -> 1 Fluid(s) -> riemann_solver=2 -> avg_state=1
F4F6AC27 1D -> 1 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
2F35A1FE 1D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3
1E738705 1D -> 2 Fluid(s) -> riemann_solver=1 -> mixture_err
0879E062 1D -> 2 Fluid(s) -> riemann_solver=1 -> avg_state=1
83EFC30C 1D -> 2 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
1CCA82F5 1D -> 2 Fluid(s) -> riemann_solver=1 -> mpp_lim
3A8359F6 1D -> 2 Fluid(s) -> riemann_solver=2 -> mixture_err
6D24B115 1D -> 2 Fluid(s) -> riemann_solver=2 -> avg_state=1
461DCB09 1D -> 2 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
FD891191 1D -> 2 Fluid(s) -> riemann_solver=2 -> model_eqns=3
9DAC4DDC 1D -> 2 Fluid(s) -> riemann_solver=2 -> alt_soundspeed
C4907722 1D -> 2 Fluid(s) -> riemann_solver=2 -> mpp_lim
C79E1D3C 1D -> 2 Fluid(s) -> Viscous
CD9D3050 1D -> 2 Fluid(s) -> Viscous -> weno_Re_flux
0FCCE9F1 1D -> 2 MPI Ranks
EF54219C 1D -> Bubbles -> Monopole -> Polytropic -> bubble_model=3
7FC6826B 1D -> Bubbles -> Monopole -> Polytropic -> bubble_model=2
59D05DE9 1D -> Bubbles -> Monopole -> nb=1
6B22A317 1D -> Bubbles -> Monopole -> bubble_model=2
AF46C382 1D -> Bubbles -> Monopole -> QBMM -> bubble_model=3
46AA7AF8 1D -> Bubbles -> Monopole -> QBMM
9EB947DB 1D -> Hypoelasticity -> 1 Fluid(s)
AF0BCEE4 1D -> Hypoelasticity -> 2 Fluid(s)
55533234 2D -> bc=-1
EAA53889 2D -> bc=-2
20AE0551 2D -> bc=-4
A6E65782 2D -> bc=-5
4129A23A 2D -> bc=-6
E84967E7 2D -> bc=-7
5F877BC9 2D -> bc=-8
16C03D8E 2D -> bc=-9
B96AC58F 2D -> bc=-10
8FDEE23A 2D -> bc=-11
BF46F657 2D -> bc=-12
D972BA0F 2D -> bc=-3
E4EFEDB2 2D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
CD3D9660 2D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
3974AC7B 2D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
C04741B4 2D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
E76D41CE 2D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
7374E266 2D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
3BFEAC19 2D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
FBF808BE 2D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
3B414AF0 2D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err
3C00B89D 2D -> 1 Fluid(s) -> riemann_solver=2 -> avg_state=1
345A94C0 2D -> 1 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
043B535A 2D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3
16FBF4C8 2D -> 2 Fluid(s) -> riemann_solver=1 -> mixture_err
DC9CB97E 2D -> 2 Fluid(s) -> riemann_solver=1 -> avg_state=1
A5C93D62 2D -> 2 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
A6AC2E06 2D -> 2 Fluid(s) -> riemann_solver=1 -> mpp_lim
5781A4C2 2D -> 2 Fluid(s) -> riemann_solver=2 -> mixture_err
645A26E3 2D -> 2 Fluid(s) -> riemann_solver=2 -> avg_state=1
FC4D07B6 2D -> 2 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
4F2F4ACE 2D -> 2 Fluid(s) -> riemann_solver=2 -> model_eqns=3
5DAB50B2 2D -> 2 Fluid(s) -> riemann_solver=2 -> alt_soundspeed
F0F175B2 2D -> 2 Fluid(s) -> riemann_solver=2 -> mpp_lim
9CB03CEF 2D -> 2 Fluid(s) -> Viscous
D6BAC936 2D -> 2 Fluid(s) -> Viscous -> weno_Re_flux
DB670E50 2D -> Axisymmetric -> model_eqns=2
B89B8C70 2D -> Axisymmetric -> model_eqns=3
FB822062 2D -> Axisymmetric -> Viscous
8C7AA13B 2D -> 2 MPI Ranks
B3AAC9C8 2D -> Axisymmetric -> Viscous -> weno_Re_flux
34DBFE14 2D -> Bubbles -> Monopole -> Polytropic -> bubble_model=3
AE37D842 2D -> Bubbles -> Monopole -> nb=1
14B6198D 2D -> Bubbles -> Monopole -> Polytropic -> bubble_model=2
CC4F7C44 2D -> Bubbles -> Monopole -> bubble_model=2
122713AA 2D -> Hypoelasticity -> 1 Fluid(s)
5281BD7B 2D -> Hypoelasticity -> 2 Fluid(s)
66CFF8CC 2D -> Bubbles -> Monopole -> QBMM -> bubble_model=3
6FC6A809 3D -> bc=-1
09DAFEBA 3D -> bc=-2
303B925A 2D -> Bubbles -> Monopole -> QBMM
F99FBB36 3D -> bc=-4
E09A12D9 3D -> bc=-5
5010B814 3D -> bc=-6
730DFD6D 3D -> bc=-7
ABAC3AE3 3D -> bc=-8
C93BE9B5 3D -> bc=-9
D0045756 3D -> bc=-10
557FF170 3D -> bc=-11
61FFF3D3 3D -> bc=-12
6B4B738B 3D -> bc=-3
E1352143 3D -> weno_order=3 -> mapped_weno=F -> mp_weno=F
13DFC31D 3D -> weno_order=3 -> mapped_weno=T -> mp_weno=F
728A2A5B 3D -> weno_order=5 -> mapped_weno=F -> mp_weno=F
42B169F5 3D -> weno_order=5 -> mapped_weno=F -> mp_weno=T
19E33853 3D -> weno_order=5 -> mapped_weno=T -> mp_weno=F
9ACD5174 3D -> 1 Fluid(s) -> riemann_solver=1 -> mixture_err
73B0539E 3D -> 1 Fluid(s) -> riemann_solver=1 -> avg_state=1
2A523AC1 3D -> 1 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
C06849AD 3D -> 1 Fluid(s) -> riemann_solver=2 -> mixture_err
AB0BE4E4 3D -> 1 Fluid(s) -> riemann_solver=2 -> avg_state=1
C36F18FB 3D -> 1 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
6241177B 3D -> 1 Fluid(s) -> riemann_solver=2 -> model_eqns=3
C4A2FAA3 3D -> 2 Fluid(s) -> riemann_solver=1 -> mixture_err
851F7AE2 3D -> 2 Fluid(s) -> riemann_solver=1 -> avg_state=1
BD8004FF 3D -> 2 Fluid(s) -> riemann_solver=1 -> wave_speeds=2
758D0268 3D -> 2 Fluid(s) -> riemann_solver=1 -> mpp_lim
AACF1BC5 3D -> 2 Fluid(s) -> riemann_solver=2 -> mixture_err
B33E256A 3D -> 2 Fluid(s) -> riemann_solver=2 -> avg_state=1
B8F5F1C8 3D -> 2 Fluid(s) -> riemann_solver=2 -> wave_speeds=2
7C8F1BA9 3D -> 2 Fluid(s) -> riemann_solver=2 -> alt_soundspeed
F0E6771E 3D -> 2 Fluid(s) -> riemann_solver=2 -> model_eqns=3
A0B82851 3D -> 2 Fluid(s) -> riemann_solver=2 -> mpp_lim
1C0780C8 3D -> 2 Fluid(s) -> Viscous
301B9153 3D -> Cylindrical -> model_eqns=2
2060F55A 3D -> 2 Fluid(s) -> Viscous -> weno_Re_flux
07C33719 3D -> Cylindrical -> Viscous
CE232828 3D -> 2 MPI Ranks
939D6718 3D -> Cylindrical -> Viscous -> weno_Re_flux
36256906 3D -> Bubbles -> Monopole -> Polytropic -> bubble_model=3
8A341282 3D -> Bubbles -> Monopole -> nb=1
AD63A4A5 3D -> Bubbles -> Monopole -> Polytropic -> bubble_model=2
622DEC78 3D -> Bubbles -> Monopole -> bubble_model=2
7EFBCDAE 3D -> Hypoelasticity -> 1 Fluid(s)
BDD3411B 3D -> Hypoelasticity -> 2 Fluid(s)
63850240 3D -> Bubbles -> Monopole -> QBMM -> bubble_model=3
AB04C64D 3D -> Bubbles -> Monopole -> QBMM
Tested ✓
[mfc.sh]: Exiting the Python virtual environment.
Our module files are simply too long, no matter how you think about the code abstractions. I suggest that modules be no longer than 1000 lines, though much shorter or somewhat longer may be appropriate in specific cases. Exactly how things should be broken up should be deliberated before starting the refactor.
Also, this issue should probably not be addressed until the GPU-3D-unmanaged branch is merged into master.
Here are the current line counts in the GPU-3D-unmanaged branch:
36414 total
3866 ./simulation_code/m_riemann_solvers.f90
3827 ./simulation_code/m_rhs.f90
2433 ./pre_process_code/m_initial_condition.f90
2396 ./simulation_code/m_data_output.f90
2099 ./pre_process_code/m_start_up.f90
1779 ./post_process_code/m_mpi_proxy.f90
1712 ./simulation_code/m_cbc.f90
1614 ./simulation_code/m_weno.f90
1547 ./simulation_code/m_mpi_proxy.f90
1421 ./simulation_code/m_derived_variables.f90
1137 ./post_process_code/m_data_output.f90
1134 ./simulation_code/m_start_up.f90
1086 ./post_process_code/m_data_input.f90
1047 ./simulation_code/m_global_parameters.f90
897 ./post_process_code/m_global_parameters.f90
876 ./pre_process_code/m_global_parameters.f90
826 ./pre_process_code/m_mpi_proxy.f90
774 ./post_process_code/m_derived_variables.f90
689 ./simulation_code/m_variables_conversion.f90
576 ./post_process_code/m_start_up.f90
514 ./post_process_code/p_main.f90
491 ./simulation_code/m_time_steppers.f90
484 ./simulation_code/m_bubbles.f90
456 ./pre_process_code/m_variables_conversion.f90
405 ./post_process_code/m_variables_conversion.f90
400 ./pre_process_code/m_data_output.f90
398 ./simulation_code/m_qbmm.f90
324 ./pre_process_code/m_grid.f90
301 ./master_scripts/m_silo_proxy.f90
216 ./master_scripts/m_mpi_proxy.f90
184 ./simulation_code/p_main.f90
131 ./pre_process_code/m_derived_types.f90
125 ./pre_process_code/p_main.f90
96 ./simulation_code/m_derived_types.f90
63 ./post_process_code/m_derived_types.f90
30 ./simulation_code/m_compile_specific.f90
30 ./pre_process_code/m_compile_specific.f90
30 ./post_process_code/m_compile_specific.f90
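The listing above is essentially `wc -l` over the Fortran sources, sorted longest first; a small sketch that reproduces it (paths hypothetical):

```python
from pathlib import Path

def line_counts(root, pattern="**/*.f90"):
    """Return (line_count, path) pairs for matching files, longest first,
    mirroring a `wc -l | sort -rn` listing over the source tree."""
    counts = []
    for f in Path(root).glob(pattern):
        with open(f, "r", errors="replace") as fh:
            counts.append((sum(1 for _ in fh), str(f)))
    return sorted(counts, reverse=True)

# e.g. for n, path in line_counts("./src"): print(n, path)
```

Running something like this in CI could enforce a module-length limit once the refactor lands.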
MFC build fails on macOS due to attempting to fetch an aarch64 CMake binary.
[I]shb-m1pro: Downloads/MFC $ ./mfc.sh build -j 1
Traceback (most recent call last):
File "/opt/homebrew/bin/cmake", line 8, in <module>
sys.exit(cmake())
File "/opt/homebrew/lib/python3.10/site-packages/cmake/__init__.py", line 46, in cmake
raise SystemExit(_program('cmake', sys.argv[1:]))
File "/opt/homebrew/lib/python3.10/site-packages/cmake/__init__.py", line 42, in _program
return subprocess.call([os.path.join(CMAKE_BIN_DIR, name)] + args)
File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 345, in call
with Popen(*popenargs, **kwargs) as p:
File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/homebrew/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/opt/homebrew/lib/python3.10/site-packages/cmake/data/CMake.app/Contents/bin/cmake'
[mfc.sh]: CMake is out of date (current: < minimum: 3.18).
[mfc.sh]: Downloading CMake v3.24.2 for arm64 from https://github.com/Kitware/CMake.
--2022-10-28 21:29:09-- https://github.com/Kitware/CMake/releases/download/v3.24.2/cmake-3.24.2-linux-arm64.sh
Resolving github.com (github.com)... 140.82.114.4
Connecting to github.com (github.com)|140.82.114.4|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-10-28 21:29:09 ERROR 404: Not Found.
[mfc.sh]: Error: Failed to download a compatible version of CMake.
CMake is not discoverable or is an older release, incompatible with MFC. Please download
or install a recent version of CMake to get past this step. If you are currently on a
managed system like a cluster, provided there is no suitable environment module, you can
either build it from source, or get it via Spack.
- The minimum required version is currently CMake v3.18.0.
- We attempted to download CMake v3.24.2 from https://github.com/Kitware/CMake/releases/download/v3.24.2/cmake-3.24.2-linux-arm64.sh.
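The 404 above comes from requesting a `linux-arm64` asset, which does not appear to exist in Kitware's releases; on Apple Silicon the macOS asset is the right one, and on Linux the aarch64 suffix is `linux-aarch64`. A sketch of the platform-to-asset mapping (asset naming is my assumption from Kitware's release pages; verify before relying on it):

```python
import platform

def cmake_asset_suffix(system=None, machine=None):
    """Map (OS, arch) to the suffix Kitware appears to use for CMake
    release assets: linux-x86_64 / linux-aarch64 shell installers, and a
    single macos-universal archive covering arm64 and x86_64."""
    system = system or platform.system()
    machine = machine or platform.machine()
    if system == "Darwin":
        return "macos-universal"  # one universal binary for both Mac arches
    if system == "Linux":
        return {"x86_64": "linux-x86_64",
                "aarch64": "linux-aarch64",
                "arm64": "linux-aarch64"}.get(machine)
    return None
```

With a mapping like this, the downloader would have requested cmake-3.24.2-macos-universal.tar.gz on the M1 machine instead of the non-existent linux-arm64 installer.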
We need a file describing how to contribute to MFC.
Hi MFC team,
I am new to MFC, and I have had some difficulty installing it on my Ubuntu machine. It would be very helpful if you could provide instructions for installing MFC on a Linux system. I have read the documentation, but the installation was unsuccessful. Looking forward to hearing from you. Thank you.
I'm not really sure what's going on, but builds on Phoenix seem somewhat broken.
Here are the modules
atl1-1-02-009-34-0: p-sbryngelson3-0/MFC $ module list
Currently Loaded Modules:
1) pace-slurm/2022.06 7) bzip2/1.0.8-z5cmka (H) 13) gettext/0.19.8.1-yz6qtc (H) 19) xz/5.2.2-kbeci4 (H)
2) zlib/1.2.7-s3gked (H) 8) libmd/1.0.4-wdkbs3 (H) 14) libffi/3.4.2-bvfjil (H) 20) libxml2/2.9.13-d4fgiv (H)
3) nvhpc/22.11 9) libbsd/0.11.5-j4ccxs (H) 15) sqlite/3.38.5-sweldt (H) 21) cuda/11.6.0-u4jzhg
4) ncurses/6.2-qhoz4g (H) 10) expat/2.4.8-kng6xl 16) util-linux-uuid/2.36.2-6u5eni (H)
5) openssl/1.0.2k-fips-xbtc42 (H) 11) readline/8.1-v3ivmo (H) 17) python/3.9.12-rkxvr6
6) cmake/3.23.1-327dbl 12) gdbm/1.19-54ea7n (H) 18) libiconv/1.16-pbdcxj (H)
and here is the error
$ cmake -DMFC_SIMULATION=ON -Wno-dev -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Release
-DCMAKE_PREFIX_PATH=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install
-DCMAKE_FIND_ROOT_PATH=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install
-DCMAKE_INSTALL_PREFIX=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install -DMFC_MPI=ON -DMFC_OpenACC=OFF -S
/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/ -B /storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/simulation
-- The C compiler identification is NVHPC 22.11.0
-- The Fortran compiler identification is NVHPC 22.11.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/local/pace-apps/manual/packages/nvhpc/Linux_x86_64/22.11/compilers/bin/nvc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /usr/local/pace-apps/manual/packages/nvhpc/Linux_x86_64/22.11/compilers/bin/nvfortran - skipped
-- Performing Test SUPPORTS_MARCH_NATIVE
-- Performing Test SUPPORTS_MARCH_NATIVE - Success
-- Enabled IPO / LTO
-- Found MPI_Fortran: /storage/pace-apps/manual/packages/nvhpc/Linux_x86_64/22.11/comm_libs/openmpi/openmpi-3.1.5/lib/libmpi_usempif08.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1") found components: Fortran
-- Found CUDAToolkit: /usr/local/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cuda-11.7.0-7sdye3id7ahz34mzhyzzqbxowjxgxkhu/include (found version "11.7.64")
-- Looking for pthread.h
-- Looking for pthread.h - not found
CMake Error at /storage/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cmake-3.23.1-327dblnbramviejdezocehqsujhu7yyg/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find Threads (missing: Threads_FOUND)
Call Stack (most recent call first):
/storage/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cmake-3.23.1-327dblnbramviejdezocehqsujhu7yyg/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
/storage/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cmake-3.23.1-327dblnbramviejdezocehqsujhu7yyg/share/cmake-3.23/Modules/FindThreads.cmake:238 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
/storage/pace-apps/spack/packages/linux-rhel7-x86_64/gcc-4.8.5/cmake-3.23.1-327dblnbramviejdezocehqsujhu7yyg/share/cmake-3.23/Modules/FindCUDAToolkit.cmake:910 (find_package)
CMakeLists.txt:259 (find_package)
-- Configuring incomplete, errors occurred!
See also "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/build/simulation/CMakeFiles/CMakeOutput.log".
See also "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/build/simulation/CMakeFiles/CMakeError.log".
This sometimes occurs for regular MFC, sometimes not.
It seems like there are "helper functions" peppered around the codebase, doing things like computing finite-difference coefficients, applying divergence theorems, and so on.
Can we (1) make a list of these and then, after review, (2) move them all into a common "helper" module?
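As an example of one such helper, finite-difference coefficients on arbitrary grids can be computed once, centrally, with Fornberg's algorithm; a compact sketch:

```python
def fd_weights(z, x, m):
    """Fornberg's algorithm: weights w such that sum(w[i] * f(x[i]))
    approximates the m-th derivative of f at z, for arbitrary nodes x.
    A candidate 'helper module' routine replacing ad-hoc coefficient code."""
    n = len(x)
    c = [[0.0] * (m + 1) for _ in range(n)]
    c1, c4 = 1.0, x[0] - z
    c[0][0] = 1.0
    for i in range(1, n):
        mn, c2, c5, c4 = min(i, m), 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:
                for k in range(mn, 0, -1):
                    c[i][k] = c1 * (k * c[i - 1][k - 1] - c5 * c[i - 1][k]) / c2
                c[i][0] = -c1 * c5 * c[i - 1][0] / c2
            for k in range(mn, 0, -1):
                c[j][k] = (c4 * c[j][k] - k * c[j][k - 1]) / c3
            c[j][0] = c4 * c[j][0] / c3
        c1 = c2
    return [row[m] for row in c]

# Central first derivative on a uniform 3-point stencil: [-1/2, 0, 1/2]
```

One shared routine like this would replace every hard-coded stencil scattered through the three codes.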
Dead code needs to be removed. We seem to have thousands of lines of it. Here is one example:
MFC/src/simulation_code/m_rhs.f90
Lines 358 to 544 in fc40cce
We also have two entire RHS functions:
MFC/src/simulation_code/m_rhs.f90
Line 2928 in fc40cce
MFC/src/simulation_code/m_rhs.f90
Line 1010 in fc40cce
The unused one needs to be deleted.
I'd like to see a PR that deletes thousands of lines of unused code.
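A rough sketch of how one could hunt for candidates automatically (regexes simplified; Fortran interfaces, generic bindings, and fypp-generated calls would need more care, so treat hits as candidates only):

```python
import re

def unused_subroutines(source):
    """Return subroutine names that are defined but never called in
    `source`. A crude first pass: it ignores interfaces, procedure
    pointers, and fypp macros, so every hit needs manual review."""
    defined = set(re.findall(
        r"^\s*(?:pure\s+|elemental\s+)?subroutine\s+(\w+)",
        source, re.IGNORECASE | re.MULTILINE))
    called = set(re.findall(r"\bcall\s+(\w+)", source, re.IGNORECASE))
    return sorted(defined - called)

sample = """
subroutine s_used()
end subroutine
subroutine s_dead_code()
end subroutine
program p
    call s_used()
end program
"""
```

Run over the concatenated sources, this would at least produce the list of never-called subroutines to review in such a PR.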
Feature request: import an arbitrary geometry as a patch via an STL parser. There are many Fortran examples available.
The command in README.md should be ./input.py pre_process, not python pre_process.
From here:
#13 (comment)
We see that we compute the primitive variables "by hand" several times throughout the code. This happens in m_data_output and some other places as well. Now that @anshgupta1234 has made the routines for this nice and simple, we should call those instead.
Also, the speed of sound c is computed by hand in several places; this could be moved out to common/. c is also sometimes called things like c_avg, c_L, or c_R.
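Consolidating it into one routine in common/ would look something like this sketch (single-fluid stiffened-gas form assumed; MFC's mixture sound speeds are more involved):

```python
import math

def sound_speed(gamma, p, rho, pi_inf=0.0):
    """Stiffened-gas speed of sound: c = sqrt(gamma * (p + pi_inf) / rho).
    With pi_inf = 0 this reduces to the ideal-gas formula. One shared
    routine could replace the scattered c, c_avg, c_L, c_R computations."""
    return math.sqrt(gamma * (p + pi_inf) / rho)

# Air at standard conditions: roughly 340 m/s
c = sound_speed(1.4, 101325.0, 1.225)
```

The call sites would then differ only in which state (left, right, or averaged) they pass in, not in the formula itself.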
I propose moving all monopole source terms in m_rhs.f90 into a new module (m_monopole.f90), and the viscous terms (which right now are in a separate subroutine in m_rhs.f90) into yet another new module (m_viscous.f90).
Describe the bug
Doxygen doesn't recognize fypp files and thus doesn't build documentation for them.
Expected behavior
Documentation should be produced for all source files.
If fypp and Doxygen do not play nicely, one can autogenerate the code and use Doxygen on those files.
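One possible Doxyfile tweak (untested here, and the exact filter wiring is an assumption): treat .fpp as free-form Fortran and run fypp as an input filter so Doxygen sees the generated code:

```
# Hypothetical Doxyfile fragment -- verify option values against your Doxygen version
FILE_PATTERNS     += *.fpp
EXTENSION_MAPPING += fpp=FortranFree
FILTER_PATTERNS   += *.fpp="fypp"
```

If the filter route proves fragile, the fallback named above (run fypp first, point Doxygen at the generated .f90 files) is straightforward to script in the build system.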
Part of the reason m_riemann_solvers is so long is that, for HLLC, there are two separate calls for model_eqns == 2 and model_eqns == 2 .and. bubbles. These can be nicely condensed with appropriate if (bubbles) then statements, though doing so requires some care.
Add favicon to Doxygen-generated website: https://stackoverflow.com/questions/18215463/how-to-set-a-favicon-for-doxygen-output
Some example cases hit CFL errors before completing, for example 2d-shock-bubble and 3d-sph-bubble-collapse.
When building MicroFC on Phoenix (CPU), I ran into the following warnings.
I expect they also show up with MFC.
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/include/case.fpp(5): warning #5117: Bad # preprocessor line
(There are several of these.) This seems to be a known issue with Intel compilers.
login-phoenix-4: p-sbryngelson3-0/MicroFC $ module list
Currently Loaded Modules:
1) xalt/2.8.4 2) intel/19.0.5 3) mvapich2/2.3.2 4) gcc-compatibility/8.3.0 5) pace/2020.01
Building simulation:
$ cmake -DMFC_SIMULATION=ON -Wno-dev -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DCMAKE_BUILD_TYPE=Release
-DCMAKE_PREFIX_PATH=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install
-DCMAKE_FIND_ROOT_PATH=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install
-DCMAKE_INSTALL_PREFIX=/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/install -DMFC_MPI=ON -DMFC_OpenACC=OFF -S
/storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/ -B /storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/simulation
-- The C compiler identification is Intel 19.0.5.20190815
-- The Fortran compiler identification is Intel 19.0.5.20190815
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/local/pace-apps/spack/packages/0.13/linux-rhel7-cascadelake/intel-19.0.5/mvapich2-2.3.2-hpgbkqoytbjh35qn2t63rdorepxcezek/bin/mpicc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /usr/local/pace-apps/spack/packages/0.13/linux-rhel7-cascadelake/intel-19.0.5/mvapich2-2.3.2-hpgbkqoytbjh35qn2t63rdorepxcezek/bin/mpif90 - skipped
-- Performing Test SUPPORTS_MARCH_NATIVE
-- Performing Test SUPPORTS_MARCH_NATIVE - Success
-- Enabled IPO / LTO
-- Found MPI_Fortran: /usr/local/pace-apps/spack/packages/0.13/linux-rhel7-cascadelake/intel-19.0.5/mvapich2-2.3.2-hpgbkqoytbjh35qn2t63rdorepxcezek/bin/mpif90 (found version "3.1")
-- Found MPI: TRUE (found version "3.1") found components: Fortran
-- Configuring done
-- Generating done
-- Build files have been written to: /storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/build/simulation
$ cmake --build /storage/coda1/p-sbryngelson3/0/sbryngelson3/MicroFC/build/simulation --target simulation -j 4 --config Release
[ 6%] Preprocessing (Fypp) p_main.fpp
[ 6%] Preprocessing (Fypp) m_data_output.fpp
[ 13%] Preprocessing (Fypp) m_global_parameters.fpp
[ 13%] Preprocessing (Fypp) m_mpi_proxy.fpp
[ 17%] Preprocessing (Fypp) m_rhs.fpp
[ 20%] Preprocessing (Fypp) m_riemann_solvers.fpp
[ 24%] Preprocessing (Fypp) m_start_up.fpp
[ 27%] Preprocessing (Fypp) m_time_steppers.fpp
[ 31%] Preprocessing (Fypp) m_variables_conversion.fpp
[ 41%] Preprocessing (Fypp) m_viscous.fpp
[ 41%] Preprocessing (Fypp) m_weno.fpp
[ 41%] Preprocessing (Fypp) macros.fpp
Scanning dependencies of target simulation
[ 48%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/m_nvtx.f90.o
[ 48%] Building Fortran object CMakeFiles/simulation.dir/src/common/m_derived_types.f90.o
[ 51%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/macros.fpp.f90.o
[ 55%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_global_parameters.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/include/case.fpp(5): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_global_parameters.fpp" 2
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 58%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_mpi_proxy.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/include/case.fpp(5): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_mpi_proxy.fpp" 2
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_mpi_proxy.fpp(592): warning #6843: A dummy argument with an explicit INTENT(OUT) declaration is not given an explicit value. [CCFL_MAX_GLB]
ccfl_max_glb, &
-------------------------------------------------------^
[ 65%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_variables_conversion.fpp.f90.o
[ 65%] Building Fortran object CMakeFiles/simulation.dir/src/common/m_compile_specific.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/m_variables_conversion.fpp" 2
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 79%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_riemann_solvers.fpp.f90.o
[ 79%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_data_output.fpp.f90.o
[ 79%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_start_up.fpp.f90.o
[ 79%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_weno.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/include/case.fpp(5): warning #5117: Bad # preprocessor line
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_start_up.fpp" 2
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_data_output.fpp" 2
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_weno.fpp" 2
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 82%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_viscous.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_viscous.fpp" 2
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 86%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_rhs.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_rhs.fpp" 2
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 89%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/m_time_steppers.fpp.f90.o
/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/common/macros.fpp(28): warning #5117: Bad # preprocessor line
# 6 "/storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/autogen//storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MicroFC/src/simulation/m_time_steppers.fpp" 2
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------^
[ 93%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/m_derived_variables.f90.o
[ 96%] Building Fortran object CMakeFiles/simulation.dir/src/simulation/autogen/p_main.fpp.f90.o
[100%] Linking Fortran executable simulation
[100%] Built target simulation
Currently, patch types 14 to 19 are undocumented, and they perhaps should be.
While on the topic, wouldn't it be clearer if each patch type had its own derived type within patch_icpp(i)? For example, an STL patch (#83) could be defined as:
patch_icpp(2)%geometry: 20,
patch_icpp(2)%stl%filepath: 'path/to/stl/file',
patch_icpp(2)%stl%scale(2): 1.0,
patch_icpp(2)%stl%offset(3): 0.25,
This, in a sense, self-documents what properties each patch type can have. Every attribute that is common to all patches would live in the base patch_icpp(i)% scope.
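On the Fortran side, that proposal might look roughly like this (type and component names are hypothetical; note that namelist I/O constrains how the nested components can be declared):

```fortran
!> Sketch: per-geometry parameters nested inside the base patch type
type stl_patch_parameters
    character(len=200)  :: filepath
    real(kind(0d0))     :: scale(3), offset(3)
end type stl_patch_parameters

type ic_patch_parameters
    integer :: geometry
    ! ... attributes common to every patch type ...
    type(stl_patch_parameters) :: stl  !< only read when geometry == 20
end type ic_patch_parameters
```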
Move most start_up.f90 checks to the Python parser. Some will probably have to remain at runtime, but most can be moved.
Describe the bug
Some _idx and iXX variables are declared using type(bounds_info) instead of the type(int_bounds_info) derived type, so unexpected behavior is possible.
I suspect the first six occurrences below have this problem, plus the last one:
MFC/src $ grep -iR '(bounds_info)' ./*
./post_process/m_global_parameters.f90: type(bounds_info) :: cont_idx !< Indexes of first & last continuity eqns.
./post_process/m_global_parameters.f90: type(bounds_info) :: mom_idx !< Indexes of first & last momentum eqns.
./post_process/m_global_parameters.f90: type(bounds_info) :: adv_idx !< Indexes of first & last advection eqns.
./post_process/m_global_parameters.f90: type(bounds_info) :: internalEnergies_idx !< Indexes of first & last internal energy eqns.
./post_process/m_global_parameters.f90: type(bounds_info) :: stress_idx !< Indices of elastic stresses
./post_process/m_derived_variables.f90: type(bounds_info) :: iz1
./pre_process/m_global_parameters.fpp: type(bounds_info) :: x_domain, y_domain, z_domain !<
./pre_process/m_initial_condition.fpp: type(bounds_info) :: x_boundary, y_boundary, z_boundary !<
./simulation/m_global_parameters.fpp: type(bounds_info) :: stress_idx !< Indexes of first and last shear stress eqns.
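For reference, the two types differ only in component kind, so each fix is a one-word change at the declaration site (definitions paraphrased from m_derived_types; double-check against the source):

```fortran
type bounds_info        !< real-valued bounds, e.g. domain extents
    real(kind(0d0)) :: beg, end
end type bounds_info

type int_bounds_info    !< integer index ranges, e.g. first/last eqn. indices
    integer :: beg, end
end type int_bounds_info
```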
If I clone @henryleberre's fork and run ./mfc.sh, I immediately get:
[I]shb-m1pro: Downloads/MFC $ ./mfc.sh
./mfc.sh: line 223: /Users/spencer/Downloads/MFC/build/venv/bin/activate: No such file or directory
Collecting pyyaml
Using cached PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl (173 kB)
Installing collected packages: pyyaml
Successfully installed pyyaml-6.0
Collecting rich
Using cached rich-12.5.1-py3-none-any.whl (235 kB)
Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /Users/spencer/Library/Python/3.10/lib/python/site-packages (from rich) (2.13.0)
Collecting commonmark<0.10.0,>=0.9.0
Using cached commonmark-0.9.1-py2.py3-none-any.whl (51 kB)
Installing collected packages: commonmark, rich
Successfully installed commonmark-0.9.1 rich-12.5.1
Collecting fypp
Using cached fypp-3.1-py3-none-any.whl
Installing collected packages: fypp
Successfully installed fypp-3.1
usage: ./mfc.sh [-h] {run,test,build,clean} ...
Wecome to the MFC master script. This tool automates and manages building, testing, running, and cleaning of MFC in various configurations on
all supported platforms. The README documents this tool and its various commands in more detail. To get started, run ./mfc.sh build -h.
Thus, it still seems to work fine (aside from the 'Wecome' typo), but it now emits a warning on the first line of stdout. It didn't do this a few weeks ago, so I believe this is a recent change. It does the same thing on subsequent ./mfc.sh calls.
This code used to be in the m_bubbles module, but now it's also in s_compute_rhs? Also, s_compute_rhs is 1863 lines long.
Lines 1113 to 1321 in ecbcb72
Thoughts on removing _code from the src/ subdirectories? E.g., pre_process_code -> pre_process. @henryleberre @anandrdbz
Duplicate of https://github.com/MFlowCode/MFC-develop/issues/116 as discussed in #74.
Also, notice this specific line below:
[36m[m: Entering the Python virtual environment (venv).
The fix is just to skip any np=2 parallel test cases when MFC is built with --no-mpi.
[I]shb-m1pro: Downloads/MFC $ ./mfc.sh test -j 8
[mfc.sh]: Entering the Python virtual environment (venv).
___ ___ ___
/__/\ / /\ / /\ [email protected] [Darwin]
| |::\ / /:/_ / /:/ ---------------------------------------
| |:|:\ / /:/ /\ / /:/
__|__|:|\:\ / /:/ /:/ / /:/ ___
/__/::::| \:\ /__/:/ /:/ /__/:/ / /\ --jobs: 8
\ \:\~~\__\/ \ \:\/:/ \ \:\ / /:/ --mode: release-cpu
\ \:\ \ \::/ \ \:\ /:/
\ \:\ \ \:\ \ \:\/:/
\ \:\ \ \:\ \ \::/
\__\/ \__\/ \__\/ $ ./mfc.sh [build, run, test, clean] --help
Building pre_process:
$ cd "/Users/spencer/Downloads/MFC/build/pre_process" && cmake --build . -j 8 --target pre_process --config Release
ninja: no work to do.
$ cd "/Users/spencer/Downloads/MFC/build/pre_process" && cmake --install .
-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/pre_process
Building simulation:
$ cd "/Users/spencer/Downloads/MFC/build/simulation" && cmake --build . -j 8 --target simulation --config Release
ninja: no work to do.
$ cd "/Users/spencer/Downloads/MFC/build/simulation" && cmake --install .
-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/simulation
Test | from 5EB1467A to 177B85F6 (136 tests)
tests/UUID Summary
5EB1467A 1D (m=299,n=0,p=0) -> bc=-1
7633CC50 1D (m=299,n=0,p=0) -> bc=-7
B20A6EDF 1D (m=299,n=0,p=0) -> bc=-5
B9B1D51C 1D (m=299,n=0,p=0) -> bc=-2
BB633DEF 1D (m=299,n=0,p=0) -> bc=-6
DE580877 1D (m=299,n=0,p=0) -> bc=-9
C7B4AC8B 1D (m=299,n=0,p=0) -> bc=-8
A9612DE8 1D (m=299,n=0,p=0) -> bc=-4
187180D7 1D (m=299,n=0,p=0) -> bc=-10
A2B1419E 1D (m=299,n=0,p=0) -> bc=-11
1199FE98 1D (m=299,n=0,p=0) -> bc=-12
48F2140A 1D (m=299,n=0,p=0) -> bc=-3
986F8670 1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=3 -> (mapped_weno=F,mp_weno=F)
DEC0D29F 1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=3 -> (mapped_weno=T,mp_weno=F)
2CD5CD51 1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=5 -> (mapped_weno=F,mp_weno=F)
9B2B644F 1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=5 -> (mapped_weno=T,mp_weno=F)
3DF3CF18 1D (m=299,n=0,p=0) -> bc=-3 -> weno_order=5 -> (mapped_weno=F,mp_weno=T)
05677938 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=1 -> mixture_err=T
2594B368 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=1 -> avg_state=1
CB13F6BE 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=1 -> wave_speeds=2
17B60C53 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=2 -> mixture_err=T
70F8A4AE 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=2 -> avg_state=1
BA7CD3C6 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=2 -> wave_speeds=2
9E3BD925 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=1 -> riemann_solver=2 -> model_eqns=3
29C5E4F8 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=1 -> mixture_err=T
09C40057 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=1 -> avg_state=1
93D76284 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=1 -> wave_speeds=2
FC4F775E 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=1 -> mpp_lim=T
195C6466 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> mixture_err=T
68979A4C 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> avg_state=1
14ACFE2E 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> wave_speeds=2
9F0CEFA7 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> model_eqns=3
93C304EB 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> alt_soundspeed=T
EAFB27F8 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> riemann_solver=2 -> mpp_lim=T
C2C2056C 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> Viscous -> weno_Re_flux=F
D7DBAB59 1D (m=299,n=0,p=0) -> bc=-3 -> num_fluids=2 -> Viscous -> weno_Re_flux=T
A85CEC37 1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> polytropic=T -> bubble_model=3
A8D0F761 1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> polytropic=T -> bubble_model=2
FF085677 1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> polytropic=F -> bubble_model=2
1A6B6EB3 1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> nb=1
74FE6AA7 2D (m=49,n=39,p=0) -> bc=-1
C435F933 1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> qbmm=T -> bubble_model=3
3E60C4D1 2D (m=49,n=39,p=0) -> bc=-2
06601C13 1D (m=299,n=0,p=0) -> bc=-3 -> bubbles=T -> Monopole=T -> qbmm=T
F1051537 2D (m=49,n=39,p=0) -> bc=-5
23FE7630 2D (m=49,n=39,p=0) -> bc=-4
[36m[m: Entering the Python virtual environment (venv).
___ ___ ___
/__/\ / /\ / /\ [email protected] [Darwin]
| |::\ / /:/_ / /:/ ---------------------------------------
| |:|:\ / /:/ /\ / /:/
__|__|:|\:\ / /:/ /:/ / /:/ ___
/__/::::| \:\ /__/:/ /:/ /__/:/ / /\ --jobs: 1
\ \:\~~\__\/ \ \:\/:/ \ \:\ / /:/ --mode: release-cpu
\ \:\ \ \::/ \ \:\ /:/ --targets: pre_process and simulation
\ \:\ \ \:\ \ \:\/:/
\ \:\ \ \:\ \ \::/
\__\/ \__\/ \__\/ $ ./mfc.sh --help
Run
Acquiring /Users/spencer/Downloads/MFC/tests/9A665F13/case.py...
Configuration:
Input /Users/spencer/Downloads/MFC/tests/9A665F13/case.py
Job Name (-#) unnamed
Engine (-e) interactive
Nodes (-N) 1
CPUs (/node) (-n) 2
GPUs (/node) (-g) 0
MPI Binary (-b) mpirun
Running pre_process:
Running pre_process:
$ mpirun -np 2 "/Users/spencer/Downloads/MFC/build/install/bin/pre_process"
s_mpi_bcast_user_inputs not supported without MPI.
s_mpi_decompose_computational_domain not supported without MPI.
s_mpi_bcast_user_inputs not supported without MPI.
s_mpi_decompose_computational_domain not supported without MPI.
At line 99 of file /Users/spencer/Downloads/MFC/src/pre_process/m_data_output.f90 (unit = 1)
Fortran runtime error: Cannot open file './p_all/p0/0/x_cb.dat': File exists
Error termination. Backtrace:
s_mpi_barrier not supported without MPI.
Final Time 3.4029999999999993E-003
s_mpi_finalize not supported without MPI.
#0 0x102a53187
#1 0x102a53d37
#2 0x102a54613
#3 0x102b3a1e3
#4 0x102b3a3fb
#5 0x1025b39cf
#6 0x1025b93a7
#7 0x1025d407f
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[60189,1],1]
Exit code: 2
--------------------------------------------------------------------------
MFC is filled with lines like this:
MFC/src/simulation/m_hypoelastic.f90
Lines 30 to 32 in a39962c
where the precision is declared via kind(0d0).
We also have a ton of this:
MFC/src/simulation/m_riemann_solvers.fpp
Lines 547 to 556 in a39962c
where inline constants have their precision declared in a hard-coded way.
What would be better is declaring a separate constant that we can change as needed, as in this example, though there are many others.
A fix for this issue would remove all instances of 0d0 and kind(0d0) and replace them with a constant declared in the common/ directory. I think this is a suitable task for @anshgupta1234.
I realize one can force precision via compiler flags, but I believe we should avoid this because there is an established language standard.
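A minimal sketch of the proposed constant (the module and constant names are placeholders):

```fortran
!> Sketch: one place to define the working precision for all of MFC
module m_precision
    implicit none
    !> 15 significant digits / decimal exponent range 307 selects IEEE
    !> double today; changing this single line switches the precision.
    integer, parameter :: wp = selected_real_kind(15, 307)
end module m_precision
```

Declarations would then read real(wp) :: c_avg, and literals become 1.0_wp instead of 1d0.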
Implement continuous benchmarking via CI of a set of MFC examples.
A good example: https://github.com/benchmark-action/github-action-benchmark
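A sketch of what a workflow step could look like with that action (the ./mfc.sh bench output path and its JSON format are assumptions):

```yaml
# .github/workflows/bench.yml (sketch)
- name: Run MFC benchmarks
  run: ./mfc.sh bench

- name: Track results over time
  uses: benchmark-action/github-action-benchmark@v1
  with:
    # expects [{ "name": ..., "unit": ..., "value": ... }, ...]
    tool: customSmallerIsBetter
    output-file-path: bench.json
    github-token: ${{ secrets.GITHUB_TOKEN }}
    auto-push: true
```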
./mfc.sh test sometimes hangs, doing nothing. This occurs when I test the GPU build on various servers. Using -b XXXX can often fix the problem. However, a new user will have no idea what to do and will assume the tests are very slow (or have a bug). At the very least, we should kill the test if it isn't going to work and then recommend certain things to try.
Currently, if you have a fork of MFC with CI/Workflows enabled, GitHub tries to run the self-hosted job (from the matrix configuration) and stalls for 10+ hours waiting for a self-hosted runner to become available, before ultimately failing.
[I]lawn-100-70-34-65: Downloads/MFC $ ./mfc.sh run ./examples/2D_mixing_nobubble/case.py
___ ___ ___
/__/\ / /\ / /\ [email protected] [Darwin]
| |::\ / /:/_ / /:/ --------------------------------------------------
| |:|:\ / /:/ /\ / /:/
__|__|:|\:\ / /:/ /:/ / /:/ ___
/__/::::| \:\ /__/:/ /:/ /__/:/ / /\ --jobs: 1
\ \:\~~\__\/ \ \:\/:/ \ \:\ / /:/ --mode: release-cpu
\ \:\ \ \::/ \ \:\ /:/ --targets: pre_process, simulation, and post_process
\ \:\ \ \:\ \ \:\/:/
\ \:\ \ \:\ \ \::/
\__\/ \__\/ \__\/ $ ./mfc.sh [build, run, test, clean] --help
Run
Acquiring ./examples/2D_mixing_nobubble/case.py...
Configuration:
Input ./examples/2D_mixing_nobubble/case.py
Job Name (-#) unnamed
Engine (-e) interactive
Nodes (-N) 1
CPUs (/node) (-n) 1
GPUs (/node) (-g) 0
MPI Binary (-b) mpirun
Running pre_process:
Building pre_process:
$ cd "/Users/spencer/Downloads/MFC/build/pre_process" && cmake --build . -j 1 --target pre_process --config Release
ninja: no work to do.
$ cd "/Users/spencer/Downloads/MFC/build/pre_process" && cmake --install .
-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/pre_process
Running pre_process:
$ mpirun -np 1 "/Users/spencer/Downloads/MFC/build/install/bin/pre_process"
Final Time 0.10148899999999997
Done (in 0:00:05.448766)
Running simulation:
Building simulation:
$ cd "/Users/spencer/Downloads/MFC/build/simulation" && cmake --build . -j 1 --target simulation --config Release
ninja: no work to do.
$ cd "/Users/spencer/Downloads/MFC/build/simulation" && cmake --install .
-- Install configuration: "Release"
-- Up-to-date: /Users/spencer/Downloads/MFC/build/install/bin/simulation
Running simulation:
$ mpirun -np 1 "/Users/spencer/Downloads/MFC/build/install/bin/simulation"
At line 114 of file /Users/spencer/Downloads/MFC/src/simulation_code/autogen/m_start_up.f90 (unit = 1, file = './simulation.inp')
Fortran runtime error: Cannot match namelist object name .0t_step_save
Error termination. Backtrace:
Could not print backtrace: executable file is not an executable
#0 0x103753187
#1 0x103753d37
#2 0x103754613
#3 0x103838f9b
#4 0x1038413d3
#5 0x103841687
#6 0x102ebe227
#7 0x102ecd7e3
#8 0x102f5a0cf
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[42162,1],0]
Exit code: 2
--------------------------------------------------------------------------
Error: Failed to execute command "cd "/Users/spencer/Downloads/MFC/examples/2D_mixing_nobubble" && mpirun -np 1
"/Users/spencer/Downloads/MFC/build/install/bin/simulation"".
It seems as though compressing (and subsequently decompressing) the halo regions sent over MPI might improve performance. Lawrence Livermore National Laboratory's ZFP seems like a good candidate.
Idea credit: @sbryngelson & @henryleberre
Issue: Some subroutines are both manually inlined and still exist as separate subroutines.
For example: in m_weno, the subroutine s_preserve_monotonicity exists but is never called because it was manually inlined. I suspect there are others like this; it is just the one I found.
Question: Is it necessary to manually copy/paste this subroutine into the s_weno subroutine? What are the performance tradeoffs?
Expected action: I suspect the additional cost associated with manually placing these routines into the main code is not worth the longer, more confusing code it creates. If we pursue this strategy, we could theoretically just have one extremely long subroutine that "does everything" and reap some small performance benefit, but I suspect that we agree this isn't a good idea.
Current Behavior
Removing a directory on *nix systems through a call to the s_delete_directory subroutine invokes rm -rf to remove the directory along with any files it contains. Calling this on the wrong path with elevated permissions can be dangerous. Only directories created by the program should be deleted, which should not require the force flag if filesystem permissions are correct. If filesystem permissions are incorrect, it is not the subroutine's responsibility to rectify them.
Proposed Change
Change the system call to rm -r
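The change itself is one flag; sketched here against the standard execute_command_line interface (the real subroutine lives elsewhere in the codebase, so surrounding details may differ):

```fortran
!> Sketch: delete a directory the program created, without forcing
subroutine s_delete_directory(path)
    character(len=*), intent(in) :: path
    ! 'rm -r' (no -f) still removes contained files, but fails loudly on
    ! permission problems instead of silently forcing removal
    call execute_command_line('rm -r "'//trim(path)//'"')
end subroutine s_delete_directory
```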
./mfc.sh builds Python packages for the venv on first launch without letting the user know what it's doing. I think this is a bit confusing; a message before and after would be useful!
Title says it all. mflowcode.github.io does not seem to link back to the MFC org page or the MFC repo. I think that should be front and center.
Full 6 equation model with phase change on GPUs.
This has been a persistent issue for years. It is unclear which q_prim_vf(i) corresponds to which variable, for which cases, for each i. Likewise for the q_cons_vf(i) it gets paired with. We need a resolution for this and to add it to the docs. Probably a simple table with two columns that lists the variables in the order they would appear, should they exist. For example, I believe sub-grid bubble variables come before hypoelastic ones, but I'm not 100% sure. What about the 6-equation model?
Describe the bug
Some malformed case.py files will be processed into a malformed pre_process.inp file that fails to be read into the namelist by the read in s_read_input_file(). An undescriptive error message and a stack trace are shown when the program aborts.
To Reproduce
One example of a malformed file:
1. A case.py with a non-numeric string
2. mfc.sh run path-to-case-py -t pre_process
Expected behavior
A clear error message informing the user of the malformed pre_process.inp file.
Proposed Fix
Check the iostat flag and print a useful error message before aborting. E.g.:
if (iostatus /= 0) then
backspace(1)
read(1, fmt='(A)') line
print '(A)', 'Invalid line in pre_process.inp around: '//trim(line)
print '(A)', 'Exiting ...'
call s_mpi_abort()
end if
./mfc.sh load doesn't work on Phoenix. It exits in the following fashion:
login-phoenix-4: p-sbryngelson3-0/MFC $ ./mfc.sh load
mfc: Select a system:
mfc: ORNL: Ascent (a), Crusher (c), Summit (s), Wombat (w)
mfc: ACCESS: Bridges2 (b), Expanse (e)
mfc: GaTech: Phoenix (p)
mfc: CALTECH: Richardson (r)
mfc: (a/c/s/w/b/e/p/r): p
mfc:
mfc: Select configuration:
mfc: - CPU (c)
mfc: - GPU (g)
mfc: (c/g): c
mfc:
mfc: Loading modules for CPU mode:
mfc: - Load intel/19.0.5 - [SUCCESS]
mfc: - Load mvapich2/2.3.2 [SUCCESS]
mfc: - Load python/3.7.4 - [SUCCESS]
mfc: - Load cmake/3.20.3 - [SUCCESS]
mfc: OK > All modules have been loaded.
./mfc.sh: line 211: return: can only `return' from a function or sourced script
mfc: Found CMake: /storage/home/hcoda1/6/sbryngelson3/p-sbryngelson3-0/MFC/build/cmake/bin/cmake.
mfc: OK > (venv) Entered the Python virtual environment.
usage: ./mfc.sh [-h] {run,test,build,clean,bench} ...
Welcome to the MFC master script. This tool automates and manages building,
testing, running, and cleaning of MFC in various configurations on all
supported platforms. The README documents this tool and its various commands
in more detail. To get started, run ./mfc.sh build -h.
positional arguments:
{run,test,build,clean,bench}
run Run a case with MFC.
test Run MFC's test suite.
build Build MFC and its dependencies.
clean Clean build artifacts.
bench Benchmark MFC (for CI).
optional arguments:
-h, --help show this help message and exit
mfc: (venv) Exiting the Python virtual environment.
where ./mfc.sh: line 211: return: can only `return' from a function or sourced script is the relevant problem.
In the end, it does not end up loading any of the modules it purports to.
The equation of state (EOS) is used to compute the pressure from the conserved variables. The EOS changes slightly between models. Its implementation is repeated in several places in the code, including m_variables_conversion and m_data_output (both serial data output and probe output), and likely more. Following the DRY principle (don't repeat yourself), we should fix this. It has also caused multiple false bugs in the past.
If CMake finds an issue with your compiler, throw an additional error message telling the user to consider which modules they have loaded and to check whether the compiler environment variables are set, e.g.:
CC=gcc CXX=g++ FC=gfortran