MagIC

Foreword

  • MagIC is a numerical code that simulates fluid dynamics in spherical shell geometry. MagIC solves the Navier-Stokes equation including the Coriolis force, optionally coupled with an induction equation for magnetohydrodynamics (MHD), a temperature (or entropy) equation and an equation for chemical composition, under both the anelastic and the Boussinesq approximations (a schematic form of the Boussinesq system is sketched after this list).

  • MagIC uses either Chebyshev polynomials or finite differences in the radial direction and a spherical harmonic decomposition in the azimuthal and latitudinal directions. MagIC supports several implicit-explicit (IMEX) time schemes in which the nonlinear terms and the Coriolis force are treated explicitly, while the remaining linear terms are treated implicitly.

  • MagIC is written in Fortran and designed to be used on supercomputing clusters. It thus relies on a hybrid parallelisation scheme using both OpenMP and MPI. Postprocessing functions written in Python (requiring matplotlib and scipy) are also provided to ease data analysis.

  • MagIC is free software. It can be used, modified and redistributed under the terms of the GNU GPL v3 licence.

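As a rough orientation, and assuming the commonly used viscous-time non-dimensionalisation of the Boussinesq dynamo benchmark (the precise formulation actually solved, including the anelastic and compositional variants, is given in the online documentation at https://magic-sph.github.io), the system of equations reads schematically

\[
E\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u} - \nabla^2\mathbf{u}\right) + 2\,\hat{\mathbf{z}}\times\mathbf{u} + \nabla p
= \frac{Ra\,E}{Pr}\, g(r)\, T\,\hat{\mathbf{r}} + \frac{1}{Pm}\left(\nabla\times\mathbf{B}\right)\times\mathbf{B},
\]
\[
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = \frac{1}{Pr}\nabla^2 T, \qquad
\frac{\partial \mathbf{B}}{\partial t} = \nabla\times\left(\mathbf{u}\times\mathbf{B}\right) + \frac{1}{Pm}\nabla^2\mathbf{B},
\]
\[
\nabla\cdot\mathbf{u} = 0, \qquad \nabla\cdot\mathbf{B} = 0,
\]

where E, Ra, Pr and Pm denote the Ekman, Rayleigh, Prandtl and magnetic Prandtl numbers.
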
Quickly start using MagIC

1) In order to check out the code, use the command

$ git clone https://github.com/magic-sph/magic.git

or via SSH (this requires a public SSH key):

$ git clone ssh://[email protected]/magic-sph/magic.git

2) Go to the root directory and source the environment variables (useful for python and auto-tests)

$ cd magic

If you are using sh, bash or zsh as default shell (echo $SHELL), just use the command

$ source sourceme.sh

If you are using csh or tcsh, then use the following command

$ source sourceme.csh

3) Install SHTns (recommended)

SHTns is an open-source library for spherical harmonic transforms. It is significantly faster than the native transforms implemented in MagIC, so installing it is recommended (though not mandatory). To install the library, first define a C compiler:

$ export CC=gcc

or

$ export CC=icc

Then make sure an FFT library such as FFTW or the MKL is installed on the target machine, and use the install script

$ cd $MAGIC_HOME/bin
$ ./install-shtns.sh

or install it manually after downloading and extracting the latest version from the SHTns website:

$ ./configure --enable-openmp --prefix=$HOME/local

if FFTW is used or

$ ./configure --enable-openmp --enable-ishioka --enable-magic-layout --prefix=$HOME/local --enable-mkl

if the MKL is used. Additional options may be required depending on the machine (check the SHTns website). Then compile and install the library:

$ make
$ make install

4) Set up your compiler and compile the code

a) Using CMake (recommended)

Create a directory where the sources will be built

$ mkdir $MAGIC_HOME/build
$ cd $MAGIC_HOME/build

Set up your Fortran compiler

$ export FC=mpiifort

or

$ export FC=mpif90

Compile and produce the executable (options can be passed to cmake using -DOPTION=value, e.g. -DUSE_SHTNS=yes or -DUSE_OMP=yes)

$ cmake .. -DUSE_SHTNS=yes
$ make -j

The executable magic.exe has been produced!

b) Using make (backup solution)

Go to the source directory

$ cd $MAGIC_HOME/src

Edit the Makefile with your favourite editor and specify your compiler (intel, gnu, portland) and additional compiler options (SHTns, production run or not, debug mode, MKL library, ...)

$ make -j

The executable magic.exe has been produced!

5) Go to the samples directory and check that everything is fine

$ cd $MAGIC_HOME/samples
$ ./magic_wizard.py --use-mpi --nranks 4 --mpicmd mpiexec

If everything is correctly set, all auto-tests should pass!

6) You're ready for a production run

$ cd $SCRATCHDIR/run
$ cp $MAGIC_HOME/build/magic.exe .
$ cp $MAGIC_HOME/samples/hydro_bench_anel/input.nml .

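The copied input.nml is a plain Fortran namelist file organised in groups (&grid, &control, &phys_param, &start_field, &output_control, &mantle, &inner_core, ...). As a rough orientation only, a heavily trimmed sketch is shown below; the values are illustrative (borrowed from the varCond sample namelist reproduced further down this page), and you should always start from the complete sample file rather than from this sketch.

&grid
 n_r_max      = 49,
 n_cheb_max   = 47,
 n_phi_tot    = 96,
 minc         = 1,
/
&control
 tag          = "test",
 n_time_steps = 500,
 dtMax        = 5.0D-6,
/
&phys_param
 ra           = 1.5D7,
 ek           = 1.0D-4,
 pr           = 1.0D0,
 prmag        = 1.0D0,
 radratio     = 0.2D0,
/
&start_field
 l_start_file = .false.,
 init_s1      = 505,
 amp_s1       = 1.0D-1,
/
&output_control
 n_log_step   = 10,
/
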
Then change the input namelist to the setup you want and run the code:

$ export OMP_NUM_THREADS=2
$ export KMP_AFFINITY=verbose,granularity=core,compact,1
$ mpiexec -n 4 ./magic.exe input.nml

7) Data visualisation and postprocessing

a) Set up your Python environment (ipython, scipy and matplotlib are needed)

b) Modify magic.cfg according to your machine in case the auto-configuration didn't work

$ vi $MAGIC_HOME/python/magic/magic.cfg

c) You can now import the python classes:

python> from magic import *

and use them to read time series, graphic files, movies, ...

python> ts = MagicTs(field='e_kin', all=True)
python> s = Surf()
python> s.equat(field='vr')
python> ...

8) Modify the code and submit your modifications

a) Before committing your modifications, always make sure that the auto-tests pass correctly.

b) Try to follow the same coding style rules as in the rest of the code (a short illustrative snippet is given after the list):

  1. Never use TABS; always use SPACES instead
  2. Use 3 spaces for indentation
  3. Never use capital letters for variable declarations or Fortran keywords
  4. Never use dimension(len) to declare an array; use real(cp) :: data(len) instead
  5. Always use the default precision (cp) when introducing new variables

More on that topic can be found in the online documentation.
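
As a brief, hypothetical illustration of these conventions (this is not a routine from the MagIC sources, and the module name precision_mod providing the kind parameter cp is an assumption):

subroutine fill_profile(n_r, prof)
   ! Hypothetical example illustrating the style: spaces only, 3-space
   ! indentation, lowercase keywords, arrays declared as real(cp) :: prof(n_r).

   use precision_mod, only: cp   ! assumed name of the module defining the default precision cp

   implicit none

   integer,  intent(in) :: n_r
   real(cp), intent(out) :: prof(n_r)

   integer :: n_radius

   do n_radius=1,n_r
      prof(n_radius)=real(n_radius,kind=cp)/real(n_r,kind=cp)
   end do

end subroutine fill_profile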

9) Make sure you cite the relevant MagIC reference papers (listed in the online documentation) if you intend to publish scientific results obtained with the code.

MagIC has been tested and validated against several international dynamo benchmarks.

Contributors

ankitbarik, bputigny, fgerick, jwicht, ldvduarte, rraynaud, t-schwaiger, tgastine, thtassin


Issues

ipython --pylab prevents building the f2py libraries

Report from Vincent Boening: after installing MagIC from scratch and starting Python for the first time with "ipython --pylab", the command "from magic import *" raises an error while loading the Legendre transforms (after the compilation of the Fortran modules has completed, it also reports "python" instead of "f2py"), and the session cannot recover from it. If, however, the very first session after installing is started with plain "ipython" (without "--pylab"), everything works; from the second session onwards the pylab option can be used again without problems. Only the very first session must avoid the pylab option.

angular momentum correction with SBDF3

Something weird is happening on restarts of SBDF-like schemes when the angular momentum (AM) correction is turned on (coupled-Earth-like setups): the toroidal axisymmetric energy jumps at the first iteration of each restart. This may be because the implicit right-hand side is not computed for SBDF schemes; in that case the AM correction would not work as expected. Further investigation is needed.

Questions about nRotIC

I'm really sorry that I didn't raise all the relevant questions last time. I would like to ask what the units of columns 2, 3 and 4 in rot.TAG are. After adding "kbotb=3, sigma_ratio=1.D0, nRotIC=1," to the benchmark from dynamo_benchmark, I checked the rot.TAG file and found that the rotation rate of the inner core was huge, reaching a value of 76. If the unit of this rotation rate is also rad per time scale, is this value normal?

parR.TAG has redundant columns

When l_anel is .true., parR.TAG contains additional radial profiles of length scales, but because the density profile rho cancels between the numerator and the denominator, they are actually equal to the regular ones. This needs to be fixed and possibly replaced by a radial profile of dmV.

RMS calculations

RMS calculations need to be ported again from the old MagIC branch. The force balance diagnostics are thus not trustworthy in the current version.

dtVrms with FD

Some outputs of the RMS spectra, like Arc or ArcMag, seem to be wrong when using finite differences (FD). This might be due to an error in the handling of dpkindr in RMS.

Problem when I use data visualisation and postprocessing on Unix

When I use the data visualisation and postprocessing tools to run a sample on Unix, I hit an error that stops the sample from running. To be precise, Python crashes when running the case.
2018-11-16 14:07:48.195 Python[27926:1769878] -[NSApplication _setup:]: unrecognized selector sent to instance 0x7fa10ff1afd0
2018-11-16 14:07:48.198 Python[27926:1769878] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSApplication _setup:]: unrecognized selector sent to instance 0x7fa10ff1afd0'
*** First throw call stack:
(
0 CoreFoundation 0x00007fff446202db __exceptionPreprocess + 171
1 libobjc.A.dylib 0x00007fff6b7d5c76 objc_exception_throw + 48
2 CoreFoundation 0x00007fff446b8db4 -[NSObject(NSObject) doesNotRecognizeSelector:] + 132
3 CoreFoundation 0x00007fff44596820 forwarding + 1456
4 CoreFoundation 0x00007fff445961e8 _CF_forwarding_prep_0 + 120
5 Tk 0x00007fff50cfe318 TkpInit + 467
6 Tk 0x00007fff50c7d252 Tk_Init + 1710
7 _tkinter.so 0x00000001129e0bc4 Tcl_AppInit + 84
8 _tkinter.so 0x00000001129e058f Tkinter_Create + 1069
9 Python 0x0000000109d02ea6 PyEval_EvalFrameEx + 19200
10 Python 0x0000000109cfe18c PyEval_EvalCodeEx + 1540
11 Python 0x0000000109ca49cd function_call + 327
12 Python 0x0000000109c86de1 PyObject_Call + 97
13 Python 0x0000000109c916d0 instancemethod_call + 163
14 Python 0x0000000109c86de1 PyObject_Call + 97
15 Python 0x0000000109d06b28 PyEval_CallObjectWithKeywords + 159
16 Python 0x0000000109c8f9b0 PyInstance_New + 123
17 Python 0x0000000109c86de1 PyObject_Call + 97
18 Python 0x0000000109d02e90 PyEval_EvalFrameEx + 19178
19 Python 0x0000000109cfe18c PyEval_EvalCodeEx + 1540
20 Python 0x0000000109d071ed fast_function + 284
21 Python 0x0000000109d02d90 PyEval_EvalFrameEx + 18922
22 Python 0x0000000109cfe18c PyEval_EvalCodeEx + 1540
23 Python 0x0000000109ca49cd function_call + 327
24 Python 0x0000000109c86de1 PyObject_Call + 97
25 Python 0x0000000109d03767 PyEval_EvalFrameEx + 21441
26 Python 0x0000000109cfe18c PyEval_EvalCodeEx + 1540
27 Python 0x0000000109d071ed fast_function + 284
28 Python 0x0000000109d02d90 PyEval_EvalFrameEx + 18922
29 Python 0x0000000109d0718f fast_function + 190
30 Python 0x0000000109d02d90 PyEval_EvalFrameEx + 18922
31 Python 0x0000000109cfe18c PyEval_EvalCodeEx + 1540
32 Python 0x0000000109ca49cd function_call + 327
33 Python 0x0000000109c86de1 PyObject_Call + 97
34 Python 0x0000000109c916d0 instancemethod_call + 163
35 Python 0x0000000109c86de1 PyObject_Call + 97
36 Python 0x0000000109d06b28 PyEval_CallObjectWithKeywords + 159
37 Python 0x0000000109c8f9b0 PyInstance_New + 123
38 Python 0x0000000109c86de1 PyObject_Call + 97
39 Python 0x0000000109d02e90 PyEval_EvalFrameEx + 19178
40 Python 0x0000000109cfe18c PyEval_EvalCodeEx + 1540
41 Python 0x0000000109cfdb82 PyEval_EvalCode + 32
42 Python 0x0000000109d1f577 run_mod + 49
43 Python 0x0000000109d1f3b4 PyRun_InteractiveOneFlags + 344
44 Python 0x0000000109d1ee47 PyRun_InteractiveLoopFlags + 87
45 Python 0x0000000109d1ed5c PyRun_AnyFileExFlags + 60
46 Python 0x0000000109d30c10 Py_Main + 3136
47 libdyld.dylib 0x00007fff6c3ef015 start + 1
)
libc++abi.dylib: terminating with uncaught exception of type NSException
Abort trap: 6
I have tried it on multiple versions of OS X, but I have been unable to solve the problem. My current system is 10.13.6.

dentropy0 in updateS.f90

There is something wrong when dentropy0 is treated explicitly (i.e. in updateWP.f90). This is correct when it is treated implicitly (i.e. in updateWPS.f90).

lbDiss in output.f90

It seems there is something wrong with lbDiss in the output file par.TAG (at least a factor 2 is required in the numerator).

Problem when running the sample named varCond

When running the sample named varCond, I hit an error that stops the sample from running.
mpiexec -n 1 magic.exe input.nml
!--- Program MagIC 5.6 ---!
! Start time:
2018/11/12 12:07:18
! Reading grid parameters!
! Reading control parameters!
! Reading physical parameters!
! Reading start information!
! Reading output information!
! Reading inner core information!
! Reading mantle information!
! Reading B external parameters!
! No B_external namelist found!
0: lmStartB= 1, lmStopB= 561
Using snake ordering.
1 1 561 561
rank no 0 has l1m0 in block 1
!-- Blocking information:

! Number of LM-blocks: 1
! Size of LM-blocks: 561
! nThreads: 1

! Number of theta blocks: 12
! size of theta blocks: 4
! ideal size (nfs): 4
Using rIteration type: rIterThetaBlocking_seq_t

! This is an anelastic model
! You use entropy and pressure as thermodynamic variables
! You use entropy diffusion
! You solve two small matrices
! The key parameters are the following
! DissNb = 1.081202E+00
! ThExpNb= 1.000000E+00
! GrunNb = 5.000000E-01
! N_rho = 1.000000E+00
! pol_ind= 2.000000E+00

! Const. entropy at outer boundary S = -3.846154E-02
! Const. entropy at inner boundary S = 9.615385E-01
! Total vol. buoy. source = 0.000000E+00
-----> rank 0 has 103321356 B allocated

&grid
n_r_max = 49,
n_cheb_max = 47,
n_phi_tot = 96,
n_theta_axi = 0,
n_r_ic_max = 17,
n_cheb_ic_max = 15,
minc = 1,
nalias = 20,
l_axi = F,
fd_stretch = 3.000000E-01,
fd_ratio = 1.000000E-01,
fd_order = 2,
fd_order_bound = 2,
/
&control
mode = 0,
tag = "test",
n_time_steps = 500,
n_tScale = 0,
n_lScale = 0,
alpha = 6.000000E-01,
enscale = 1.000000E+00,
l_update_v = T,
l_update_b = T,
l_update_s = T,
l_update_xi = T,
l_newmap = F,
map_function = "ARCSIN",
alph1 = 8.000000E-01,
alph2 = 0.000000E+00,
dtstart = 0.000000E+00,
dtMax = 5.000000E-06,
courfac = 2.500000E+00,
alffac = 1.000000E+00,
l_cour_alf_damp = T,
intfac = 1.500000E-01,
n_cour_step = 6,
difnu = 0.000000E+00,
difeta = 0.000000E+00,
difkap = 0.000000E+00,
difchem = 0.000000E+00,
ldif = 1,
ldifexp = -1,
l_correct_AMe = T,
l_correct_AMz = T,
l_non_rot = F,
l_runTimeLimit = T,
runHours = 0,
runMinutes = 50,
runSeconds = 0,
tEND = 0.000000E+00,
radial_scheme = "CHEB",
polo_flow_eq = "WP",
anelastic_flavour= "NONE",
thermo_variable = "NONE",
/
&phys_param
ra = 1.500000E+07,
raxi = 0.000000E+00,
pr = 1.000000E+00,
sc = 1.000000E+01,
prmag = 1.000000E+00,
ek = 1.000000E-04,
po = 0.000000E+00,
prec_angle = 2.350000E+01,
po_diff = 0.000000E+00,
diff_prec_angle = 2.350000E+01,
epsc0 = 0.000000E+00,
epscxi0 = 0.000000E+00,
DissNb = 1.081202E+00,
strat = 1.000000E+00,
polind = 2.000000E+00,
ThExpNb = 1.000000E+00,
epsS = 0.000000E+00,
cmbHflux = 0.000000E+00,
slopeStrat = 2.000000E+01,
rStrat = 1.300000E+00,
ampStrat = 1.000000E+01,
thickStrat = 1.000000E-01,
nVarEntropyGrad = 0,
radratio = 2.000000E-01,
l_isothermal = F,
interior_model = "NONE",
g0 = 0.000000E+00,
g1 = 1.000000E+00,
g2 = 0.000000E+00,
ktopv = 1,
kbotv = 2,
ktopb = 1,
kbotb = 3,
ktopp = 1,
ktops = 1,
kbots = 1,
Bottom boundary l,m,S:
0 0 9.615385E-01 0.000000E+00
Top boundary l,m,S:
0 0 -3.846154E-02 0.000000E+00
impS = 0,
nVarCond = 2,
con_DecRate = 9.000000E+00,
con_RadRatio = 8.000000E-01,
con_LambdaMatch = 5.000000E-01,
con_LambdaOut = 1.000000E-01,
con_FuncWidth = 2.500000E-01,
r_LCR = 2.000000E+00,
difExp = -5.000000E-01,
nVarDiff = 0,
nVarVisc = 0,
nVarEps = 0,
/
&B_external
n_imp = 0,
l_imp = 1,
rrMP = 0.000000E+00,
amp_imp = 0.000000E+00,
expo_imp = 0.000000E+00,
bmax_imp = 0.000000E+00,
l_curr = F,
amp_curr = 0.000000E+00,
/
&start_field
l_start_file = F,
start_file = "rst_end.tag",
inform = -1,
l_reset_t = F,
scale_s = 1.000000E+00,
scale_xi = 1.000000E+00,
scale_b = 1.000000E+00,
scale_v = 1.000000E+00,
tipdipole = 0.000000E+00,
init_s1 = 505,
init_s2 = 0,
init_v1 = 0,
init_b1 = 3,
init_xi1 = 0,
init_xi2 = 0,
imagcon = 0,
amp_s1 = 1.000000E-01,
amp_s2 = 0.000000E+00,
amp_v1 = 0.000000E+00,
amp_b1 = 5.000000E+00,
amp_xi1 = 0.000000E+00,
amp_xi2 = 0.000000E+00,
/
&output_control
n_graph_step = 0,
n_graphs = 0,
t_graph_start = 0.000000E+00,
t_graph_stop = 0.000000E+00,
dt_graph = 0.000000E+00,
n_rst_step = 0,
n_rsts = 1,
t_rst_start = 0.000000E+00,
t_rst_stop = 0.000000E+00,
dt_rst = 0.000000E+00,
n_stores = 1,
n_log_step = 10,
n_logs = 0,
t_log_start = 0.000000E+00,
t_log_stop = 0.000000E+00,
dt_log = 0.000000E+00,
n_spec_step = 0,
n_specs = 1,
t_spec_start = 0.000000E+00,
t_spec_stop = 0.000000E+00,
dt_spec = 0.000000E+00,
n_cmb_step = 50,
n_cmbs = 0,
t_cmb_start = 0.000000E+00,
t_cmb_stop = 0.000000E+00,
dt_cmb = 0.000000E+00,
n_r_field_step = 0,
n_r_fields = 0,
t_r_field_start = 0.000000E+00,
t_r_field_stop = 0.000000E+00,
dt_r_field = 0.000000E+00,
l_movie = T,
n_movie_step = 100,
n_movie_frames = 0,
t_movie_start = 0.000000E+00,
t_movie_stop = 0.000000E+00,
dt_movie = 0.000000E+00,
movie = FL,
movie = SL,
movie = Br SUR,
l_probe = F,
n_probe_step = 0,
n_probe_out = 0,
t_probe_start = 0.000000E+00,
t_probe_stop = 0.000000E+00,
dt_probe = 0.000000E+00,
r_probe = 0.000000E+00,
theta_probe = 0.000000E+00,
n_phi_probes = 0,
l_average = F,
l_cmb_field = T,
l_dt_cmb_field = F,
l_save_out = F,
l_true_time = F,
lVerbose = F,
l_rMagSpec = T,
l_DTrMagSpec = T,
l_max_cmb = 14,
l_r_field = T,
l_r_fieldT = F,
l_max_r = 32,
n_r_step = 2,
n_coeff_r = 2,
n_coeff_r = 4,
n_coeff_r = 6,
n_coeff_r = 8,
n_coeff_r = 10,
l_earth_likeness= F,
l_max_comp = 8,
l_hel = T,
l_AM = T,
l_power = F,
l_viscBcCalc = F,
l_fluxProfs = F,
l_perpPar = F,
l_PressGraph = T,
l_energy_modes = F,
m_max_modes = 14,
l_drift = F,
l_iner = F,
l_TO = F,
l_TOmovie = F,
l_PV = T,
l_storeBpot = F,
l_storeVpot = F,
l_RMS = F,
l_par = F,
l_corrMov = F,
rCut = 0.000000E+00,
rDea = 0.000000E+00,
/
&mantle
conductance_ma = 0.000000E+00,
rho_ratio_ma = 1.000000E+00,
nRotMa = 0,
omega_ma1 = 0.000000E+00,
omegaOsz_ma1 = 0.000000E+00,
tShift_ma1 = 0.000000E+00,
omega_ma2 = 0.000000E+00,
omegaOsz_ma2 = 0.000000E+00,
tShift_ma2 = 0.000000E+00,
amp_RiMaAsym = 0.000000E+00,
omega_RiMaAsym = 0.000000E+00,
m_RiMaAsym = 0,
amp_RiMaSym = 0.000000E+00,
omega_RiMaSym = 0.000000E+00,
m_RiMaSym = 0,
/
&inner_core
sigma_ratio = 1.000000E+00,
rho_ratio_ic = 1.000000E+00,
nRotIc = 0,
omega_ic1 = 0.000000E+00,
omegaOsz_ic1 = 0.000000E+00,
tShift_ic1 = 0.000000E+00,
omega_ic2 = 0.000000E+00,
omegaOsz_ic2 = 0.000000E+00,
tShift_ic2 = 0.000000E+00,
BIC = 0.000000E+00,
amp_RiIcAsym = 0.000000E+00,
omega_RiIcAsym = 0.000000E+00,
m_RiIcAsym = 0,
amp_RiIcSym = 0.000000E+00,
omega_RiIcSym = 0.000000E+00,
m_RiIcSym = 0,
/

! Using dtMax time step: 5.000000E-06
! NO STARTFILE READ, SETTING Z10!

! Entropy initialized at mode: l= 5 m= 5 Ampl= 0.10000
! Only l=m=0 comp. in tops:

! Self consistent dynamo integration.
! Normalized OC moment of inertia: 7.379734E+00
! Normalized IC moment of inertia: 4.447778E-03
! Normalized MA moment of inertia: 2.941732E+02
! Normalized IC volume : 6.544985E-02
! Normalized OC volume : 8.115781E+00
! Normalized IC surface : 7.853982E-01
! Normalized OC surface : 1.963495E+01

! Grid parameters:
n_r_max = 49 = number of radial grid points
n_cheb_max = 47
max cheb deg.= 46
n_phi_max = 96 = no of longitude grid points
n_theta_max = 48 = no of latitude grid points
n_r_ic_max = 17 = number of radial grid points in IC
n_cheb_ic_max= 14
max cheb deg = 28
l_max = 32 = max degree of Plm
m_max = 32 = max oder of Plm
lm_max = 561 = no of l/m combinations
minc = 1 = longitude symmetry wave no
nalias = 20 = spher. harm. deal. factor

! STARTING TIME INTEGRATION AT:
start_time = 0.0000000000E+00
step no = 0
start dt = 5.0000E-06
start dtNew= 5.0000E-06

! Starting time integration!

! WRITING MOVIE FRAME NO 1 at time/step 0.000000E+00 1
forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image PC Routine Line Source
magic.exe 000000000072B253 Unknown Unknown Unknown
libpthread-2.28.s 00007FCDB95E1DD0 Unknown Unknown Unknown
magic.exe 0000000000537090 geos_mod_mp_outpv 767 outGeos.f90
magic.exe 0000000000598651 output_mod_mp_out 1021 output.f90
magic.exe 000000000067D6C7 step_time_mod_mp_ 1084 step_time.f90
magic.exe 0000000000407DEC MAIN__ 367 magic.f90
magic.exe 0000000000406B22 Unknown Unknown Unknown
libc-2.28.so 00007FCDB927C09B __libc_start_main Unknown Unknown
magic.exe 0000000000406A2A Unknown Unknown Unknown
I found that the error happens when I set "&grid" as in this sample. When I copy these "&grid" parameters to other samples, the error also happens.
My system is Ubuntu 18.10, but I hit the same problem on Ubuntu 18.04 and Ubuntu 16.04. Should I change my environment or make other adjustments?

Fortran Libs not importing properly

With build_lib set to true, from magic import * fails. Specifically, it has trouble with import magic.legendre as leg, which leads me to believe that the Fortran libraries are not being built correctly. Any help would be greatly appreciated! Traceback:

Please wait: building greader_single...
Please wait: building greader_double...
Please wait: building lmrreader_single...
Please wait: building Legendre transforms...
Please wait: building vtklib...

ModuleNotFoundError Traceback (most recent call last)
in
----> 1 from magic import *

~/magic/python/magic/__init__.py in
14 from .checker import *
15 from .thHeat import *
---> 16 from .coeff import *
17 from .radialSpectra import *
18 from .checkpoint import *

~/magic/python/magic/coeff.py in
4 import numpy as np
5 import matplotlib.pyplot as plt
----> 6 from .spectralTransforms import SpectralTransforms
7 from magic.setup import labTex
8 import copy

~/magic/python/magic/spectralTransforms.py in
4
5 if buildSo:
----> 6 import magic.legendre as leg
7
8

ModuleNotFoundError: No module named 'magic.legendre'

Segmentation fault when using axisymmetric SHTns functions on KNL nodes

Some outputs (power, perpPar, TO, etc.) call shtns.axi_to_spat and shtns.spat_to_SH_axi and lead to a segmentation fault on KNL nodes.
I'm using the latest version of the code and SHTns 3.2.2 compiled with the configuration command:
./configure --enable-mkl --enable-knl --enable-magic-layout --prefix=$HOME/
Without these outputs, the code otherwise works well.

power.TAG outputs

In case of an anelastic model, the viscous dissipation computed in the power.TAG output file is wrong.

Non-dimensional temperature does not show 0 and 1 at boundaries

Hello all,
I am trying to simulate a simple convection problem in a spherical shell with constant boundary temperatures, where I intend to have 1 at the inner boundary and 0 at the outer boundary. For that I give s_top = 0 0 0 0 and s_bot = 0 0 1 0 in the input file, but it overwrites my inputs and the log file shows:
! Const. temp. at outer boundary S = -3.288591E-01
! Const. temp. at inner boundary S = 6.711409E-01

phi-slice movies

phi-slice movies are not working in MagIC5: axisymmetric averages are fine, but slices are not. This is due to a wrong MPI communicator in this case: phi-slices have a size of 2*n_theta_max x n_r_max, while phi-averages have a size of n_theta_max x n_r_max.

Crank Nicolson when starting from zero

The assembly of the right-hand side of the Crank-Nicolson scheme is wrong when starting from zero (the calculation of the linear terms of the current time step is not done when one starts from zero). This is however correct when restarting from a restart file. Fixing it would break some auto-tests though.

compilation failed with OpenMP

Configure with:
cmake .. -DUSE_SHTNS=yes -DUSE_OMP=yes
With gnu fortran 8.2.1 I get the following error when compiling:

[ 48%] Building Fortran object src/CMakeFiles/magic.exe.dir/get_nl.f90.o
/tmp/magic/src/get_nl.f90:461:0:

            if ( l_anel ) then

Error: 'l_anel' not specified in enclosing 'parallel' region
/tmp/magic/src/get_nl.f90:460:0:

         do nPhi=1,n_phi_max

Error: enclosing 'parallel' region
make[2]: *** [src/CMakeFiles/magic.exe.dir/build.make:453: src/CMakeFiles/magic.exe.dir/get_nl.f90.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:91: src/CMakeFiles/magic.exe.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

Unknown error when running program

I am trying out the boussBenchSat sample with some modified parameters, and I run into an error when executing magic.exe:

! Something went wrong, MagIC will stop now
! See below the error message:

! Singular Matrix ps0Mat in ps_cond!

Input statement requires too much data, unit 10, error when running the auto-tests 'Test coeff outputs'

Dear community,

I have successfully installed MagIC and got magic.exe. When I run the auto-tests with the following command

./magic_wizard.py --use-mpi --nranks 4 --mpicmd mpiexec

I encountered the error 'input statement requires too much data, unit 10, /magic/samples/testCoeffOutputs/./B_lmr_ave_-1.start' when the tests reached 'Test coeff outputs'.

Has anyone run into this issue? Any help is very much appreciated!

Thanks a lot!
Dunyu Liu

Issue using l_true_time in input.nml file

Hello,

I am attempting to utilize the line l_true_time = .true. in my input.nml file in the hopes of using the movie files created from my run in ParaView. However, when I write that line of code, MagIC says that it does not recognize l_true_time from the namelist and does not begin running. I tried changing l_true_time to false (the default value) to see if it would recognize the command then, but it still did not work. Removing the line from my code entirely works, but defeats the goal of eventually using it in my input.nml. I was wondering if it is possible I am typing the command incorrectly, or if this is a separate issue entirely? I would love any help with this, thank you!

A question about Poincaré number

Based on the paper 'Scale variability in convection-driven MHD dynamos at low Ekman number' and the file input.nml in the directory magic-master/samples/dynamo_benchmark, I set ra = 1.5D6, ek = 2.0D-4, nVarDiff = 4, pr = 1.0D0 and prmag = 2.0D0; the simulation result after ten time scales (n_tScale = 0) is indeed an Earth-like geodynamo. But when a Poincaré number is added to input.nml (po = -1.05D-7, corresponding to the real Earth), the simulated magnetic field decays rapidly within the first time scales. In theory, the addition of this term should not produce such a drastic change in such a short time. Similarly, changing the Rayleigh number to 1.75D6, 2.0D6 and 2.5D6 gives similar results. Considering that precession is not explained in detail in the MagIC manual, I want to ask whether the code for the precession term is complete or whether there is another problem.

problem with concatenation in MagicTs

Hi!

There is a problem with concatenation of two long files in MagicTs:

MagicTs(field='e_kin',tag='1') ==> Figure_1
MagicTs(field='e_kin',tag='2') ==> Figure_2
MagicTs(field='e_kin',tag='[1,2]') ==> Figure_3

The time of the last checkpoint in log.1 corresponds to the end time in e_kin.1 and start in e_kin.2.

wc -l e_kin.1 ==> 26554
wc -l e_kin.2 ==> 26379

Maybe it is an overflow? I used single-precision (sngl) output. Each run is 24 hours.

For short time series it is OK.

I can provide the link to the data.


Finite difference + inner core

Stability issues are frequent when using a conducting and/or rotating inner core and finite differences in the outer core.

viscous torque

In the case of rigid mechanical boundaries and an anelastic reference state, the computation of the viscous torques is likely wrong.

l_LCR + finite differences

There is a possible bug with the low conductivity region approximation when finite differences are employed: in this case n_r_lcr does not necessarily belong to the same rank. A communication in updateB.f90 is likely required.

“Segmentation fault - invalid memory reference” when using OpenMP

I can pass magic_wizard.py with MPI. However, if I try to run the hybrid MPI/OpenMP code, a “Segmentation fault” error always happens.
Here I try to compile and run the dynamo_benchmark sample:

$ export FC=mpifort
$ export CC=mpicc
$ cmake .. -DUSE_FFTLIB=JW -DUSE_LAPACKLIB=JW
-- The C compiler identification is GNU 7.3.0
-- The CXX compiler identification is GNU 7.3.0
-- Check for working C compiler: /usr/bin/mpicc
-- Check for working C compiler: /usr/bin/mpicc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The Fortran compiler identification is GNU 7.3.0
-- Check for working Fortran compiler: /usr/bin/mpifort
-- Check for working Fortran compiler: /usr/bin/mpifort -- works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /usr/bin/mpifort supports Fortran 90
-- Checking whether /usr/bin/mpifort supports Fortran 90 -- yes
-- Could not find hardware support for AVX2 on this machine.
-- Set architecture to '64'
-- Set precision to 'dble'
-- Set output precision to 'sngl'
-- Use MPI
-- Try OpenMP Fortran flag = [-qopenmp]
-- Try OpenMP Fortran flag = [-fopenmp]
-- Found OpenMP_Fortran: -fopenmp
-- Use OpenMP
-- Use 'JW' for the FFTs
-- Use 'JW' for the LU factorisations
-- FFTW3: '/usr/lib/x86_64-linux-gnu/libfftw3.so'
-- FFTW3_OMP: '/usr/lib/x86_64-linux-gnu/libfftw3_omp.so'
-- Use SHTNS: no
-- Compilation flags: -fopenmp -m64 -std=f2008 -g -fbacktrace -fconvert=big-endian -cpp
-- Optimisation flags: -O3 -march=native
-- Configuring done
-- Generating done
-- Build files have been written to: /home/wzzhan/magic/openmp
$ make -j
Scanning dependencies of target magic.exe
……
[100%] Built target magic.exe
$ export OMP_NUM_THREADS=2
$ export KMP_AFFINITY=verbose,granularity=core,compact,1
$ mpiexec -n 2 ./magic.exe input.nml
!--- Program MagIC 5.6 ---!
! Start time: 2019/04/01 17:01:55
! Reading grid parameters!
! Reading control parameters!
! Reading physical parameters!
! Reading start information!
! Reading output information!
! Reading inner core information!
! Reading mantle information!
! Reading B external parameters!
! No B_external namelist found!
0: lmStartB= 1, lmStopB= 77
1: lmStartB= 78, lmStopB= 153
Using rIteration type: rIterThetaBlocking_OpenMP_t
! Uneven load balancing in LM blocks!
! Load percentage of last block: 98.701298701298697
0: lmStartB= 1, lmStopB= 77
1: lmStartB= 78, lmStopB= 153
Using snake ordering.
1 1 77 77
2 78 153 76
rank no 0 has l1m0 in block 1
!-- Blocking information:
! Number of LM-blocks: 2
! Size of LM-blocks: 77
! nThreads: 2
! Number of theta blocks: 2
! size of theta blocks: 12
! ideal size (nfs): 12
Using rIteration type: rIterThetaBlocking_OpenMP_t
! Const. entropy at outer boundary S = -1.091314E-01
! Const. entropy at inner boundary S = 8.908686E-01
! Total vol. buoy. source = 0.000000E+00
-----> rank 0 has 7384296 B allocated
……
! Using dtMax time step: 1.000000E-04
! NO STARTFILE READ, SETTING Z10!
! Entropy initialized at mode: l= 4 m= 4 Ampl= 0.10000
! Only l=m=0 comp. in tops:
! Self consistent dynamo integration.
! Normalized OC moment of inertia: 1.436464E+01
! Normalized IC moment of inertia: 7.584414E-02
! Normalized MA moment of inertia: 2.848460E+02
! Normalized IC volume : 6.539622E-01
! Normalized OC volume : 1.459880E+01
! Normalized IC surface : 3.643504E+00
! Normalized OC surface : 2.974289E+01
! Grid parameters:
n_r_max = 33 = number of radial grid points
n_cheb_max = 31
max cheb deg.= 30
n_phi_max = 48 = no of longitude grid points
n_theta_max = 24 = no of latitude grid points
n_r_ic_max = 17 = number of radial grid points in IC
n_cheb_ic_max= 14
max cheb deg = 28
l_max = 16 = max degree of Plm
m_max = 16 = max oder of Plm
lm_max = 153 = no of l/m combinations
minc = 1 = longitude symmetry wave no
nalias = 20 = spher. harm. deal. factor
! STARTING TIME INTEGRATION AT:
start_time = 0.0000000000E+00
step no = 0
start dt = 1.0000E-04
start dtNew= 1.0000E-04
! Starting time integration!
! BUILDING MATRICIES AT STEP/TIME: 1 1.000000E-04
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x7ff752ffb2da in ???
#1 0x7ff752ffa503 in ???
#2 0x7ff75242ef1f in ???
#3 0x7ff753bac1f2 in __updates_mod_MOD_updates._omp_fn.8
at /home/wzzhan/magic/src/updateS.f90:268
#4 0x7ff752a20888 in ???
#5 0x7ff752a2910f in ???
#6 0x7ff753bab5de in __updates_mod_MOD_updates._omp_fn.6
at /home/wzzhan/magic/src/updateS.f90:189
#7 0x7ff752a1dece in ???
#8 0x7ff753bae4de in __updates_mod_MOD_updates
at /home/wzzhan/magic/src/updateS.f90:189
#9 0x7ff753a1336e in __lmloop_mod_MOD_lmloop
at /home/wzzhan/magic/src/LMLoop.f90:207
#10 0x7ff753b91698 in __step_time_mod_MOD_step_time
at /home/wzzhan/magic/src/step_time.f90:1250
#0 0x7f8bbabfb2da in ???
#1 0x7f8bbabfa503 in ???
#2 0x7f8bba02ef1f in ???
#11 0x7ff753a11607 in magic
at /home/wzzhan/magic/src/magic.f90:367
#12 0x7ff753a10f08 in main
at /home/wzzhan/magic/src/magic.f90:89
#3 0x7f8bbb7e8205 in __algebra_MOD_cgeslml
at /home/wzzhan/magic/src/algebra.f90:178
==================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 4652 RUNNING AT DESKTOP-ER7J2RN
= EXIT CODE: 139
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
==================================================================
YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Segmentation fault (signal 11)
This typically refers to a problem with your application.
Please see the FAQ page for debugging suggestions

I use MPICH to compile this code. However, even when I switch to Open MPI 4.0.1 or Open MPI 3.0.1 running on another workstation, it still does not work. I don't know OpenMP well, so I'm unable to figure out where the problem is.

add topics

I suggest adding the topics mhd magnetohydrodynamics cfd computational-fluid-dynamics to the About section.

Python module magic.legendre not available when compiling f2py libraries

I followed the steps from https://magic-sph.github.io/postProc.html#secpythonpostproc to enable Python post-processing. However, the Fortran libraries currently cannot be compiled successfully.
What I did:

  • cd ~/magic/python/magic/
  • installed required libs like scipy, matplotlib and numpy
  • cp $MAGIC_HOME/python/magic/magic.cfg.default $MAGIC_HOME/python/magic/magic.cfg
  • set buildLib = True
  • f2py magic.cfg

Output:

Reading fortran codes...
Reading file 'magic.cfg' (format:free)
Post-processing...
Applying post-processing hooks...
  character_backward_compatibility_hook
Post-processing (stage 2)...
Building modules...
  • Then start python. I'm using python3.11
  • Call magic libs >>>from magic import *
    Output:
Please wait: building greader_single...
Please wait: building greader_double...
Please wait: building lmrreader_single...
Please wait: building Legendre transforms...
Please wait: building vtklib...
Please wait: building cylavg...
python
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/ygaillard/magic/python/magic/__init__.py", line 17, in <module>
    from .coeff import *
  File "/home/ygaillard/magic/python/magic/coeff.py", line 9, in <module>
    from .spectralTransforms import SpectralTransforms
  File "/home/ygaillard/magic/python/magic/spectralTransforms.py", line 6, in <module>
    import magic.legendre as leg
ModuleNotFoundError: No module named 'magic.legendre'

It seems that fortranLib/legendre.f90 is not compiled, hence cannot be called.

updateZ + l10 + get_rot_rates

The final section of updateZ after get_tor_rhs_imp should in principle be moved into the get_tor_rhs_imp subroutine.

The unit of omega_ic

I would like to know whether the units of omega_ic1, omega_ic2 and other similar namelist entries are degrees per time scale.

Problem with 2D visualization with MPI

Hello! I have a problem with 2D visualization (1D works), Ubuntu 18.04.
What I can do:

i) Without MPI and OpenMP I get plots produced from G*.TAG data files using greader_*.so (buildLib = True) for both precisions:

from magic import *
gr = MagicGraph(ivar=1, tag='1',precision=np.float64) ....
either
s=Surf(precision=np.float64) ....

it's ok

ii) But with MPI and without OpenMP (with buildLib = True):

At line 39 of file readG_double.f90 (unit = 10, file = './G_1.1')
Fortran runtime error: I/O past end of record on unformatted file

If I understand it correctly, the MPI header lines 725--755 in out_graph_file.f90 do not correspond to the lines from 36 onwards in readG_double.f90. The difference is the additional write statements for the record length. Can you comment on this?

iii) When the Python reader is used (with buildLib = False), I get

gr = MagicGraph(ivar=1, tag='1',precision=np.float64)

in fort_read(self, dt, shape, endian, order, head_size)
318 dtype=dt,
319 buffer=buf,
--> 320 order=order)
321 #fortran record ends with the header repeated. skip this.
322 self.seek(head_size,1)

TypeError: buffer is too small for requested array

 n_r_max      = 97,
 n_cheb_max   = 95,
 n_phi_tot    = 288,
 n_cheb_ic_max = 15,

The result does not depend on MPI, and I have 128 GB of shared memory.

update Z + precession

Right now precession is added in the r.h.s., but it looks like it could be better to handle it in the implicit assembly stage (get_tor_rhs_imp routine).

MagicCoeffCmb does not work with ave=True

Hi!

MagicCoeffCmb does not work with ave=True:

cmb = MagicCoeffCmb(precision=np.float64,tag=2,ave=True)

Reading ./B_coeff_cmb_ave.2

ValueError Traceback (most recent call last)
in ()
----> 1 cmb = MagicCoeffCmb(precision=np.float64,tag=2,ave=True)

/home/mr/Work/2019/magic/python/magic/coeff.pyc in __init__(self, tag, datadir, ratio_cmb_surface, scale_b, iplot, lCut, precision, ave, sv, quiet)
173 while 1:
174 try:
--> 175 data.append(f.fort_read(precision))
176 except TypeError:
177 break

/home/mr/Work/2019/magic/python/magic/npfile.pyc in fort_read(self, dt, shape, endian, order, head_size)
307 known_dimensions_size)
308 if illegal:
--> 309 raise ValueError("unknown dimension doesn't match record size")
310 shape[shape.index(-1)] = unknown_dimension_size
311 else:

ValueError: unknown dimension doesn't match record size

ls -lh B_coeff_cmb_ave.2
-rw-rw-r-- 1 mr mr 3,1M Oct 23 19:17 B_coeff_cmb_ave.2

Thanks.

Auto-test failed

Hi !

I can't seem to pass the test step:

./magic_checks.pl --all --clean
Use of uninitialized value $ENV{"OMP_NUM_THREADS"} in string at ./magic_checks.pl line 114.
Compilation.. 
Warning: mpimod.f90:2: Illegal preprocessor directive
Warning: mpimod.f90:4: Illegal preprocessor directive

/Users/jrek/magic/samples/dynamo_benchmark: (1/13)
exitcode = 256    Validating results..                    not ok: 
[Couldn't open e_kin.test]
    Time used:                                        00:00

similar errors are produced for the remaining 12 tests so that the summary reads:

----------------------------------------------------------------------
### auto-test failed ###
Failed 13 test(s) out of 13:
  /Users/jrek/magic/samples/varCond (results)
  /Users/jrek/magic/samples/varProps (results)
  /Users/jrek/magic/samples/dynamo_benchmark_condICrotIC (results)
  /Users/jrek/magic/samples/testMapping (results)
  /Users/jrek/magic/samples/isothermal_nrho3 (results)
  /Users/jrek/magic/samples/testRadialOutputs (did not run)
  /Users/jrek/magic/samples/hydro_bench_anel (results)
  /Users/jrek/magic/samples/fluxPerturbation (results)
  /Users/jrek/magic/samples/testOutputs (did not run)
  /Users/jrek/magic/samples/testRestart (results)
  /Users/jrek/magic/samples/boussBenchSat (results)
  /Users/jrek/magic/samples/testTruncations (results)
  /Users/jrek/magic/samples/dynamo_benchmark (results)

The compilation seems to be working all right:

cmake output:

-- Checking whether Fortran compiler has -isysroot
-- Checking whether Fortran compiler has -isysroot - yes
-- Checking whether Fortran compiler supports OSX deployment target flag
-- Checking whether Fortran compiler supports OSX deployment target flag - yes
-- Set precision to 'dble'
-- Set output precision to 'sngl'
-- Could not find MPI wrappers in CMAKE_C_COMPILER or CMAKE_Fortran_COMPILER. Trying to find MPI libs by default
-- Use MPI
-- Use OpenMP
-- MKL was not found
-- Could NOT find Threads (missing:  Threads_FOUND) 
-- A library with BLAS API found.
-- Could NOT find Threads (missing:  Threads_FOUND) 
-- A library with LAPACK API found.
-- LAPACK was found
-- Use 'JW' for the FFTs
-- Use 'LAPACK' for the LU factorisations
-- HDF5 for check points: 'no'
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/jrek/magic/build

make -j output:

...
[ 96%] Building Fortran object src/CMakeFiles/magic.exe.dir/cutils_iface.f90.o
[ 98%] Building Fortran object src/CMakeFiles/magic.exe.dir/magic.f90.o
[ 98%] Building C object src/CMakeFiles/magic.exe.dir/c_utils.c.o
[100%] Linking Fortran executable ../magic.exe
[100%] Built target magic.exe

omega_ic=0 first line of rot.TAG

After restarting from checkpoints, omega_ic shows up as zero in the first output line of rot.TAG. Likely this is because it is not available on rank 0. Further testing is required.

dtVrms outputs when using SDIRK with assembly stage

It looks like the viscosity output (and possibly some dtBrms outputs as well) is wrong when an SDIRK scheme with an assembly stage is employed (KC564 for instance). It is not clear to me why, since it seems to be correctly computed at the assembly stage; maybe the output takes some stage value instead.
