swiftsim / swift

Modern astrophysics and cosmology particle-based code. Mirror of gitlab developments at https://gitlab.cosma.dur.ac.uk/swift/swiftsim

Home Page: http://www.swiftsim.com

License: GNU Lesser General Public License v3.0

Makefile 0.38% Shell 0.48% M4 2.29% C 86.02% Python 2.84% TeX 7.70% MATLAB 0.10% HTML 0.08% Fortran 0.12%
cosmology sph hydrodynamics nbody nbody-simulation task-based hpc astrophysics simulations mpi

swift's Introduction

SWIFT: SPH WIth Fine-grained inter-dependent Tasking

Build Status

SWIFT is a gravity and SPH solver designed to run cosmological simulations on peta-scale machines, scaling well up to tens of thousands of compute nodes.

More general information about SWIFT is available on the project webpages.

For information on how to run SWIFT, please consult the onboarding guide available here. This includes dependencies, and a few examples to get you going.

We suggest that you use the latest release branch of SWIFT rather than the current master branch, as the latter changes rapidly. We do, however, aim to ensure that the master branch always builds and runs.

This GitHub repository is designed to be an issue tracker and a space for the public to submit patches through pull requests. It is synchronised with the main development repository hosted on the ICC's GitLab server, available here.

Please feel free to submit issues to this repository, or even pull requests. We will try to deal with them as soon as possible, but as the core development team is quite small this could take some time.

Disclaimer

We would like to emphasise that SWIFT comes without any warranty of accuracy, correctness or efficiency. As mentioned in the license, the software comes as-is and the onus is on the user to get meaningful results. Whilst the authors will endeavour to answer questions related to using the code, we recommend users build and maintain their own copies. This documentation contains the most basic information to get started. Reading it and possibly also the source code is the best way to start running simulations.

Users are responsible for understanding what the code is doing and for the results of their simulation runs.

Note also that the values of the parameters given in the examples are only indicative. We encourage users to experiment with various values: each problem will likely require different choices, and the sensitivity of the results to the details of the physical model is left to the users to explore.

Acknowledgment & Citation

The SWIFT code was last described in this paper: https://ui.adsabs.harvard.edu/abs/2023arXiv230513380S. The core solver, the numerical methods, as well as many extensions, were described there. We ask users running SWIFT for their research to please cite this paper when they present their results.

In order to keep track of usage and measure the impact of the software, we kindly ask users publishing scientific results using SWIFT to add the following sentence to the acknowledgment section of their papers:

"The research in this paper made use of the SWIFT open-source simulation code (http://www.swiftsim.com, Schaller et al. 2018) version X.Y.Z."

with the version number set to the version used for the simulations and the reference pointing to the ASCL entry of the code: https://ascl.net/1805.020.

Contribution Guidelines

The SWIFT source code uses a variation of the 'Google' formatting style. The script 'format.sh' in the root directory applies the clang-format-13 tool with our style choices to all the SWIFT C source files. Please apply the formatting script to the files before submitting a pull request.

Please check that the test suite still runs with your changes applied before submitting a pull request and add relevant unit tests probing the correctness of new modules. An example of how to add a test to the suite can be found by considering the tests/testGreeting case.
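
For orientation, a unit test in the suite is conceptually just a small C program that exercises one module and signals failure through its exit code, so that the automated test harness can pick it up. The sketch below is a hypothetical testExample, not the actual tests/testGreeting source; the "module under test" and all names in it are made up for illustration.

/* Hypothetical sketch of a unit test: exercise one module and return a
 * non-zero exit code on failure so the test harness can flag it. */
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a call into the module being tested. */
static double module_under_test(double a, double b) { return a * b; }

int main(void) {
  const double expected = 42.0;
  const double computed = module_under_test(6.0, 7.0);

  if (computed != expected) {
    fprintf(stderr, "testExample failed: got %g, expected %g\n", computed,
            expected);
    return EXIT_FAILURE;
  }

  printf("testExample passed.\n");
  return EXIT_SUCCESS;
}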

Any contributions that fail any of the automated tests will not be accepted. Contributions that include tests of the proposed modules (or any current ones!) are highly encouraged.

Runtime parameters

 Welcome to the cosmological hydrodynamical code
    ______       _________________
   / ___/ |     / /  _/ ___/_  __/
   \__ \| | /| / // // /_   / /
  ___/ /| |/ |/ // // __/  / /
 /____/ |__/|__/___/_/    /_/
 SPH With Inter-dependent Fine-grained Tasking

 Version : 1.0.0
 Website: www.swiftsim.com
 Twitter: @SwiftSimulation

See INSTALL.swift for install instructions.

Usage: swift [options] [[--] param-file]
   or: swift [options] param-file
   or: swift_mpi [options] [[--] param-file]
   or: swift_mpi [options] param-file

Parameters:

    -h, --help                        show this help message and exit

  Simulation options:

    -b, --feedback                    Run with stars feedback.
    -c, --cosmology                   Run with cosmological time integration.
    --temperature                     Run with temperature calculation.
    -C, --cooling                     Run with cooling (also switches on --temperature).
    -D, --drift-all                   Always drift all particles even the ones
                                      far from active particles. This emulates
                                      Gadget-[23] and GIZMO's default behaviours.
    -F, --star-formation              Run with star formation.
    -g, --external-gravity            Run with an external gravitational potential.
    -G, --self-gravity                Run with self-gravity.
    -M, --multipole-reconstruction    Reconstruct the multipoles every time-step.
    -s, --hydro                       Run with hydrodynamics.
    -S, --stars                       Run with stars.
    -B, --black-holes                 Run with black holes.
    -k, --sinks                       Run with sink particles.
    -u, --fof                         Run Friends-of-Friends algorithm to
                                      perform black hole seeding.
    --lightcone                       Generate lightcone outputs.
    -x, --velociraptor                Run with structure finding.
    --line-of-sight                   Run with line-of-sight outputs.
    --limiter                         Run with time-step limiter.
    --sync                            Run with time-step synchronization
                                      of particles hit by feedback events.
    --csds                            Run with the Continuous Simulation Data
                                      Stream (CSDS).
    -R, --radiation                   Run with radiative transfer.
    --power                           Run with power spectrum outputs.

  Simulation meta-options:

    --quick-lyman-alpha               Run with all the options needed for the
                                      quick Lyman-alpha model. This is equivalent
                                      to --hydro --self-gravity --stars --star-formation
                                      --cooling.
    --eagle                           Run with all the options needed for the
                                      EAGLE model. This is equivalent to --hydro
                                      --limiter --sync --self-gravity --stars
                                      --star-formation --cooling --feedback
                                      --black-holes --fof.
    --gear                            Run with all the options needed for the
                                      GEAR model. This is equivalent to --hydro
                                      --limiter --sync --self-gravity --stars
                                      --star-formation --cooling --feedback.
    --agora                           Run with all the options needed for the
                                      AGORA model. This is equivalent to --hydro
                                      --limiter --sync --self-gravity --stars
                                      --star-formation --cooling --feedback.
                                      
  Control options:

    -a, --pin                         Pin runners using processor affinity.
    --nointerleave                    Do not interleave memory allocations across
                                      NUMA regions.
    -d, --dry-run                     Dry run. Read the parameter file, allocates
                                      memory but does not read the particles
                                      from ICs. Exits before the start of time
                                      integration. Checks the validity of
                                      parameters and IC files as well as memory
                                      limits.
    -e, --fpe                         Enable floating-point exceptions (debugging
                                      mode).
    -f, --cpu-frequency=<str>         Overwrite the CPU frequency (Hz) to be
                                      used for time measurements.
    -n, --steps=<int>                 Execute a fixed number of time steps.
                                      When unset use the time_end parameter
                                      to stop.
    -o, --output-params=<str>         Generate a parameter file with the options
                                      for selecting the output fields.
    -P, --param=<str>                 Set parameter value, overriding the value
                                      read from the parameter file. Can be used
                                      more than once {sec:par:value}.
    -r, --restart                     Continue using restart files.
    -t, --threads=<int>               The number of task threads to use on each
                                      MPI rank. Defaults to 1 if not specified.
    --pool-threads=<int>              The number of threads to use on each MPI
                                      rank for the threadpool operations.
                                      Defaults to the numbers of task threads
                                      if not specified.
    -T, --timers=<int>                Print timers every time-step.
    -v, --verbose=<int>               Run in verbose mode, in MPI mode 2 outputs
                                      from all ranks.
    -y, --task-dumps=<int>            Time-step frequency at which task graphs
                                      are dumped.
    --cell-dumps=<int>                Time-step frequency at which cell graphs
                                      are dumped.
    -Y, --threadpool-dumps=<int>      Time-step frequency at which threadpool
                                      tasks are dumped.
    --dump-tasks-threshold=<flt>      Fraction of the total step's time spent
                                      in a task to trigger a dump of the task plot
                                      on this step

See the file examples/parameter_example.yml for an example parameter file.

swift's People

Contributors

aborissov, aidanchalk, alalazo, angusl, bwvdnbro, darwin-roduit, edoaltamura, evgeniichaikin, filiphusko, fonotec, gonnet, jborrow, jchelly, jkeger, john-regan, loikki, lonelycat124, matthieuschaller, mladenivkovic, pelahi, pwdraper, rgbower, rtobar, stan-verhoeve, stefan-arridge, sylviaploeckinger, willjroper, wullm, ymbahe, yrevaz


swift's Issues

Issues with CoolingHalo examples

Hello,

There seem to be a few issues with the CoolingHalo and CoolingHaloWithSpin examples:

  • A script radial_profile.py is in run.sh, but there is none in the project
  • I had to decrease dt_min to 1e-8 or it complained about parts wanting time-steps below the dt_min
  • internal_energy_profile.py refers to unrelated Hydrostatic_####.hdf5 files
  • test_energy_conservation.py refers to SoftenedIsothermalPotential:vrot, which doesn't exist anymore since softened isothermal potential was removed
  • parameters passed to the python scripts from run.sh don't make sense. The python scripts seem to expect different numbers of parameters than what the run.sh scripts pass in. One of them doesn't parse correctly ("2." can't be parsed as an int)

So it isn't really possible to run those examples as they are right now.

Issue configuring SWIFT.

I'm trying to compile a specific version of SWIFT on a remote server and keep encountering the same error (condensed message):

...
checking for ANSI C header files... (cached) yes
checking for sqrt in -lm... no
configure: error: something is wrong with the math library!

I do not have HEALPix installed so I don't initialise it in the configuration. I also don't have the 'parmetis' library, only 'metis'.
I have linked here the config.log file for convenience.

config.log

Restart without feedback

Hi,

When I try to restart a SWIFT run without stellar feedback (both the initial run and the restart are without feedback, and even without stars), I get the following error:

'feedback/EAGLE/yield_tables.h:read_yield_tables():101: unable to open file /SNIa.hdf5'

The reason this happens, I'm guessing, is that the path to the yield tables is empty (since the path isn't read from the parameter file, if feedback is turned off). The larger issue is why does the code even try to read the yield tables with feedback turned off?

AVX512F instructions

Hi,

First of all - thanks for creating (and open-sourcing) this swift code! Looks great!

I was looking through the SIMD wrappers for AVX512F in vector.h and I noticed a few wrappers that refer to non-existent intrinsics (at least in AVX512F) or for which better implementations exist. In particular, vec_and maps to _mm512_and_ps, which does not exist (at least according to the Intel Intrinsics Guide). From the looks of it, all and/or operations are now only relevant for masks and not for individual data types.

I also saw that vec_fabs is implemented via two intrinsics -- is the new _mm512_abs_ps intrinsic too slow?
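
For illustration, here is a minimal sketch of the two points above, under my own assumptions about what such wrappers could look like (this is not the actual vector.h code, and the function names are invented): a float bitwise AND routed through the integer domain so that only AVX512F intrinsics are needed, and the absolute value via the dedicated intrinsic. Compile with -mavx512f.

/* Sketch only, not SWIFT's implementation. */
#include <immintrin.h>

static inline __m512 vec_and_sketch(__m512 a, __m512 b) {
  /* Bitwise AND on floats via the integer domain (AVX512F only). */
  return _mm512_castsi512_ps(
      _mm512_and_si512(_mm512_castps_si512(a), _mm512_castps_si512(b)));
}

static inline __m512 vec_fabs_sketch(__m512 a) {
  /* Absolute value via the dedicated AVX512F intrinsic mentioned above. */
  return _mm512_abs_ps(a);
}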

I am also curious - I do not see any references to any mask(z)_load. I found those masks quite useful for staying in SIMD mode and eliminating the serial part of the code (dealing with remainder loops for array lengths not divisible by the SIMD width).

Once again, the performance gains look awesome!

Compilation error due to MPI_Waitall

Firstly, thanks for producing this code. I encountered the following errors while compiling. How can this be fixed?

engine_strays.c: In function 'engine_exchange_strays':
engine_strays.c:275:7: error: 'MPI_Waitall' accessing 20 bytes in a region of size 0 [-Werror=stringop-overflow=]
275 | if (MPI_Waitall(e->nr_proxies, reqs_out, MPI_STATUSES_IGNORE) != MPI_SUCCESS)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
engine_strays.c:275:7: note: referencing argument 3 of type 'MPI_Status *'
In file included from /usr/include/x86_64-linux-gnu/mpich/mpi.h:977,
from engine_strays.c:27:
/usr/include/x86_64-linux-gnu/mpich/mpi_proto.h:592:5: note: in a call to function 'MPI_Waitall'
592 | int MPI_Waitall(int count, MPI_Request array_of_requests[], MPI_Status array_of_statuses[])
| ^~~~~~~~~~~
engine_strays.c:538:9: error: 'MPI_Waitall' accessing 20 bytes in a region of size 0 [-Werror=stringop-overflow=]
538 | if (MPI_Waitall(5 * e->nr_proxies, reqs_out, MPI_STATUSES_IGNORE) !=
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
engine_strays.c:538:9: note: referencing argument 3 of type 'MPI_Status *'
In file included from /usr/include/x86_64-linux-gnu/mpich/mpi.h:977,
from engine_strays.c:27:
/usr/include/x86_64-linux-gnu/mpich/mpi_proto.h:592:5: note: in a call to function 'MPI_Waitall'
592 | int MPI_Waitall(int count, MPI_Request array_of_requests[], MPI_Status array_of_statuses[])
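For what it's worth, one way I could imagine sidestepping this particular diagnostic while investigating (a workaround sketch under my own assumptions, not a fix adopted in SWIFT, and the function name below is illustrative rather than taken from engine_strays.c) is to hand MPI_Waitall an explicitly sized status array instead of MPI_STATUSES_IGNORE:

/* Workaround sketch: give MPI_Waitall a real, correctly sized status array so
 * the compiler can see the destination buffer it is diagnosing. */
#include <mpi.h>
#include <stdlib.h>

int wait_for_proxies(int nr_proxies, MPI_Request *reqs_out) {
  MPI_Status *stats = (MPI_Status *)malloc(sizeof(MPI_Status) * nr_proxies);
  if (stats == NULL) return MPI_ERR_NO_MEM;
  const int res = MPI_Waitall(nr_proxies, reqs_out, stats);
  free(stats);
  return res;
}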

Planetary impact simulation super slow with large dead time

Dear SWIFT team,
I'm having a serious slowdown issue when trying to use swift_mpi to run planetary impact simulations on our HPC. The issue only happens with specific numbers of nodes and CPUs (basically whenever I tried to use 2 or 4 full nodes). I've listed some of the situations I've tested below:

  • HPC1 -----10^5 particles 2node56cpu----Failed
  • HPC1------10^5 particles 2node16cpu----good
  • HPC1------10^5 particles 1node28cpu----good
  • HPC2------10^5 particles 2node48cpu----good
  • HPC2------10^6 particles 4node96cpu----Failed
    the time-step plot of the HPC-2 10^6-particle simulation looks like this:
    bp_issue_65e5_4node96cpu
    After some steps, the dead time becomes extremely large, taking up basically 98% of the CPU time of each step.

Here is my configure and running recipe:

./configure --with-hydro=planetary --with-equation-of-state=planetary --enable-compiler-warnings=yes 
--enable-ipo CC=icc MPICC=mpiicc --with-tbbmalloc --with-gravity=basic

submit script like this:

##HPC-2 10^6 particles 4node96cpu
#PBS -l select=4:ncpus=24:mpiprocs=2:ompthreads=12:mem=150gb

time mpirun -np 8 ./swift_mpi -a -v 1 -s -G -t 12 parameters_impact.yml 2>&1 | tee $SCRATCH/output_${PBS_JOBNAME}.log
##HPC-1 10^5 particles 2node56cpu
#SBATCH --cpus-per-task=14
#SBATCH --tasks-per-node=2
#SBATCH --nodes 2
#SBATCH --mem=120G 

time mpirun -np 4 ./swift_mpi_intel -a -v 1 -s -G -t 14 parameters_impact.yml 2>&1 | tee $SCRATCH/output_${SLURM_JOB_NAME}.log

parameters file and initial condition:
parameters yml file
Initial condition

The ParMETIS library is currently unavailable on both HPCs, so I can't use that. Also, HPC 2 doesn't have a parallel HDF5 library, but the same issue happened on HPC 1, which has parallel HDF5 loaded.

Here below is the log file from 10^6 particles simulation on HPC2:
10^6 output log file
rank_cpu_balance.log
rank_memory_balance.log
task_level_0000_0.txt
timesteps_96.txt

I'm also doing some not very serious benchmark tests with SWIFT and Gadget for planetary simulations. The plot below shows some results, which suggest SWIFT always runs a little slower than Gadget2 during the period 1.5-8 h (simulation time). The GIF shows the period where this sluggishness happens. It looks like SWIFT gets slower when the positions of particles change dramatically; after this period the two planets have merged into a single one and do not behave as drastically as in the first stage. I was expecting SWIFT to always be faster during the simulation. Do you have any suggestions on how I could improve the performance of the code?
benchmark2
ezgif com-gif-maker

MPI issue at start

Hi,
I have been trying a custom hydrostatic halo run that seems to fail over MPI (but runs happily with a single node) at the beginning even with simple physics. The error I get is:

[0001] [00009.4] scheduler.c:scheduler_addunlock():106: Unlocking task is NULL.
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[0001] [00009.4] scheduler.c:scheduler_addunlock():106: Unlocking task is NULL.
[0001] [00009.4] scheduler.c:scheduler_addunlock():106: Unlocking task is NULL.
[0001] [00009.4] scheduler.c:scheduler_addunlock():106: Unlocking task is NULL.
[0001] [00009.4] scheduler.c:scheduler_addunlock():106: Unlocking task is NULL.
[0001] [00009.4] scheduler.c:scheduler_addunlock():106: Unlocking task is NULL.
[0001] [00009.4] scheduler.c:scheduler_addunlock():106: Unlocking task is NULL.
[0001] [00009.4] scheduler.c:scheduler_addunlock():106: Unlocking task is NULL.

The failure happens at

[0000] [00006.2] engine_init_particles: Setting particles to a valid state...
[0000] [00006.3] engine_init_particles: Computing initial gas densities and approximate gravity.
[0000] [00006.3] space_rebuild: (re)building space

The configure command is --with-subgrid=EAGLE-XL --with-hydro=sphenix --with-kernel=wendland-C2 --enable-debug --enable-debugging-checks --disable-optimization

and the submission script

#!/bin/bash
#SBATCH -J swift
#SBATCH -N 2
#SBATCH --tasks-per-node=2
#SBATCH -o outFile.out
#SBATCH -e errFile.err
#SBATCH -p cosma6
#SBATCH -A dp004
#SBATCH -t 72:00:00

module unload gnu_comp intel_comp intel_mpi ucx parmetis parallel_hdf5 fftw gsl llvm

module load intel_comp/2021.1.0 compiler
module load intel_mpi/2018
module load ucx/1.10.1
module load fftw/3.3.9cosma7 # or fftw/3.3.9 on cosma 5 & 6
module load parallel_hdf5/1.10.6 parmetis/4.0.3-64bit gsl/2.5

mpirun -np 4 /cosma/home/durham/dc-husk1/SWIFT_spin_bh_new/swiftsim/examples/swift_mpi --hydro --temperature --threads=8 --limiter --sync --pin  isolated_galaxy.yml

As you can see, this is even without gravity. I have tried the example hydrostatic halo setup supplied with the code, that one worked with 2 nodes. So I'm thinking that this has to be related to the initial conditions. But the confusing thing is that the same setup worked with an older version of SWIFT (10 months old). And the same setup works on 1 node with this version.

Would appreciate any help with this. Thanks!

HM80 ice sitting on HM80 hydrogen helium in giant impact simulations

I am currently running giant impact simulations of an icy object impacting a Uranus-like ice giant planet, using the equations of state of Hubbard and Macfarlane (1980). However, whenever I run this in SWIFT, it seems to leave a layer of the impactor ice on top of the hydrogen-helium atmosphere of the ice giant. This clearly isn't a physical result. I have checked the masses of the impactor and target particles and they are similar enough that this shouldn't be a problem, but it still seems to be happening. Do you have any suggestions on what I could change to remove this problem?

impact.mp4

Clock ticks are not always long long ints

When running make check, some tests fail to build if clock ticks are not recorded as long long ints, e.g. on ARM64:

test27cells.c:596:61: warning: format specifies type 'long long' but the argument has type 'unsigned long' [-Wformat]
  message("Corner calculations took       : %15lli ticks.", corner_time / runs);
                                            ~~~~~~          ^~~~~~~~~~~~~~~~~~
                                            %15lu
../src/error.h:130:14: note: expanded from macro 'message'
           ##__VA_ARGS__);                                              \
             ^~~~~~~~~~~

This also affects compilation with the --enable-mpiuse-reports option.

Tested on the Isambard system (Thunder X2 processors) with GCC 9.3.0 and ARM 20.0 (based on LLVM 9.0.1) compilers called from Cray wrappers
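
One possible fix (a sketch under the assumption that the message macro forwards to printf-style formatting; this is not necessarily the change adopted upstream, and the tick type and reporting function below are stand-ins rather than SWIFT's actual definitions) is to cast the tick arithmetic to a fixed 64-bit type so the format specifier is correct on every platform:

/* Sketch: cast tick arithmetic to long long so "%lli" matches on platforms
 * where ticks are plain unsigned long (e.g. ARM64). */
#include <stdio.h>

typedef unsigned long ticks; /* stand-in for the platform tick type */

static void report_corner_time(ticks corner_time, int runs) {
  printf("Corner calculations took       : %15lli ticks.\n",
         (long long)(corner_time / runs));
}

int main(void) {
  report_corner_time((ticks)123456789UL, 10);
  return 0;
}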

Cooling issue running EAGLE 6-100 Mpc box examples

Hi,

When I run any of the EAGLE cosmological examples (in the '/examples/EAGLE_ICs' folder) with EAGLE cooling, I get the following error:
'cooling/EAGLE/cooling.c:cooling_get_subgrid_temperature():722: This cooling model does not use subgrid quantities!'.

The EAGLE runs (e.g. 6 Mpc box) work fine up until time-step 376 (redshift 16.33). This is after the first few FoF searches, and after the first snapshot is written out. I tried the same EAGLE cooling configuration with the 'FeedbackEvent_3D' cooling example, as well as the 'IsolatedGalaxy_feedback' one. They both worked fine. I also tried the EAGLE cosmological box examples with Colibre cooling; those complete successfully.

As a note, I am using the standard modules and options listed here: https://gitlab.cosma.dur.ac.uk/swift/swiftsim/-/wikis/COSMA-build. I'm running the examples on cosma7 nodes.

Precision problem when reading ICs

Hi, I generated cosmological dark-matter-only ICs for Swift using Python (h5py + numpy).

Specifically, I'm using Python 3.9.12 (Miniconda + gcc 7.5.0), numpy 1.22.4, and h5py 3.6.0.

I wrote out my ICs with an initial starting scale factor of 0.008, using the above libraries.

When I tried to run Swift (with debugging checks), I had the following output near the very start of the run:

[0000] [00000.0] main: Using ParMETIS partitioning:
[0000] [00000.0] main: initial partitioning: memory and edge balanced cells using particle weights
[0000] [00000.0] main: repartitioning: edge and vertex task cost weights
[0000] [00000.1] main: Internal unit system: U_M = 1.988410e+43 g.
[0000] [00000.1] main: Internal unit system: U_L = 3.085678e+24 cm.
[0000] [00000.2] main: Internal unit system: U_t = 3.085678e+19 s.
[0000] [00000.2] main: Internal unit system: U_I = 1.000000e+00 A.
[0000] [00000.2] main: Internal unit system: U_T = 1.000000e+00 K.
[0000] [00000.2] phys_const_print: Gravitational constant = 4.300918e+01
[0000] [00000.2] phys_const_print: Speed of light = 2.997925e+05
[0000] [00000.2] phys_const_print: Planck constant = 1.079940e-99
[0000] [00000.2] phys_const_print: Boltzmann constant = 6.943482e-70
[0000] [00000.2] phys_const_print: Thomson cross-section = 6.986845e-74
[0000] [00000.2] phys_const_print: Electron-Volt = 8.057577e-66
[0000] [00000.2] phys_const_print: Vacuum permeability = 1.950089e-30
[0000] [00000.2] phys_const_print: Proton mass = 8.411856e-68
[0000] [00000.2] phys_const_print: Year = 1.022690e-12
[0000] [00000.2] phys_const_print: Parsec = 1.000000e-06
[0000] [00000.2] phys_const_print: Astronomical Unit = 4.848137e-12
[0000] [00000.2] phys_const_print: Earth radius = 2.067001e-16
[0000] [00000.2] phys_const_print: Solar mass = 1.000000e-10
[0000] [00000.2] phys_const_print: Solar luminosity = 5.940412e-01
[0000] [00000.2] phys_const_print: H_0 / h = 100 km/s/Mpc = 1.000000e+02
[0000] [00000.2] phys_const_print: T_CMB0 = 2.725500e+00
[0001] [00000.7] cosmology.c:cosmology_get_time_since_big_bang():207: Error a can't be smaller than a_begin
[0004] [00000.7] cosmology.c:cosmology_get_time_since_big_bang():207: Error a can't be smaller than a_begin

I specified a_begin in the parameter file as 0.008.

This is a precision issue that can be avoided if I set a_begin = 0.0079999999. I'm not sure why this is happening, but it should be reproducible.
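
As a purely illustrative guess at the mechanism (an assumption on my part, not a confirmed diagnosis of what SWIFT does internally): 0.008 is not exactly representable in binary, and its nearest single-precision value is slightly larger than its nearest double-precision value, so a scale factor and an a_begin that pass through different precisions can end up on opposite sides of each other.

/* Illustration only: nominally equal values of 0.008 compare unequal once one
 * of them has been rounded through single precision. */
#include <stdio.h>

int main(void) {
  const double a_begin_double = 0.008;
  const float a_begin_float = 0.008f;

  printf("double 0.008 : %.17g\n", a_begin_double);        /* 0.0080000000000000002 */
  printf("float  0.008 : %.17g\n", (double)a_begin_float); /* 0.0080000003799796104 */
  printf("float > double ? %s\n",
         ((double)a_begin_float > a_begin_double) ? "yes" : "no");
  return 0;
}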

Error compiling/running with AVX2

I'm having an issue with AVX2 on an AMD processor.

When I compile with ./configure CC=icc --with-hydro=planetary --with-equation-of-state=planetary the compilation complete but I get the following error when trying to run the planetary example:
"Please verify that both the operating system and the processor support Intel(R) X87, CMOV, MMX, FXSAVE, SSE, SSE2, SSE3, SSSE3, SSE4_1, SSE4_2, MOVBE, POPCNT, F16C, AVX, FMA, BMI, LZCNT and AVX2 instructions."

Some googling led me to believe this was due to AVX2 compatibility issues with some AMD processors.

I was able to successfully run the example using the following:
./configure CC=icc --enable-optimization=no CFLAGS="-O3 -mavx" --with-hydro=planetary --with-equation-of-state=planetary
based on the suggestion in Issue#13.

However I am not sure how this decreased optimization affects the code. Are there alternative/better flags I should be using to compile given this issue?

ParMETIS can't be found when configuring

Dear Swift team,
I'm having some configuration issues when trying to use ParMETIS on our HPC; the error is below:

checking for ParMETIS_V3_RefineKway in -lparmetis... no
checking for ParMETIS_V3_PartKway in -lparmetis... no
configure: error: "Failed to find a ParMETIS library"

And I configure like this:

./configure --with-parmetis=/mnt/storage/apps/parmetis/4.0.3

The directory apps/parmetis/4.0.3 contains just the following (the last four are executables):

Makefile  cmake_install.cmake  mtest  parmetis  pometis  ptest

The HPC staff just got ParMETIS installed and gave me no instructions on how to use it. Please give me some suggestions on how to fix this problem. Cheers!

Uninitialised pointer in src/proxy.c?

Hi,

I was just trying to build with a Clang based compiler and noticed the following

proxy.c:111:7: error: variable 'cids_in' is used uninitialized whenever '||' condition is true [-Werror,-Wsometimes-uninitialized]
  if ((reqs_in = (MPI_Request *)malloc(sizeof(MPI_Request) *

In src/proxy.c, initialising the pointer cids_in fixes it, i.e.

int *cids_in=NULL;

Thanks.

Configuring with --enable-debug does not turn off optimisation

I configured SWIFT with the --enable-debug flag. When running SWIFT, it says it was compiled with the following CFLAGS:
CFLAGS : '-g -O0 -O3 -fomit-frame-pointer -fstrict-aliasing -ffast-math -funroll-loops -mcpu=native -fno-vectorize -fno-slp-vectorize -pthread -fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free -Wall -Wextra -Wno-unused-parameter -Wshadow -Werror -Wstrict-prototypes'
and the -O0 from the --enable-debug flag is overwritten.

Swift webstorage is inaccessible

When entering the site this error is given:

Error 403: Forbidden

You don't have permission to access /swift-webstorage on this server
Screenshot 2023-08-05 120942

AVX2 vectorization seems to cause errors in tests

I compiled swift with icc 18 successfully, but encountered some errors when running make check.

If I do not provide any specific CFLAGS when running the configure script, it automatically uses -O3 -ansi-alias -xCORE-AVX2, which leads to:

  • FAIL: testMaths
  • FAIL: testVoronoi2D
  • FAIL: test27cellsStarsPerturbed.sh
  • FAIL: test27cellsPerturbed.sh
  • FAIL: test27cells.sh

If I use --no-vec, the script will use -O3 -xCORE-AVX2 -ansi-alias -no-simd -no-vec (weird that -xCORE-AVX2 still exists), which leads to:

  • FAIL: testMaths
  • FAIL: testVoronoi2D
  • FAIL: test27cellsStarsPerturbed.sh

If I disable AVX2 by manually specifying --no-optimization and CFLAGS="-O3 -mavx", all tests will pass.

After trying more compilation options, I believe that enabling AVX2 does cause these tests to fail. I also read your Jenkins build output and found that your CI server does not support AVX2, so no errors were reported there.

Configure output does not match config.log

Hi, I noticed a mismatch between the ./configure output and the config.log output, relating to the HDF5 library.

I'm using HDF5(+MPI) 1.8.22 without specifying --with-hdf5 in the ./configure command. Configure finds the correct library with no issues, and the configure information gives:

HDF5 enabled : yes
- parallel : yes

During my run, I noticed that the snapshot time steps were taking ages. It turns out, config.log says that:

HDF5_Type='serial'

That is odd, so I reconfigured by passing --with-hdf5=`which h5pcc` and that successfully made the two outputs match. I thought this was a bit misleading and the two should probably match. I'm not sure if the issue is reproducible or not at this time.

Can't find METIS library

Dear all,

I try to configure swift with --with-metis=<my path to METIS>, but I run into the following problem:

checking for METIS_PartGraphKway in -lmetis... no
configure: error: "Failed to find a METIS library"

I have tried both static and shared libraries when installing METIS, but it doesn't help. I don't know what's happening. Does anyone have any ideas? Any help will be highly appreciated.

Thank you,
Tianning

Unable to build with EAGLE cooling

Hello!

I was trying to run the SmallCosmoVolume_cooling example and got a message that cooling was not enabled when I compiled SWIFT, so I tried to do that with the EAGLE option (which I chose arbitrarily because I don't yet know what I'm doing 😄)

When I tried to configure with

./configure --with-cooling=EAGLE

and ran make, I got the errors below:

(giant console output)

In file included from cooling/EAGLE/cooling.c:36:0:
cooling/EAGLE/cooling_rates.h: In function 'abundance_ratio_to_solar':
cooling/EAGLE/cooling_rates.h:58:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_H] *
                        ^
cooling/EAGLE/cooling_rates.h:58:54: error: 'chemistry_element_H' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_H] *
                                                      ^~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling_rates.h:58:54: note: each undeclared identifier is reported only once for each function it appears in
cooling/EAGLE/cooling_rates.h:62:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_He] *
                        ^
cooling/EAGLE/cooling_rates.h:62:54: error: 'chemistry_element_He' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_He] *
                                                      ^~~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling_rates.h:66:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_C] *
                        ^
cooling/EAGLE/cooling_rates.h:66:54: error: 'chemistry_element_C' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_C] *
                                                      ^~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling_rates.h:70:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_N] *
                        ^
cooling/EAGLE/cooling_rates.h:70:54: error: 'chemistry_element_N' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_N] *
                                                      ^~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling_rates.h:74:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_O] *
                        ^
cooling/EAGLE/cooling_rates.h:74:54: error: 'chemistry_element_O' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_O] *
                                                      ^~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling_rates.h:78:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_Ne] *
                        ^
cooling/EAGLE/cooling_rates.h:78:54: error: 'chemistry_element_Ne' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_Ne] *
                                                      ^~~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling_rates.h:82:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_Mg] *
                        ^
cooling/EAGLE/cooling_rates.h:82:54: error: 'chemistry_element_Mg' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_Mg] *
                                                      ^~~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling_rates.h:86:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_Si] *
                        ^
cooling/EAGLE/cooling_rates.h:86:54: error: 'chemistry_element_Si' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_Si] *
                                                      ^~~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling_rates.h:91:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_Si] *
                        ^
cooling/EAGLE/cooling_rates.h:97:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_Si] *
                        ^
cooling/EAGLE/cooling_rates.h:102:24: error: 'const struct chemistry_part_data' has no member named 'metal_mass_fraction'
       p->chemistry_data.metal_mass_fraction[chemistry_element_Fe] *
                        ^
cooling/EAGLE/cooling_rates.h:102:45: error: 'chemistry_element_Fe' undeclared (first use in this function)
       p->chemistry_data.metal_mass_fraction[chemistry_element_Fe] *
                                             ^~~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling.c: In function 'newton_iter':
cooling/EAGLE/cooling.c:234:24: error: 'struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_H];
                        ^
cooling/EAGLE/cooling.c:234:54: error: 'chemistry_element_H' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_H];
                                                      ^~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling.c: In function 'cooling_cool_part':
cooling/EAGLE/cooling.c:527:24: error: 'struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_H];
                        ^
cooling/EAGLE/cooling.c:527:54: error: 'chemistry_element_H' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_H];
                                                      ^~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling.c:529:24: error: 'struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_He];
                        ^
cooling/EAGLE/cooling.c:529:54: error: 'chemistry_element_He' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_He];
                                                      ^~~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling.c: In function 'cooling_get_temperature':
cooling/EAGLE/cooling.c:749:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_H];
                        ^
cooling/EAGLE/cooling.c:749:54: error: 'chemistry_element_H' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_H];
                                                      ^~~~~~~~~~~~~~~~~~~
cooling/EAGLE/cooling.c:751:24: error: 'const struct chemistry_part_data' has no member named 'smoothed_metal_mass_fraction'
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_He];
                        ^
cooling/EAGLE/cooling.c:751:54: error: 'chemistry_element_He' undeclared (first use in this function)
       p->chemistry_data.smoothed_metal_mass_fraction[chemistry_element_He];

I did not get errors compiling SWIFT when I tried with const-lambda.

Stuck in EAGLE 50

I was trying to run the Eagle 50 example (swiftsim/examples/EAGLE_low_z/EAGLE_50) as part of preparing for a competition. All tests pass, but the simulation seems to be stuck before even starting properly (cpu utilisation goes to 1-2 cores per rank, and nothing changes after running like this for 12h). I used --cosmology --hydro --self-gravity --stars --threads=16 -n 64 --pin

.
.
.
[0000] [00141.5] engine_config: Absolute minimal timestep size: 6.613473e-19
[0000] [00120.4] engine_config: Minimal timestep size (on time-line): 8.189315e-11
[0000] [00120.4] engine_config: Maximal timestep size (on time-line): 5.366949e-06
[0000] [00121.7] engine_config: Restarts will be dumped every 6.000000 hours
[0000] [00121.7] main: engine_init took 1279.084 ms.
[0000] [00121.7] main: Running on 404421250 gas particles, 20786477 stars particles and 425259008 DM particles (850466735 gravity particles)
[0000] [00121.7] main: from t=1.276e-02 until t=1.413e-02 with 4 ranks, 16 threads / rank and 16 task queues / rank (dt_min=1.000e-10, dt_max=1.000e-05)...
[0000] [00164.0] engine_init_particles: Setting particles to a valid state...
[0000] [00165.2] engine_init_particles: Computing initial gas densities.
[0000] [00270.9] engine_init_particles: Converting internal energy variable.
[0000] [00271.4] engine_init_particles: Running initial fake time-step.
#   Step           Time Scale-factor     Redshift      Time-step Time-bins      Updates    g-Updates    s-Updates  Wall-clock time [ms]  Props

full output

Sometimes it gets stuck earlier (at main: from t=1.276e-02 until t=1.413e-02 with 4 ranks...) but I've never gotten it further than what's shown above.

What am I doing wrong?

Onboarding guide could suggest better configure flags

Overall the onboarding guide is very helpful. However, I found I had to specify paths to FFTW3 and GSL, even though these were both accessible via pkg-config. Moreover, despite these being listed as essential requirements, the configure process did not complain, but rather allowed me to compile a binary that was incapable of running the cosmological example.

This was easily fixed but my onboarding experience could have been even smoother, and for inexperienced users it could be particularly useful to directly instruct them on the flags.

(In case relevant, this is on MacOS with macports-provided gcc/pkg-config/fftw3/gsl)

Configure SWIFT with VELOCIraptor

Dear all,

I'm trying to configure SWIFT with VELOCIraptor. I have installed VELOCIraptor (with parallel HDF5 support and MPI), and when I configure SWIFT with it, it is found correctly:

checking for InitVelociraptor in -lvelociraptor... yes
checking for VR_NOMASS in -lvelociraptor... no
VELOCIraptor not compiled to so as to *not* store masses per particle.

However, in the configure summary,

VELOCIraptor enabled : no

I've tried to run swift with the flag '--velociraptor', but the result shows that 'VELOCIraptor is not available'. I don't know why.

Thank you,
Tianning

Can't compile swift with MPI VELOCIraptor

Hello,

I am trying to compile SWIFT with VELOCIraptor and following https://swift.dur.ac.uk/docs/VELOCIraptorInterface/stfwithswift.html it seems rather straightforward. Nevertheless I fail, no matter which combination of options I try.
I am using a regular desktop PC using Debian Testing.

The steps to reproduce from fresh clones are:

git clone https://github.com/ICRAR/VELOCIraptor-STF.git
cd VELOCIraptor-STF
git rev-parse HEAD # returns dc6d330eef60b7ca10e029d9a9af434454575daa
mkdir build-sp
cd build-sp
cmake ../ -DVR_USE_HYDRO=ON -DVR_USE_SWIFT_INTERFACE=ON -DCMAKE_CXX_FLAGS="-fPIC" -DCMAKE_BUILD_TYPE=Release -DVR_MPI=OFF
make
cd ..
mkdir build-mp
cd build-mp
cmake ../ -DVR_USE_HYDRO=ON -DVR_USE_SWIFT_INTERFACE=ON -DCMAKE_CXX_FLAGS="-fPIC" -DCMAKE_BUILD_TYPE=Release -DVR_MPI=ON
make
cd ../../
git clone https://gitlab.cosma.dur.ac.uk/swift/swiftsim.git
cd swiftsim
git rev-parse HEAD # returns 25a7aaa4cb35c42cbee9e7ae78c48eb10a7844c5
./autogen.sh
autoreconf --version # returns autoreconf (GNU Autoconf) 2.71
./configure --enable-fof --with-velociraptor=/home/lukas/git/VELOCIraptor-STF/build-sp/src --with-velociraptor-mpi=/home/lukas/git/VELOCIraptor-STF/build-mp/src
make

The compilation then halts with this MPI error in libvelociraptor.a:

libtool: link: mpicc -I../src -I../argparse -I/usr/include -I/usr/include/hdf5/serial -fopenmp -DWITH_MPI "-DENGINE_POLICY=engine_policy_keep | engine_policy_setaffinity" -O3 -fomit-frame-pointer -malign-double -fstrict-aliasing -ffast-math -funroll-loops -march=amdfam10 -mavx2 -pthread -fopenmp -fopenmp -Wall -Wextra -Wno-unused-parameter -Wshadow -Werror -Wstrict-prototypes -o swift_mpi swift_mpi-main.o  -L/usr/lib/x86_64-linux-gnu/hdf5/serial ../src/.libs/libswiftsim_mpi.a ../argparse/.libs/libargparse.a -L/home/lukas/git/VELOCIraptor-STF/build-mp/src -lvelociraptor -lmpi -lstdc++ -lgsl -lgslcblas -lhdf5_hl -lhdf5 -lcrypto -lcurl -lsz -lz -ldl -lfftw3_threads -lfftw3 -lnuma -lpthread -lm -pthread -fopenmp
/usr/bin/ld: /home/lukas/git/VELOCIraptor-STF/build-mp/src/libvelociraptor.a(swiftinterface.cxx.o): in function `MPI::Op::Init(void (*)(void const*, void*, int, MPI::Datatype const&), bool)':
swiftinterface.cxx:(.text._ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[_ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb]+0x19): undefined reference to `ompi_mpi_cxx_op_intercept'
/usr/bin/ld: /home/lukas/git/VELOCIraptor-STF/build-mp/src/libvelociraptor.a(swiftinterface.cxx.o): in function `MPI::Intracomm::Clone() const':
swiftinterface.cxx:(.text._ZNK3MPI9Intracomm5CloneEv[_ZNK3MPI9Intracomm5CloneEv]+0x2c): undefined reference to `MPI::Comm::Comm()'
/usr/bin/ld: /home/lukas/git/VELOCIraptor-STF/build-mp/src/libvelociraptor.a(swiftinterface.cxx.o): in function `MPI::Graphcomm::Clone() const':
swiftinterface.cxx:(.text._ZNK3MPI9Graphcomm5CloneEv[_ZNK3MPI9Graphcomm5CloneEv]+0x27): undefined reference to `MPI::Comm::Comm()'
/usr/bin/ld: /home/lukas/git/VELOCIraptor-STF/build-mp/src/libvelociraptor.a(swiftinterface.cxx.o): in function `MPI::Cartcomm::Sub(bool const*) const':
swiftinterface.cxx:(.text._ZNK3MPI8Cartcomm3SubEPKb[_ZNK3MPI8Cartcomm3SubEPKb]+0x7e): undefined reference to `MPI::Comm::Comm()'
/usr/bin/ld: /home/lukas/git/VELOCIraptor-STF/build-mp/src/libvelociraptor.a(swiftinterface.cxx.o): in function `MPI::Intracomm::Create_graph(int, int const*, int const*, bool) const':
swiftinterface.cxx:(.text._ZNK3MPI9Intracomm12Create_graphEiPKiS2_b[_ZNK3MPI9Intracomm12Create_graphEiPKiS2_b]+0x2e): undefined reference to `MPI::Comm::Comm()'
/usr/bin/ld: /home/lukas/git/VELOCIraptor-STF/build-mp/src/libvelociraptor.a(swiftinterface.cxx.o): in function `MPI::Cartcomm::Clone() const':
swiftinterface.cxx:(.text._ZNK3MPI8Cartcomm5CloneEv[_ZNK3MPI8Cartcomm5CloneEv]+0x27): undefined reference to `MPI::Comm::Comm()'
/usr/bin/ld: /home/lukas/git/VELOCIraptor-STF/build-mp/src/libvelociraptor.a(swiftinterface.cxx.o):swiftinterface.cxx:(.text._ZNK3MPI9Intracomm11Create_cartEiPKiPKbb[_ZNK3MPI9Intracomm11Create_cartEiPKiPKbb]+0x93): more undefined references to `MPI::Comm::Comm()' follow
/usr/bin/ld: /home/lukas/git/VELOCIraptor-STF/build-mp/src/libvelociraptor.a(swiftinterface.cxx.o):(.data.rel.ro._ZTVN3MPI8DatatypeE[_ZTVN3MPI8DatatypeE]+0x78): undefined reference to `MPI::Datatype::Free()'
/usr/bin/ld: /home/lukas/git/VELOCIraptor-STF/build-mp/src/libvelociraptor.a(swiftinterface.cxx.o):(.data.rel.ro._ZTVN3MPI3WinE[_ZTVN3MPI3WinE]+0x48): undefined reference to `MPI::Win::Free()'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:800: swift_mpi] Error 1
make[2]: Leaving directory '/home/lukas/tmp/swiftsim/examples'
make[1]: *** [Makefile:525: all-recursive] Error 1
make[1]: Leaving directory '/home/lukas/tmp/swiftsim'
make: *** [Makefile:457: all] Error 2

(in case you need the full log or any other additional information, I can share it too)

I'm not the biggest expert on openMPI, but everything obvious seems correct to me:

  • it is using mpicc for compilation
  • it is linking mpi (-lmpi)
  • build-mp/src/libvelociraptor.a was built with mpi enabled
  • swift without mpi compiled correctly at this point
  • there is only one version of openmpi installed on my computer (the one from libopenmpi-dev (4.1.2-2))

Nevertheless I don't doubt that I could be missing something obvious (that could then maybe also be added to the docs)

I also found https://gitlab.cosma.dur.ac.uk/swift/swiftsim/-/issues/780, so I am wondering if maybe something broke with that PR in the setup explained in the docs.

Unknown intel compiler version (error: _AX_COMPILER_VERSION_INTEL)

Hi,

We're having difficulty compiling the code on Ubuntu 18.04.1 using ICC version 18.0.1.
After running ./autogen.sh, ./configure CC=icc MPICC=mpicc exits with the following error:

checking for C compiler version... configure: error: in `/path/to/files':
configure: error: _AX_COMPILER_VERSION_INTEL unknown intel compiler version

However, using the exact same compiler on Ubuntu 16.04.5 seems to work fine.

Hardcoding the version number using ax_cv_c_compiler_version="18.0.1" lets the configure step complete successfully (although CFLAGS have to be set manually).

Thanks!

Small mod required for build

Hi,

Attached is a file cycle.h.patch.txt which can be used to patch src/cycle.h so that SWIFT builds on Arm.

Please try patch src/cycle.h -i cycle.h.patch.txt -o cycle.h

This should create a new file cycle.h, which should replace src/cycle.h.

Thanks.

Mistake in BlobTest_3D example

Hi SWIFT team,

I was trying to use the makeIC.py file from the BlobTest_3D example for a different simulation when I noticed a mistake in the function generate_bcc_lattice:

def generate_bcc_lattice(num_on_side, side_length=1.0):
    cube = generate_cube(num_on_side // 2, side_length)
    mips = side_length / num_on_side
    positions = np.concatenate([cube, cube + mips * 0.5])
    return positions

The function should create a bcc lattice by generating a grid and adding a second grid that is shifted by a half step in all three directions. However, in this case the half step mips = side_length / num_on_side is calculated incorrectly, which results in the second grid being shifted too little.

A 2D plot for a grid generated with this function (with num_on_side = 10) looks like this:

bcc_lattice_wrong

A possible solution would be to change the line mips = side_length / num_on_side to mips = side_length / (num_on_side // 2).
This would result in a grid that looks like this:

bcc_lattice_right

Planetary impact simulation failing after random amount of time: "error: [05595.1] xmf.c:xmf_prepare_file():86: Unable to open temporary file"

Dear SWIFT team,

I have been using your code (mainly version 0.9) for the past 10 months or so to run giant impact simulations at a range of resolutions (10^5–10^7 particles). I haven't really had any issues that we haven't been able to fix before, but recently my jobs have started failing after some random amount of time (anywhere from a few hours to a few days), with the following error in the log file: "error: [05595.1] xmf.c:xmf_prepare_file():86: Unable to open temporary file". I am also getting this issue now when running with SWIFT version 1.0.0.

We have yet to be able to resolve this issue and figure out what is going on, and it seems strange that this has only started happening fairly recently since I have had no issues running exactly the same types of jobs previously. For reference, I am also using WoMa/Seagen to build the planets/set-up the impacts etc. Any help or guidance would be very much appreciated!

Thanks very much once again. Best wishes,

Matt

Interacting unsorted cells

Hello,

I am adding a black hole model into Swift and encountered the following fatal error:

1707 4.251840e-04 0.0830958 11.0343041 9.438521e-08 41 41 2 2 0 0 0 3.139 0 1.375
[03640.3] runner_doiact_functions_hydro.h:runner_dopair_subset_branch_density():883: Interacting unsorted cells.

My commits/branch are public and located here:

https://github.com/romeeld/swiftsim/commits/rennehan_feature_blackholes

We are running the Eagle s25n188 volumes with SPHENIX hydrodynamics and COLIBRE cooling and EAGLE stars. We are using SIMBA feedback and SIMBA black holes.

My branch is a branch off of the "Swimba" branch. In Swimba, we added decoupled stellar winds from the hydrodynamics scheme and have not encountered this error with the new decoupled winds scheme. After I added in the black hole model, I receive this error very early (z ~ 11). I implemented a kinetic kick for black hole feedback, and also decouple the black hole winds for a short period of time.

All decoupled winds skip the cooling step completely (in the runner_others.c file in my branch, line 157). Decoupled winds are also not considered for accretion/feedback in the black holes modules. We allow decoupled winds to contribute to the hydro density, and all other density loops.

At this time, there has been no black hole feedback, but there has been black hole accretion. Our black hole accretion model relies on a new loop over the star particle neighbours of black hole particles (BH loop over star particles). I added in 3 additional loops in:

runner_doiact_functions_black_holes.h

Which can be found in that file by looking for the "bh_stars_loop_is_active" function. After the stellar properties are calculated, the black holes nibble on gas particles and take a small fraction of their mass (about 10%). They are temporarily marked as "to be swallowed" until the black hole feedback step, when they are then marked "not swallowed" since we are only nibbling.

If there is any other information I can give, please let me know.

Thank you!

Issue Encountered Running autogen.sh Script in SWIFT-master Directory

Dear SWIFT Team,

I hope this message finds you well.

I'm writing to report an issue I encountered while setting up the SWIFT project on my Windows 11 system using Windows PowerShell.

Issue Description:
After downloading the SWIFT-master directory from GitHub and following the provided setup instructions, I faced difficulty executing the autogen.sh script in Windows PowerShell. Instead of running as expected, the script prompted me to select a program to open it with, rather than executing it as a shell script.

Steps Taken:

  1. Downloaded SWIFT-master directory from GitHub.
  2. Attempted to execute autogen.sh script in Windows PowerShell.
  3. Encountered a prompt to choose a program to open the script with, rather than executing it.

Expected Behavior:
The autogen.sh script should execute as a shell script, generating the necessary files for configuring the SWIFT project.

Observed Behavior:
The script prompts the user to select a program to open it with, rather than executing.

Efforts to Resolve:

  1. Tried granting executable permissions to the script using chmod +x autogen.sh, but the issue persisted.

System Information:

  • Operating System: Windows 11
  • Terminal: Windows PowerShell

I kindly request your assistance in resolving this issue and successfully configuring the SWIFT project on my Windows 11 system. Any guidance or support you can provide would be greatly appreciated.

Thank you for your attention to this matter.

Tests failing on ARM64: getopt and (un)signed char

Running make check on an ARM64 system fails to build some tests. One problem is related to assigning the return value of getopt to a plain char. On ARM64 platforms the plain char type is unsigned, so compiler warnings are generated which are promoted to errors, e.g.

testPeriodicBC.c:408:59: warning: result of comparison of constant -1 with expression of type 'char' is always true [-Wtautological-constant-out-of-range-compare]
  while ((c = getopt(argc, argv, "m:s:h:n:r:t:d:f:v:a:")) != -1) {
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^  ~~
1 warning generated.

Changing char to signed char should fix this. Files affected:

testActivePair.c
test27cells.c
test125cells.c
testPeriodicBC.c
test27cellsStars.c

Tested on the Isambard system (Thunder X2 processors) with GCC 9.3.0 and ARM 20.0 (based on LLVM 9.0.1) compilers called from Cray wrappers.
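
A minimal sketch of the signed char fix described above (the option string is copied from the warning; the surrounding program is illustrative only, not one of the affected test files):

/* On ARM64 a plain char is unsigned, so comparing it with getopt()'s -1
 * sentinel is always true. Using signed char (or int, which is what getopt
 * actually returns) restores the intended loop termination. */
#include <unistd.h>

int main(int argc, char *argv[]) {
  signed char c; /* was: char c; */
  while ((c = getopt(argc, argv, "m:s:h:n:r:t:d:f:v:a:")) != -1) {
    /* handle each option here ... */
  }
  return 0;
}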

Example Scripts: Migration to Python3

Hi possible contributors!

Something that would make for a great first issue is to download and compile SWIFT, and then check out the examples and try running a few. The majority of these examples currently use python2 scripts for initial conditions generation and analysis; it would be great if you could upgrade these to use python3 instead.

Thanks!

Energy Conservation in giant impact simulations

I am currently running giant impact simulations using the EoS of Hubbard and Macfarlane (1980), and SWIFT seems to have trouble with energy conservation. Whenever I set the maximum time-step to be large (100 s), to allow SWIFT to find its preferred time-step, energy conservation suffers, which in turn leads to problems with the simulation itself. When I cap the maximum dt at 10 s, forcing the time-steps to be smaller, energy is conserved much better, but the simulation then takes much longer than with a maximum dt of 100 s. I'm not quite sure what the problem might be, so could it be a problem with SWIFT itself?
Energy_10
Energy_16
?
The top graph is where the timesteps have been capped at 10s, the bottom is a maximum of 100s
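
For reference, the cap described above corresponds to the dt_max entry of the TimeIntegration block in the parameter file (a sketch only; it assumes the run's internal time unit is the second, as in the planetary examples):

TimeIntegration:
  dt_max:   10.    # maximal time-step (internal units); raising this to 100. is what triggers the energy drift described above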

Initial baryon temperature of small_cosmo_volume example seems wrong

In small_cosmo_volume.yml, the initial temperature of the baryons at z~=50 is set to ~7000K. An accompanying comment says (1 + z_ini)^2 * 2.72K.

This temperature seems far too high, and the comment makes it sound like it's a mistake rather than intentional. Baryons thermally decouple from the CMB photons at z~150, and only after that time would their temperature redshift as (1+z)^2. Thus, a more appropriate temperature would seem to be T_cmb (1+z)^2 / (1+z_dec) which comes to ~45K, below the temperature floor for the simulation.

Other examples give more reasonable initialisation temperatures, e.g. eagle_50.yml initialises to T=268.7K at z=127, which seems sensible. Given that it is quoted to 4 significant figures, perhaps it's based on a more detailed calculation than the above hand-waving (which would produce a slightly higher estimate).

I have never played much with changing initialisation temperatures. I would assume the effect is fairly modest, but such an extreme discrepancy in the provided small_cosmo_volume.yml example seems like it could generate unphysical thermal smoothing effects at early times. Is there a specific reason for it?
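
For reference, the hand-waving estimate above works out as follows (assuming T_cmb = 2.725 K, z_ini = 50 and z_dec ~ 150):

  T = T_cmb * (1 + z_ini)^2 / (1 + z_dec) = 2.725 K * 51^2 / 151 ~ 47 K

i.e. roughly the ~45 K quoted above (the exact value depends on the z_dec adopted), and far below the ~7000 K currently set in the example.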

compile issue when trying planetary simulation

I'm trying to run the provided /Planetary/EarthImpact example and I have some issues getting SWIFT to compile.

./configure --with-hydro=planetary --with-equation-of-state=planetary --with-hdf5=/path/to/hdf5
make -j8 check 

and I got the following error:

In file included from testRiemannTRRS.c:26:0:
../src/riemann/riemann_trrs.h:37:2: error: #error "The TRRS Riemann solver currently only supports an ideal gas equation of state.
 Either select this equation of state, or try using another Riemann solver!"

The code was run on a CentOS 7 Linux distribution with an Intel Xeon CPU. The HDF5 library is hdf5-1.12.1-Linux-centos7-x86_64-gcc485.

Could you please help?
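
For context (an observation based on the error text, not a confirmed fix): the failure comes from the testRiemannTRRS unit test, and the quoted #error states that the TRRS Riemann solver only supports an ideal-gas equation of state, so that test cannot be built together with the planetary EoS. If the immediate goal is only to build the swift binaries for the EarthImpact example, building without the test suite may be enough:

./configure --with-hydro=planetary --with-equation-of-state=planetary --with-hdf5=/path/to/hdf5
make -j8        # `make check` additionally builds and runs the unit tests, including the EoS-restricted Riemann-solver ones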

Gasoline and Anarchy-PU crashing with additional physics

Hi SWIFT team,

I have been attempting some simulations using the Gasoline and Anarchy-PU hydro schemes. The setup is a spherically symmetric gas halo, initially in hydrostatic equilibrium, in an external NFW potential. I have tested this setup thoroughly with SPHENIX across different resolution levels, running up to 3 Gyr. Gasoline and Anarchy-PU both crash at around 100 Myr. I don't remember the exact error I got with Anarchy-PU, but this is what I get with Gasoline:

[m7124:86540:0:86720] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x2af38c120440)
[m7127:123101:0:123272] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x2b31b42e27a0)
==== backtrace (tid:  86720) ====
0 0x00000000004ea4d2 space_parts_get_cell_index_mapper()  ???:0
1 0x000000000049b618 threadpool_runner()  threadpool.c:0
2 0x0000000000007ea5 start_thread()  pthread_create.c:0
3 0x00000000000fe9fd __clone()  ???:0
=================================
==== backtrace (tid: 123272) ====
0 0x00000000004ea4d2 space_parts_get_cell_index_mapper()  ???:0
1 0x000000000049b618 threadpool_runner()  threadpool.c:0
2 0x0000000000007ea5 start_thread()  pthread_create.c:0
3 0x00000000000fe9fd __clone()  ???:0
=================================
[m7125:70337:0:70525] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x2ac87ce8cb90)
==== backtrace (tid:  70525) ====
0 0x00000000004ea4d2 space_parts_get_cell_index_mapper()  ???:0
1 0x000000000049b618 threadpool_runner()  threadpool.c:0
2 0x0000000000007ea5 start_thread()  pthread_create.c:0
3 0x00000000000fe9fd __clone()  ???:0 

The run command is

mpirun -np 8 /cosma/home/durham/dc-husk1/SWIFT_SPH/swiftsim/examples/swift_mpi --external-gravity --self-gravity --hydro --temperature --threads=14 --limiter --sync --pin params.yml

This is with 4 nodes of cosma7; I have also attempted non-MPI runs and runs on cosma6, and I get errors regardless. The configure options are

--with-cooling=COLIBRE --with-chemistry=EAGLE --enable-boundary-particles=10000000 --with-hydro=gasoline --with-gravity=with-multi-softening --with-tracers=EAGLE --with-ext-potential=nfw

The parameter file contains the following:

metaData:
  run_name:   IsolatedGalaxy-EAGLE-Ref

# Define the system of units to use internally.
InternalUnitSystem:
  UnitMass_in_cgs:     1.98848e43    # 10^10 M_sun in grams
  UnitLength_in_cgs:   3.08566e21 # 1 kpc in cm
  UnitVelocity_in_cgs: 1e5           # 1 km/s in cm/s
  UnitCurrent_in_cgs:  1             # Amperes
  UnitTemp_in_cgs:     1             # Kelvin

# Parameters for the self-gravity scheme
Gravity:
  eta:          0.025                 # Constant dimensionless multiplier for time integration.
  MAC:          geometric
  theta_cr:     0.7                   # Opening angle (Multipole acceptance criterion).
  use_tree_below_softening:  0
  max_physical_DM_softening:     0.3 # Physical softening length (in internal units).
  max_physical_baryon_softening: 0.3 # Physical softening length (in internal units).
  mesh_side_length:              256

# Parameters governing the time integration (Set dt_min and dt_max to the same value for a fixed time-step run.)
TimeIntegration:
  time_begin:        0.    # The starting time of the simulation (in internal units).
  time_end:          2   # The end time of the simulation (in internal units).
  dt_min:            1e-14  # The minimal time-step size of the simulation (in internal units).
  dt_max:            1e-2  # The maximal time-step size of the simulation (in internal units).

# Parameters governing the snapshots
Snapshots:
  basename:              output      # Common part of the name of output files
  time_first:            0.          # Time of the first output if non-cosmological time-integration (in internal units)
  delta_time:            0.0125       # Time difference between consecutive outputs (in internal units)
  compression:           7           # Compress the snapshots
  select_output_on:      1
  select_output:         param_list.yml
  output_list_on:        1
  output_list:           output_list.txt

Restarts:
  delta_hours:           1

Scheduler:
  max_top_level_cells:   20

# Parameters governing the conserved quantities statistics
Statistics:
  delta_time:           1e-1     # Time between statistics output
  time_first:              0     # (Optional) Time of the first stats output if non-cosmological time-integration (in internal units)

# Parameters related to the initial conditions
InitialConditions:
  file_name:               ICs.hdf5 # The file to read
  periodic:                0            # Are we running with periodic ICs?
#  stars_smoothing_length:  0.6

# Parameters for the hydrodynamics scheme
SPH:
  resolution_eta:        1.2348   # Target smoothing length in units of the mean inter-particle separation (1.2348 == 48Ngbs with the cubic spline kernel).
  CFL_condition:         0.2      # Courant-Friedrich-Levy condition for time integration.
  h_min_ratio:           0.1      # Minimal smoothing in units of softening.
  h_max:                 10.
  minimal_temperature:   100.

# Standard EAGLE cooling options
EAGLECooling:
  dir_name:                /cosma6/data/dp004/dc-husk1/SWIFT/IsolatedGalaxy/IsolatedGalaxy_feedback/coolingtables/  # Location of the Wiersma+09 cooling tables
  H_reion_z:               7.5               # Redshift of Hydrogen re-ionization
  H_reion_eV_p_H:          2.0               # Energy inject by Hydrogen re-ionization in electron-volt per Hydrogen atom
  He_reion_z_centre:       3.5               # Redshift of the centre of the Helium re-ionization Gaussian
  He_reion_z_sigma:        0.5               # Spread in redshift of the  Helium re-ionization Gaussian
  He_reion_eV_p_H:         2.0               # Energy inject by Helium re-ionization in electron-volt per Hydrogen atom

# COLIBRE cooling parameters
COLIBRECooling:
  dir_name:                /cosma6/data/dp004/dc-husk1/SWIFT/IsolatedGalaxy/IsolatedGalaxy_feedback/UV_dust1_CR1_G1_shield1.hdf5 # Location of the Ploeckinger+20 cooling tables
  H_reion_z:               7.5               # Redshift of Hydrogen re-ionization (Planck 2018)
  H_reion_eV_p_H:          2.0
  He_reion_z_centre:       3.5               # Redshift of the centre of the Helium re-ionization Gaussian
  He_reion_z_sigma:        0.5               # Spread in redshift of the  Helium re-ionization Gaussian
  He_reion_eV_p_H:         2.0               # Energy inject by Helium re-ionization in electron-volt per Hydrogen atom
  delta_logTEOS_subgrid_properties: 0.3      # delta log T above the EOS below which the subgrid properties use Teq assumption
  rapid_cooling_threshold:          0.333333 # Switch to rapid cooling regime for dt / t_cool above this threshold.

# Use solar abundances
EAGLEChemistry:
  init_abundance_metal:     0.0129
  init_abundance_Hydrogen:  0.7065
  init_abundance_Helium:    0.2806
  init_abundance_Carbon:    0.00207
  init_abundance_Nitrogen:  0.000836
  init_abundance_Oxygen:    0.00549
  init_abundance_Neon:      0.00141
  init_abundance_Magnesium: 0.000591
  init_abundance_Silicon:   0.000683
  init_abundance_Iron:      0.0011

# NFW potential parameters
NFWPotential:
  useabspos:          0             # 0 -> positions based on centre, 1 -> absolute positions
  position:           [0.0,0.0,0.0] # Location of centre of the NFW potential with respect to centre of the box (internal units) if useabspos=0 otherwise with respect to the 0,0,0, coordinates.
  concentration:      5.6            # Concentration of the halo
  M_200:              10000.         # Mass of the halo (M_200 in internal units)
  critical_density:   1.36e-8       # Critical density (internal units).
  timestep_mult:      0.025          # Dimensionless pre-factor for the time-step condition, basically determines fraction of orbital time we need to do an integration step
  epsilon:            0.3
  h:                  0.7

I have tried building with debug, debugging checks and the sanitizer enabled; these didn't yield any additional error-related information that I could see. I am running with these again and will share the new outputs if you think that will help.

Thanks for the help in advance!

Edit: Here's the output with debugging turned on.

[m7031:259876:0:260037] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x2b14b40008c0)
[m7028:86884:0:87046] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x2ab4d45beb50)
[m7029:265468:0:265468] Caught signal 11 (Segmentation fault: address not mapped to object at address 0xfffffffe07982770)

/cosma/home/durham/dc-husk1/SWIFT_SPH/swiftsim/src/space_cell_index.c: [ space_parts_get_cell_index_mapper() ]
      ...
      145       /* Is this a place-holder for on-the-fly creation? */
      146       ind[k] = index;
      147       cell_counts[index]++;
==>   148       ++count_extra_part;
      149
      150     } else {
      151       /* Normal case: list its top-level cell index */

==== backtrace (tid:  87046) ====
 0 0x00000000004ea5fc space_parts_get_cell_index_mapper()  /cosma/home/durham/dc-husk1/SWIFT_SPH/swiftsim/src/space_cell_index.c:148
 1 0x000000000049b24a threadpool_chomp()  /cosma/home/durham/dc-husk1/SWIFT_SPH/swiftsim/src/threadpool.c:164
 2 0x000000000049b24a threadpool_runner()  /cosma/home/durham/dc-husk1/SWIFT_SPH/swiftsim/src/threadpool.c:191
 3 0x0000000000007ea5 start_thread()  pthread_create.c:0
 4 0x00000000000fe9fd __clone()  ???:0
=================================

Mistake in short range gravity task creation for non-periodic case

Hi SWIFT team,


I have come across the following while getting gravity to work with the moving-mesh hydro scheme.

In the function engine_make_self_gravity_tasks_mapper there is the following code block:

  /* Special case where every cell is in range of every other one */
  if (delta >= cdim[0] / 2) {
    if (cdim[0] % 2 == 0) {
      delta_m = cdim[0] / 2;
      delta_p = cdim[0] / 2 - 1;
    } else {
      delta_m = cdim[0] / 2;
      delta_p = cdim[0] / 2;
    }
  }

However, it is only true that every cell is in range of every other one if delta >= cdim[0] / 2 with periodic boundary conditions. This resulted in some particles not interacting with all the particles they are supposed to interact with. The following error is produced when I run a non-periodic simulation with self gravity with a very small IC of a grid of 15x15x15 particles all with the same non-zero mass:

[00002.1] runner_others.c:runner_do_end_grav_force():836: g-particle (id=1936, type=Gas) did not interact gravitationally with all other gparts gp->num_interacted=2250, total_gparts=3375 (local num_gparts=3375 inhibited_gparts=0)

I would suggest replacing the above code block with the following to fix this:

  /* Special case where every cell is in range of every other one */
  if (periodic) {
    if (delta >= cdim[0] / 2) {
      if (cdim[0] % 2 == 0) {
        delta_m = cdim[0] / 2;
        delta_p = cdim[0] / 2 - 1;
      } else {
        delta_m = cdim[0] / 2;
        delta_p = cdim[0] / 2;
      }
    }
  } else {
    if (delta > cdim[0]) {
      delta_m = cdim[0];
      delta_p = cdim[0];
    }
  }

On a related note:
I have also noticed that the function cell_min_dist2_same_size (which is used in engine_make_self_gravity_tasks_mapper) contains the following code for the non-periodic case:

    const double dx = min(fabs(cix_max - cjx_min), fabs(cix_min - cjx_max));
    const double dy = min(fabs(ciy_max - cjy_min), fabs(ciy_min - cjy_max));
    const double dz = min(fabs(ciz_max - cjz_min), fabs(ciz_min - cjz_max));

I also think this is not true in general and should be replaced with:

    const double dx = min4(fabs(cix_min - cjx_min), fabs(cix_min - cjx_max), fabs(cix_max - cjx_min), fabs(cix_max - cjx_max));
    const double dy = min4(fabs(ciy_min - cjy_min), fabs(ciy_min - cjy_max), fabs(ciy_max - cjy_min), fabs(ciy_max - cjy_max));
    const double dz = min4(fabs(ciz_min - cjz_min), fabs(ciz_min - cjz_max), fabs(ciz_max - cjz_min), fabs(ciz_max - cjz_max));

This then also follows the same structure as the periodic case. Since the cells are assumed to be the same size in this function, fabs(cix_min - cjx_min) will be equal to fabs(cix_max - cjx_max) (and similarly for y and z, of course), so this could maybe be simplified a bit, as sketched below.
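
As a rough sketch of that simplification (assuming a min3() helper analogous to the existing min4(); only dx is shown, dy and dz follow the same pattern):

    /* For equal-size cells, cix_min - cjx_min == cix_max - cjx_max, so one of the
       four terms in min4() is redundant. */
    const double dx = min3(fabs(cix_min - cjx_min), fabs(cix_min - cjx_max),
                           fabs(cix_max - cjx_min));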

The above seems to have fixed the "g-particle did not interact gravitationally with all other gparts" error in the majority of my test cases, but in some cases there still seems to be one cell whose gparts do not interact with all other gparts. I have yet to find the cause of that. I can provide more details about those cases here, but they are probably more closely related to this issue.


Thanks in advance for your input.

Yolan Uyttenhove

port to ARM system

Dear SWIFT developers,

Thank you very much for your previous help; I was able to run SWIFT with vectorization disabled on an ARM-based HPC.

The code is functioning, but it runs very slowly. I am planning to seek help from the HPC technical team to port SWIFT onto the system properly. Could you please advise which files to inspect in order to make the necessary changes?

Many thanks for the help!

README.md link to onboarding guide is broken

The link is to http://www.swiftsim.com/onboarding.pdf but this redirects to https://swift.strw.leidenuniv.nl/ instead of https://swift.strw.leidenuniv.nl/onboarding.pdf. Either the web server configuration needs correcting, or the link should be fixed.
