ls1mardyn / ls1-mardyn

ls1-MarDyn is a massively parallel Molecular Dynamics (MD) code for large systems. Its main target is the simulation of thermodynamics and nanofluidics. ls1-MarDyn is designed with a focus on performance and easy extensibility.

Home Page: http://www.ls1-mardyn.de

License: Other

molecular-dynamics molecular-dynamics-simulation simulation scientific-computing ls1-mardyn nanofluidics autopas

ls1-mardyn's People

Contributors

amartyads, andreicostinescu, cniethammer, dust71, eckhardw, fernanor, fg-tum, gralkapk, heierm, homesgh, hoppef, hpcbern, jakniem, joshuamarx, julianpelloth, kaethorn, kruegener, louievoit, maurerf, neothethird, njan0, pramathe, samnewcome, scarfguy, ssauermann, steffenseckler, tchipev, tijana-kovacevic, tobiasrau, weisslj


ls1-mardyn's Issues

Multiple issues with mixing coefficients

  • misleading warning "This can happen because the xml-input doesn't support these yet!"
  • the neededCoeffs check if(dmixcoeff.size() < neededCoeffs){ should compare against twice neededCoeffs, and it should probably be an == (see the sketch after this list)
  • duplication between Domain::_mixcoeff and EnsembleBase::_mixingrules
  • special mixing rules with xi and eta are not symmetric, but Comp2Param::initialize is symmetric?
  • appending values at the end of _mixcoeff in Simulation.cpp:250 is error-prone
  • initialization depends on the order in which the rules appear in the file, and the passed componentid-s seem to have no effect?
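A minimal sketch of the corrected size check suggested in the second bullet, assuming each unordered component pair carries two coefficients (xi and eta); the names dmixcoeff and neededCoeffs are taken from the issue, the surrounding function is hypothetical:

#include <cstddef>
#include <iostream>
#include <vector>

// Hedged sketch: for n components there are n*(n-1)/2 pairs, each needing
// two coefficients, so the vector should hold exactly 2 * neededCoeffs
// entries; a mismatch in either direction indicates broken input.
bool mixcoeffSizeOk(const std::vector<double>& dmixcoeff, std::size_t numComponents) {
	const std::size_t neededCoeffs = numComponents * (numComponents - 1) / 2;
	if (dmixcoeff.size() != 2 * neededCoeffs) {  // doubled, and == instead of <
		std::cerr << "Expected " << 2 * neededCoeffs << " mixing coefficients, got "
		          << dmixcoeff.size() << "\n";
		return false;
	}
	return true;
}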

enforce formatting through Jenkins

Is your feature request related to a problem? Please describe.
Formatting in ls1 is almost arbitrary right now. At some point, we should enforce the formatting.

Describe the solution you'd like
Enforce it through Jenkins.

AutoPas integration remarks/checks

Describe the bug
Some hints and checks for AutoPas are missing:

  • rebuild frequency and sampling rate should probably ALWAYS be the same (in MPI-parallel simulations)!

These things need to be documented and/or tested!

Add unit tests for measureload

Is your feature request related to a problem? Please describe.
Measureload is not unit-tested.

Describe the solution you'd like
Should be unit tested.

Describe alternatives you've considered
None.

Additional context
Unit tests ftw.

Plugin Integration Master

General Plugin Infrastructure:

  • Moving from pure OutputPlugins to more general plugins / cleanup Simulation.cpp c2a85be
  • Adjusted tests for new interface c685d13
  • move to /plugins ba1bcc0
  • siteWiseForces as new pluginStep a481a6a
  • PluginBase Doc 46b9258
  • General cleanup of Domain.cpp
  • remove GrandCanonical from Simulation.cpp

update warnings plugin usage

Is your feature request related to a problem? Please describe.
The warnings plugin in Jenkins is deprecated. We should use the warnings-ng-plugin.
It uses the command recordIssues instead of warnings.

Performance testing

While benchmarking for my thesis, I just measured the worst vectorization speed-ups ever. Now I'm trying to figure out whether it's somehow due to my setup (simulation parameters + hardware), or whether we have done something really bad since the last performance measurements. It would be very nice if we had some frequently updated performance reference to look at, in order to isolate the one from the other.
If possible, I would like to have plots of the performance of 1-2-3 systems over the different revisions. The systems can be adapted from the examples folder, e.g. Argon, CO2, EOX, and use the CubicGridGenerator.
I have seen such a plot somewhere, specifically for mardyn, but I don't remember whether it was in a paper or a dissertation and whether it was by Wolfgang Eckhardt or Christoph Niethammer or Martin Buchholz, or ...

  • A first version could run with AVX2 in serial and plot the MMUPS over svn/git revisions.
  • A second version could use all performance-relevant features, e.g. 8 MPI x 4 OMP with AVX2, again over commits.
  • A fully functional variant could also produce additional plots within a commit. These could test things one after the other, e.g. first vectorization modes (AOS, SOA, SSE, ...), then OpenMP with AVX2 (1, 2, 4, 8 threads), then MPI with AVX2 (1, 2, 4, 8 procs), then MPI+OMP with AVX2 (1x8, 2x4, 4x2, 8x1).

export-src.sh script no longer working, add libs/ to it

Describe the bug
The purpose of the export-src.sh script is to pack and compress all source files necessary for compilation on e.g. a cluster. It now also needs to pack the newly created libs/ folder.

To Reproduce
Steps to reproduce the behavior:

  1. ./export-src.sh
  2. unpack
  3. compile

Expected behavior
Compilation completes.

Unexpected behavior
Compilation fails on the rapidxml include.

Make ci-matrix more platform-agnostic

As discussed in #33, the ci-matrix jobs should be mostly platform-agnostic. One objective is to free up processing time on atsccs11 and move as much as possible to our private OpenShift cloud, which allows for more granular control and well-defined environments. To make sure the most common supported compilers are tested at least once, an additional axis for gcc, icc, and clang should be added. If necessary, unsupported vectorization modes can be tested using SDE, but Jenkins should prefer native executors where possible. Blocked by #33 and related to #12.

Particle Generation out of domain box in DEBUG build only

Describe the bug
ONLY APPEARS IN DEBUG BUILD. (Master branch 0079c02)

When using the MultiObjectGenerator to fill the domain, some particles seem to be generated outside of the domain, which causes the simulation to crash. This does not happen when built with RELEASE settings in the makefile.

To Reproduce
Steps to reproduce the behavior:

  1. Build MarDyn with TARGET = DEBUG
  2. Use the attached .xml to run, no difference between seq or mpirun
  3. Simulation will halt and claim 180 particles out of bounds, sometimes orders of magnitude outside the box

Expected behavior
The generation of particles or their movement in the first step should not result in positions like x = 10e6 when MarDyn is built with DEBUG support.


Build environment (please complete the following information):

  • OS: Linux (Ubuntu 16.04 LTS)
  • Compiler: cfg=gcc, compiles with mpicxx
  • Build System: Make

Additional context
The attached file is named .txt because GitHub won't allow .xml uploads.

debug_mode.txt

ParticleIterator in RMM mode iterates always in a sliced fashion

Is your feature request related to a problem? Please describe.
This was a quick fix at the time of the WR to improve NUMA-awareness. It was, however, integrated in a dirty fashion and may decrease the performance of the RMM mode if there is severe load imbalance, in which case the strided iteration would be preferable.
On the other hand, NUMA-awareness is suboptimal in the Normal mode, because the strided iteration is always used there.

Describe the solution you'd like
A way to decide whether the classic (strided) iteration or the NUMA-aware (sliced) iteration over the cells should be used. This should probably go into the XML input (see the sketch below).
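A hypothetical sketch of what an XML-selectable traversal policy could look like; the enum and parser names are illustrative, not the actual ls1-mardyn API:

#include <stdexcept>
#include <string>

// Hypothetical: map an XML option value to a cell traversal policy.
enum class CellTraversal { Strided, Sliced };

CellTraversal parseCellTraversal(const std::string& value) {
	if (value == "strided") return CellTraversal::Strided;  // classic iteration
	if (value == "sliced")  return CellTraversal::Sliced;   // NUMA-aware iteration
	throw std::invalid_argument("unknown cell traversal: " + value);
}

The ParticleIterator could then branch on this value in both RMM and Normal mode, instead of hard-coding sliced in one and strided in the other.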

MettDeamon is incompatible with AutoPas mode

Describe the bug
When using the MettDeamon with AutoPas enabled, error 434 is thrown and lost particles are to be expected.
Cause:
The MettDeamon explicitly calls ParticleContainer::update() which is not allowed when using AutoPas.

To Reproduce
Steps to reproduce the behavior:

  1. run an example with the MettDeamon and enabled AutoPas.
  2. observe errors.

Expected behavior
No errors are thrown.

Additional context
For now, I have added an explicit error message and abort the simulation if AutoPas is enabled and the MettDeamon's readXML() function is called.

Build fails using latest gcc 9.2.0

Describe the bug
Build fails with the following error when building with the latest version of gcc (9.2.0)

In file included from ./bhfmm/containers/UniformPseudoParticleContainer.h:29,
                 from bhfmm/FastMultipoleMethod.h:12,
                 from Simulation.cpp:72:
./bhfmm/HaloBufferOverlap.h: In member function ‘int bhfmm::HaloBufferOverlap::getAreaSize()’:
./bhfmm/HaloBufferOverlap.h:42:10: error: cannot convert ‘bhfmm::Vector3<int>’ to ‘int’ in return
   42 |   return _areaHaloSize;
      |          ^~~~~~~~~~~~~
      |          |
      |          bhfmm::Vector3<int>
./bhfmm/HaloBufferOverlap.h: In member function ‘int bhfmm::HaloBufferOverlap::getEdgeSize()’:
./bhfmm/HaloBufferOverlap.h:45:10: error: cannot convert ‘bhfmm::Vector3<int>’ to ‘int’ in return
   45 |   return _edgeHaloSize;
      |          ^~~~~~~~~~~~~
      |          |
      |          bhfmm::Vector3<int>

To Reproduce
Run make CFG=gcc PARTYPE=PAR TARGET=RELEASE VECTORIZE_CODE=AOS
with gcc 9.2.0 set up for compilation.
The options for PARTYPE and VECTORIZE_CODE probably do not make any difference.

Expected behavior
The file Simulation.o should compile successfully.


Build environment (please complete the following information):

  • OS: Linux
  • Compiler: gcc 9.2.0
  • Build System: make


MultiObjectGenerator plugin generates critically high particle IDs

Describe the bug
When generating a start configuration with the MultiObjectGenerator plugin, critically high particle IDs are assigned.

To Reproduce
Steps to reproduce the behavior:

  1. Generate a start configuration with the MultiObjectGenerator plugin e.g. on a grid.
  2. Write out a checkpoint in ASCII format.
  3. View particle IDs in checkpoint.

Expected behavior
The maximum particle ID of the generated start configuration should be equal to the global particle count.


Build environment (please complete the following information):

  • OS: Linux
  • Compiler: gcc/8.1.0
  • Build System: Make


cleanup of ReplicaFiller.cpp

Describe the bug
ReplicaFiller.cpp currently consists only of hotfixes. It might be useful to fix it properly, i.e., do a cleanup and think about what is necessary and what is not.

kdd decomposition: rebalanceLimit probably broken

Describe the bug
The rebalance limit in ls1 is currently probably using the wrong traversal times and thus is not triggered.

Further Action

  1. check whether it is indeed broken; updateParticleContainerAndDecomposition(perStepTimer.get_etime()) is probably passing the wrong time
  2. fix

Note
Don't forget to fix the overlapping communication (if it's problematic).

Plugin Integration MichaelaHeier Branch

Plugins integrated from the MichaelaHeier branch or Domain.cpp:

  • Center of Mass Alignment plugin (COMAligner) 91844cb
  • WallPotential plugin 80a6d46
  • Mirror plugin 1fea066
  • Andersen Thermostat in TemperatureControl 474a4b4
  • KartesianProfile base plugin 934f740
    • DensityProfile (number density) 36ffbf2
    • Velocity3dProfile bea7801
    • VelocityAbsProfile 74973c1
    • DOFProfile b66957b
    • KineticProfile b66957b
    • TemperatureProfile b66957b
    • Update Plugin Summary
    • Check for Profiled Components
    • CylinderProfile?
    • VirialProfile
    • Domain cleanup

Bug in refreshIDs using mpt on HAWK

Describe the bug
When using the mpt module for the MPI implementation on HAWK (HLRS) and more than one node, the option "refreshIDs" leads to a freezing simulation after printing "..., Started simulation".
When using the same configuration (input file, number of nodes, etc.) but the OpenMPI implementation, the simulation runs normally.

To Reproduce
Steps to reproduce the behavior:

  1. Go to e.g. Argon in the examples
  2. Use 2 nodes and compile with mpt and e.g. gcc
  3. Add <options> <option name="refreshIDs">1</option> </options>
  4. Run simulation and observe freeze

OpenMP parallelize TemperatureControl.

Is your feature request related to a problem? Please describe.
TemperatureControl appears to be an often-used plugin, but it isn't OpenMP-parallelized.

Describe the solution you'd like
Parallelize it!

Describe alternatives you've considered
Prove that it cannot be parallelized.
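A minimal sketch, assuming the thermostat's hot loop is a velocity rescaling over molecules; the real TemperatureControl also accumulates kinetic energy per region, which would additionally need an OpenMP reduction. Names are illustrative.

#include <cstddef>
#include <vector>

struct Velocity { double x, y, z; };

// Hedged sketch: rescale all velocities by a factor beta in parallel.
// Iterations are independent, so a plain parallel for is safe here.
void rescaleVelocities(std::vector<Velocity>& v, double beta) {
	#pragma omp parallel for
	for (std::size_t i = 0; i < v.size(); ++i) {
		v[i].x *= beta;
		v[i].y *= beta;
		v[i].z *= beta;
	}
}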

Move Components vector out of EnsembleBase into e.g. Domain and provide Safe Access

Detaching the Components Vector from EnsembleBase to provide easy and safe access for all other classes.

Related to #27.

The Component vector is currently stored in EnsembleBase, even though the list of components does not depend on the ensemble type used.

It might be moved to Domain or even Simulation to provide easier global access and, more to the point, safe access. Problems regarding the current implementation are discussed in detail in #27.

NVE Ensemble

Is your feature request related to a problem? Please describe.
Currently, there is no intuitive way to set up an NVE ensemble. (See below for how to do it currently.)

Describe the solution you'd like

  • In the input XML set ensemble to NVE and that is it.
  • Thermostats that change the energy in the system should then either be disabled or the simulation should stop with an error about conflicting input.

Additional context

  • 100% conservation of energy will not happen due to numerical imprecision.

  • Currently (a2c2482) to set up an NVE ensemble configure your XML with the following:

    • Set ensemble to NVT
    • Choose thermostat type "TemperatureControl"
    • In the control tag of the thermostat, set start to an iteration number that will not be reached.

GDD specific "decomposition sizes"

Is your feature request related to a problem? Please describe.
MaMiCo might require fixing process boundaries to macroscopic cells.

Describe the solution you'd like
Implement in GDD.

Describe alternatives you've considered
Write our own load balancer (not preferred).

Bug in velocity assigners

Describe the bug
The velocity assigners (EqualVA and MaxwellVA) do not assign correct velocities to the molecules. This can be seen when looking at the global temperature in the 0th/1st timestep.
Only the MaxwellVA in combination with single-atom molecules (no rotational DOF) leads to correct results.

To Reproduce
Steps to reproduce the behavior:

  1. Go to: cd ./examples/Evaporation/stationary/sim01/run01
  2. Run:
    mpirun -np 28 ../../../../../src/MarDyn_bfa9cdf6.PAR_RELEASE_AVX2 config.xml --steps=5 --final-checkpoint=0 | grep "T_global\|T = "
  3. The temperature in the 0th/1st timestep does not match T_global

Expected behavior
Temperatures should match.

Problem
EqualVelocityAssigner.h does not consider rotational degrees of freedom; in addition, it mixes up the absolute value of the overall velocity with the absolute value of each velocity component.
MaxwellVelocityAssigner.h does not consider rotational degrees of freedom.

Fix
EqualVelocityAssigner.h, line 16 should be
double v_abs = sqrt(/*kB=1*/ (3+molecule->component()->getRotationalDegreesOfFreedom())*T() / molecule->component()->m());
MaxwellVelocityAssigner.h, line 17 should be
double v_abs = sqrt(/*kB=1*/ (1+molecule->component()->getRotationalDegreesOfFreedom()/3)*T() / molecule->component()->m());

Best regards

ProfileOutput Branch: Cylinder output

Unclear from Domain.cpp (not working properly):

  • Define Loop order for write
  • Calculation of uID -> use of R or R^2? Linear or what spacing?
  • XML options for cylinder?
  • handle dimensions consistently

MPI_Finalize issue with plugins

Destructor calls for plugins after MPI_Finalize result in errors.
To reproduce, add an MPI_Info object to the configuration of the MPICheckpointwriter plugin:

*** The MPI_Info_free() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
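A minimal illustration of the ordering constraint, assuming a PluginBase-like interface (the struct and wiring here are hypothetical): anything holding MPI objects such as an MPI_Info must be released before MPI_Finalize(), so plugin destruction has to happen first.

#include <mpi.h>
#include <memory>
#include <vector>

struct PluginBase { virtual ~PluginBase() = default; };  // hypothetical stand-in

int main(int argc, char** argv) {
	MPI_Init(&argc, &argv);
	{
		std::vector<std::unique_ptr<PluginBase>> plugins;
		// ... run the simulation; plugins may allocate MPI_Info objects ...
	}   // plugins are destroyed here, while MPI is still initialized
	MPI_Finalize();  // now safe: no MPI handles outlive this call
	return 0;
}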

combine AoS and SoA build to novec build

Is your feature request related to a problem? Please describe.
Currently, both AoS and SoA are built separately. This is not really efficient, as the decision between the two cell processors is currently made in the prepare_start() method:

#if ENABLE_VECTORIZED_CODE
#ifndef ENABLE_REDUCED_MEMORY_MODE
	global_log->info() << "Using vectorized cell processor." << endl;
	_cellProcessor = new VectorizedCellProcessor( *_domain, _cutoffRadius, _LJCutoffRadius);
#else
	global_log->info() << "Using reduced memory mode (RMM) cell processor." << endl;
	_cellProcessor = new VCP1CLJRMM( *_domain, _cutoffRadius, _LJCutoffRadius);
#endif // ENABLE_REDUCED_MEMORY_MODE
#else
	global_log->info() << "Using legacy cell processor." << endl;
	_cellProcessor = new LegacyCellProcessor( _cutoffRadius, _LJCutoffRadius, _particlePairsHandler);
#endif // ENABLE_VECTORIZED_CODE

The differentiation between SoA and AoS should be moved into the readXML functions, e.g., by setting some bool to use the non-vectorized version (see the sketch below). Alternatively, a command-line parameter would also work.
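A hedged sketch of the proposed runtime selection, reusing the names from the snippet above; the _useLegacyCellProcessor flag is a hypothetical member that readXML would set:

	// Hypothetical runtime switch replacing the ENABLE_VECTORIZED_CODE compile-time branch.
	if (_useLegacyCellProcessor) {
		global_log->info() << "Using legacy cell processor." << endl;
		_cellProcessor = new LegacyCellProcessor( _cutoffRadius, _LJCutoffRadius, _particlePairsHandler);
	} else {
#ifndef ENABLE_REDUCED_MEMORY_MODE
		global_log->info() << "Using vectorized cell processor." << endl;
		_cellProcessor = new VectorizedCellProcessor( *_domain, _cutoffRadius, _LJCutoffRadius);
#else
		global_log->info() << "Using reduced memory mode (RMM) cell processor." << endl;
		_cellProcessor = new VCP1CLJRMM( *_domain, _cutoffRadius, _LJCutoffRadius);
#endif
	}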

todos

  • remove different compile modes from code
  • fix validationTest to be able to use two different xml configs.
  • adapt jenkins accordingly

Component pointer in MoleculeInterface creates questionable dependencies.

I think the Component pointer in MoleculeInterface and derivatives is a bad design choice.
Here first some clarification on what I believe the pointer does:
It points to elements of some sort of database of components (retrieved via _simulation.getEnsemble()->getComponents()) stored in the Simulation singleton, which is most probably established once during init and remains unchanged throughout the simulation.

Why is it bad?
In short, it violates the OCP and SRP principles. In detail:

  • I noticed this while trying to serialize a molecule's data. Needless to say, a pointer is not serializable.
  • As the database seems to be fixed and organized as a std::vector (at least that's what getComponents() returns), it makes no sense to store a pointer. Just as e.g. the ASCIIWriter class does, the index into the component database is sufficient to fully describe the necessary information.
  • I presume the choice of a pointer (over an index) was performance-related. In a std::vector, passing the index to the database and retrieving the appropriate component breaks down to a single addition, and is therefore negligible even if done very often in the tightest compute loop.
  • Even worse, the ID that is serialized by e.g. the writeBinary() method in MoleculeInterface is different from the index stored internally (off by one, cf. line 458 in FullMolecule.cpp). I believe this is because the (externally defined) database counts 1-based, and the std::vector counts 0-based. This essentially means that every piece of code dealing with components needs to be aware of this shift and has to implement it, even though this clearly should be the responsibility of the database.
  • If the database does not exist, any code using molecules is at risk of breaking. This essentially makes it impossible to write a unit test for deserialize()-ing (without building at least a dummy database in the Simulation singleton), as that method attempts to convert the serialized component id back into a pointer. Of course, a dummy database can be built, but then it's not a unit test, but depends on that database.

How to fix
Instead of storing a pointer, store an index (see the sketch below).
The pointer would be replaced by a call to the database passing the index, returning the actual Component object (or a pointer to it), which can be stored in the instance, e.g. during construction.
This would decouple the database, as the molecule is defined by its component id, regardless of how the database evaluates it. The responsibility of processing the id correctly would fall to the database implementation.
As the class-internal usage of the component pointer in FullMolecule.cpp is restricted to _component->ID(), other solutions might make sense. MoleculeRMM.cpp does not use it at all.
The interface additionally uses only _component->m(), so other solutions, omitting the pointer entirely, might be possible.
However, I did not think this through; it might also be very hard to eliminate it completely.
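A minimal sketch of the index-based design described above; ComponentDB, MoleculeData, and their interfaces are hypothetical illustrations, not the actual ls1-mardyn classes:

#include <cstdint>
#include <utility>
#include <vector>

struct Component { double mass; /* ... */ };

class ComponentDB {
public:
	explicit ComponentDB(std::vector<Component> components)
	    : _components(std::move(components)) {}
	// The database owns the 0-based vs. 1-based convention; callers never
	// apply the off-by-one shift themselves.
	const Component& byIndex(std::uint32_t idx) const { return _components.at(idx); }
private:
	std::vector<Component> _components;
};

struct MoleculeData {
	std::uint32_t componentIndex;  // trivially serializable, unlike a pointer
	// ... positions, velocities, ...
};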

Conclusion
I can see that, if that pointer is used in many different places and possibly indirectly, refactoring may not be an option, but this is a mean trap, not to say a terrible design choice.
I guess the code is as it is for 'historical' reasons, but it would be a good idea to change that.

P.S.: Don't be mad about my little rant; if you don't want to change it, that's fine, but there won't be a unit test for deserializing :). Just close the issue if you think this is not worth your time.

RDF not working for mixtures?

I have a problem getting the RDF plugin to work for mixtures. For single components it works just fine; however, for my binary mixture I get a segmentation fault as soon as the first sampling is supposed to occur.

Is this a known issue or does anyone know what might cause this?

Inputs and outputs are enclosed
RDF.zip

"nanofluids"

One of the keywords for ls1 here is "nanofluids", and the description states: "[...] a massively parallel Molecular Dynamics (MD) code for large systems. Its main target is the simulation of nanofluids."

The term "nanofluid" normally denotes a fluid that contains nanoparticles. That is clearly not the main application area of ls1.

Seemingly inconsistent code in HaloBufferOverlap.h

There seems to be an issue with the code in HaloBufferOverlap.h that prevents me from compiling.
(How does this compile for anyone?) Offending code, e.g.:

41	int  getAreaSize(){
42		return _areaHaloSize;
43	}

does not seem to comply with the declaration of _areaHaloSize:

61 Vector3<int> _areaHaloSize,_edgeHaloSize;

Please fix!
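A hedged sketch of one possible fix (the intended semantics may differ, e.g. the getters might be meant to return a single element of the vector): make the return types match the member declaration.

	Vector3<int> getAreaSize() {
		return _areaHaloSize;
	}
	Vector3<int> getEdgeSize() {
		return _edgeHaloSize;
	}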

GCMD explosion bug

Since I do not have the time right now to look into it in detail, I am just reporting this:

Dominik (LTD) observed a bug in GCMD (an explosion occurs). There are many ways in which this could be caused by GCMD implementation mistakes, but I remember that long ago we had discussed issues concerning multiple insertions. Do we exclude the possibility that multiple processes insert two molecules at the same time, which become neighbours? They could then be arbitrarily close to each other.

Is GCMD even an active feature of the current master version?


Specify Wall-Time instead of number of iterations

Is your feature request related to a problem? Please describe.
A problem users often face is estimating wall time for job submissions on clusters. If you overestimate the number of iterations that will complete within the requested job wall time, your job will time out and crash without giving you the latest checkpoint. If you underestimate it, you will underutilize the allocated resources (which on some clusters costs you CPU-hours; Hazel Hen?). If you estimate correctly, but the rate at which the simulation proceeds changes during the run, you run into the problems from above again.

Describe the solution you'd like
Add an ability to specify wall-time in the .xml file.
The responsibility to allow enough tolerance time around the mardyn-wall-time is left to the user. E.g. job-script wall-time is 24 hrs, mardyn-wall-time is 23.5 hrs, to allow time for initialization prior to start of mardyn, time for writing a checkpoint and eventual time for finalization.

Describe alternatives you've considered
I think that when your job runs out of wall-time, the scheduler sends some system signal to the process before sending the signal to kill it. An alternative could thus be to listen for that signal and then trigger the writing of the checkpoint. If the time interval between the "warning, you ran out of time" signal and the "just die already!" signal is not long enough, however, this would not work. So the suggestion above should be the better one.
Another thing I considered, but dismissed: if a checkpoint happens to be written during the running of the job-script, then the time needed for writing a checkpoint to disk can be stored. Let that time be C minutes. Then, when C*2 minutes of the wall-time remain, a checkpoint is written to disk (the factor of 2 is to allow a small safety buffer). I dismissed this, because on some clusters, MPI initialization might be happening in the job-script wall-time, so it would be better to just have some buffer between the job-script wall-time and the mardyn-wall-time, which renders this additional complication unnecessary.

Things to keep in mind
Unless mardyn was started with --final-checkpoint=0, a final checkpoint at the last iteration is written by default, so you shouldn't need to write code to trigger that.
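A hedged sketch of the requested feature: a wall-time budget read from the XML that terminates the main loop early, so the default final checkpoint still gets written. All names here are illustrative.

#include <chrono>

using Clock = std::chrono::steady_clock;

// True once the configured budget (e.g. 23.5 h for a 24 h job) is used up.
bool wallTimeExceeded(Clock::time_point start, std::chrono::seconds budget) {
	return Clock::now() - start >= budget;
}

// In the main loop (illustrative):
//   for (step = 0; step < numSteps && !wallTimeExceeded(start, budget); ++step) { ... }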

Validation test and/or Unit Tests for Replica Filler needed

Describe the bug
The replica filler has not been working for a while, illustrating that we need validation tests and/or unit tests for it.
Details to be filled in soon.

To Reproduce
Steps to reproduce the behavior:
Execute an example with a replica filler, in particular Matthias Heinen's Exploding Liquid example. You get a segfault and a message
"Number of molecules in the replica: 0"

Initialization of special mixing rules in XML is order dependent and ignores the "cid1" and "cid2" values

Describe the bug
see title

To Reproduce
take an example with 3 components and specify
<mixing>
<rule type="LB" cid1="2" cid2="3">
<xi>0.3</xi>
<eta>0.6</eta>
</rule>
<rule type="LB" cid1="3" cid2="2">
<xi>0.4</xi>
<eta>0.7</eta>
</rule>
</mixing>

The first values would be applied between components 1 and 2, the next ones between 1 and 3. They will be applied symmetrically, which is another bug.

testNoLostParticles failing for non-knl vectorization on KNL

When running non-knl vectorization modes on the KNL cluster, the following unit-tests fail:

DomainDecompBaseTest::testNoLostParticles (F) line: 97 parallel/tests/DomainDecompBaseTest.cpp
DomainDecompositionTest::testNoLostParticles (F) line: 121 parallel/tests/DomainDecompositionTest.cpp
KDDecompositionTest::testNoLostParticles (F) line: 241 parallel/tests/KDDecompositionTest.cpp
KDDecompositionTest::testNoLostParticles2 (F) line: 241 parallel/tests/KDDecompositionTest.cpp

I created a temporary branch that runs everything on KNL to test this: http://vmbungartz10.informatik.tu-muenchen.de/mardyn/blue/organizations/jenkins/ls1-mardyn/detail/neothethird_test/1/pipeline/

Is this a bug in mardyn or a faulty configuration in the Pipeline?

Try to improve OpenMP scalability of packing and unpacking MPI buffers

Not a problem as such, but this is one of the first (and only) approaches tried. Looking at the hybrid MPIxOMP scalability curves in the IJHPCA paper, perhaps more can be done to shift the peak of the (per-size) curves to the left, i.e. further left than 8x4 and 6x8. Otherwise one can argue that all the OpenMP work pays off only weakly in terms of the fastest MPIxOMP configuration.

Describe the solution you'd like
There is one instance where "critical" is being used; perhaps a solution can be conceived that avoids it (see the sketch below).
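A hedged sketch of one common way to drop such a critical section: each thread packs into its own buffer, and the buffers are concatenated afterwards. The types are illustrative; the real code packs particle data for the MPI halo exchange, and the resulting order may differ from the serial one (which should not matter for packing).

#include <cstddef>
#include <omp.h>
#include <vector>

std::vector<double> packParallel(const std::vector<double>& src) {
	std::vector<std::vector<double>> perThread(omp_get_max_threads());
	#pragma omp parallel
	{
		std::vector<double>& mine = perThread[omp_get_thread_num()];
		#pragma omp for nowait
		for (std::size_t i = 0; i < src.size(); ++i) {
			mine.push_back(src[i]);  // stands in for the actual packing logic
		}
	}
	// Merge the per-thread buffers; no synchronization needed anymore.
	std::vector<double> out;
	for (const auto& buf : perThread) out.insert(out.end(), buf.begin(), buf.end());
	return out;
}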

Long range correction (LRC) for planar interface

Describe the bug
Long range correction (LRC) for planar interface has no effect.

To Reproduce
Run simulations in examples/surface-tension_LRC/

Expected behavior
Reproduce the results (saturated liquid density, surface tension) of Werth et al., Molecular Physics 112 (2014), with a comparatively small cutoff radius (3 or 4 sigma).

xerces not included

Bug
XERCES_INCDIR in vtk.cmake is not set. Therefore, the required include path is missing.

Reproduction
Make with VTK_ENABLED set to ON.

Required CMake Version incorrect

The root CMakeLists.txt requires CMake version 3.3.
src/CMakeLists.txt uses the list command FILTER, which is only available in CMake 3.6 and greater.

Potential fix: raise the requirement in the root CMakeLists.txt to cmake_minimum_required(VERSION 3.6).

Remove need for external Jenkins jobs

The current setup of the pipeline is not ideal, especially when working with pull requests from forks of the repository on GitHub. Since the downstream jobs take a branch as an argument, the pipeline can't work its magic when run for external pull requests. Another implication of the current setup is that we cannot have concurrent builds for the same branch.

These external matrix-builds should be eliminated.

Variable Data for Particles

Is your feature request related to a problem? Please describe.
For some plugins, information needs to be stored per particle. This information should be variable, as it can be arbitrary. Additionally, it needs to be transferred through MPI.

Describe the solution you'd like
Add some sort of manager that manages the additionally saved information and saves it. This manager will be called if any information is needed or if particles are transferred between MPI ranks.

Describe alternatives
One could also add the data to each particle directly. This would, however, introduce too much overhead: at least one vector would be needed per particle.
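A hedged sketch of the proposed manager: the extra per-particle data lives in an external map keyed by particle ID instead of inside each particle, so particles stay lean and the payload can be handed to the MPI send path when a particle leaves the rank. All names are illustrative.

#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

class ParticleDataManager {
public:
	// Access (and lazily create) the payload for one particle.
	std::vector<double>& dataFor(std::uint64_t particleId) {
		return _data[particleId];
	}
	// Called when a particle leaves this rank: hand its payload to the
	// MPI send path and drop the local copy.
	std::vector<double> extract(std::uint64_t particleId) {
		auto it = _data.find(particleId);
		if (it == _data.end()) return {};
		std::vector<double> payload = std::move(it->second);
		_data.erase(it);
		return payload;
	}
private:
	std::unordered_map<std::uint64_t, std::vector<double>> _data;
};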

ToDos

  • implement manager
  • take care of MPI transfer
