
Proteus: Computational Methods and Simulation Toolkit

Proteus (http://proteustoolkit.org) is a Python package for rapidly developing computer models and numerical methods.

Installation

conda install proteus -c conda-forge

For a development installation, you want to install Proteus's dependencies and compile Proteus from source:

conda env create -f environment-dev.yml
conda activate proteus-dev
make develop-conda # or pip install -v -e .

You can also build proteus and dependencies from source (without conda) with

make develop
make test

See https://github.com/erdc/proteus/wiki/How-to-Build-Proteus for more information on building the entire stack.
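Whichever route you take, a quick sanity check that the install is usable (assuming the relevant environment is active) is simply to import the package:

# Minimal post-install check: import the package and show where it was loaded from.
import proteus
print(proteus.__file__)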

Developer Information

The source code, wiki, and issue tracker are on GitHub at https://github.com/erdc/proteus.


Contributors

adimako, ahmadia, alistairbntl, arnsong, bchambers28, burgreen, cekees, davidbrochart, dloney, dtoconn, ejtovar, giovanni-cozzuto-1989, idoakkerman, jan-janssen, jhcollins, johanmabille, majinsaha, malej, manuel-quezada, mfarthin, milad-rakhsha, nehaljwani, pedrohrw, smattis, srobertp, timothypovich, tjcorona, tridelat, wacyyang, zhang-alvin


proteus's Issues

Rewrite Archiver.py to use a single metadata file instead of one per task, and investigate parallel HDF5 to write one global HDF5 file as well.

This would start to go in here:
https://github.com/erdc-cm/proteus/blob/master/src/Archiver.py#L97

Some MPI communication about the mesh may be necessary, but we're talking about a handful of integers for each processor that have to be pulled to the master node. The parallel writes to per-processor HDF5 files would remain in the first step of switching to a single XMF file; switching to parallel HDF5 and MPI I/O would be the second step.
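A minimal sketch of the second step (one global HDF5 file written with MPI I/O), assuming an MPI-enabled h5py build; the dataset names and sizes below are hypothetical, not Proteus's actual archive layout:

# Sketch only: every rank writes a disjoint slab of one global HDF5 file.
from mpi4py import MPI
import h5py
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.rank

# A handful of per-processor mesh integers pulled to the master node,
# e.g. the number of locally owned nodes (placeholder value here).
n_local_nodes = 100 + rank
counts = comm.gather(n_local_nodes, root=0)
if rank == 0:
    print("per-rank node counts:", counts)

# All ranks open the same file via the MPI-IO driver and write their slab.
with h5py.File("archive_global.h5", "w", driver="mpio", comm=comm) as f:
    offsets = np.cumsum([0] + comm.allgather(n_local_nodes))
    dset = f.create_dataset("node_owner", (offsets[-1],), dtype="i8")
    dset[offsets[rank]:offsets[rank + 1]] = rank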

Makefile won't try to rebuild $PROTEUS_PREFIX if it fails

The $PROTEUS_PREFIX directory currently gets created before the profile is completely built, so if the stack build fails, make still thinks $PROTEUS_PREFIX is up to date. This could be considered a hashdist issue (@ahmadia @certik?), because 'hit develop DEV_DIR' fails but still leaves behind a broken DEV_DIR.

proteus relies on a non-installed feature of petsc4py

proteus currently relies on 'conf', a library that is not installed as part of a standard petsc4py installation. I'm going to ask Lisandro if it's okay to add it to the proper petsc4py install.

In the meantime, I'm crafting a patch that modifies the petsc4py installer to also install the conf directory into the petsc4py/ package.
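As a rough, illustrative sketch of the kind of change such a patch could make (the source layout and arguments here are assumptions, not petsc4py's actual setup.py):

# Illustrative only: ship the build-configuration helpers as an importable
# subpackage. Package and directory names are assumptions.
from setuptools import setup

setup(
    name="petsc4py",
    packages=["petsc4py", "petsc4py.conf"],
    package_dir={
        "petsc4py": "src",        # assumed source layout
        "petsc4py.conf": "conf",  # install the conf helpers alongside the package
    },
)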

automate MACOSX_DEPLOYMENT_TARGET in Makefile

The new Makefile in pre-1.0 selects PROTEUS_ARCH based on 'uname -s'. On OS X we also need to set the specific OS X version because it affects the build toolchain for the whole stack. This task will add enough logic to the Makefile to set MACOSX_DEPLOYMENT_TARGET and pass it on to the stack spec.
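As an illustration of the detection step only (this helper is hypothetical and not part of the Proteus Makefile), the deployment target could be derived from the running OS X version and captured by the Makefile:

# Hypothetical helper: print a value the Makefile could capture, e.g.
#   export MACOSX_DEPLOYMENT_TARGET := $(shell python osx_target.py)
import platform

release, _, _ = platform.mac_ver()           # e.g. "10.9.5" on OS X, "" elsewhere
if release:
    print(".".join(release.split(".")[:2]))  # -> "10.9"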

launcher problem with parun on cygwin

@ahmadia I'm putting this here for Alex's issue.

Getting the following error on cygwin when running parun from the command line:

$ parun dambreak_so.py -l 5 -b
Usage: You should set up symlinks to launcher.

See README on http://github.com/hashdist/hdist-launcher

Here is the build info

PROTEUS : /home/RDCHLAS3/proteus
PROTEUS_ARCH : cygwin
PROTEUS_PREFIX : /home/RDCHLAS3/proteus/cygwin
PROTEUS_VERSION : fb2c490
HASHDIST_VERSION : ec7b4d52b45dd2c0e9fc0fdc820e48cbfa99f7ab
HASHSTACK_VERSION: aff88b02172baadd026eae0fae441456efb11ddf

Build system

I'm looking to fix several critical issues, including:

  • It is possible in a variety of situations to use stale dependencies. If somebody updates hashstack, we want to try to rebuild the profile. If the profile is more recent than Proteus's install path, we want to rebuild Proteus's extensions.
  • Proteus needs to support extra link flags when building PETSc extensions, since on Garnet we need to provide a "-target=native" flag at link time to avoid chaos. I'm going to add the logic so that this flag is optional; that is, it defaults to an empty list if not specified.
  • There is a missing step between "invoking Proteus on a test problem" and "invoking Proteus for your own code". There are two important steps we do not support yet: generating the input to a submission script on a supercomputer, and launching a local process with the right environment. I suggest we reserve the "proteus" name for launching simulations that use the Proteus environment. For example:
mpirun -np 4 ./proteus poisson_demo.py

The proteus script will be written into prefix/bin as part of the installation procedure and will contain all the correct environment settings. It should be suitable for use within a job submission script or standalone.
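A minimal sketch of what such a wrapper could look like, assuming it only needs to set a couple of environment variables before handing control to the interpreter; the prefix path and variable names are placeholders, not the actual installed script:

#!/usr/bin/env python
# Hypothetical prefix/bin/proteus wrapper: set up the environment, then
# exec the requested script under the same interpreter.
import os
import sys

PREFIX = "/path/to/proteus/prefix"  # filled in at install time (placeholder)
os.environ.setdefault("PROTEUS_PREFIX", PREFIX)
os.environ["PATH"] = os.pathsep.join(
    [os.path.join(PREFIX, "bin"), os.environ.get("PATH", "")])

# Replace this process, so launchers like mpirun see a single rank per invocation.
os.execv(sys.executable, [sys.executable] + sys.argv[1:])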

One thing I should note for the future:

The current build system uses a multi-tier approach:

Makefile
  -> setuppyx.py
  -> setupf.py
  -> setuppetsc.py
  -> setup.py

The Makefile doesn't have a build stage, only an install stage, and it only invokes installation on the corresponding setup* files. This is only problematic in the sense that "install" stages aren't great at dependency tracking (indeed, running install in a setup.py file re-copies all files), so we currently can't quickly rebuild Proteus and test it in place.

In order to fix this, we need to give top-level control to the setup.py file, which will then invoke the corresponding setup* files with sys.argv intact. That is, calling "setup.py build" will call "setupf.py build", "setuppetsc.py build", and "setuppyx.py build". This means we will eventually deprecate the Makefile, since it will serve no purpose beyond invoking the setup.
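A minimal sketch of that dispatch, assuming the tiered setup files sit alongside the top-level setup.py; the real extension definitions and error handling are omitted:

# Sketch of a top-level setup.py forwarding its command line, sys.argv intact,
# to the tiered setup files. The remaining pure-Python packaging would follow.
import subprocess
import sys

for setup_file in ["setupf.py", "setuppetsc.py", "setuppyx.py"]:
    # e.g. "python setup.py build" also runs "python setupf.py build", etc.
    subprocess.check_call([sys.executable, setup_file] + sys.argv[1:])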

doc test example

Add a doctest example, possibly based on one of the Poisson or advection-diffusion smoke tests.
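As a generic illustration of the format (the function below is hypothetical, not one of the Proteus smoke tests):

# Hypothetical doctest that Sphinx's doctest extension or python -m doctest can run.
def l2_error(approx, exact):
    """Return the discrete L2 error between two equal-length sequences.

    >>> l2_error([1.0, 2.0], [1.0, 2.0])
    0.0
    >>> round(l2_error([0.0, 0.0], [3.0, 4.0]), 6)
    5.0
    """
    return sum((a - e) ** 2 for a, e in zip(approx, exact)) ** 0.5

if __name__ == "__main__":
    import doctest
    doctest.testmod()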

clean up partitioning

I'm not setting the mesh MPIAdj consistently in flcbdfWrappers. It's working, but it needs to be cleaned up and used consistently.

python documentation example

Provide relatively complete documentation for a module that demonstrates a decent set of reST features with the Sphinx extensions.
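A small, hypothetical sketch of the kind of reST docstring markup this would demonstrate (the module, function, and numbers are illustrative only):

"""Hypothetical demo module showing reST/Sphinx docstring conventions.

.. module:: demo_docs
   :synopsis: Example of reST markup rendered by Sphinx autodoc.
"""


def advect(concentration, velocity, dt):
    """Advance a scalar field one step with a placeholder update rule.

    :param concentration: nodal values of the transported scalar
    :type concentration: list of float
    :param velocity: constant advection velocity
    :type velocity: float
    :param dt: time step size
    :type dt: float
    :returns: updated nodal values
    :rtype: list of float

    .. note:: This is documentation scaffolding, not a production scheme.
    """
    return [c - velocity * dt * c for c in concentration]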

Tracking hashstack dependency more effectively

Right now we don't automatically set the hashstack commit that we're using in any way. The default behavior is to pull the latest commit on one of our tracking branches and to commit this on builds.

One big issue is that when make is called in the future, there is no warning that a commit is possibly out of date. There have been a few proposed ideas and solutions for this:

  • Switch to .gitmodules for hashstack. This still relies on the user checking that the proteus commit is up-to-date, but warns them when their hashstack commit is out-of-date.
  • Add a .txt file that is manually updated by developers when they want to point to the most recent version of the repository.
  • Switch from the in-repository profile format to an external repository profile format, such as https://github.com/hashdist/hashstack/blob/master/examples/mpi4py.linux.yaml
  • Embed the hashstack repository into proteus branches, and put the maintenance work for communicating information between the repositories on developers comfortable with git subtree operations.
  • Other ideas?

There are merits and drawbacks to each approach. I'm opening this issue just to get the discussion started. I'll try to summarize the discussion in this top-level comment, and we should use the comments below for the discussion itself.
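As a small illustration of the second option above (a manually updated pin file), a build-time check along these lines could warn when the checked-out hashstack no longer matches the recorded commit; the file name and repository path are hypothetical:

# Hypothetical check: compare the commit recorded in hashstack-commit.txt
# against the hashstack working tree's HEAD and warn on mismatch.
import subprocess
import sys

pinned = open("hashstack-commit.txt").read().strip()
head = subprocess.check_output(
    ["git", "-C", "hashstack", "rev-parse", "HEAD"]).decode().strip()

if head != pinned:
    sys.stderr.write(
        "warning: hashstack is at %s but %s is pinned; it may be out of date\n"
        % (head[:8], pinned[:8]))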

logging for C/C++/Fortran

We need to work out a system for logging when we drop out of Python, preferably coordinated with the logging done through proteus.Profiling.

Command-line PROTEUS_ARCH

It's currently cumbersome to switch PROTEUS_ARCH. It should be a dead simple command-line parameter.

Review parallelism and flcbdfwrappers.so

flcbdfwrappers.so requires that MPI has been initialized before it is called. Unfortunately, this can lead to erroneous results when functions are called before MPI has been initialized. We need to perform the following tasks:

  • Ensure that if MPI returns an error, this error is propagated up to an error handler.
  • When flcbdfwrappers is initialized, ensure that MPI has been initialized and that a default communicator is set (a minimal guard is sketched below).
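A minimal sketch of such a guard on the Python side, assuming mpi4py is available; the actual hook into flcbdfWrappers' module initialization is not shown:

# Sketch: make sure MPI is initialized and expose a default communicator
# before any wrapped C/Fortran routine is called. mpi4py normally initializes
# MPI on import; this covers configurations where it does not.
from mpi4py import MPI

def ensure_mpi():
    if not MPI.Is_initialized():
        MPI.Init()
    return MPI.COMM_WORLD  # default communicator for the wrappers

comm = ensure_mpi()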

think through C and fortran library testing

@mfarthin and @ahmadia, what this issue is getting at is the significant amount of C and C++ code we're using. For now I'm guessing we should focus on getting test coverage at the level of the Python interface to keep things simple. However, some functionality is not exposed to Python, so to truly get coverage we would either have to write lower-level tests or commit to exposing everything, which is much easier now with Cython than when we first wrote much of this code.

flcbdfWrappers.so mis-installed

With host-arch, it installs to linux2/lib/python2.7/site-packages/$PETSC_ARCH/proteus/flcbdfWrappers.so and thus needs to be copied to linux2/lib/python2.7/site-packages/proteus/.
