
solvcon / solvcon


A software framework of conservation-law solvers that use the space-time Conservation Element and Solution Element (CESE) method.

Home Page: http://solvcon.net/

License: BSD 3-Clause "New" or "Revised" License

Python 36.64% Shell 3.22% C 10.67% Makefile 0.25% HTML 0.06% Cuda 1.38% GLSL 0.04% Jupyter Notebook 6.77% CMake 0.29% C++ 38.72% Dockerfile 0.09% Singularity 0.03% Cython 1.83%
computational-science

solvcon's Introduction

SOLVCON implements conservation-law solvers that use the space-time Conservation Element and Solution Element (CESE) method.


Install

Clone from https://github.com/solvcon/solvcon:

$ git clone https://github.com/solvcon/solvcon

SOLVCON needs the following packages: a C/C++ compiler supporting C++11, cmake 3.7+, pybind11 (Git master), Python 3.6+, Cython 0.16+, Numpy 1.5+, LAPACK, NetCDF 4+, SCOTCH 6.0+, Nose 1.0+, Paramiko 1.14+, boto 2.29.1+, and gmsh 3+. Support for VTK is to be enabled for the conda environment.

To install the dependencies, run the scripts contrib/conda.sh and contrib/build-pybind11-in-conda.sh (they use Anaconda).

The development version of SOLVCON only supports a local build:

$ make; python setup.py build_ext --inplace

To build SOLVCON from source code and install it to your system:

$ make; make install

Test the build:

$ nosetests --with-doctest
$ nosetests ftests/gasplus/*

Building the documentation requires Sphinx 1.3.1+, pstake 0.3.4+, and graphviz 2.28+. Use the following command:

$ make -C doc html

The documentation will be available at doc/build/html/.

solvcon's People

Contributors

ericpo, tai271828, yungyuc


solvcon's Issues

Build system for external dependencies

Originally reported by: Yung-Yu Chen (BitBucket: yungyuc, GitHub: yungyuc)


SOLVCON has various external dependencies, including Python (2.6/7), SCons (1.2+), numpy, LAPACK, netCDF (4+), metis (4.0.3), scotch (5.1+), VTK (5.6+), MPI2, nosetests (1.0+), and epydoc (1.0+). Some of them are prerequisites, while others are optional.

There should be a supplemental system distributed with the SOLVCON package to help build these dependencies. It would be very helpful for deploying SOLVCON on older OSes, e.g., RHEL 4, 5, or 5.5, which are commonly used on supercomputers.


cuda error

Originally reported by: David Bilyeu (BitBucket: david_b, GitHub: david_b)


Tried to run the CUDA test case and got this error.
(I compiled this test after commenting out this line:
https://bitbucket.org/yungyuc/solvcon/src/f37462b10ca5/sandbox/cuda/SConscript#cl-37
)
This was the Python error:

david@cuda-togo:cuda$ ./go run
Traceback (most recent call last):
  File "./go", line 29, in <module>
    import cueuler
  File "/home/david/Programs/dunadan/solvcon/sandbox/cuda/cueuler.py", line 23, in <module>
    from solvcon.gendata import AttributeDict
ImportError: No module named solvcon.gendata


Reduce head node memory usage

Originally reported by: Yung-Yu Chen (BitBucket: yungyuc, GitHub: yungyuc)


Currently the head node loads the domain connectivity arrays, which amount to about 150 MB per million elements. For 50 million elements the head node needs up to 7.5 GB of memory. This becomes a memory-usage bottleneck.

Loading of the domain connectivity arrays on the head node should be revised.
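
For a rough sense of the scaling, a back-of-the-envelope estimate in Python; the constant is just the 150 MB-per-million-elements figure quoted above, and the helper name is illustrative, not part of SOLVCON:

# Rough head-node memory estimate from the figure quoted above:
# ~150 MB of connectivity data per million elements.
MB_PER_MILLION_ELEMENTS = 150.0

def head_node_memory_gb(ncell):
    """Estimated connectivity memory (GB) held by the head node."""
    return ncell / 1.0e6 * MB_PER_MILLION_ELEMENTS / 1000.0

print(head_node_memory_gb(50e6))  # 7.5, matching the 7.5 GB quoted above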


Time-based milestones

Originally reported by: David Bilyeu (BitBucket: david_b, GitHub: david_b)


I would like the ability to specify any two of simulation time, dt, and number of steps, and then have the third automatically calculated. When I try to modify time_increment or steps_run in the driving script I get errors or weird behavior. I can post the traceback later if you need it.

I would also like to specify a printing time step rather than a number of time steps. This would be helpful when comparing the same simulation on two different meshes. It would also be useful for variable dt.
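
A minimal sketch of the requested behavior, assuming hypothetical names (resolve_time_controls, total_time, dt, steps) that are not part of the SOLVCON API: given any two of the three quantities, derive the third.

def resolve_time_controls(total_time=None, dt=None, steps=None):
    """Given exactly two of (total_time, dt, steps), compute the third."""
    if sum(v is not None for v in (total_time, dt, steps)) != 2:
        raise ValueError("specify exactly two of total_time, dt, steps")
    if total_time is None:
        total_time = dt * steps
    elif dt is None:
        dt = total_time / steps
    else:
        steps = int(round(total_time / dt))
    return total_time, dt, steps

print(resolve_time_controls(total_time=1.0, steps=200))  # (1.0, 0.005, 200)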


Code Verification

Originally reported by: Po-Hsien Lin (BitBucket: ericpo, GitHub: ericpo)


Hello YYC,

In gasdynb/bound_wall.c, line 246, you wrote

vmt[it][jt] = mvt[it][0]*vec[1][jt] + mat[it][1]*vec[2][jt]

Should it be changed to

vmt[it][jt] = mvt[it][0]*vec[1][jt] + mvt[it][1]*vec[2][jt]

In cuse/calc_dsoln_w3.c, lines 347~352, you wrote

};
#ifndef __CUDACC__
return 0;
};
#else
};

Should it be changed to

#ifndef __CUDACC__
};
return 0;
};
#else
};

Thank you very much.

eric


scons error

Originally reported by: David Bilyeu (BitBucket: david_b, GitHub: david_b)


Error when building the CUDA test.
This is the error message:
gcc -o libsc_gasdyn2d_c.so -shared -L/home/david/NVIDIA_GPU_Computing_SDK/C/lib -L/home/david/NVIDIA_GPU_Computing_SDK/C/common/lib/linux -L/usr/local/cuda/lib64 -L/usr/local/cuda/lib
gcc: no input files

scons -v

script: v2.0.0.final.0.r5023, 2010/06/14 22:05:46, by scons on scons-dev

I can comment out

https://bitbucket.org/yungyuc/solvcon/src/f37462b10ca5/sandbox/cuda/SConscript#cl-37

and it will compile.


Reading conflict on pvfs2

Originally reported by: Yung-Yu Chen (BitBucket: yungyuc, GitHub: yungyuc)


solvcon.rpc.create_solver() calls solvcon.io.domain.DomainIO.load_block() to load the corresponding block file. DomainIO.load_block() currently loads the domain.dom file in the domain directory, which results in a reading conflict when there are many sub-domains.

  File "/nfs/04/osu4914/opt/bin/scg", line 23, in <module>
    solvcon.go()
  File "/nfs/04/osu4914/opt/lib/python2.6/site-packages/solvcon/cmdutil.py", line 176, in go
    cmd()
  File "/nfs/04/osu4914/opt/lib/python2.6/site-packages/solvcon/command.py", line 720, in __call__
    wkr.run(('0.0.0.0', 0), DEFAULT_AUTHKEY)    # FIXME
  File "/nfs/04/osu4914/opt/lib/python2.6/site-packages/solvcon/rpc.py", line 164, in run
    self.eventloop()
  File "/nfs/04/osu4914/opt/lib/python2.6/site-packages/solvcon/rpc.py", line 120, in eventloop
    self._eventloop()
  File "/nfs/04/osu4914/opt/lib/python2.6/site-packages/solvcon/rpc.py", line 99, in _eventloop
    ret = method(*ntc.args, **ntc.kw)
  File "/nfs/04/osu4914/opt/lib/python2.6/site-packages/solvcon/rpc.py", line 264, in create_solver
    blk = dio.load_block(dirname=dirname, blkid=iblk, bcmapper=bcmap)
  File "/nfs/04/osu4914/opt/lib/python2.6/site-packages/solvcon/io/domain.py", line 425, in load_block
    return self.dmf.load_block(dirname, blkid, bcmapper)
  File "/nfs/04/osu4914/opt/lib/python2.6/site-packages/solvcon/io/domain.py", line 226, in load_block
    stream = open(dompath, 'rb')
IOError: [Errno 2] No such file or directory: '/fs/pvfs/osu4914/mesh.feanor/jcf_28mm_p66.dom/domain.dom'
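
One possible workaround sketch (not the fix adopted in SOLVCON): if the failures on PVFS2 are transient and caused by many sub-domain workers opening domain.dom at once, the open could be retried with a small backoff. open_with_retry below is a hypothetical helper, not existing SOLVCON code.

import time

def open_with_retry(path, mode='rb', attempts=5, delay=0.5):
    """Retry a shared-file open so simultaneous readers do not all fail."""
    for i in range(attempts):
        try:
            return open(path, mode)
        except IOError:
            if i == attempts - 1:
                raise
            time.sleep(delay * (i + 1))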

Optionally use symbolic link in splitting

Originally reported by: Yung-Yu Chen (BitBucket: yungyuc, GitHub: yungyuc)


For domain decomposition (solvcon.domain), the current implementation stores the original, whole block as a real file. The whole block in the domain directory is exactly the same as the source block file. For large meshes, the duplicated whole block costs up to tens of GB.

There should be an option to let solvcon.io.domain.DomainIO use a symlink instead of a real file for the whole block. This saves both space and time in splitting.
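
A minimal sketch of the proposed option, assuming a hypothetical place_whole_block helper and use_symlink flag (neither is existing SOLVCON API): link the whole-block file into the domain directory instead of copying it, and fall back to a copy where symlinks are unsupported.

import os
import shutil

def place_whole_block(src_blk, dom_dir, use_symlink=True):
    """Put the whole-block file into the domain directory by symlink or copy."""
    dst = os.path.join(dom_dir, os.path.basename(src_blk))
    if use_symlink:
        try:
            os.symlink(os.path.abspath(src_blk), dst)
            return dst
        except OSError:
            pass  # e.g. a filesystem without symlink support
    shutil.copy2(src_blk, dst)
    return dst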


METIS 4 fails on memory allocation for around 40M cells

Originally reported by: Yung-Yu Chen (BitBucket: yungyuc, GitHub: yungyuc)


METIS 4 uses int to specify the number of bytes when calling malloc(). Allocation fails when splitting meshes with more than about 40 million cells. The error message is as follows:

$ scg mesh jcf_33mm.blk jcf_33mm_p32.dom --split=32
I/O formats are determined as: BlockIO, DomainIO.
Load jcf_33mm.blk (BlockIO) ... done. (161.375s)
Block information:
  [Block (3D): 41540592 nodes, 123869747 faces (748750 BC), 41165124 cells]
    [BC#0 "jet": 847 faces with 0 values]
    [BC#1 "upstream": 74529 faces with 0 values]
    [BC#2 "downstream": 74529 faces with 0 values]
    [BC#3 "side": 298116 faces with 0 values]
    [BC#4 "top": 150788 faces with 0 values]
    [BC#5 "wall": 149941 faces with 0 values]
  Cell groups:
    grp#0: 
    grp#1: 
    grp#2: 
  Cell volume (min, max, all): 1.03889e-05, 7.50198e-05, 1458.
Create domain ... done. (1.5974e-05s)
Partition graph into 32 parts ... Error! ***Memory allocation failed for SetUpCoarseGraph: gdata. Requested size: -1987870220 bytesAborted

The mesh is generated using Cubit with the attached journaling file. Increasing the number of cells increases the requested size in the error message. Further increasing the number of cells makes the allocation of other arrays fail as well.
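
The negative "Requested size" is the signature of a signed 32-bit overflow: METIS 4 computes the byte count in an int, so any request beyond 2**31 - 1 bytes (about 2 GiB) wraps around. Recovering the real request from the message above:

reported = -1987870220          # "Requested size" printed in the error
actual = reported + 2**32       # undo the 32-bit wrap-around
print(actual)                   # 2307097076 bytes, ~2.1 GiB, beyond 2**31 - 1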


Solvcon Cubit

Originally reported by: David Bilyeu (BitBucket: david_b, GitHub: david_b)


I have found that it can take a long time to generate a mesh file when I use Cubit from inside SOLVCON. It does not always happen, but when it does it can take over a minute to generate a ~4k mesh. This is the traceback when I force SOLVCON to quit via Ctrl-C.

^CTraceback (most recent call last):
  File "../../scg", line 11, in <module>
    solvcon.go()
  File "/home/david/Programs/dunadan/solvcon/solvcon/cmdutil.py", line 176, in go
    cmd()
  File "/home/david/Programs/dunadan/solvcon/solvcon/command.py", line 735, in __call__
    func(submit=False, **funckw)
  File "/home/david/Programs/dunadan/solvcon/solvcon/case.py", line 383, in simu
    case.init(level=runlevel)
  File "/home/david/Programs/dunadan/solvcon/solvcon/case.py", line 514, in init
    loaded = self.load_block()
  File "/home/david/Programs/dunadan/feanor/euler.py", line 464, in load_block
    blk = super(EulerCase, self).load_block()
  File "/home/david/Programs/dunadan/solvcon/solvcon/case.py", line 589, in load_block
    obj = self.io.mesher(self)
  File "/home/david/Programs/dunadan/feanor/arrangement_euler_ASM_C16.py", line 34, in mesher
    return gn.toblock(bcname_mapper=cse.condition.bcmap)

No Vtk for final time step

Originally reported by: David Bilyeu (BitBucket: david_b, GitHub: david_b)


If the number of time steps is not an integer multiple of psteps, then the final results are not saved to a VTK file.
E.g., if nsteps = 249 and psteps = 20, then the final VTK write will be at the 240th time step.

I provided a simple fix for this in my fork of SOLVCON
https://bitbucket.org/david_b/solvcon_david/src/9e2198a16f9f/solvcon/hook.py#cl-441

I know that it works for a serial case but I don't know if it will work in parallel.
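
A minimal sketch of the condition such a fix needs (the names istep, nsteps, and psteps are illustrative, not the actual hook attributes): write on every interval hit and also force a write on the very last step.

def should_write_vtk(istep, nsteps, psteps):
    """Write at every psteps interval and always at the final step."""
    return istep % psteps == 0 or istep == nsteps

print(should_write_vtk(240, 249, 20))  # True: regular interval
print(should_write_vtk(249, 249, 20))  # True: final step, forced write
print(should_write_vtk(245, 249, 20))  # False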


Speed up domain splitting

Originally reported by: Yung-Yu Chen (BitBucket: yungyuc, GitHub: yungyuc)


Time spent in solvcon.domain.Collective.build_interface() increases with the number of sub-domains to be split and with the mesh size. For small meshes, build_interface() takes only a small fraction of the total time, but for large meshes (~50M cells) it takes a significant amount.

Because large meshes usually need to be split into more sub-domains than smaller meshes, build_interface() then dominates the time of domain splitting.

The algorithm in build_interface() should be revised for speedup.
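
An illustrative algorithmic sketch (not SOLVCON's implementation): instead of comparing every pair of sub-domains, a single pass over the faces can group those whose two neighboring cells land in different partitions. fccls (face-to-cell connectivity) and part (cell-to-partition assignment) are assumed inputs.

from collections import defaultdict

def collect_interfaces(fccls, part):
    """Group interface faces by the (lower, higher) pair of partition ids."""
    interfaces = defaultdict(list)
    for ifc, (icl, icr) in enumerate(fccls):
        if icr < 0:          # boundary face without a neighboring cell
            continue
        pl, pr = part[icl], part[icr]
        if pl != pr:
            interfaces[tuple(sorted((pl, pr)))].append(ifc)
    return interfaces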


Some examples could not run, such as example/elaslin/elcv2d

Originally reported by: Anonymous


When I run the script, the message is as follows:

[*******@centostoy elcv2d]$ ./go run

*** Start init (level 0) elcv2d ...
*** *** Start build_domain ...
*** *** mesh file: None
*** *** *** Start create_block ...
*** *** *** Traceback (most recent call last):
  File "./go", line 247, in <module>
    solvcon.go()
  File "/usr/lib/python2.6/site-packages/solvcon/cmdutil.py", line 176, in go
    cmd()
  File "/usr/lib/python2.6/site-packages/solvcon/command.py", line 735, in __call__
    func(submit=False, **funckw)
  File "/usr/lib/python2.6/site-packages/solvcon/case.py", line 383, in simu
    case.init(level=runlevel)
  File "/usr/lib/python2.6/site-packages/solvcon/case.py", line 514, in init
    loaded = self.load_block()
  File "/usr/lib/python2.6/site-packages/solvcon/case.py", line 589, in load_block
    obj = self.io.mesher(self)
  File "./go", line 115, in mesher
    return gn.toblock(bcname_mapper=cse.condition.bcmap)
AttributeError: 'NoneType' object has no attribute 'toblock'

Some examples can run, such as example/euler/impl. Could you tell me the reason? Thank you.


Numbers of edges are wrong for hexahedron and tetrahedron

Originally reported by: Yung-Yu Chen (BitBucket: yungyuc, GitHub: yungyuc)


The numbers of edges of hexahedron and tetrahedron listed in solvcon.block are wrong. See https://bitbucket.org/yungyuc/solvcon/src/69ce88dca75a/solvcon/block.py#cl-31

A hexahedron should have 12 edges and a tetrahedron should have 6. The wrongly specified edge numbers do not result in any errors in computation, since the numbers are never used (nor tested).
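
For reference, these are the counts the report says solvcon.block should carry; the mapping below is only illustrative, not the module's actual data layout.

EDGE_COUNT = {
    'tetrahedron': 6,    # 4 vertices, 4 faces, 6 edges
    'hexahedron': 12,    # 8 vertices, 6 faces, 12 edges
}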

