
nrl-plasma-physics-division / turbopy


This project is a fork of arichar6/turbopy.


A lightweight computational physics framework, based on the organization of turboWAVE. Implements a "Simulation, PhysicsModule, ComputeTool, Diagnostic" class hierarchy.

Home Page: https://turbopy.readthedocs.io/

License: Creative Commons Zero v1.0 Universal

Language: Python (100%)
Topics: hacktoberfest, plasma-physics, computational-physics, python, framework

turbopy's Introduction

turboPy


A lightweight computational physics framework, based on the organization of turboWAVE. Implements a "Simulation, PhysicsModule, ComputeTool" class hierarchy.

Motivation

Computational physics problems often have a common set of aspects to them that any particular numerical code will have to address. Because these aspects are common to many problems, having a framework already designed and ready to use will not only speed the development of new codes, but also enhance compatibility between codes.

Some of the most common aspects of computational physics problems are: a grid, a clock which tracks the flow of the simulation, and a set of models describing the dynamics of various quantities on the grid. Having a framework that could deal with these basic aspects of the simulation in a common way could provide great value to computational scientists by solving various numerical and class design issues that routinely arise.

turboPy is a newly developed computational framework for rapidly prototyping new physics codes. It is a lightweight physics modeling framework based on the design of the particle-in-cell code turboWAVE. It implements a class (called Simulation) which drives the simulation and manages communication between physics modules, a class (called PhysicsModule) which handles the details of the dynamics of the various parts of the problem, and additional classes, such as a Grid class and a Diagnostic class, to handle ancillary issues that commonly arise.
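To make the hierarchy concrete, here is a minimal sketch of how these pieces fit together in user code. It is loosely modeled on the block-on-spring example from the documentation; treat the dictionary keys, method names, and import paths below as assumptions to check against the version of turboPy you install, not as the definitive API.

from turbopy import Simulation, PhysicsModule


class Decay(PhysicsModule):
    """Toy module: exponentially damp a single scalar once per time step."""

    def __init__(self, owner, input_data):
        # The (owner, input_data) signature follows the documented examples
        super().__init__(owner, input_data)
        self.value = input_data.get("initial_value", 1.0)
        self.rate = input_data.get("rate", 0.1)

    def update(self):
        # Called by the Simulation's main loop on every step
        self.value *= (1.0 - self.rate)


# Register the module with the dynamic factory so the Simulation can find it by name
PhysicsModule.register("Decay", Decay)

problem = {
    "Clock": {"start_time": 0.0, "end_time": 1.0, "num_steps": 10},
    "PhysicsModules": {"Decay": {"initial_value": 1.0, "rate": 0.05}},
    # "Grid", "Tools", and "Diagnostics" sections would be added here as needed
}

sim = Simulation(problem)
sim.run()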

More Resources

Install turboPy

  • Install: pip install turbopy

turboPy development environment

  • Create a conda environment for turboPy: conda env create -f environment.yml
  • Activate: conda activate turbopy
  • Install turboPy in editable mode (i.e. setuptools "develop mode") if you are modifying turboPy itself: pip install -e .
  • Run tests: pytest

If using pylint (which you should!), add variable-rgx=[a-z0-9_]{1,30}$ to your .pylintrc file to allow single-character variable names.
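For example, the corresponding .pylintrc entry (the naming options normally live in the [BASIC] section) looks like this:

[BASIC]
# allow short (including single-character) lowercase variable names such as x, dt, n
variable-rgx=[a-z0-9_]{1,30}$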

Merge requests are encouraged!

Attribution

If you use turboPy for a project, first, you're awesome, thanks! 🎉

Also, we would appreciate it if you would cite turboPy. There are a few ways you can cite this project.

Cite a specific version

If you used turboPy, please cite the specific version of the code that you used. We use Zenodo to create DOIs and archive our code. You can find the DOI for the version that you used on our Zenodo page.

For example, a citation for version v2023.06.09 should look like this:

A. S. Richardson, P. E. Adamson, G. Tang, A. Ostenfeld, G. T. Morgan, C. G. Sun, D. J. Watkins, O. S. Grannis, K. L. Phlips, and S. B. Swanekamp. (2023, June 09). NRL-Plasma-Physics-Division/turbopy: v2023.06.09 (Version v2023.06.09). Zenodo. https://doi.org/10.5281/zenodo.4088189

While bibtex styles vary, the above output can be created by an entry something like this:

@software{turbopy_v2023.06.09,
	author = {A. S. Richardson and P. E. Adamson and G. Tang and A. Ostenfeld and G. T. Morgan and C. G. Sun and D. J. Watkins and O. S. Grannis and K. L. Phlips and S. B. Swanekamp},
	doi = {10.5281/zenodo.4088189},
	month = {June 08},
	publisher = {Zenodo},
	title = {{NRL-Plasma-Physics-Division/turbopy: v2023.06.09}},
	url = {https://doi.org/10.5281/zenodo.4088189},
	version = {v2023.06.09},
	year = 2023,
}

Note that the author names above have been lightly edited to put them in a standard format. The Zenodo page for a specific version of the code will try to infer author names from the GitHub accounts of contributors. That author list would be fine, too, especially as additional GitHub users contribute to turboPy.

Cite the turboPy project

If you are referring to the turboPy project as a whole, rather than to a specific version of the code, you also have the option of using the Zenodo concept DOI for turboPy. This DOI always resolves to the latest version of the code.

The concept DOI for turboPy is 10.5281/zenodo.3973692.

An example citation and bibtex entry for the concept DOI would look something like this:

The turboPy Development Team. NRL-Plasma-Physics-Division/turbopy. Zenodo. https://doi.org/10.5281/zenodo.3973692

@software{turbopy_project,
	author = {The turboPy Development Team},
	doi = {10.5281/zenodo.3973692},
	publisher = {Zenodo},
	title = {{NRL-Plasma-Physics-Division/turbopy}},
	url = {https://doi.org/10.5281/zenodo.3973692},
	year = 2023,
}

Cite the turboPy paper

If you are looking for a paper to cite, rather than source code, this is the reference to use. This citation is appropriate when you are discussing the functionality of turboPy.

A. S. Richardson, D. F. Gordon, S. B. Swanekamp, I. M. Rittersdorf, P. E. Adamson, O. S. Grannis, G. T. Morgan, A. Ostenfeld, K. L. Phlips, C. G. Sun, G. Tang, and D. J. Watkins. TurboPy: A lightweight python framework for computational physics. Computer Physics Communications, 258:107607, January 2021. https://doi.org/10.1016/j.cpc.2020.107607

@article{RICHARDSON2020107607,
	author = {A. S. Richardson and D. F. Gordon and S. B. Swanekamp and I. M. Rittersdorf and P. E. Adamson and O. S. Grannis and G. T. Morgan and A. Ostenfeld and K. L. Phlips and C. G. Sun and G. Tang and D. J. Watkins},
	doi = {10.1016/j.cpc.2020.107607},
	issn = {0010-4655},
	journal = {Computer Physics Communications},
	keywords = {Framework, Physics, Computational physics, Python, Dynamic factory pattern, Resource sharing},
	month = {January},
	pages = {107607},
	title = {{TurboPy}: A lightweight python framework for computational physics},
	url = {http://www.sciencedirect.com/science/article/pii/S0010465520302897},
	volume = {258},
	year = {2021},
}

turbopy's People

Contributors

aostenfeld, arichar6, carolinegsun, garethtmorgan, gracetangg, kphlips, littlewatkins, ndisner, owengrannis, padamson, swanekamp


turbopy's Issues

Add last missing docstrings

Create Documentation
There are a few methods that are missing documentation.

  • __version__.py
  • DynamicFactory._factory_type_name
  • DynamicFactory._registry
  • Grid.create_interpolator.interpval
  • GridDiagnostic.diagnose
  • GridDiagnostic.finalize (side note, I don't think this method is needed)

IGNORE THIS ISSUE

Create Test/s
This is just to check if the template automatically adds the tests label.

Create tests for turbopy

TurboPy currently lacks tests. We need unit tests and integration tests.

  • Add folder for tests
  • Create tests for the code
  • Check test coverage
  • Make sure tests are run as part of CI pipeline

Add a method to the Simulation class to display model

It could be helpful to both developers and users to have a way to quickly visualize the various PhysicsModules and Diagnostics that have been added to a Simulation (an instance of the Simulation class).

This could be achieved by creating a method for the Simulation class that loops over the physics_module list and prints the name (or class name) of each item in the list. And then does something similar for the diagnostics.
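As a rough sketch of that idea (written as a standalone helper rather than a method, with attribute names that should be checked against core.py):

def describe(sim):
    """Print a quick summary of the modules and diagnostics attached to a Simulation."""
    print("PhysicsModules:")
    for module in sim.physics_modules:       # attribute name assumed
        print(f"  - {type(module).__name__}")
    print("Diagnostics:")
    for diagnostic in sim.diagnostics:       # attribute name assumed
        print(f"  - {type(diagnostic).__name__}")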

Perhaps better would be to also add a method to the PhysicsModule and Diagnostic classes, which prints specifics about the module/diagnostic. For example, each module could also print the values of its settings, and a list of variables that it is sharing.

There are many possibilities for more advanced versions of such functionality. Perhaps a tool that takes a Simulation instance and creates a flowchart (use html/js, perhaps flowchart.js) that shows the Modules, Diagnostics, and the connections between the variables.

Thoughts?

Add turboPy paper preprint to README

More of just a quality-of-life thing. I think it would be nice to add a link to the preprint of the turboPy paper to the README instead of having it buried in turboPy-training. It might help with documentation and understanding the code.

Unit test naming convention

Discussion Topic: Should we have a naming convention for unit tests, and if so, what should it be?

I believe we need a naming convention for unit tests. There are many reasons for this:

  • easier to decipher what is happening when a test fails
  • forces test developer to think about exactly what they are testing for when they write a test
  • helps with organization to make sure we are writing all of the tests that we need

See this SO thread for some good thoughts. I like the "Should" naming convention.
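For example, under the "Should" convention a test name spells out the unit under test, the expected behavior, and the condition. The names below are purely illustrative:

def test_simulation_should_raise_error_when_input_dict_is_missing_a_section():
    ...


def test_grid_should_have_expected_number_of_points_when_built_from_input():
    ...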

Finish documenting Grid class

Since I documented the two Grid methods, I was wondering if it would be OK if I finished the rest, since I think I now know how the class works.

Automatically share "public" attributes of PhysicsModules?

Discussion Topic: new method for sharing data between PhysicsModules

It might be interesting to think about alternative ways to share data between PhysicsModules. In particular, there's quite a bit of "boilerplate" code that needs to be written to both "share" and "accept" variables that a developer wants their PhysicsModule to share.

So here's a thought. Maybe we can use “public” and “private” attributes for turboPy. Any attribute (maybe also method?) of a custom PhysicsModule subclass could be automatically shared. I.e. anything not starting with an underscore would be made available to other PhysicsModules. E.g., a var named “electric_field” would be shared, but one named “_electric_field” would not.

This would probably have to be done in conjunction with a rewrite of the "sharing" algorithm that turboPy is currently using.
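A very rough sketch of the underscore rule (a hypothetical helper, not the current sharing code):

def public_resources(module):
    """Collect the attributes of a PhysicsModule that would be shared automatically."""
    return {
        name: value
        for name, value in vars(module).items()
        if not name.startswith("_")
    }

The Simulation could then hand this dictionary to the other PhysicsModules in place of the explicit publish/accept calls.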

What's the deal with commits?

Question
What happens when two different commits are made to the same version of a file? I've been writing some tests for the core.py file, but as of writing this comment, @gracetangg is also writing tests, and I noticed that we are both working from the much shorter version of the test_core.py file that is on the main repo. I want to keep writing tests, but I don't know if that will create conflicts with the pull requests that have not yet been committed to the main turbopy repo.

Change name of `CSVDiagnosticOutput` class

Problem

class CSVDiagnosticOutput:

The name of this class, CSVDiagnosticOutput, is somewhat misleading. It sounds like it should be a "diagnostic", in that it inherits from the Diagnostic base class. This is not the case: it does not inherit from Diagnostic.

Rather, it is a "helper" class, in that it is meant to be used by "real" Diagnostic classes for writing data to files. This is achieved through composition.

Proposed Solution

Refactor the code by changing the name of this class to something more descriptively accurate. Perhaps CSVOutputHelper or CSVOutputUtility?

Note that this is not a "utility" class or "helper" class as those are sometimes defined. Often this means that the class "contains just static methods, it is stateless and cannot be instantiated" (from this link). And this isn't true for this class.

What do you all think?

Create parser for turboWAVE-style input files

One of the nice things about using python for this framework is the flexibility it provides for things like reading various file formats, and interoperating with other codes.

The Simulation class currently accepts problem definitions either as a python dictionary, or a string. The string is interpreted as the name of an input file in toml format.

This "input file" capability can be expanded in many ways. One obvious choice would be to somehow accept input files that use the format that was designed for turboWAVE. Here's the documentation for the turboWAVE input files.

I'm not sure what the best way to read and parse these files would be. I think one promising option is to use the parser that @dfgordon has written for turboWAVE. It is available on NPM.

  • Explore using tree-sitter-turbowave for parsing turboWAVE input files
  • Figure out how to map the object tree obtained from the parser to turboPy objects
  • ...
  • Profit!

Update docstring for `CSVDiagnosticOutput` class

In the docstring, the parameter diagnostic_size is listed as being of type int. It actually should be a tuple, which gives the size of the buffer.

diagnostic_size : int

As can be seen in the append function, the first axis of the buffer is the "time", and the second is the "space" (which can be just a single point).

def append(self, data):
    self.buffer[self.buffer_index, :] = data
    self.buffer_index += 1

So diagnostic_size is type (int, int), where the first is the number of time points, and the second is the number of spatial points.
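For reference, the corrected numpydoc entry might read something like this (exact wording is of course up to whoever picks this up):

diagnostic_size : (int, int)
    Shape of the output buffer: (number of time points, number of spatial points).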

Create integration test based on block-on-spring example app

As described in #35, we are going to be using simple turboPy apps as integration tests.

The block-on-spring example app would be a nice one to use for an integration test.

ToDo:

  • Use "block_on_spring" as the fixture "long_name" and "bos" as the "short_name"
  • Save 'good' outputs in tests/fixtures in a sub-folder using the "block_on_spring" name
  • Create at least one fixture (bos_run) to run the app and generate output for comparison to the 'good' output

Use `xarray.DataArray` class for shared data

Currently, turboPy allows any Python object to be shared. This is of course most useful when the object is mutable, so that different PhysicsModules can all store references to the same (changing) object.

I've recently started using xarray, and its DataArray class is very powerful. See this commit to the example app, where the numpy array used to store the electric field was changed to an xarray.DataArray. This made it easy to add metadata (field name, units, grid coordinate name and units, etc.).

It's worth discussing the possibility of using xarray.DataArray for the objects that PhysicsModules want to share with each other. This would open up a lot of possibilities, since it's easy to add custom metadata. E.g., this could make it easier to address #9 and #10.
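To illustrate the kind of metadata this buys, here is a generic xarray example (not taken from the turboPy code; electric_field is just a stand-in for any shared quantity):

import numpy as np
import xarray as xr

# A 1D field on a grid, carrying its own name, units, and coordinate metadata
x = np.linspace(0.0, 1.0, 8)
electric_field = xr.DataArray(
    np.zeros_like(x),
    dims=["x"],
    coords={"x": x},
    name="electric_field",
    attrs={"units": "V/m", "long_name": "Electric field"},
)

# In-place updates keep the same underlying object, so shared references stay valid
electric_field[:] = 1.0e3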

Potential downsides would be adding another dependency (although xarray is becoming more common), and making it less flexible to share data.

Thoughts?

Create a check for test coverage

So, I'm only just learning how to check test coverage, but I think this is something we should do for our tests of turboPy. See #14

We need to:

  • Figure out how to calculate test coverage (pytest --cov? See the example after this list)
  • Integrate this check into the CI pipeline
  • Add a coverage badge to the readme
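With the pytest-cov plugin, a local coverage check might look like this (the package name is assumed to be turbopy):

pip install pytest-cov
pytest --cov=turbopy --cov-report=term-missing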

New Labels

Please add custom labels below that you think belong on the turbopy repo. An admin will review suggestions and add labels that seem useful.

Initial List:

  • Governance

Create pdf version of the documentation

ReadTheDocs will use Sphinx to automatically create a PDF version of the documentation.

This is great, but some things that look fine in the HTML version might look odd in the PDF version.

This issue is just a reminder to check the PDF version and make sure it looks good.

Add ReadTheDocs link to the readme

Once some documentation is in place, it will be automatically converted to html format and hosted at ReadTheDocs. This link should be added to the readme so that people can easily find the documentation.

Related to issue #1 and PR #12.

Implement baseline Continuous Integration

  • add some unit tests
  • connect to TravisCI
  • verify triggering of automated tests
  • add TravisCI 'build' badge to README
  • add CONTRIBUTING w/info on testing
  • initialize documentation with Sphinx
  • add minimum set of documentation
  • connect to readthedocs.org
  • verify triggering of automated update to docs/docstrings
  • update CONTRIBUTING w/info on docs

Update module docstring in turbopy.diagnostics and document CSVDiagnosticOutput

Contributes to #1

Each module should have a docstring with at least a summary line. Other sections are optional, and should be used in the same order as for documenting functions when they are appropriate:

  1. summary
  2. extended summary
  3. routine listings
  4. see also
  5. notes
  6. references
  7. examples

I think that for diagnostics, items 1 and 2 would be good to include.

Create Documentation

The documentation for turboPy is currently a mess (my bad!) and needs to be cleaned up, created where missing, and completed. This will involve writing docstrings in the numpydoc format.

Docstrings need to be added for:

  • The module
  • The classes
  • The class methods

This should tie into the documentation effort mentioned in #2, and will enable automatic creation of much of the documentation that will end up on readthedocs.

create integration tests from apps

This comment is out of date; see below for the updated proposal.

To test many (if not all) of the features of turbopy, we need to have a range of turbopy apps included as fixtures in tests/fixtures. This will provide the ability to do unit and integration testing covering all of the functionality of turbopy.

I propose the following:

  • unit testing will be defined as testing only one turbopy class or method
  • integration testing will be defined as testing more than one class or method and at least one interaction
  • app fixtures will have a "long_name" and "short_name" (for example "particle_in_field" and "pif", respectively)
  • app fixtures will be included in tests/fixtures in a sub-folder using the "long_name" of the app
  • each app fixture will include a long_name.py file and a long_name.toml file
  • each class definition and Class.register command will be a fixture named short_name_class_name or short_name_class_name_register, respectively.
  • each app will have a short_name_construct_simulation_from_toml fixture to call the construct_simulation_from_toml on the long_name.toml file
  • each app will have a short_name_run fixture to run the simulation (a sketch of this layout is given below)
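A sketch of what one such fixture file might look like, using the hypothetical "particle_in_field"/"pif" example from the list above. Here the Simulation constructor is assumed to accept a toml filename directly (as it currently does), and sim.run() is assumed to be the entry point for the main loop:

# tests/fixtures/particle_in_field/conftest.py
import pytest

from turbopy import Simulation


@pytest.fixture
def pif_construct_simulation_from_toml():
    # build the simulation from the app's toml problem definition
    return Simulation("tests/fixtures/particle_in_field/particle_in_field.toml")


@pytest.fixture
def pif_run(pif_construct_simulation_from_toml):
    sim = pif_construct_simulation_from_toml
    sim.run()
    return sim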

Should we have team-wide issue/PR templates?

It seems like our issue/PR templates could be standardized across all of our repositories. Should we start a "templates" repo and make changes to templates there and then copy them into each team repo when they change?

Reference #41

Should we add explicit documentation about self.E[:] vs. xarray vs. other methods?

Question
In some of the turbopy modules/apps, @arichar6 uses self.E[:] to update the values of the numpy array in place, so that the array keeps the same object id (and other references to it stay valid).

Should there be explicit documentation explaining why that is used, and how you could use other methods such as xarray or create your own? Or would it be better to just include a comment next to those [:] cases?

My thought is that if @Swanekamp has to ask about it, then others will also have questions about it (like me, because I didn't remember what that did in Python).
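The underlying Python behavior is easy to demonstrate outside of turboPy (a generic numpy example; E stands in for any shared field array):

import numpy as np

E = np.zeros(4)
shared_view = E          # another module holding a reference to the same array

E[:] = 1.0               # in-place update: same object, so shared_view sees the change
print(shared_view)       # [1. 1. 1. 1.]

E = 2.0 * np.ones(4)     # rebinding: E now points to a brand new array...
print(shared_view)       # [1. 1. 1. 1.]  ...and shared_view still sees the old one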

Refactor `__init__` for `Simulation` class to not accept a string filename

Currently the constructor for a Simulation object will accept a string as its input parameter. This string is interpreted as a filename for a file in toml format, which is read in and converted to a dictionary.

While this was a quick solution for setting up a simulation based on a problem definition in a toml file, I don't like this solution. It makes it unclear what parameters are needed for constructing a Simulation. It also introduces a dependency into turboPy on a library to parse toml files.

I propose that the code which reads the file be refactored into a helper function, which takes the filename as a string and returns a Simulation instance initialized with the parameters from the toml file. Perhaps something like construct_from_toml. This function could then be moved outside of the core turboPy framework if we want to remove the dependency. It would also give a pattern to follow if we want to make other types of constructors which read different styles of input file (see #8).
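A sketch of the proposed helper (hypothetical code; tomli, or the tomllib module built into Python 3.11+, is one possible parser, and the Simulation constructor is assumed to accept a plain dictionary):

import tomli  # on Python 3.11+, "import tomllib as tomli" works instead

from turbopy import Simulation


def construct_simulation_from_toml(filename):
    """Read a toml problem definition and return a Simulation built from it."""
    with open(filename, "rb") as f:
        input_data = tomli.load(f)
    return Simulation(input_data)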

Add a check to Simulation which confirms that all Modules/Diagnostics have the data they need

I've run into cases where I changed my Simulation by removing a Module, and the code then crashed on the first call to fundamental_cycle because a Diagnostic couldn't find the data it was trying to output to file.

This type of crash would be easier to debug if a check ran before the main simulation loop starts. The checking function could raise a more helpful error if any module did not find the data it needs.

Such a check would have to go through each module/diagnostic and confirm (somehow) that all of the resources that it needs were actually found. This might require a modification to the way that resource sharing works.

Perhaps at the very least something like a resource_found flag could be set in the inspect_resource function.
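One very rough way to express that check (attribute and flag names here are hypothetical sketches, not the current API):

def check_resources(sim):
    """Raise a helpful error before the main loop if any module is missing data."""
    missing = [
        type(item).__name__
        for item in sim.physics_modules + sim.diagnostics   # attribute names assumed
        if not getattr(item, "resource_found", True)
    ]
    if missing:
        raise RuntimeError("Missing resources for: " + ", ".join(missing))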

Update docstring for Simulation Class

This Class "owns" all the physics modules, compute tools, and diagnostics.

The docstring for the Simulation Class needs to be much more detailed. At the very least, I would like it to describe what is expected in the input_data parameter in the constructor. This is crucial to know for anyone that hopes to use turboPy.

Add test of FieldDiagnostic

The main thing left in diagnostics.py that is not tested yet is the FieldDiagnostic class. It might be possible to add this to one of the integration tests, and then have better coverage because of that.

See #14

Add Documentation to Abstract Methods?

Question
Should we be adding any sort of documentation to abstract methods like the "_factory_type_name" and "_registry" in the DynamicFactory class in core.py? I notice that there is no documentation in place and was wondering if that was by design.
