
mud-examples's Introduction


MUD-Examples

Examples for Existence, Uniqueness, and Convergence of Parameter Estimates with Maximal Updated Densities

Authors: Troy Butler & Michael Pilosov

Installation

pip install mud-examples

Quickstart

Generate all of the figures as they are referenced in the paper:

mud_run_all

The above is equivalent to running all of the examples sequentially:

mud_run_inv
mud_run_lin
mud_run_ode
mud_run_pde

Usage

The mud_run_X scripts all call the same primary entrypoint, which you can call with the console script mud_examples.

Here are two examples:

mud_examples --example ode
mud_examples --example lin

and so on. (Once argument parsing is better handled, these may become entrypoints to the modules themselves rather than to a central runner.py, which really only exists to compare several experiments; perhaps it warrants renaming to reflect that.)


mud-examples's Issues

DISCUSS: What is this repo for?

As we move more functionality into mud, I want to have a clear delineation that is communicated in the README about what this repo is for, and which contributions should be in mud vs mud-examples.

One thing that feels "obvious": console scripts such as mud_run_all, which are defined here, should be defined in mud-examples and not in mud.

Notebooks can be contributed here, but mud should avoid them entirely. The idea is that a folder full of figures can be generated simply by installing mud-examples and running a console script that it installs. Much of the mess that accomplishes this will live here, whereas well-tested and well-documented code concerned with core functionality belongs in mud.

[HIGH]: New linear example

  • convergence with respect to repeated observations of the same QoI: a 100-D map with noisy data.
  • repeated observations -> contours will better approximate the noiseless ones
  • the convex hull that encapsulates all corner-point solutions (pairwise intersections of contours) will shrink (down to a point as observations tend to infinity)
  • as such, the variance in the MUD estimates should shrink
  • this is a sequence of QoI maps converging towards the "true" one, and as such we should see convergence in the estimate (see the sketch below)
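
A minimal sketch of the first bullet (dimension, noise level, and names are illustrative, not from the repo): averaging N noisy observations of the same linear QoI shrinks the noise like 1/sqrt(N), so the observed contour level converges to the noiseless one.

    # Hedged sketch: repeated noisy observations of one linear QoI
    # Q(lam) = a @ lam. The gap between the observed contour level
    # data.mean() and the noiseless level a @ lam_true decays like
    # sigma / sqrt(n_obs).
    import numpy as np

    rng = np.random.default_rng(42)
    dim = 100                          # 100-D map, as in the issue
    a = rng.normal(size=dim)           # one row of a linear QoI map
    lam_true = rng.normal(size=dim)
    sigma = 0.1                        # assumed observation noise level

    for n_obs in [1, 10, 100, 1000]:
        data = a @ lam_true + sigma * rng.normal(size=n_obs)
        print(n_obs, abs(data.mean() - a @ lam_true))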

MUD-Paper Checklist

  • New Contour Fig (alpha is addressed)
  • New Convergence Fig
  • "Tikonov" -> "Tikhonov"
  • Section about code

[HIGH]: pde convergence plots in dim > 1

The PDE example should compute the distance to the projection instead of to lam_true when dim > 1 (see the sketch after this list). In dim == 1, keep the distance to truth.

  • plot_experiment_measurements
  • plot_experiment_equipment
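
A hedged sketch of the proposed comparison (the map and points are illustrative): when dim > 1, the data only identify the component of lam_true in the row space of the QoI map, so convergence should be measured against that projection.

    # Hedged sketch: distance to the projection of lam_true onto the
    # row space of a linear QoI map A, rather than to lam_true itself.
    import numpy as np

    def project_onto_rowspace(A, lam):
        """Orthogonal projection of lam onto the row space of A."""
        return np.linalg.pinv(A) @ (A @ lam)  # pinv(A) @ A projects onto row(A)

    A = np.array([[1.0, 1.0]])           # 1 QoI, 2 parameters
    lam_true = np.array([0.7, -0.3])
    lam_proj = project_onto_rowspace(A, lam_true)

    mud_est = np.array([0.21, 0.19])     # stand-in for a computed MUD point
    print(np.linalg.norm(mud_est - lam_proj))  # use this when dim > 1
    print(np.linalg.norm(mud_est - lam_true))  # only meaningful in dim == 1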

[HIGH]: remaining plotting PDE to-dos

  • convergence figures need updating on y-axis limits
  • proper comparison for pde example in the above (distance to projection) (see #36)
  • y-axis limits in convergence plots should work for both pde and ode
  • some semblance of documentation in README
  • the log crashes when all estimates are identical; needs better handling (see the sketch below)
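
A minimal sketch of one fix, assuming the crash comes from taking the log of a zero spread when every estimate coincides (the helper name is hypothetical):

    # safe_log10 is a hypothetical helper: clip exact zeros to a small
    # floor so log-scale convergence plots see finite values instead of
    # -inf (which breaks log-axis limits).
    import numpy as np

    def safe_log10(values, floor=1e-16):
        values = np.asarray(values, dtype=float)
        return np.log10(np.clip(values, floor, None))

    errors = np.array([0.0, 0.0, 0.0])  # identical estimates -> zero spread
    print(safe_log10(errors))           # [-16. -16. -16.]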

[BUGFIX]: inverse problem examples don't suppress plots

Figures for the BIP vs SIP example show on screen instead of just saving quietly. I was rushing and using an environment without a DISPLAY variable, so I didn't catch it.

In the run_inv() method, show only if not saving, and save by default (see the sketch below).
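
A hedged sketch of that pattern (the method name comes from the issue; the body is illustrative):

    import matplotlib.pyplot as plt

    def run_inv(save=True, fname="bip-vs-sip.png"):
        fig, ax = plt.subplots()
        ax.plot([0, 1], [0, 1])  # stand-in for the BIP vs SIP figure
        if save:                 # save by default, nothing on screen
            fig.savefig(fname)
            plt.close(fig)
        else:                    # only show when explicitly not saving
            plt.show()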

[MED]: does fenics-fallback still work?

Does the conda workflow still generate data now that the packaged data is available?

  • need a better system for loading packaged data other than 'data' being in the path (see the sketch after this list)

  • do the conda tests validate the generation of data?

  • as a workaround, can test it with a different prefix, remove the data file, or have a special test that only runs with fenics available.

  • validate that we still can generate data if needed.

  • this would be easier if we could package up the binary for fenics...
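
A hedged sketch of one such system, using importlib.resources (Python 3.9+; the package layout and filename are assumptions):

    # Load packaged data from the installed package itself rather than
    # relying on a 'data' directory being on the path.
    from importlib import resources

    def load_packaged_data(fname="poisson.pkl"):
        # assumes mud_examples ships a data/ directory as package data
        ref = resources.files("mud_examples") / "data" / fname
        with ref.open("rb") as f:
            return f.read()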

Conversation Notes

Contributing

There is a clear expectation of using conda... but where is it documented? Add a how-to-install link, etc.
Idea: create a separate directory where you can dump that material and keep the base README clean.

Running

So, I just run mud_run_all?
Of course. But for those not fully in this yet, show that this happens on the command line with a prompt ($).

I like the comments, such as:

# Don't complain if tests don't hit defensive assertion code:

Is this a bug?

package_dir =

What is the measured performance hit of loading scipy's distributions / dolfin?
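
A minimal way to measure it (the module names are the ones in question; dolfin needs the fenics environment to be installed):

    import importlib
    import time

    for mod in ["scipy.stats", "dolfin"]:
        t0 = time.perf_counter()
        try:
            importlib.import_module(mod)
            print(f"{mod}: {time.perf_counter() - t0:.3f}s to import")
        except ImportError:
            print(f"{mod}: not installed")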

In Poisson
io & pathlib are imported but never used.

I'm not a fan of the directory name helpers; it's vague. Though tools or utils really isn't much better. In MVC (model-view-controller) terms they're controllers, and I've been using that name more. It currently holds:

  • Logging
  • argument parser
  • lazyloader

What happens when I pass '-d f'?
-- Are we verifying our input? (See the sketch below.)
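
A hedged sketch of one answer (the flag comes from the question; the choices are assumptions): let argparse reject bad values up front.

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-d", "--dist",
        choices=["normal", "uniform", "beta"],  # rejects anything else
        default="uniform",
        help="Initial distribution of samples",
    )
    # parser.parse_args(["-d", "f"]) now exits with a clear usage error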

Prefix? :) This is inside baseball.
A 'prefix' for the output directory is, from the user's point of view, just the output directory.

    parser.add_argument('-p', '--prefix',
        dest="prefix",
        help="Output filename prefix (no extension)",
        default='results',
        type=str,
        metavar="STR")

[HIGH]: validate 2D pde examples

  • runs from command line, from anywhere
  • generates its own files, a drop-in replacement for your thesis
  • all figures checked for clarity, filenames
  • runs with normal or beta (uniform = beta(1,1))
    • change formatting for beta so it replaces periods with dashes or something cleaner than dots in directory names
    • for speed, use 100 samples
  • change param_ref to -3 to align with the domain (the poissonProblem class setter needs to change, as do the model and all other references to "3")
  • rethink the current syntax (* below): should it be dimension-specific variants with all the default options?
    • Can I sneak it into a run_with_default method that accomplishes this?
    • Should I do the better thing and migrate argument parsing to the pde.py and ode.py files, with a shared set of options available for inheritance from a method in helpers.py, and then link console scripts to their specific files? This would probably replace the entirety of the current runner.py unless an effort was made to support it (but why?). A sketch follows the command below.

* mud_examples -v --example pde --alt --bayes --num-trials 20 -m 20 100 250 500 -t 0.1 $@
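
A hedged sketch of that migration (module layout and flag names assumed): a shared option set defined once in helpers.py and inherited by each example's own parser via argparse parents.

    import argparse

    def shared_options():
        """Common flags for all examples (would live in helpers.py)."""
        parent = argparse.ArgumentParser(add_help=False)
        parent.add_argument("-v", "--verbose", action="store_true")
        parent.add_argument("--num-trials", type=int, default=20)
        parent.add_argument("-t", "--tolerance", type=float, default=0.1)
        return parent

    # in pde.py, wired directly to its own console script:
    parser = argparse.ArgumentParser(parents=[shared_options()])
    parser.add_argument("-m", "--num-measure", type=int, nargs="+",
                        default=[20, 100, 250, 500])
    args = parser.parse_args([])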

[MED]: PDE Example Refactor

Feature Request

  • should be able to specify a distribution independent of a set of samples used for loading
  • three datasets should be packaged in 2D: uniform, normal with 95% of samples in (0,4), normal with 99% of samples in (0,4)
  • in 1D, same idea.
  • in 5D, just uniform
  • 1000 samples for each, 100 500 sensors maximum
  • stop inferring distribution from filename
  • get rid of prefix handling
  • be able to create MUD-1D (not just MUD-2D-alt).

Must do:

  • default to log-likelihoods in mud; don't compute the evidence for the posterior, since it causes divide-by-zero errors (see the sketch below)
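
A minimal sketch of why log space avoids this (values are illustrative): normalize with logsumexp instead of dividing by a likelihood sum that has underflowed to zero.

    import numpy as np
    from scipy.special import logsumexp

    log_like = np.array([-800.0, -805.0, -790.0])  # exp() underflows to 0.0
    # naive: np.exp(log_like) / np.exp(log_like).sum() -> 0/0
    log_post = log_like - logsumexp(log_like)      # stable in log space
    posterior = np.exp(log_post)
    print(posterior, posterior.sum())              # finite weights, sum to 1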

Nice to haves:

  • decouple runner from pde example, make it entirely independent
  • pde 1D probably can be separated out since it has a different set of figures
  • can we attach the geometry study to the output as well?
  • check out contents of results.pkl and decide if it's worth keeping
  • refactor the experiment-handling methods to be more transparent about what they are doing. Use dictionaries as configs? (See the sketch below.)
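
A hedged sketch of the dictionaries-as-configs idea (the keys are assumptions drawn from options discussed in these issues):

    config = {
        "example": "pde",
        "dim": 2,
        "dist": "uniform",
        "num_samples": 1000,
        "num_measure": [20, 100, 250, 500],
        "save": True,
    }

    def run_experiment(cfg):
        """Stand-in for a refactored, config-driven experiment runner."""
        print(f"running {cfg['example']} in {cfg['dim']}D with "
              f"{cfg['num_samples']} {cfg['dist']} samples")

    run_experiment(config)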
