webppl's Issues

Pass source locations along transforms

It would be useful to pass the source code locations along in the code transforms (CPS, addressing, optimizing, etc) in order to make debugging easier.

Ignore factors outside of inference, rather than throw an error.

Is there a compelling reason to throw an error for factor statements that are hit when the program is running outside of inference? My usual interpretation of this situation has been to silently ignore the factors. Otherwise, the programmer has to tweak the program (e.g., comment out some lines) depending on whether they're just running it forward for testing or running it through an inference coroutine.
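A minimal sketch of the proposed behavior (names hypothetical, not webppl's actual header code): outside of inference, the active coroutine could treat factor as a no-op rather than throwing.

```javascript
// Hypothetical sketch: a default "run forward" coroutine whose factor
// handler ignores the score instead of throwing an error.
var defaultCoroutine = {
  factor: function (score, k) {
    return k(); // silently ignore the factor outside of inference
  }
};

var result;
defaultCoroutine.factor(-2, function () {
  result = 'program continues past the factor';
});
console.log(result);
```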

array access

Array access is a member expression with a computed key. We need to handle this in the CPS transform (currently there's an assert at cps.js line 174 that !computed).

Example:

var a = [1,2]
a[0]
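A rough sketch of what the CPS'd form might look like (hand-written for illustration, not the generated code): the object and the computed key are both evaluated before the lookup result is passed to the continuation.

```javascript
// Hand-CPS'd version of `a[0]`: evaluate the object, then the key,
// then pass the looked-up value to the continuation k.
function cpsMember(objThunk, keyThunk, k) {
  return objThunk(function (obj) {
    return keyThunk(function (key) {
      return k(obj[key]);
    });
  });
}

var out;
cpsMember(
  function (k) { return k([1, 2]); }, // CPS'd `a`
  function (k) { return k(0); },      // CPS'd `0`
  function (v) { out = v; }
);
console.log(out); // 1
```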

Fix particle filter with rejuvenation

I can't test this right now, but it's likely not working correctly, even if current tests pass, for reasons that may be related to my rewrite using cpsForEach. To get started on resolving this issue, add assertions that check that the acceptance probability is always a valid number, not NaN.
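As a starting point, the suggested assertion might look like this (helper name hypothetical):

```javascript
// Hypothetical helper: fail fast if an MH/rejuvenation acceptance
// probability is NaN or outside [0, 1].
function assertValidAcceptProb(p) {
  if (typeof p !== 'number' || isNaN(p) || p < 0 || p > 1) {
    throw new Error('Invalid acceptance probability: ' + p);
  }
  return p;
}

console.log(assertValidAcceptProb(0.5)); // 0.5 passes through

var threw = false;
try { assertValidAcceptProb(NaN); } catch (e) { threw = true; }
console.log(threw); // true: NaN is rejected
```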

Introduction of the 'factor' notion in Chapter 2

I'm not sure what level of readership you're aiming at, but taking probmods.org as a benchmark, it feels like towards the end of Chapter 2 you're assuming a great deal of background knowledge for a casual reader.

In particular, while the introduction of inference through the binomial function feels 'standard' and normal, this paragraph suddenly shows up after it:

"What if we wanted to adjust the above binomial computation to favor executions in which a or b was true? The factor keyword re-weights an execution by adding the given number to the log-probability of that execution. For instance:"

What 'given number' are we adding here? What does a change of -2 correspond to? Why would we want to make that specific inference or assumption? In Church/probmods, when inference was introduced, the conditioning was on a pretty simple notion: 'the sum of the heads is 2 or greater'.
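For what it's worth, the arithmetic behind "adding to the log-probability" can be made concrete: a factor of -2 multiplies an execution's unnormalized probability by exp(-2), roughly 0.135. A small sketch:

```javascript
// factor(w) adds w to an execution's log-probability, i.e. it multiplies
// the execution's (unnormalized) probability by Math.exp(w).
var logProb = Math.log(0.5);    // say the execution had probability 0.5
var reweighted = logProb + (-2); // effect of factor(-2)
console.log(Math.exp(reweighted)); // 0.5 * exp(-2), about 0.068
```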

Performance of setTimeout

We currently use setTimeout to clear the stack; this may be bad for performance. We should clear the stack less often, or switch to a trampoline to avoid stack bloat.
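A minimal trampoline sketch (not webppl's actual implementation): functions return thunks instead of calling their continuations directly, and a driver loop bounces until a non-function value comes back, so the native stack never grows.

```javascript
// Driver loop: keep calling thunks until a plain value is returned.
function trampoline(t) {
  while (typeof t === 'function') { t = t(); }
  return t;
}

// Deep "recursion" expressed as thunks: constant stack depth.
function countDown(n, k) {
  return n === 0 ? k(n) : function () { return countDown(n - 1, k); };
}

var finalValue = trampoline(countDown(1000000, function (x) { return x; }));
console.log(finalValue); // 0, reached without setTimeout or stack overflow
```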

Allow mutation

To allow global variables, add a store variable that gets passed down through function calls after CPS. Basically, this is just the transform foo(a, k, args) => foo(s, a, k, args). We also need to add this argument to the primitive functions defined in the header (and should make that more flexible). In the header.js inference code, we need to copy the store at the places where continuations are copied.

To allow local variables, we need to extend the store object to push an environment level at function calls and pop it at continuation calls.
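A sketch of the store-threading idea on a toy function (hand-written; the real transform would be automatic, and the names here are hypothetical):

```javascript
// Before CPS + store passing:  var foo = function (a) { return a + 1; }
// After: every call threads a store s as the first argument, and
// continuations receive the (possibly updated) store back.
function foo(s, a, k) {
  return k(s, a + 1);
}

// Writing a global copies the store, so continuations captured earlier
// (e.g. by MH) keep their own snapshot.
function globalSet(s, name, val, k) {
  var s2 = Object.assign({}, s);
  s2[name] = val;
  return k(s2);
}

var out;
globalSet({}, 'x', 7, function (s2) {
  foo(s2, s2.x, function (s3, v) { out = v; });
});
console.log(out); // 8
```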

Scores of constructed ERP's

For ERPs constructed from marginalization functions (EnumerateLikelyFirst, for example), while the 'support' method of the object returned by the marginalization function returns the appropriate set, it seems that all of the 'score' return values are '-Infinity'.

Is it expected that the 'score' should simply return the negative log of the values displayed in the "Creating distribution:" output?

More concretely, should the following expression return the negative log probability of the first element?

var inference = EnumerateLikelyFirst(function,num_queue)
console.log(inference.score(inference.support()[0]))

My current attempts to do so return the proper support, but return "-Infinity" for the score, even though inference does not lead to a zero result.

Make 'arguments' work in all contexts

Currently, the varargs transform happens before trampolining (so closures introduced there are handled correctly), but after cps (so if cps introduces a closure boundary that separates an occurrence of arguments from the top level of its enclosing function, arguments won't work correctly). See the example in e22b7e2.
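The underlying JavaScript pitfall, for reference: `arguments` always refers to the innermost enclosing function, so a closure boundary introduced by cps changes what it means. Capturing it in a variable before the boundary sidesteps the problem:

```javascript
// Inside the inner closure, a bare `arguments` would belong to the inner
// function -- analogous to what happens when cps introduces a closure.
// Capturing it first preserves the outer call's arguments.
function outer(a, b) {
  var args = arguments; // capture before any closure boundary
  return function () {
    return args.length; // the outer call's arguments, not the inner's
  };
}

var len = outer(10, 20)();
console.log(len); // 2
```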

Small models don't trampoline (was: Opaque requirements for MH)

MH(function() {
  return flip(0.9)
}, 4000);

fails with an error:

/Users/long/webppl/src/header.js:841
  var fw = -Math.log(oldTrace.length);
           ^
RangeError: Maximum call stack size exceeded
    at Array.map (native)
    at mhAcceptProb (/Users/long/webppl/src/header.js:842:26)
    at MH.exit (/Users/long/webppl/src/header.js:856:22)
    at exit (/Users/long/webppl/src/header.js:493:13)
    at MH.sample (/Users/long/webppl/src/header.js:824:3)
    at MH.exit (/Users/long/webppl/src/header.js:881:15)
    at exit (/Users/long/webppl/src/header.js:493:13)
    at MH.sample (/Users/long/webppl/src/header.js:824:3)
    at MH.exit (/Users/long/webppl/src/header.js:881:15)
    at exit (/Users/long/webppl/src/header.js:493:13)

(I am on the latest dev version, 8e64540)

But this modified version where we name the ERP works:

var numSamps = 4000;
MH(function() {
  var x = flip(0.9);
  return x;
}, numSamps);
* Program return value:

ERP:
    true: 0.903
    false: 0.097

Why does writing it the first way result in a stack overflow?

Support primitive functions in more contexts

This works:

var exp = function(x){return Math.exp(x)}
var xs = [1, 2, 3];
map(exp, xs)

This doesn't:

var xs = [1, 2, 3];
map(Math.exp, xs)

It should be possible to use primitive functions in all contexts where compound (webppl) functions are supported.

Don't resample last factor in particle filter

Currently, particle filtering resamples at the last factor just like at any other factor. We could reduce variance of the estimated distribution a little by not resampling at the last factor, and instead taking into account the particle weights when we build the distribution that the filter returns. This makes the particle filter correspond more directly to an importance sampler when only one factor statement is present.
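A sketch of the proposed final step (helper name hypothetical): instead of resampling at the last factor, normalize the particle weights and fold them directly into the returned histogram.

```javascript
// Build a marginal distribution from weighted particles. Log-weights are
// shifted by their max before exponentiating, for numerical stability.
function weightedMarginal(particles) {
  var maxLw = Math.max.apply(null, particles.map(function (p) { return p.logWeight; }));
  var hist = {};
  var total = 0;
  particles.forEach(function (p) {
    var w = Math.exp(p.logWeight - maxLw);
    hist[p.value] = (hist[p.value] || 0) + w;
    total += w;
  });
  Object.keys(hist).forEach(function (v) { hist[v] /= total; });
  return hist;
}

var dist = weightedMarginal([
  { value: 'a', logWeight: 0 },
  { value: 'b', logWeight: Math.log(3) }
]);
console.log(dist); // a gets weight ~0.25, b gets ~0.75
```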

Split header into multiple files

header.js is getting too long. It would be nice to split it up into several files (e.g. ERP, Enumeration, Helpers, etc.). Doing this naively using require doesn't work right, because the coroutine variable needs to be in scope for all of these files. There is probably a good way to do this...
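One possible shape (simulated here in a single file, module layout hypothetical): move the free `coroutine` variable into a small shared state object that every file requires, so reassignments are visible everywhere.

```javascript
// Single-file simulation of a shared-state module: both "files" close
// over the same object, so reassigning state.coroutine is visible to all.
var state = { coroutine: null }; // would live in e.g. runtime-state.js

var fileA = function () { state.coroutine = { name: 'Enumerate' }; };
var fileB = function () { return state.coroutine.name; };

fileA();
console.log(fileB()); // 'Enumerate'
```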

IfStatement

It would be convenient (though not really necessary) to be able to use if(..){...}else{...} statements in WebPPL programs. CPS for these will be pretty similar to the ternary operator (ConditionalExpression).
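For reference, the CPS rule would mirror ConditionalExpression: CPS the test, then branch into the CPS'd consequent or alternate, both of which finish by calling the same continuation. A hand-written sketch:

```javascript
// Hand-CPS'd `if (test) { ... } else { ... }`: the test value selects
// which CPS'd branch runs; both branches share the continuation k.
function cpsIf(testK, consK, altK, k) {
  return testK(function (t) {
    return t ? consK(k) : altK(k);
  });
}

var out;
cpsIf(
  function (k) { return k(true); }, // CPS'd test
  function (k) { return k(1); },    // CPS'd consequent
  function (k) { return k(2); },    // CPS'd alternate
  function (v) { out = v; }
);
console.log(out); // 1
```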

MH proposals from distribution with custom params

Drift proposals (i.e. proposals that stay close to the current value) are often much more efficient for MH. A fairly general way to do this is to propose from the same ERP that is being used, but with different parameters that make the mean (close to) the current value.

So we can add a general interface that takes a computedParams(curVal, params) function and returns some appropriate new params. For instance, for gaussian it could use curVal as the mean:
function computedParams(curVal, params) {return [curVal, params[1]]}

We need to add the appropriate forward/backward scores for these proposals to the MH acceptance ratio.
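A sketch of that score bookkeeping for the Gaussian case (log-density written out by hand for the example): with a drift kernel whose mean is the current value and whose sigma is fixed, the forward and backward proposal scores are equal and cancel in the MH ratio.

```javascript
// Gaussian log-density, spelled out for the sketch.
function gaussianScore(x, mu, sigma) {
  return -0.5 * Math.log(2 * Math.PI) - Math.log(sigma)
       - 0.5 * Math.pow((x - mu) / sigma, 2);
}

// computedParams for a drift proposal: reuse the current value as the mean.
function computedParams(curVal, params) { return [curVal, params[1]]; }

var curVal = 1.3, newVal = 1.7, params = [0, 0.5];
var fw = gaussianScore(newVal, computedParams(curVal, params)[0], params[1]);
var bw = gaussianScore(curVal, computedParams(newVal, params)[0], params[1]);
console.log(fw === bw); // true: symmetric kernel, scores cancel
```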

Add normalizationConstant attribute to (some) erps

It would be useful to be able to access the normalization constant for ERPs generated by inference algorithms. For enumeration, this would be exact, whereas for particle filtering, it is an estimate.
(We are already computing these numbers; we just don't have a good way of accessing them.)
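For the particle filter case, the estimate in question is the average of the particle weights, accumulated in log space. A standalone sketch of the computation:

```javascript
// Log of the mean of exp(logWeights), computed stably via the max-shift
// trick: the standard particle-filter normalization-constant estimate.
function logMeanExp(logWeights) {
  var max = Math.max.apply(null, logWeights);
  var sum = logWeights.reduce(function (acc, lw) {
    return acc + Math.exp(lw - max);
  }, 0);
  return max + Math.log(sum / logWeights.length);
}

// Two particles with weights 0.2 and 0.6: the estimate is log(0.4).
var z = logMeanExp([Math.log(0.2), Math.log(0.6)]);
console.log(z);
```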

ERPs without support

Making objects using random primitives that don't have a support crashes the program with:

/<path-to-webppl>/src/header.js:339
    throw "Enumerate can only be used with ERPs that have support function.";
    ^
Enumerate can only be used with ERPs that have support function.

The code is taken from http://dippl.org/examples/semanticparsing.html
(the last code block in "The Parser", just before "Incremental Word Building").

Setting makeObj to makeObj1 or makeObj2 works fine, but makeObj3 does not, since
uniform does not specify a support. (The same issue occurs with gaussian.)

var neg = function(Q){ return function(x){return !Q(x)} }

// original version
var makeObj1 = function(name) {
  return {name: name,
          blond: flip(0.5),
          nice: flip(0.5)}
}

// augmented with position flags
var makeObj2 = function(name) {
  return {name: name,
          pos: [flip(0.5), flip(0.5)],
          blond: flip(0.5),
          nice: flip(0.5)}
}

// augmented with position coordinates
var makeObj3 = function(name) {
  return {name: name,
          pos: [uniform(0.5), uniform(0.5)],
          blond: flip(0.5),
          nice: flip(0.5)}
}

var makeObj = makeObj3

var lexical_meaning = function(word, world) {

  var wordMeanings = {

    "blond" : {
      sem: function(obj){return obj.blond},
      syn: {dir:'L', int:'NP', out:'S'} },

    "nice" : {
      sem: function(obj){return obj.nice},
      syn: {dir:'L', int:'NP', out:'S'} },

    "Bob" : {
      sem: find(world, function(obj){return obj.name=="Bob"}),
      syn: 'NP' },

    "some" : {
      sem: function(P){
        return function(Q){return filter(filter(world, P), Q).length>0}},
      syn: {dir:'R',
            int:{dir:'L', int:'NP', out:'S'},
            out:{dir:'R',
                 int:{dir:'L', int:'NP', out:'S'},
                 out:'S'}} },

    "all" : {
      sem: function(P){
        return function(Q){return filter(filter(world, P), neg(Q)).length==0}},
      syn: {dir:'R',
            int:{dir:'L', int:'NP', out:'S'},
            out:{dir:'R',
                 int:{dir:'L', int:'NP', out:'S'},
                 out:'S'}} }
  }

  var meaning = wordMeanings[word];
  return meaning == undefined?{sem: undefined, syn: ''}:meaning;
}

// The syntaxMatch function is a simple recursion to
// check if two syntactic types are equal.
var syntaxMatch = function(s,t) {
  return !s.hasOwnProperty('dir') ? s==t :
  s.dir==t.dir & syntaxMatch(s.int,t.int) & syntaxMatch(s.out,t.out)
}

//make a list of the indexes that can (syntactically) apply.
var canApply = function(meanings,i) {
  if(i==meanings.length){
    return []
  }
  var s = meanings[i].syn
  if (s.hasOwnProperty('dir')){ //a functor
    var a = ((s.dir == 'L')?syntaxMatch(s.int, meanings[i-1].syn):false) |
            ((s.dir == 'R')?syntaxMatch(s.int, meanings[i+1].syn):false)
    if(a){return [i].concat(canApply(meanings,i+1))}
  }
  return canApply(meanings,i+1)
}

var combine_meaning = function(meanings) {
  var possibleComb = canApply(meanings,0)
  display(possibleComb)
  var i = possibleComb[randomInteger(possibleComb.length)]
  var s = meanings[i].syn
  if (s.dir == 'L') {
    var f = meanings[i].sem
    var a = meanings[i-1].sem
    var newmeaning = {sem: f(a), syn: s.out}
    return meanings.slice(0,i-1).concat([newmeaning]).concat(meanings.slice(i+1))
  }
  if (s.dir == 'R') {
    var f = meanings[i].sem
    var a = meanings[i+1].sem
    var newmeaning = {sem: f(a), syn: s.out}
    return meanings.slice(0,i).concat([newmeaning]).concat(meanings.slice(i+2))
  }
}

var combine_meanings = function(meanings){
  return meanings.length==1 ? meanings[0].sem : combine_meanings(combine_meaning(meanings))
}

var worldPrior = function(objs) {
  return [makeObj("Bob"), makeObj("Bill"), makeObj("Alice")]
}

var meaning = function(utterance, world) {
  return combine_meanings(
    filter(map(utterance.split(" "),
               function(w){return lexical_meaning(w, world)}),
           function(m){return !(m.sem==undefined)}))
}

var literalListener = function(utterance) {
  Enumerate(function(){
    var world = worldPrior()
    var m = meaning(utterance, world)
    factor(m?0:-Infinity)
    return world
  }, 100)
}

literalListener("all blond people are nice")

.gitignore node_modules/*

Someone should check that none of the node modules currently tracked by git have been modified to work with webppl; if that is the case, we can add node_modules/* to the .gitignore file.

Bonus: remove reference to node_modules/* from all git commits using filter-branch

Add call/cc (etc)

Add call/cc or delimited continuations to webppl. This should be pretty easy, since the cps transform should just turn call/cc into something that passes on the continuation...
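Indeed, once the program is in CPS, call/cc is almost trivial: the current continuation k is already an explicit argument, so we just reify it as an ordinary function value and hand it to f. A sketch (names hypothetical):

```javascript
// After CPS, call/cc just wraps the current continuation k as a callable
// escape procedure and passes it to f along with the normal return path.
function callcc(f, k) {
  var escape = function (v) { return k(v); };
  return f(escape, k);
}

var out;
callcc(function (escape, k) {
  return escape(42); // jump straight out with 42
  // return k(0);    // (the normal return path would pass 0 instead)
}, function (v) { out = v; });
console.log(out); // 42
```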

Use this to explore coroutines and, especially, stochastic futures (see Ritchie et al., under review). Make a cool interactive procedural modeling control demo using this.

Cleanup examples folder

Add simpler examples and organize the current examples with better documentation so that they're more useful to newcomers.

Variational inference

Basic black box variational inference (see http://arxiv.org/pdf/1301.1299v1.pdf and http://arxiv.org/pdf/1401.0118v1.pdf) is in the codebase.

It needs to be tested and benchmarked.

There are several major performance improvements described in the papers that need to be implemented, most importantly Rao-Blackwellization of the gradient estimates. This may require a flow analysis to determine the Markov blanket of each random choice.

Once everything is working, try variationally-guided PF: in the particle filter, sample new choices from the variational distribution, instead of the prior. Or possibly mix / interpolate prior and variational distribution. The idea is that variational gets you an importance sampler closer to the posterior modes, while PF helps capture the joint structure ignored by variational.

sampleWithFactor not working

The sampleWithFactor function is broken in both master and dev branches. This means the dippl.org examples using that function are currently broken as well.

Experiment with inference coroutines in webppl.

The fact that the inference (marginalization) co-routines have to be written in CPS to play nice with the transformed program code makes them hard to read and maintain. One possibility may be to write the basic control logic in WebPPL itself, so that it gets transformed by the CPS pass.

What hooks would be needed to allow the inference procedures to do what they need? They need to set the coroutine and other state, and to access the store, address, etc. Other stuff?

We could experiment with this in a branch for the simplest algorithms....

FunctionDeclaration

I often find myself wanting to declare functions with function foo() {...} instead of with var foo = function() {...}.

This is pretty minor, but probably also pretty easy to add?
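One easy route might be a desugaring pass before CPS, rewriting the declaration into the form webppl already supports:

```javascript
// A FunctionDeclaration...
//   function foo(x) { return x + 1; }
// ...can be rewritten to the already-supported var + FunctionExpression form:
var foo = function (x) { return x + 1; };
console.log(foo(41)); // 42
```

(One caveat: a var assignment does not hoist the way a declaration does, so mutually recursive declarations would need some care.)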

Web interface disabled following error

The web interface sometimes becomes disabled after running code that produces an error. The problem is fixed by restarting the browser or switching to a different one.

particle filters with variable number of sampling/factors in models

A model that involves incremental sampling and factoring of elements, where the number of elements and factors itself can vary across particles, results in the construction of a marginal distribution that has undefined as an element of its support.

The example below illustrates this problem. When the variable numl in inner is set to a constant, the code runs as expected. However, replacing the number with a call to a stochastic function that varies the number of elements produced causes the marginal distribution to have undefined in its support.

var ds = [0,1,2,3,4,5,6,7,8,9]

var constructL = function(n,m,_l,pf) {
  var _l = _l == undefined ? [] : _l
  var pf = pf == undefined ?  0 : pf
  if (n == 0) {
    factor(-pf)
    return _l
  } else {
    var e = ds[randomInteger(ds.length)]
    var nl = _l.concat([e])
    var nf = m(nl) ? 0 : -30
    factor(nf - pf)
    return constructL(n-1,m,nl,nf)
  }
}

var inner = function(m){
  ParticleFilter(function(){
    var numl = 4 // binomial(0.7,4)+1
    var nl = constructL(numl,m)
    factor(m(nl) ? 0 : -Infinity)
    return nl
  }, 10)
}

var evenM = function(l) {
  // checks if every element in the list is even
  var f = function(a,b){ return a & b }
  var g = function(v) {return v%2 == 0}
  return reduce(function(a,b) { return f(g(a),b) }, g(l[l.length-1]), l.slice(0,-1))
}
var d = inner(evenM)
console.log(d.support([]))
print(d)

'flip' not working as expected given text?

In chapter 2 it says "There are a set of pre-defined ERPs including bernoulliERP, randomIntegerERP, etc. (Since sample(bernoulliERP, [p]) is very common it is aliased to flip(p)."

This leads me as a reader to expect you can write:

sample(flip(0.5))

Which won't work, since you'll just be trying to sample 'true'.

But the following also doesn't work, at least not on the web implementation:

sample(flip, [0.5])
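To spell out the calling conventions the text seems to intend (a simplified mock, not webppl's actual header): flip(p) already performs the sample, so wrapping it in sample again cannot work.

```javascript
// Mock ERP machinery to illustrate the intended usage. In this reading,
// sample(bernoulliERP, [p]) draws a boolean and flip(p) is sugar for it;
// sample(flip(0.5)) fails because flip has already returned a value.
var bernoulliERP = {
  sample: function (params) { return Math.random() < params[0]; }
};
function sample(erp, params) { return erp.sample(params); }
var flip = function (p) { return sample(bernoulliERP, [p]); };

console.log(typeof flip(0.5));                   // 'boolean'
console.log(typeof sample(bernoulliERP, [0.5])); // 'boolean'
```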
