
pymc-resources's People

Contributors

agustinaarroyuelo, alexandorra, aloctavodia, arokem, austinrochford, benslack19, canyon289, ccaprani, colcarroll, elizavetasemenova, fonnesbeck, gladomat, jankawis, joannadiong, junpenglao, ksachdeva, madian44, marcogorelli, napsternxg, nigeljyng, noblekennamer, oscarbranson, ricardov94, rosgori, severinhatt, tagomatech, tomkealy, twiecki, uasek, yahrmason


pymc-resources's Issues

resource in environment.yml

Could it help to add - "git+git://github.com/arviz-devs/arviz.git@master" in the pip section? I got an "arviz not installed" error when running a notebook. Awesome work, by the way!

Chapter 9 book != Chp_09.ipynb

It looks like Chapter 10 of the book == Chp_09.ipynb, Chapter 11 of the book == Chp_10.ipynb, and so on.

Maybe more notebooks are misaligned, but I was just looking for the MCMC material from Chapter 9.

However, Chapter 9 of the book != Chp_08.ipynb either: its code 8.1 actually matches book chapter 8.

Outline of Rethinking2 chapters

As discussed with @aloctavodia, here is an outline of chapters we're porting to PyMC3 for Rethinking 2. This way we can keep track of where we are -- let's comment below when we start porting a new chapter, and check the box when the chapter is merged.

  • Chapter 2 -- Small Worlds and Large Worlds
  • Chapter 3 -- Sampling the Imaginary
  • Chapter 4 -- Geocentric Models
  • Chapter 5 -- The Many Variables & The Spurious Waffles
  • Chapter 6 -- The Haunted DAG & The Causal Terror
  • Chapter 7 -- Ulysses' Compass (@oscarbranson)
  • Chapter 8 -- Conditional Manatees (@oscarbranson)
  • Chapter 9 -- Markov Chain Monte Carlo (@nilichen)
  • Chapter 10 -- Big Entropy and the Generalized Linear Model (@nilichen)
  • Chapter 11 -- God Spiked the Integers
  • Chapter 12 -- Monsters and Mixtures
  • Chapter 13 -- Models with Memory (@AlexAndorra)
  • Chapter 14 -- Adventures in Covariance (@AlexAndorra)
  • Chapter 15 -- Missing Data and Other Opportunities (open to claim)
  • Chapter 16 -- Generalized Linear Madness (open to claim)

BDA3 Chapter 3 discussion of Effective sample size is flawed (needs update)

Problem

The discussion of number of effective samples/effective sample size in this notebook is unclear. I could possibly help clarify it, but I don't know enough to write it all myself.

How to replicate

I was running through the notebook for Chapter 3 and examining the "Example. Pre-election polling" section. It uses a PyMC3 API function that is no longer available to compute n_eff. I tried changing this to the corresponding ArviZ function, az.stats.ess, and got the anomalous results in the following screen-grab.

(screenshot of the anomalous ESS output)

Suggested Improvements

  1. It would be very helpful to explain what is going on here. I assume that the number of effective samples aligns with the notion of ess_bulk; the discussion should be tweaked to make this point. I initially assumed that the return value of az.stats.ess() would correspond to the mean ESS rather than the "bulk" ESS (see the sketch after this list).

  2. Given that the notion of ESS is nowhere discussed in Chapter Three of BDA, it would be a good idea to explain it here, although I suppose the reader is able to get the "take home" message that the ESS is low and this is diagnostic of bad sampling. But we don't even say that the ESS is a diagnostic measure.

  3. Another thing is that the textual discussion uses n_eff and "number of effective samples" instead of "effective sample size," which is confusing.
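For reference, a minimal sketch of how the bulk and mean ESS differ in ArviZ (the dataset is illustrative; any InferenceData object works):

import arviz as az

idata = az.load_arviz_data("centered_eight")                # illustrative example data
ess_bulk = az.ess(idata, var_names=["mu"], method="bulk")   # the default, corresponds to ess_bulk
ess_mean = az.ess(idata, var_names=["mu"], method="mean")   # ESS relevant for estimating the mean
print(float(ess_bulk["mu"]), float(ess_mean["mu"]))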

Rethinking_2/Chp_08.ipynb - stats.credible_interval is not a valid rc parameter

FYI, when running Rethinking_2/Chp_08.ipynb I get the following error while trying to set ArviZ configuration parameters:

%config InlineBackend.figure_format = 'retina'
az.style.use('arviz-darkgrid')
az.rcParams['stats.credible_interval'] = 0.89  # set credible interval for entire notebook
az.rcParams['stats.information_criterion'] = 'waic'  # set information criterion to use in `compare`
az.rcParams['stats.ic_scale'] = 'deviance'    # set information criterion scale
np.random.seed(0)
KeyError: 'stats.credible_interval is not a valid rc parameter (see rcParams.keys() for a list of valid parameters)'

I'm using these package versions:

arviz     0.9.0
pandas    1.1.0
numpy     1.18.5
pymc3     3.9.3
CPython 3.8.1
IPython 7.17.0
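For reference, whether 'stats.credible_interval' is accepted depends on the ArviZ version; in recent releases the interval setting was renamed, so the equivalent configuration would be roughly (a sketch, assuming ArviZ >= 0.11):

import arviz as az

az.rcParams["stats.hdi_prob"] = 0.89                 # replaces 'stats.credible_interval'
az.rcParams["stats.information_criterion"] = "waic"
az.rcParams["stats.ic_scale"] = "deviance"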

Rethinking2 Chapter 11 Multinomial model 11.3, different prior for beta parameter is used

Just wanted to mention that the first model in section 11.3 is slightly different from the one described in the book (I am not talking about the justification for using a different category as the pivot). In the original Stan model the beta coefficient has a lower bound of zero, which is what forces it to be positive.

I think the original model has other issues (such as ignoring the numerical information about the income of the pivot category), but that's not related to the Stan -> PyMC translation.
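For reference, a minimal sketch of how the Stan model's lower=0 bound on the coefficient could be mirrored in PyMC3 (variable names and prior widths are illustrative, not the notebook's exact choices):

import pymc3 as pm

with pm.Model():
    a = pm.Normal("a", 0.0, 1.0, shape=2)                        # intercepts for the non-pivot categories
    b = pm.Bound(pm.Normal, lower=0.0)("b", mu=0.0, sigma=0.5)   # positive coefficient, as in the Stan model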

Cheers

(de)centralize environments

Currently we have a global environment file, and some of the books/folders have their own environments (BCM, BDA3, and Rethinking) while others do not (BSM, Rethinking 2).

We should check that the requirements are correct and add the watermark package (see the sketch below).
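For reference, the usual watermark cell would look roughly like this (a sketch; the flag set is a common choice, not a project requirement):

%load_ext watermark
%watermark -n -u -v -iv -w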

Rethinking 2 - Chapter 4 - Code 4.61 Plots not on same canvas

The arviz plots here:

plt.scatter(d2.weight, d2.height)
az.plot_hpd(weight_seq, mu_pred.T)
az.plot_hpd(d2.weight, height_pred["height"])
plt.plot(weight_seq, mu_mean, "k")
plt.xlabel("weight")
plt.ylabel("height")
plt.xlim(d2.weight.min(), d2.weight.max());

do not output on the same figure as in your notebook. In my notebook, every time I call az.plot_hpd (changed to az.plot_hdi), it goes into another, separate plot! How do I show everything on top of each other as you have?
(screenshot of the separate plots)
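A possible fix (a sketch, reusing the notebook's variable names, which are assumed to be defined): create a single Axes and pass it to every ArviZ call via the ax argument so all layers land on the same figure.

import arviz as az
import matplotlib.pyplot as plt

_, ax = plt.subplots()
ax.scatter(d2.weight, d2.height)
az.plot_hdi(weight_seq, mu_pred.T, ax=ax)
az.plot_hdi(d2.weight, height_pred["height"], ax=ax)
ax.plot(weight_seq, mu_mean, "k")
ax.set_xlabel("weight")
ax.set_ylabel("height")
ax.set_xlim(d2.weight.min(), d2.weight.max())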

Splines for 2nd edition

In the second edition, chapter 4 covers splines, which were not in the 1st edition.

Currently I am doing:


num_knots = 5
knot_list = np.quantile(df2.year, q=np.linspace(0,1, num=num_knots))
knot_list


# Construct our B matrix (example from
# https://patsy.readthedocs.io/en/latest/spline-regression.html#general-b-splines)
B = patsy.dmatrix("bs(year, knots=knots, degree=3, include_intercept=True)",
                   data={'year': df2.year.values,
                         'knots': knot_list[1:-1]})

and then

np.random.seed(123)
with pm.Model() as m4_7:
    a = pm.Normal('a', 6, 10)
    w = pm.MvNormal('w', mu=np.zeros((1, B.shape[1])), cov=np.eye(B.shape[1]), shape=(1, B.shape[1]))
    #w = pm.MvNormal('w', np.zeros(B.shape[1]), np.eye(B.shape[1]))#, shape=B.shape[1])
    #w = pm.Normal('w', mu=0, sd=1)
    mu = a + pm.math.dot(np.asarray(B), w.T)
    sigma = pm.Exponential('sd', 1)
    T = pm.Normal('T', mu, sigma, observed=dfs.temp)
    trace = pm.sample(draws=1000, tune=1000, discard_tuned_samples=True)

However, this is very, very slow. Is there a quicker way to do this?
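For what it's worth, one common speed-up (a sketch, assuming the B matrix built above and a df2.temp column): with an identity covariance, the MvNormal prior on w is equivalent to independent Normal(0, 1) priors, which sample much faster and avoid the (1, k)-shaped weight matrix.

import numpy as np
import pymc3 as pm

with pm.Model() as m4_7_alt:
    a = pm.Normal('a', 6.0, 10.0)
    w = pm.Normal('w', mu=0.0, sd=1.0, shape=B.shape[1])   # independent weights, same as MvNormal with identity cov
    mu = a + pm.math.dot(np.asarray(B), w)
    sigma = pm.Exponential('sigma', 1.0)
    T = pm.Normal('T', mu, sigma, observed=df2.temp)
    trace = pm.sample(draws=1000, tune=1000)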

Resources/Rethinking_2/Chp_04.ipynb


block 4:
sns.distplot(x[1:4, :], kde_kws={"bw": 0.01}, ax=ax[0], hist=False)
sns.distplot(x[1:8, :], kde_kws={"bw": 0.01}, ax=ax[1], hist=False)
sns.distplot(x[1:, :], kde_kws={"bw": 0.01}, ax=ax[2], hist=False)

should be
sns.distplot(x[4, :], kde_kws={"bw": 0.01}, ax=ax[0], hist=False)
sns.distplot(x[8, :], kde_kws={"bw": 0.01}, ax=ax[1], hist=False)
sns.distplot(x[16, :], kde_kws={"bw": 0.01}, ax=ax[2], hist=False)

to show where the random walk ends up after 4, 8, and 16 steps, rather than showing every step. As it is, the graphs look too heavy in the center.

Chapter 13 posterior covariance in Code 13.15

This issue is in regard to the code for the book Statistical Rethinking: A Bayesian Course.

The calculation of the posterior covariance in Code 13.15 does not agree with the calculation shown in the book (page 396). This is because in the current implementation L_chol is calculated as an upper triangular matrix and not a lower one as in the model (m_13_1) specification. The calculation might be clearer if it's done using pm.expand_packed_triangular (as used with m_13_1).

I've run the following code in place of Code 13.15 in the notebook to compare all three (book, current Code 13.15 implementation, and pm.expand_packed_triangular):

plt.rcParams.update({'font.size': 14})
# from the model
chol_model = pm.expand_packed_triangular(2, trace_13_1['chol_cov'].mean(0),lower=True).eval()
cov_model = np.dot(chol_model, chol_model.T)
# from the book
sa_est = trace_13_1['sigma_cafe'].mean(0)[0]
sb_est = trace_13_1['sigma_cafe'].mean(0)[1]
rho_est = trace_13_1['Rho'].mean()
cov_ab = sa_est * sb_est * rho_est
cov_book = np.array([sa_est**2, cov_ab, cov_ab, sb_est**2]).reshape(2,2)
# compute posterior mean bivariate Gaussian
Mu_est = trace_13_1['ab'].mean(axis=0)
Chol_cov = trace_13_1['chol_cov'].mean(axis=0)
L_chol = np.zeros((2, 2))
L_chol[np.triu_indices(2, 0)] = Chol_cov
Sigma_est = np.dot(L_chol, L_chol.T)
# draw contours
_, ax = plt.subplots(1, 1, figsize=(10, 10))
for ls, cov_ in zip(['-','--',':'],[Sigma_est, cov_model, cov_book]):
    Gauss2d(Mu_est, np.asarray(cov_), [0.1, 0.3, 0.5, 0.8, 0.99], ax=ax, ls=ls)
ax.scatter(a1, b1)
ax.scatter(a2, b2, 
           facecolors='none', edgecolors='k', lw=1)
ax.plot([a1, a2], [b1, b2], 'k-', alpha=.5)
ax.set_xlabel('intercept', fontsize=14)
ax.set_ylabel('slope', fontsize=14)
ax.set_xlim(1.5, 6.1)
ax.set_ylim(-2.5, 0);

(comparison plot of the three sets of contours)

Note that I modified the code in Gauss2d to accept an argument ls to control the linestyle of the ellipse contours:

def Gauss2d(mu, cov, ci, ax=None, ls='-'):
    """Copied from statsmodel"""
    if ax is None:
        _, ax = plt.subplots(figsize=(6, 6))

    v_, w = np.linalg.eigh(cov)
    u = w[0] / np.linalg.norm(w[0])
    angle = np.arctan(u[1]/u[0])
    angle = 180 * angle / np.pi # convert to degrees

    for level in ci:
        v = 2 * np.sqrt(v_ * chi2.ppf(level, 2)) #get size corresponding to level
        ell = Ellipse(mu[:2], v[0], v[1], 180 + angle, facecolor='None',
                      edgecolor='k',ls=ls,
                      alpha=(1-level)*.5,
                      lw=1.5)
        ell.set_clip_box(ax.bbox)
        ell.set_alpha(0.5)
        ax.add_artist(ell)
    
    return ax
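For reference, a minimal in-place fix for the covariance computation above (a sketch, assuming the packed chol_cov vector is stored in the row-major lower-triangular order that pm.expand_packed_triangular expects):

L_chol = np.zeros((2, 2))
L_chol[np.tril_indices(2, 0)] = Chol_cov   # fill the lower triangle, matching the model's Cholesky factor
Sigma_est = np.dot(L_chol, L_chol.T)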

resources/Rethinking_2/Chp_05.ipynb

  1. 5.13 (In [23])
  • the observed variable is data["Marriage_std"], so the random variable should have been named 'Marriage_std' instead of 'divorce_std'.
  • below is the code 5.13.
with pm.Model() as m_5_4:
    a = pm.Normal("a", 0, 0.2)
    bAM = pm.Normal("bAM", 0, 0.5)
    sigma = pm.Exponential("sigma", 1)

    mu = pm.Deterministic("mu", a + bAM * data["MedianAgeMarriage_std"])

    marriage_std = pm.Normal(
        "divorce_std", mu=mu, sigma=sigma, observed=data["Marriage_std"].values
    )
    prior_samples = pm.sample_prior_predictive()
    m_5_4_trace = pm.sample()
  2. 5.15 (In [25])
  • this cell is a posterior predictive check for the observed variable 'divorce_rate_std', which belongs to model m_5_3 (m5.3 in the book)
  • so, 5_4 throughout the cell should be replaced by 5_3
  • and 'divorce_std' must be replaced by 'divorce_rate_std' as defined in model m_5_3 (see the corrected sketch after the code below)
  • below is 5.15
# We can skip most of the code with the posterior predictive plot functionality in pymc3
with m_5_4:
    m_5_4_ppc = pm.sample_posterior_predictive(
        m_5_4_trace, var_names=["mu", "divorce_std"], samples=1000
    )

mu_mean = m_5_4_ppc["mu"].mean(axis=0)
mu_hpd = az.hpd(m_5_4_ppc["mu"], credible_interval=0.89)

D_sim = m_5_4_ppc["divorce_std"].mean(axis=0)
D_PI = az.hpd(m_5_4_ppc["divorce_std"], credible_interval=0.89)
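Applying the suggested renames, the cell would read roughly as follows (a sketch; m_5_3 and m_5_3_trace are assumed to come from the model defined earlier in the notebook):

with m_5_3:
    m_5_3_ppc = pm.sample_posterior_predictive(
        m_5_3_trace, var_names=["mu", "divorce_rate_std"], samples=1000
    )

mu_mean = m_5_3_ppc["mu"].mean(axis=0)
mu_hpd = az.hpd(m_5_3_ppc["mu"], credible_interval=0.89)

D_sim = m_5_3_ppc["divorce_rate_std"].mean(axis=0)
D_PI = az.hpd(m_5_3_ppc["divorce_rate_std"], credible_interval=0.89)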

Hope I am making a correct suggestion.

Many thanks for sharing the wonderful pymc3 code for the book

Rethinking2 - Ch5 - Inconsistent Notebook Outputs

I'm quite new to pymc3 so apologies if this is a dumb question.

I'm going through the code examples for the Statistical Rethinking 2 book and I noticed that the plots within the repo are not the same as the plots I get when I run the notebook.

I made sure to install the conda environment from the yml within this repo.

pymc3                     3.9.2
arviz                     0.9.0

Code 5.3

Within the repo
(plot from the repo notebook)

What I get
(plot from my local run)

Edit
I realized that my notebook was creating 4 chains instead of 2 chains. Now I'm looking into what chains are. I'll close this issue.

Chapter 3: posterior_grid_approx function

Should the function posterior_grid_approx have the prior as

prior = np.repeat(1, grid_points) # uniform
The solution has the prior as

prior = np.repeat(5, grid_points) # uniform
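For what it's worth, the constant should not matter: the posterior is normalized over the grid, so any flat prior gives the same result. A quick self-contained check (a sketch of a standard grid approximation, not the notebook's exact function):

import numpy as np
from scipy import stats

def posterior_grid(prior_const, grid_points=20, success=6, tosses=9):
    p_grid = np.linspace(0, 1, grid_points)
    prior = np.repeat(prior_const, grid_points)           # flat prior, any constant
    likelihood = stats.binom.pmf(success, tosses, p_grid)
    unstd_posterior = likelihood * prior
    return unstd_posterior / unstd_posterior.sum()        # normalization cancels the constant

print(np.allclose(posterior_grid(1), posterior_grid(5)))  # True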

Invalid syntax in BCM/CaseStudies/MemoryRetention.ipynb

When running it I get

  File "<ipython-input-3-9efceedc199d>", line 6
    1 = np.ma.masked_values([18, 18, 16, 13, 9, 6, 4, 4, 4, -999,
    ^
SyntaxError: cannot assign to literal

Indeed, cell 3 assigned a numpy array to... the integer 1 🤣

Will look into it

Rethinking_2: trace_to_dataframe() deprecation, move to InferenceData

The Rethinking_2 notebooks (e.g. Chp_04) currently use PyMC3's trace_to_dataframe() to save traces, but the function is planned for deprecation, with ArviZ being the intended package for saving traces. As per this comment by @AlexAndorra, ArviZ's InferenceData format is a superior replacement for this function, as it handles multidimensional data natively, with associated names instead of axis numbers alone. In future, all instances of trace_to_dataframe() in the notebooks will need to be updated.

I attempted to update the following code section in Chp_04. I think the code below is part of the solution, but I can't work out how to get the covariance estimates needed for this section. I'm happy for a Developer to be assigned to fix this. Or I could be assigned and attempt to fix it if Alex or a fellow Dev could provide guidance, since I am new to Bayes and PyMC3. Happy to discuss, and thanks for the support!

# Code 4.32
with pm.Model() as model:
    pm_data = az.from_pymc3(trace=trace_4_1)
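A possible way to get the covariance estimates from InferenceData instead of a trace DataFrame (a sketch, assuming the m4_1 model and trace_4_1 from the notebook):

import arviz as az
import numpy as np

with m4_1:
    idata = az.from_pymc3(trace=trace_4_1)

post = idata.posterior.stack(sample=("chain", "draw"))
samples = np.vstack([post["mu"].values, post["sigma"].values])
cov = np.cov(samples)   # 2x2 variance-covariance matrix of mu and sigma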

Rethinking_2 Chp_4: Patsy usage for B-Spline

In section 4.74, the following code block is used:

from patsy import dmatrix
B = dmatrix(
    "bs(year, knots=knots, degree=3, include_intercept=True) -1",
    {"year": d2.year.values, "knots": knot_list[1:-1]},
)
  • I'm confused about why knot_list[1:-1] is used; it's not explained in the official textbook (to my knowledge) or in the code block.
  • Not clear on why -1 was used in the formula-like string provided.

From Patsy official documentation:
https://patsy.readthedocs.io/en/latest/API-reference.html#patsy.bs

The - 1 is because one degree of freedom will be taken by the intercept; alternatively, you could leave the intercept term out of your model and use bs(x, num_bins, degree=0, include_intercept=True)

However, I've tried both variants, noting that they produce different results.
Could you describe what effect the declaration of B is supposed to have?

Trouble importing pymc3

First of all, thanks for this wonderful package and for the effort of adapting the R code in those Bayesian books to Python.

I am trying to use the Rethinking_2 code examples. I have cloned the full repo and then I have followed the instructions:

conda env create -f environment.yml
source activate stat-rethink2-pymc3

But then, when opening the Jupyter notebook of the second chapter, the first cell fails when trying to do the imports:

import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import scipy.stats as stats

I have the following error:

WARNING (aesara.configdefaults): g++ not available, if using conda: `conda install m2w64-toolchain`
WARNING (aesara.configdefaults): g++ not detected ! Aesara will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Aesara flags cxx to an empty string.
WARNING (aesara.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-75554ce6e5eb> in <module>
      2 import matplotlib.pyplot as plt
      3 import numpy as np
----> 4 import pymc3 as pm
      5 import scipy.stats as stats

D:\Python\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\__init__.py in <module>
     39 __set_compiler_flags()
     40 
---> 41 from pymc3 import gp, ode, sampling
     42 from pymc3.aesaraf import *
     43 from pymc3.backends import load_trace, save_trace

D:\Python\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\gp\__init__.py in <module>
     14 
     15 from pymc3.gp import cov, mean, util
---> 16 from pymc3.gp.gp import TP, Latent, LatentKron, Marginal, MarginalKron, MarginalSparse

D:\Python\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\gp\gp.py in <module>
     23 import pymc3 as pm
     24 
---> 25 from pymc3.distributions import draw_values
     26 from pymc3.gp.cov import Constant, Covariance
     27 from pymc3.gp.mean import Zero

D:\Python\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\distributions\__init__.py in <module>
     13 #   limitations under the License.
     14 
---> 15 from pymc3.distributions import shape_utils, timeseries, transforms
     16 from pymc3.distributions.bart import BART
     17 from pymc3.distributions.bound import Bound

D:\Python\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\distributions\timeseries.py in <module>
     19 from scipy import stats
     20 
---> 21 from pymc3.distributions import distribution, multivariate
     22 from pymc3.distributions.continuous import Flat, Normal, get_tau_sigma
     23 from pymc3.distributions.shape_utils import to_tuple

D:\Python\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\distributions\distribution.py in <module>
     45     to_tuple,
     46 )
---> 47 from pymc3.model import (
     48     ContextMeta,
     49     FreeRV,

D:\Python\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\model.py in <module>
     41 from pymc3.blocking import ArrayOrdering, DictToArrayBijection
     42 from pymc3.exceptions import ImputationWarning
---> 43 from pymc3.math import flatten_list
     44 from pymc3.util import WithMemoization, get_transformed_name, get_var_name, hash_key
     45 from pymc3.vartypes import continuous_types, discrete_types, isgenerator, typefilter

D:\Python\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\math.py in <module>
     79 
     80 from aesara.tensor.nlinalg import det, matrix_dot, matrix_inverse, trace
---> 81 from aesara.tensor.nnet import sigmoid
     82 from scipy.linalg import block_diag as scipy_block_diag
     83 

ImportError: cannot import name 'sigmoid' from 'aesara.tensor.nnet' (D:\Python\envs\stat-rethink2-pymc3\lib\site-packages\aesara\tensor\nnet\__init__.py)

I don't know what I should do. Thanks in advance.

WAIC score in chapter 10 in Statistical Rethinking

If you read the comments on code 10.24 and 10.29, you could think "there is something wrong here; is pymc3 the origin of this error?" Well, there could be many questions about this, but there are at least two ways to confirm whether that WAIC score is right.

The first one is to go to this page, download the file Ch._10_Counting_and_Classification.html and look for waic(b10.6, b10.7). The result is

##                  WAIC     SE
## b10.6         1001.40 331.52
## b10.7         1048.61 328.79
## b10.6 - b10.7  -47.21 166.11

I suppose this is good; it's a confirmation of the score given by pymc3.

The second way is to use Stan. How? In R, follow this code. This WAIC score is for m10.6

            Estimate          SE
elpd_waic   -493.2          163.0
p_waic       110.6          39.9
waic         986.4          326.0

You could do the same with m10.7, m10.8, or m10.9. If you use the rethinking package, those WAIC scores are wrong; the scores given by pymc3 are correct, so... don't worry.
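For reference, the same cross-check can now be done directly in ArviZ (a sketch; idata_10_6 and idata_10_7 are assumed to be InferenceData objects converted from the chapter-10 traces, e.g. with az.from_pymc3):

import arviz as az

cmp_df = az.compare({"m10.6": idata_10_6, "m10.7": idata_10_7}, ic="waic", scale="deviance")
cmp_df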

Chp 12 exercises missing last 2 problems

As discussed in #43, here is a reminder to complete this notebook with answers to the last two problems once the issue with Ordered Logistic is solved. I'll gladly do it.
PyMCheers!

Rethinking2: Chapter 4, New B-Splines Section

I am attempting to re-create the figures from Section 4.5.2 on Splines from Richard McElreath's 2nd edition of his book (details of the password found on the github page -- sorry, since it's not mine to give away, I only feel comfortable pointing to his repo where he tells you how to access the book).

So far, for what works, I have:

import numpy as np
import pandas as pd
import pymc3 as pm
import patsy

%matplotlib inline
import matplotlib.pyplot as plt

# I downloaded the cherry_blossoms.csv data from the main package website
#### Code Block 4.72 ####
df = pd.read_csv('../Data/cherry_blossoms.csv', sep=';', header=0)
df.describe()
# Note, I am being lazy and not trying to create histograms of each of the columns

#### Code Block 4.73 ####
# First, drop any rows where temp was nan
df2 = df[~df.temp.isna()]
num_knots = 15
knot_list = np.quantile(df2.year, q=np.linspace(0,1, num=num_knots))
knot_list

#### Code Block 4.74 ####
# Construct our B matrix (example from
# https://patsy.readthedocs.io/en/latest/spline-regression.html#general-b-splines)
B = patsy.dmatrix("bs(year, knots=knots, degree=3, include_intercept=True)",
                   data={'year': df2.year.values,
                         'knots': knot_list[1:-1]})
B.shape

#### Code Block 4.75 ####
# Plot each of the lines (note, I am skipping the first "intercept" column, which
# the book also skips -- truthfully, I don't know why he had "bs" include an intercept
# when he later (code block 4.76) uses a model that defines its own
# intercept "a" term)
plt.figure(figsize=(12,3))
for i in range(1, B.shape[1]):
    plt.plot(df2.year, B[:, i])
plt.plot(knot_list, [1 for x in knot_list], '+')
plt.xlabel('year')
plt.ylabel('basis value')
plt.show()

Now, to get to my error/troubles. I am trying to re-create Code block 4.76. Here is what I have:

# Set the seed for reproducibility
np.random.seed(123)
with pm.Model() as m4_7:
    a = pm.Normal('a', 6, 10)
    w = pm.MvNormal('w', mu=np.zeros((1, B.shape[1])), cov=np.eye(B.shape[1]), shape=(1, B.shape[1]))
    #w = pm.MvNormal('w', np.zeros(B.shape[1]), np.eye(B.shape[1]))#, shape=B.shape[1])
    #w = pm.Normal('w', mu=0, sd=1)
    mu = a + pm.math.dot(B, w.T)
    sigma = pm.Exponential(1)
    T = pm.Normal('T', mu, sigma, observed=df2.temp)
    trace = pm.sample(draws=1000, tune=1000, discard_tuned_samples=True)

However, when I do that, I get a blank AssertionError:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-77-9e58e06afc81> in <module>
      6     #w = pm.MvNormal('w', np.zeros(B.shape[1]), np.eye(B.shape[1]))#, shape=B.shape[1])
      7     #w = pm.Normal('w', mu=0, sd=1)
----> 8     mu = a + pm.math.dot(B, w.T)
      9     sigma = pm.Exponential(1)
     10     T = pm.Normal('T', mu, sigma, observed=df2.temp)

~/miniconda3/envs/srt/lib/python3.7/site-packages/theano/tensor/basic.py in dot(a, b)
   6077 
   6078     """
-> 6079     a, b = as_tensor_variable(a), as_tensor_variable(b)
   6080 
   6081     if a.ndim == 0 or b.ndim == 0:

~/miniconda3/envs/srt/lib/python3.7/site-packages/theano/tensor/basic.py in as_tensor_variable(x, name, ndim)
    192 
    193     try:
--> 194         return constant(x, name=name, ndim=ndim)
    195     except TypeError:
    196         try:

~/miniconda3/envs/srt/lib/python3.7/site-packages/theano/tensor/basic.py in constant(x, name, ndim, dtype)
    230 
    231     """
--> 232     x_ = scal.convert(x, dtype=dtype)
    233 
    234     bcastable = [d == 1 for d in x_.shape]

~/miniconda3/envs/srt/lib/python3.7/site-packages/theano/scalar/basic.py in convert(x, dtype)
    282             if x_.size == 0 and not hasattr(x, 'dtype'):
    283                 x_ = np.asarray(x, dtype=config.floatX)
--> 284     assert type(x_) in [np.ndarray, np.memmap]
    285     return x_
    286 

AssertionError: 

I've tried a few other ways of encoding pm.MvNormal or the parameter "w", as best as I could figure from the documentation, but I keep getting errors. So, for my question -- what can I do to get this working? Am I missing something, or is trying to do this kind of matrix multiplication not supported?

Thanks for your help!
PS, my conda environment is here. Also, if you decide to update the code block examples for chapter 4 here and want to re-use any of my code above, feel free!
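For reference, the AssertionError usually comes from handing Theano a patsy DesignMatrix directly; converting it to a plain NumPy array first, as the earlier spline issue above does, is a minimal sketch of a fix:

mu = a + pm.math.dot(np.asarray(B), w.T)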

BDA3 Chapter 5

I think that the reproduction of Table 5.3 and Figure 5.8b should be done with posterior samples. Currently it is done with posterior predictive samples.

Softmax implementation in chapter 10 is wrong?

I wonder if the softmax regression implementation in chapter 10 is right. In Code 10.57 it reads:

with pm.Model() as m_10_16:
    b = pm.Normal('b', 0., 5.)
    s2 = b*2
    s3 = b*3
    p_ = pm.math.stack([b, s2, s3])
    obs = pm.Categorical('career', p=pm.math.sigmoid(p_), observed=career)
    trace_10_16 = pm.sample(1000, tune=1000)

but I think this should be:

    p_ = pm.math.stack([0, s2, s3])
    obs = pm.Categorical('career', p=tt.nnet.softmax(p_), observed=career)

Otherwise, the model is over-parametrized, isn't it? You just need 2 categories to infer the respective probabilities of all 3.
Also, I think the link function should be softmax, not sigmoid (this is also the case in Code 10.58).
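For what it's worth, the identifiability point can be checked numerically: a softmax is invariant to adding a constant to every score, so only the differences between category scores are identified, which is why one category can be pinned at zero (a quick NumPy sketch, with an illustrative value of b):

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

b = 0.7
print(softmax(np.array([b, 2 * b, 3 * b])))   # identical to the line below:
print(softmax(np.array([0.0, b, 2 * b])))     # the scores are only shifted by a constant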

I can send a PR if these are indeed mistakes. Yours truly 😉

Chp 11 exercises missing 11H5

As discussed in #43, here is a reminder to complete this notebook with the answer to 11H5 once the issue with Ordered Logistic is solved. I'll gladly do it.
PyMCheers!

WARNING No module named 'mkl'

I was looking at the notebook for chapter III in the binder server, and when I tried to run the first cell, I got a warning regarding mkl. The full traceback is copied below. Maybe this could lead to numerical discrepancies when running code blocks on binder.

WARNING (theano.configdefaults): install mkl with `conda install mkl-service`: No module named 'mkl'

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/srv/conda/lib/python3.6/configparser.py in _unify_values(self, section, vars)
   1137         try:
-> 1138             sectiondict = self._sections[section]
   1139         except KeyError:

KeyError: 'blas'

During handling of the above exception, another exception occurred:

NoSectionError                            Traceback (most recent call last)
/srv/conda/lib/python3.6/site-packages/theano/configparser.py in fetch_val_for_key(key, delete_key)
    167         try:
--> 168             return theano_cfg.get(section, option)
    169         except ConfigParser.InterpolationError:

/srv/conda/lib/python3.6/configparser.py in get(self, section, option, raw, vars, fallback)
    780         try:
--> 781             d = self._unify_values(section, vars)
    782         except NoSectionError:

/srv/conda/lib/python3.6/configparser.py in _unify_values(self, section, vars)
   1140             if section != self.default_section:
-> 1141                 raise NoSectionError(section)
   1142         # Update with the entry specific variables

NoSectionError: No section: 'blas'

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
/srv/conda/lib/python3.6/site-packages/theano/configparser.py in __get__(self, cls, type_, delete_key)
    327                 val_str = fetch_val_for_key(self.fullname,
--> 328                                             delete_key=delete_key)
    329                 self.is_default = False

/srv/conda/lib/python3.6/site-packages/theano/configparser.py in fetch_val_for_key(key, delete_key)
    171     except (ConfigParser.NoOptionError, ConfigParser.NoSectionError):
--> 172         raise KeyError(key)
    173 

KeyError: 'blas.ldflags'

During handling of the above exception, another exception occurred:

ModuleNotFoundError                       Traceback (most recent call last)
/srv/conda/lib/python3.6/site-packages/theano/configdefaults.py in check_mkl_openmp()
   1249     try:
-> 1250         import mkl
   1251         if '2018' in mkl.get_version_string():

ModuleNotFoundError: No module named 'mkl'

During handling of the above exception, another exception occurred:

RuntimeError                              Traceback (most recent call last)
<ipython-input-1-2bcb85e206c0> in <module>()
----> 1 import pymc3 as pm
      2 import numpy as np
      3 import pandas as pd
      4 import matplotlib.pyplot as plt
      5 import seaborn as sns

/srv/conda/lib/python3.6/site-packages/pymc3/__init__.py in <module>()
      3 
      4 from .blocking import *
----> 5 from .distributions import *
      6 from .external import *
      7 from .glm import *

/srv/conda/lib/python3.6/site-packages/pymc3/distributions/__init__.py in <module>()
----> 1 from . import timeseries
      2 from . import transforms
      3 
      4 from .continuous import Uniform
      5 from .continuous import Flat

/srv/conda/lib/python3.6/site-packages/pymc3/distributions/timeseries.py in <module>()
----> 1 import theano.tensor as tt
      2 from theano import scan
      3 
      4 from pymc3.util import get_variable_name
      5 from .continuous import get_tau_sd, Normal, Flat

/srv/conda/lib/python3.6/site-packages/theano/__init__.py in <module>()
    122 from theano.printing import pprint, pp
    123 
--> 124 from theano.scan_module import (scan, map, reduce, foldl, foldr, clone,
    125                                 scan_checkpoints)
    126 

/srv/conda/lib/python3.6/site-packages/theano/scan_module/__init__.py in <module>()
     39 __contact__ = "Razvan Pascanu <r.pascanu@gmail>"
     40 
---> 41 from theano.scan_module import scan_opt
     42 from theano.scan_module.scan import scan
     43 from theano.scan_module.scan_checkpoints import scan_checkpoints

/srv/conda/lib/python3.6/site-packages/theano/scan_module/scan_opt.py in <module>()
     58 
     59 import theano
---> 60 from theano import tensor, scalar
     61 from theano.tensor import opt, get_scalar_constant_value, Alloc, AllocEmpty
     62 from theano import gof

/srv/conda/lib/python3.6/site-packages/theano/tensor/__init__.py in <module>()
     15 from theano.tensor import opt
     16 from theano.tensor import opt_uncanonicalize
---> 17 from theano.tensor import blas
     18 from theano.tensor import blas_scipy
     19 from theano.tensor import blas_c

/srv/conda/lib/python3.6/site-packages/theano/tensor/blas.py in <module>()
    153 from theano.scalar import bool as bool_t
    154 from theano.tensor import basic as T
--> 155 from theano.tensor.blas_headers import blas_header_text
    156 from theano.tensor.blas_headers import blas_header_version
    157 from theano.tensor.opt import in2out, local_dimshuffle_lift

/srv/conda/lib/python3.6/site-packages/theano/tensor/blas_headers.py in <module>()
    985 
    986 
--> 987 if not config.blas.ldflags:
    988     _logger.warning('Using NumPy C-API based implementation for BLAS functions.')
    989 

/srv/conda/lib/python3.6/site-packages/theano/configparser.py in __get__(self, cls, type_, delete_key)
    330             except KeyError:
    331                 if callable(self.default):
--> 332                     val_str = self.default()
    333                 else:
    334                     val_str = self.default

/srv/conda/lib/python3.6/site-packages/theano/configdefaults.py in default_blas_ldflags()
   1445         if res:
   1446             if 'mkl' in res:
-> 1447                 check_mkl_openmp()
   1448             return res
   1449 

/srv/conda/lib/python3.6/site-packages/theano/configdefaults.py in check_mkl_openmp()
   1260 you set this flag and don't set the appropriate environment or make
   1261 sure you have the right version you *will* get wrong results.
-> 1262 """)
   1263 
   1264 

RuntimeError: 
Could not import 'mkl'.  Either install mkl-service with conda or set
MKL_THREADING_LAYER=GNU in your environment for MKL 2018.

If you have MKL 2017 install and are not in a conda environment you
can set the Theano flag blas.check_openmp to False.  Be warned that if
you set this flag and don't set the appropriate environment or make
sure you have the right version you *will* get wrong results.

Ch 3 plots

I have been working through Statistical Rethinking & have found your notebooks very helpful! I was able to get some of the intervals of defined boundaries & intervals of defined mass plots built (section 3.2.1 and 3.2.2), specifically figure 3.2. Is this something of interest in a pull request?

Thanks

Implementing exercise 6.3.3 of Bayesian Cognitive Modeling

Thanks for providing the pymc3 code for this book. It's been a lot of fun going through it!

I'm trying to modify the code for the twenty questions model to use the Rasch model of question accuracy for exercise 6.3.3, but I'm not sure how to do this, and googling has been of little help. It seems like something like this should be possible...

with pm.Model() as model3:
    # prior
    pi = pm.Beta('pi', alpha=1, beta=1, shape=Np)
    qj = pm.Beta('qj', alpha=1, beta=1, shape=Nq)
    # accuracy prior
#     theta = pm.Deterministic('theta', tt.outer(pi, qj))
    pi_min_qj = tt.sub.outer(pi, qj)
    theta = pm.Deterministic('theta', tt.exp(pi_min_qj) / (1+tt.exp(pi_min_qj)))
    # observed
    kij = pm.Bernoulli('kij', p=theta, observed=k)
    
    trace3=pm.sample(3000, tune=1000)

But of course there is no tt.sub.outer. Any help is much appreciated.

EDIT:
Actually, I think this does the trick?

with pm.Model() as model3:
    # prior

    pi = pm.Beta('pi', alpha=1, beta=1, shape=(Np, 1))
    qj = pm.Beta('qj', alpha=1, beta=1, shape=(1, Nq))

    # accuracy prior
    theta = pm.Deterministic('theta', tt.exp(pi-qj) / (1+tt.exp(pi-qj)))
    
    # observed
    kij = pm.Bernoulli('kij', p=theta, observed=k)
    
    trace3=pm.sample(3000, tune=1000)
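For reference, the logistic expression in the EDIT can also be written with PyMC3's built-in inverse logit, which is a bit more compact (a sketch):

    # inside the model block, replacing the theta line above
    theta = pm.Deterministic('theta', pm.math.invlogit(pi - qj))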

Add black formatter to .pre-commit-config.yaml

xref this comment #118 (comment)

If you're not running pre-commit checks during CI (there are still lots of notebooks that need updating), then even just putting the desired checks in .pre-commit-config.yaml might be a good idea, so that for any new notebooks which are added, contributors can simply run

pre-commit run --files <notebooks>

without needing anything installed other than pre-commit.


I can take this on later this week if you like

12.34 rethinking

I'm confused about why the trace of a_actor is multiplied by a_actor_sims. From the book I was under the impression that we were simply to replace the trace of a_actor with the sims version and average over it. But then again, I'm just learning :). Can you please give me a hint as to why the code is the way it is here?

Implement NB formatting pre-commit checks in CI

This issue is up for grabs

It'd be great to implement here the automatic formatting checks we run on notebooks in the main PyMC3 repo.
This means using isort and black via the nbqa package to get a consistent codebase and allow maintainers to focus on substance instead of style while reviewing PRs.

Ideally, we would have the same process as outlined in the PyMC style guide and could therefore refer contributors to it.

Tagging you here @MarcoGorelli, because you know about this, so if you feel like implementing this, that'd be awesome!

6.26 small mistake

There is a minor mistake in 6.26. It should be calculating the probability as the area under the normal distribution below 0 (see the sketch below).

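For reference, a minimal sketch of the calculation described above, i.e. the probability mass of the fitted normal below zero (mu and sigma stand for the posterior means and are illustrative names):

from scipy import stats

prob_below_zero = stats.norm.cdf(0, loc=mu, scale=sigma)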

Amazon reviewer had an issue with SR book python code

One Amazon reviewer had an issue with the Statistical Rethinking book's Python code. I didn't have a chance to confirm this. Here's the review:
https://www.amazon.com/gp/customer-reviews/RFZEN0ZWPL2ZA/ref=cm_cr_getr_d_rvw_ttl?ie=UTF8&ASIN=1482253445

--- Excerpt from it ---

Edit: starting in chapter 6, the Jupyter notebooks use pm.compare to compare models. This doesn't work for me, yet. I followed the instructions for installing the notebooks into a conda environment but there appears to be something wrong. The rest of the Python code has run without a glitch.

Edit2: the pm.compare code works if you do the following:
Change:
comp_df = pm.compare(traces=[trace_12_4, trace_12_5],
models=[m_12_4, m_12_5], method='pseudo-BMA')
To:
comp_df = pm.compare({m_12_4: trace_12_4, m_12_5: trace_12_5}, method='pseudo-BMA')
pm.compare expects its first argument to be a dictionary. This fix worked every time I tried it.


Thanks

Trouble installing environment

I'm having trouble installing the environment.

$ conda env create -f environment.yml
Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies. Conda may not use the correct pip to install your packages, and they may end up in the wrong place. Please add an explicit pip dependency. I'm adding one for you, but still nagging you.

There were some more pip errors, and then when I tried to activate the environment, I got:

Could not find conda environment: stat-rethink-pymc3
You can list all discoverable environments with conda info --envs.

Should I try to install PyMC3 in my default environment instead?

Rethinking_2 Chp_4: Different pymc3 output between notebook and local run

Hi there. I ran the following code sections in my IDE and obtained different output from the output shown in the Jupyter notebook.

To Reproduce
For example, I ran:

d = pd.read_csv("Data/Howell1.csv", sep=";", header=0)
d2 = d[d.age >= 18]

# Code 4.27
with pm.Model() as m4_1:
    mu = pm.Normal("mu", mu=178, sd=20)
    sigma = pm.Uniform("sigma", lower=0, upper=50)
    height = pm.Normal("height", mu=mu, sd=sigma, observed=d2.height)
with m4_1:
    trace_4_1 = pm.sample(1000, tune=1000)

# Code 4.29
az.summary(trace_4_1, round_to=2, kind="stats")

And obtained the following output, compared to the output in the notebook. The decimal place values differ between them:

# Output: notebook
	mean 	sd 	hpd_5.5% 	hpd_94.5%
mu 	154.62 	0.41 	153.96 	155.25
sigma 	7.77 	0.29 	7.30 	8.23

# Output: mine
         mean    sd  hdi_3%  hdi_97%
mu     154.60  0.41  153.84   155.38
sigma    7.76  0.31    7.18     8.32

There were also numeric differences in the decimal place values for the variance-covariance matrix, variances, and the correlation matrix:

# Code 4.32 output
# variance-covariance: notebook
               mu 	    sigma
mu     	 0.178473 	-0.007866
sigma 	-0.007866 	0.087795
# variance-covariance: mine
             mu     sigma
mu     0.171058 -0.002112
sigma -0.002112  0.093520

# Code 4.33 output
# variances: notebook
array([0.17847295, 0.08779492])
# variances: mine
array([0.17105813, 0.09351962])

# Code 4.33 output
# correlations: notebook
	          mu    	sigma
mu   	1.000000 	-0.062842
sigma 	-0.062842 	1.000000
# correlations: mine
             mu     sigma
mu     1.000000 -0.016702
sigma -0.016702  1.000000

The differences in numeric output don't seem to be due to rounding errors, so I'm not sure what might explain them. I would appreciate any pointers.

I'm a Python user but new to Bayesian analysis and the pymc3 package. Thanks very much for porting all the Rethinking R code to Python. The explanations and examples are great!

Python:
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] on linux
pymc3 3.9.3, arviz 0.10.0
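For what it's worth, the interval labels differ because the notebook sets an 89% interval via rcParams while a default run reports the 94% HDI, and the remaining decimal-level differences look like ordinary MCMC sampling variation (different seed and number of chains). A sketch of requesting the notebook's interval width explicitly (the keyword is hdi_prob on recent ArviZ releases, credible_interval on older ones):

az.summary(trace_4_1, round_to=2, kind="stats", hdi_prob=0.89)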

4.61

In 4.61, what does this mean?

sample_ppc returns values corresponding to the input values (weights in this example). Because the weights are not ordered, if we use them with the fill_between function we will get a mess. For that reason, in the following cell we order the weights and the predicted heights.
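For reference, a minimal self-contained sketch of the ordering step that note describes (the data here is synthetic and illustrative, not the notebook's):

import numpy as np
import matplotlib.pyplot as plt

weights = np.random.uniform(30, 60, size=200)   # unordered x values, as sample_ppc would give
lower = 0.9 * weights - 5                       # illustrative lower/upper prediction bounds
upper = 0.9 * weights + 5

order = np.argsort(weights)                     # order x and the bounds together
plt.fill_between(weights[order], lower[order], upper[order], alpha=0.3)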

BCM/ParameterEstimation/Binomial.ipynb

This is just a small thing about the last cell of BCM/ParameterEstimation/Binomial.ipynb, added as part of the note from Junpeng Lao.

The length of divergent below was 4000 when I printed its size, which corresponds to 1000 samples from each of 4 chains. So I guess divperc should have been 357/4000 = 8.925% instead of 35.7% (see the sketch after the code).

# display the total number and percentage of divergences
divergent = trace6_["diverging"]
print("Number of divergences: %d" % divergent.nonzero()[0].size)
divperc = divergent.nonzero()[0].size * 100 / len(trace6_)
print("Percentage of divergent samples: %.2f" % divperc)

# scatter plot between theta and N
# for the identification of the problematic neighborhoods in parameter space
theta_tr = trace6_["theta"]
totaln_tr = trace6_["TotalN"]
plt.figure(figsize=(6, 6))
plt.scatter(totaln_tr[divergent == 0], theta_tr[divergent == 0], color="r", alpha=0.05)
plt.scatter(totaln_tr[divergent == 1], theta_tr[divergent == 1], color="g", alpha=0.5);
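A minimal sketch of the suggested correction, dividing by the total number of samples rather than the per-chain length:

divperc = divergent.nonzero()[0].size * 100 / divergent.size   # e.g. 357 / 4000 = 8.925%
print("Percentage of divergent samples: %.2f" % divperc)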

Many thanks for sharing the pymc3 code.

TypeError on import TypeError: data type "float128" not understood

On Windows, Anaconda 4.8.3. Created conda environment with conda env create -f environment.yml inside Rethinking_2 directory.

Doing import pymc3 gives the following error

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-75554ce6e5eb> in <module>
      2 import matplotlib.pyplot as plt
      3 import numpy as np
----> 4 import pymc3 as pm
      5 import scipy.stats as stats

~\Miniconda3\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\__init__.py in <module>
     49 
     50 from .blocking import *
---> 51 from .distributions import *
     52 from .distributions import transforms
     53 from .glm import *

~\Miniconda3\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\distributions\__init__.py in <module>
     13 #   limitations under the License.
     14 
---> 15 from . import timeseries
     16 from . import transforms
     17 from . import shape_utils

~\Miniconda3\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\distributions\timeseries.py in <module>
     20 
     21 from pymc3.util import get_variable_name
---> 22 from .continuous import get_tau_sigma, Normal, Flat
     23 from .shape_utils import to_tuple
     24 from . import multivariate

~\Miniconda3\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\distributions\continuous.py in <module>
     30 from .special import log_i0
     31 from ..math import invlogit, logit, logdiffexp
---> 32 from .dist_math import (
     33     alltrue_elemwise, betaln, bound, gammaln, i0e, incomplete_beta, logpow,
     34     normal_lccdf, normal_lcdf, SplineWrapper, std_cdf, zvalue,

~\Miniconda3\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\distributions\dist_math.py in <module>
     35 f = floatX
     36 c = - .5 * np.log(2. * np.pi)
---> 37 _beta_clip_values = {
     38     dtype: (np.nextafter(0, 1, dtype=dtype), np.nextafter(1, 0, dtype=dtype))
     39     for dtype in ["float16", "float32", "float64", "float128"]

~\Miniconda3\envs\stat-rethink2-pymc3\lib\site-packages\pymc3\distributions\dist_math.py in <dictcomp>(.0)
     36 c = - .5 * np.log(2. * np.pi)
     37 _beta_clip_values = {
---> 38     dtype: (np.nextafter(0, 1, dtype=dtype), np.nextafter(1, 0, dtype=dtype))
     39     for dtype in ["float16", "float32", "float64", "float128"]
     40 }

TypeError: data type "float128" not understood
