hardhat's Introduction

hardhat

Introduction

hardhat is a developer-focused package designed to ease the creation of new modeling packages, while simultaneously promoting good R modeling package standards as laid out by the opinionated Conventions for R Modeling Packages.

hardhat has four main goals:

  • Easily, consistently, and robustly preprocess data at fit time and prediction time with mold() and forge().

  • Provide one source of truth for common input validation functions, such as checking if new data at prediction time contains the same required columns used at fit time.

  • Provide extra utility functions for additional common tasks, such as adding intercept columns, standardizing predict() output, and extracting valuable class and factor level information from the predictors.

  • Reimagine the base R preprocessing infrastructure of stats::model.matrix() and stats::model.frame() using the stricter approaches found in model_matrix() and model_frame().

The idea is to reduce the burden of creating a good modeling interface as much as possible, and instead let the package developer focus on writing the core implementation of their new model. This benefits not only the developer, but also the user of the modeling package, as the standardization allows users to build a set of “expectations” around what any modeling function should return, and how they should interact with it.
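
To make this concrete, here is a minimal sketch of the intended workflow, assuming the current CRAN API in which mold() returns a blueprint that forge() reuses (see the vignettes for the full return structure):

library(hardhat)

# Preprocess at fit time: expand factors, record column information, etc.
processed <- mold(Sepal.Width ~ Species, iris)
processed$predictors

# Preprocess new data at prediction time, reusing everything mold() learned
forge(head(iris), blueprint = processed$blueprint)$predictors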

Installation

You can install the released version of hardhat from CRAN with:

install.packages("hardhat")

And the development version from GitHub with:

# install.packages("pak")
pak::pak("tidymodels/hardhat")

Learning more

To learn about how to use hardhat, check out the vignettes:

  • vignette("mold", "hardhat"): Learn how to preprocess data at fit time with mold().

  • vignette("forge", "hardhat"): Learn how to preprocess new data at prediction time with forge().

  • vignette("package", "hardhat"): Learn how to use mold() and forge() to help in creating a new modeling package.

You can also watch Max Kuhn discuss how to use hardhat to build a new modeling package from scratch in his talk at the XI Jornadas de Usuarios de R conference.

Contributing

This project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

hardhat's People

Contributors

davisvaughan, emilhvitfeldt, hfrick, juliasilge, marlycormar, simonpcouch, topepo


hardhat's Issues

Add tests for more complex interaction specifications

These all produce correct output, but they should be covered by tests:

library(hardhat)
library(gapminder)

# year + continent + year:continent
mold(year ~ year*continent, gapminder)
#> $predictors
#> # A tibble: 1,704 x 10
#>     year continentAfrica continentAmeric… continentAsia continentEurope
#>    <dbl>           <dbl>            <dbl>         <dbl>           <dbl>
#>  1  1952               0                0             1               0
#>  2  1957               0                0             1               0
#>  3  1962               0                0             1               0
#>  4  1967               0                0             1               0
#>  5  1972               0                0             1               0
#>  6  1977               0                0             1               0
#>  7  1982               0                0             1               0
#>  8  1987               0                0             1               0
#>  9  1992               0                0             1               0
#> 10  1997               0                0             1               0
#> # … with 1,694 more rows, and 5 more variables: continentOceania <dbl>,
#> #   `year:continentAmericas` <dbl>, `year:continentAsia` <dbl>,
#> #   `year:continentEurope` <dbl>, `year:continentOceania` <dbl>
#> 
#> $outcomes
#> # A tibble: 1,704 x 1
#>     year
#>    <int>
#>  1  1952
#>  2  1957
#>  3  1962
#>  4  1967
#>  5  1972
#>  6  1977
#>  7  1982
#>  8  1987
#>  9  1992
#> 10  1997
#> # … with 1,694 more rows
#> 
#> $preprocessor
#> Formula Preprocessor: 
#>  
#> # Predictors: 2 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#>   Indicators: TRUE 
#> 
#> $offset
#> NULL

# basically year + continent
mold(year ~ year*continent - year:continent, gapminder)
#> $predictors
#> # A tibble: 1,704 x 6
#>     year continentAfrica continentAmeric… continentAsia continentEurope
#>    <dbl>           <dbl>            <dbl>         <dbl>           <dbl>
#>  1  1952               0                0             1               0
#>  2  1957               0                0             1               0
#>  3  1962               0                0             1               0
#>  4  1967               0                0             1               0
#>  5  1972               0                0             1               0
#>  6  1977               0                0             1               0
#>  7  1982               0                0             1               0
#>  8  1987               0                0             1               0
#>  9  1992               0                0             1               0
#> 10  1997               0                0             1               0
#> # … with 1,694 more rows, and 1 more variable: continentOceania <dbl>
#> 
#> $outcomes
#> # A tibble: 1,704 x 1
#>     year
#>    <int>
#>  1  1952
#>  2  1957
#>  3  1962
#>  4  1967
#>  5  1972
#>  6  1977
#>  7  1982
#>  8  1987
#>  9  1992
#> 10  1997
#> # … with 1,694 more rows
#> 
#> $preprocessor
#> Formula Preprocessor: 
#>  
#> # Predictors: 2 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#>   Indicators: TRUE 
#> 
#> $offset
#> NULL

# year, continent, pop, all 2nd ord interact
mold(year ~ (year+continent+pop)^2, gapminder)
#> $predictors
#> # A tibble: 1,704 x 16
#>     year continentAfrica continentAmeric… continentAsia continentEurope
#>    <dbl>           <dbl>            <dbl>         <dbl>           <dbl>
#>  1  1952               0                0             1               0
#>  2  1957               0                0             1               0
#>  3  1962               0                0             1               0
#>  4  1967               0                0             1               0
#>  5  1972               0                0             1               0
#>  6  1977               0                0             1               0
#>  7  1982               0                0             1               0
#>  8  1987               0                0             1               0
#>  9  1992               0                0             1               0
#> 10  1997               0                0             1               0
#> # … with 1,694 more rows, and 11 more variables: continentOceania <dbl>,
#> #   pop <dbl>, `year:continentAmericas` <dbl>, `year:continentAsia` <dbl>,
#> #   `year:continentEurope` <dbl>, `year:continentOceania` <dbl>,
#> #   `year:pop` <dbl>, `continentAmericas:pop` <dbl>,
#> #   `continentAsia:pop` <dbl>, `continentEurope:pop` <dbl>,
#> #   `continentOceania:pop` <dbl>
#> 
#> $outcomes
#> # A tibble: 1,704 x 1
#>     year
#>    <int>
#>  1  1952
#>  2  1957
#>  3  1962
#>  4  1967
#>  5  1972
#>  6  1977
#>  7  1982
#>  8  1987
#>  9  1992
#> 10  1997
#> # … with 1,694 more rows
#> 
#> $preprocessor
#> Formula Preprocessor: 
#>  
#> # Predictors: 3 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#>   Indicators: TRUE 
#> 
#> $offset
#> NULL

# year + year:continent
mold(pop ~ year + continent %in% year, gapminder)
#> $predictors
#> # A tibble: 1,704 x 6
#>     year `year:continent… `year:continent… `year:continent…
#>    <dbl>            <dbl>            <dbl>            <dbl>
#>  1  1952                0                0             1952
#>  2  1957                0                0             1957
#>  3  1962                0                0             1962
#>  4  1967                0                0             1967
#>  5  1972                0                0             1972
#>  6  1977                0                0             1977
#>  7  1982                0                0             1982
#>  8  1987                0                0             1987
#>  9  1992                0                0             1992
#> 10  1997                0                0             1997
#> # … with 1,694 more rows, and 2 more variables:
#> #   `year:continentEurope` <dbl>, `year:continentOceania` <dbl>
#> 
#> $outcomes
#> # A tibble: 1,704 x 1
#>         pop
#>       <int>
#>  1  8425333
#>  2  9240934
#>  3 10267083
#>  4 11537966
#>  5 13079460
#>  6 14880372
#>  7 12881816
#>  8 13867957
#>  9 16317921
#> 10 22227415
#> # … with 1,694 more rows
#> 
#> $preprocessor
#> Formula Preprocessor: 
#>  
#> # Predictors: 2 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#>   Indicators: TRUE 
#> 
#> $offset
#> NULL

Created on 2019-02-16 by the reprex package (v0.2.1.9000)

Offsets

mold() currently allows offsets directly in the formula method, e.g. ~ offset(Sepal.Length), but you don't get them back at all. We should return the offset as a slot in the return value of mold(): a tibble with a single column, .offset. Extract it in bake_terms_() with model.offset() if it exists.

In forge(), we could do the same thing.

The actual preprocessor for the terms method should store an offset = FALSE indicator so that forge() knows whether or not it needs to look for an offset.
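
A minimal sketch of that extraction, assuming a model frame is already available inside the internal bake_terms_() helper:

extract_offset <- function(frame) {
  offset <- stats::model.offset(frame)
  if (is.null(offset)) {
    return(NULL)
  }
  tibble::tibble(.offset = offset)
}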

indicators = FALSE behavior

These should all throw warnings of some kind.

Maybe when checking the formula RHS with indicators = FALSE, we should look for only + and bare names, and warn about anything else (rather than special-casing everything). A rough sketch of such a check follows the reprex below.

library(hardhat)
library(gapminder)
gapminder <- gapminder[1:5,]
mold(year ~ year*continent, gapminder, indicators = FALSE)
#> $predictors
#> # A tibble: 5 x 2
#>    year continent
#>   <int> <fct>    
#> 1  1952 Asia     
#> 2  1957 Asia     
#> 3  1962 Asia     
#> 4  1967 Asia     
#> 5  1972 Asia     
#> 
#> $outcomes
#> # A tibble: 5 x 1
#>    year
#>   <int>
#> 1  1952
#> 2  1957
#> 3  1962
#> 4  1967
#> 5  1972
#> 
#> $preprocessor
#> Formula Preprocessor: 
#>  
#> # Predictors: 2 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#>   Indicators: FALSE 
#> 
#> $offset
#> NULL
mold(year ~ year*continent - year, gapminder, indicators = FALSE)
#> $predictors
#> # A tibble: 5 x 2
#>    year continent
#>   <int> <fct>    
#> 1  1952 Asia     
#> 2  1957 Asia     
#> 3  1962 Asia     
#> 4  1967 Asia     
#> 5  1972 Asia     
#> 
#> $outcomes
#> # A tibble: 5 x 1
#>    year
#>   <int>
#> 1  1952
#> 2  1957
#> 3  1962
#> 4  1967
#> 5  1972
#> 
#> $preprocessor
#> Formula Preprocessor: 
#>  
#> # Predictors: 2 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#>   Indicators: FALSE 
#> 
#> $offset
#> NULL
mold(year ~ (year+continent+pop)^2, gapminder, indicators = FALSE)
#> $predictors
#> # A tibble: 5 x 3
#>    year continent      pop
#>   <int> <fct>        <int>
#> 1  1952 Asia       8425333
#> 2  1957 Asia       9240934
#> 3  1962 Asia      10267083
#> 4  1967 Asia      11537966
#> 5  1972 Asia      13079460
#> 
#> $outcomes
#> # A tibble: 5 x 1
#>    year
#>   <int>
#> 1  1952
#> 2  1957
#> 3  1962
#> 4  1967
#> 5  1972
#> 
#> $preprocessor
#> Formula Preprocessor: 
#>  
#> # Predictors: 3 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#>   Indicators: FALSE 
#> 
#> $offset
#> NULL

Created on 2019-02-16 by the reprex package (v0.2.1.9000)
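
A rough sketch of such a check (not the hardhat implementation):

check_rhs_simple <- function(formula) {
  scan <- function(expr) {
    if (is.name(expr)) {
      return(invisible(NULL))
    }
    if (is.call(expr) && identical(expr[[1]], as.name("+"))) {
      # Recurse into both sides of the `+`
      lapply(as.list(expr)[-1], scan)
      return(invisible(NULL))
    }
    warning(
      "`indicators = FALSE` expects only `+` and column names on the RHS, not `",
      deparse(expr), "`.",
      call. = FALSE
    )
  }
  scan(formula[[3]])
  invisible(formula)
}

check_rhs_simple(year ~ year * continent)
#> Warning: `indicators = FALSE` expects only `+` and column names on the RHS, not `year * continent`.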

Add documentation (to package-template?) on case weights

Case weights aren't supported directly by mold(); it would be up to the modeler to use them correctly. Maybe we can provide a validation function to ensure they look like valid case weights (integer-ish, same length as x, etc.). A sketch of such a helper follows the example below.

linear_regression <- function(x, y, case_weights) {
  # Preprocess, then hand the predictors, outcomes, and weights to the implementation
  processed <- mold(x, y)
  fit <- linear_regression_impl(
    x = processed$predictors,
    y = processed$outcomes,
    case_weights
  )
  linear_reg_obj(fit, pre = processed$preprocessor)
}

predict.linear_reg_obj <- function(object, new_data) {
  # Apply the stored preprocessing to new_data before predicting
  new_data <- forge(object$pre, new_data)
  ...
  spruce_()
}
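
A hypothetical validation helper along the lines described above (illustrative, not an existing hardhat function):

validate_case_weights <- function(case_weights, x) {
  if (is.null(case_weights)) {
    return(invisible(case_weights))
  }
  # "integer-ish": numeric values with no fractional part
  if (!is.numeric(case_weights) ||
      !all(case_weights == trunc(case_weights), na.rm = TRUE)) {
    stop("`case_weights` must be integer-ish.", call. = FALSE)
  }
  if (length(case_weights) != nrow(x)) {
    stop("`case_weights` must have length equal to `nrow(x)`.", call. = FALSE)
  }
  invisible(case_weights)
}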

spruce_conf_int()

What are the inputs and outputs?

What about classification vs. regression? They would have a different number of output columns.
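
One possible shape for the regression case (purely illustrative; nothing here is settled):

spruce_conf_int_numeric <- function(.pred_lower, .pred_upper) {
  # Standardized two-column tibble of interval bounds
  tibble::tibble(.pred_lower = .pred_lower, .pred_upper = .pred_upper)
}

spruce_conf_int_numeric(c(1.2, 3.1), c(2.4, 4.7))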

mold() arguments

Rather than mold() having indicators and intercept arguments, those should live only in the engine, since they are properties of the engine to begin with.

So mold(x, data) would use the default engine with intercept = FALSE, and if you wanted an intercept you would call mold(x, data, engine = default_formula_engine(intercept = TRUE)).

Generalize

What if we exported all of the new_preprocessor() functions, along with their engines, and then standardized the preprocessor engines so they all have a mold() and forge() function attached? This is basically how the default engine has process() right now. It would let us cleanly export what bake_terms_engine() currently does for us, wrapped up in preprocessor$engine$forge(), which should have standard arguments across engines.
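
A rough sketch of that structure (names are illustrative, not the hardhat API):

new_preprocessor_engine <- function(mold, forge, ...) {
  # Each engine carries its own mold() and forge() functions with standard arguments
  structure(
    list(mold = mold, forge = forge, ...),
    class = "preprocessor_engine"
  )
}

# A preprocessor would then dispatch through its engine, e.g.
# preprocessor$engine$forge(preprocessor, new_data)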

Add a test for nested inline offsets

Just to show this matches base R: nested inline offsets like these are not recognized.

# not recognized as offset! good!
library(gapminder)
mf <- model.frame(country ~ log(offset(year)), gapminder)
attr(mf, "terms")
#> country ~ log(offset(year))
#> attr(,"variables")
#> list(country, log(offset(year)))
#> attr(,"factors")
#>                   log(offset(year))
#> country                           0
#> log(offset(year))                 1
#> attr(,"term.labels")
#> [1] "log(offset(year))"
#> attr(,"order")
#> [1] 1
#> attr(,"intercept")
#> [1] 1
#> attr(,"response")
#> [1] 1
#> attr(,".Environment")
#> <environment: R_GlobalEnv>
#> attr(,"predvars")
#> list(country, log(offset(year)))
#> attr(,"dataClasses")
#>           country log(offset(year)) 
#>          "factor"         "numeric"
head(model.matrix(terms(mf), mf))
#>   (Intercept) log(offset(year))
#> 1           1          7.576610
#> 2           1          7.579168
#> 3           1          7.581720
#> 4           1          7.584265
#> 5           1          7.586804
#> 6           1          7.589336
model.offset(mf)
#> NULL

Created on 2019-02-16 by the reprex package (v0.2.1.9000)

Maybe don't allow for `type =` flexibility

It would greatly simplify some things, and make it straightforward to add a run_model_matrix = FALSE arg to the formula method of mold() if we always returned a tibble.

It would also generally clean up the mold() call, and things would be more type stable for the developer, at the cost of some performance loss if the user passes a matrix to mold(x = <matrix>) and then wants it back as a matrix. (Then again, they wouldn't have to use mold() at all, and we could export the add_intercept_column() function if we wanted to.)
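
If add_intercept_column() were exported as suggested, usage would look roughly like this (a sketch under that assumption):

library(hardhat)

# Adds an "(Intercept)" column of 1s to a data frame or matrix of predictors
add_intercept_column(mtcars[, c("disp", "hp")])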

mold(formula, dummy = TRUE) for factors

Tree-based methods might want to use the formula method without expanding factors to dummy variables (either bare factors, or interactions involving factors), while still wanting purely numeric interactions to be expanded.

Should predictor classes be held onto?

If a matrix is used, store the column names and treat every column class as numeric. new_data can be a data frame or a matrix, so we would still need to validate data frame input.

We could use .MFclass() because it collapses integer/double together and has special handling for matrices.

What about outcome classes? For preprocess(outcome = TRUE) we might need to validate these too.
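
For example, per-column classes for predictors or outcomes could be captured with stats::.MFclass(), roughly like this (a sketch, not the hardhat implementation):

get_data_classes <- function(data) {
  # .MFclass() collapses integer/double into "numeric" and has special
  # handling for matrix columns
  vapply(as.data.frame(data), stats::.MFclass, character(1))
}

get_data_classes(iris)
#> Sepal.Length  Sepal.Width Petal.Length  Petal.Width      Species
#>    "numeric"    "numeric"    "numeric"    "numeric"     "factor"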

Do we really need forge_impl()?

Moving it into forge() would get around the extensibility problem caused by a non-exported generic.

Also, does forge() need to be generic? Error catching should be done by $clean().

prepare and preprocess aren't type stable wrt the outcome

prepare() can return a vector, matrix, or data frame depending on the preprocessor and whether or not we are doing multivariate outcomes. preprocess() returns a data.frame for the formula method, or a tibble for recipes, which is a little better. Do we need an argument for the outcome type?

extract_info()

extract_info() takes in a data frame or matrix and returns an info list holding the names, levels, and classes. This is the easiest way to expose this piece to developers.
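
A hypothetical version (not the hardhat implementation):

extract_info <- function(data) {
  data <- as.data.frame(data)
  list(
    names = names(data),
    levels = lapply(data, function(col) if (is.factor(col)) levels(col) else NULL),
    classes = vapply(data, function(col) class(col)[[1L]], character(1))
  )
}

str(extract_info(iris))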

mold() with dots doesn't remove the LHS

library(hardhat)

mold(Species ~ ., iris)
#> $predictors
#> # A tibble: 150 x 7
#>    Sepal.Length Sepal.Width Petal.Length Petal.Width Speciessetosa
#>           <dbl>       <dbl>        <dbl>       <dbl>         <dbl>
#>  1          5.1         3.5          1.4         0.2             1
#>  2          4.9         3            1.4         0.2             1
#>  3          4.7         3.2          1.3         0.2             1
#>  4          4.6         3.1          1.5         0.2             1
#>  5          5           3.6          1.4         0.2             1
#>  6          5.4         3.9          1.7         0.4             1
#>  7          4.6         3.4          1.4         0.3             1
#>  8          5           3.4          1.5         0.2             1
#>  9          4.4         2.9          1.4         0.2             1
#> 10          4.9         3.1          1.5         0.1             1
#> # … with 140 more rows, and 2 more variables: Speciesversicolor <dbl>,
#> #   Speciesvirginica <dbl>
#> 
#> $outcomes
#> # A tibble: 150 x 1
#>    Species
#>    <fct>  
#>  1 setosa 
#>  2 setosa 
#>  3 setosa 
#>  4 setosa 
#>  5 setosa 
#>  6 setosa 
#>  7 setosa 
#>  8 setosa 
#>  9 setosa 
#> 10 setosa 
#> # … with 140 more rows
#> 
#> $engine
#> Formula Engine: 
#>  
#> # Predictors: 5 
#>   # Outcomes: 1 
#>    Intercept: FALSE 
#>   Indicators: TRUE 
#> 
#> $extras
#> $extras$offset
#> NULL

Created on 2019-03-01 by the reprex package (v0.2.1.9000)

standard helpers for output lists

returned from the forge-process-predictors and forge-process-outcomes functions (the output is assigned directly to either predictors or outcomes, so the name isn't that important; a constructor sketch follows the structures below):

list(
    engine = engine,
    output = list(
      data = data,
      extras = NULL
    )
  )

returned from forge-process functions:

list(
    engine = engine,
    predictors = .predictors,
    outcomes = .outcomes
  )

returned from forge-clean

list(
    engine = engine,
    new_data = new_data
  )

returned from mold-process-predictors/outcomes:

list(
    engine = engine,
    output = list(
      data = data,
      info = info,
      extras = NULL
    )
  )

returned from mold-process:

list(
    engine = engine,
    predictors = predictors,
    outcomes = outcomes
  )

returned from mold-clean

list(
    engine = engine,
    data = data
  )

returned from mold-clean-xy

list(
    engine = engine,
    x = x,
    y = y
  )
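
One way to standardize these as small constructor helpers, using the mold-process-predictors/outcomes shape as an example (names here are illustrative, not the hardhat API):

new_mold_process_output <- function(engine, data, info = NULL, extras = NULL) {
  # Standard return shape shared by the mold-process-predictors/outcomes steps
  list(
    engine = engine,
    output = list(
      data = data,
      info = info,
      extras = extras
    )
  )
}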

new_data factors with a subset of original factor levels

Posted from slack:

So here is a question for something forge() could do. Say that in the fit function you had a factor f with levels c("a", "b", "c").

Then you went to predict one new value, and it had that same factor predictor f, but it happened to contain only the level "a".

I don’t think you want forge() to fail here, but I also don’t think you want it to do…nothing.

We have all of the information required to recode that factor using factor(<new_data_factor>, levels = <original_levels>) and then pass it along if required.

Does this seem sensible? Note that we still warn and coerce new factor levels to NA; this case is specifically when the factor has a subset of the original levels.
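
A minimal sketch of that recoding:

relevel_to_original <- function(new_col, original_levels) {
  # Expand a new factor back to the full set of levels seen at fit time
  factor(new_col, levels = original_levels)
}

relevel_to_original(factor("a"), original_levels = c("a", "b", "c"))
#> [1] a
#> Levels: a b c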

new_default_formula_engine() helpers

So we need default_formula_engine().

This might mean the constructor needs the full argument set again so that it can be subclassed. That would also simplify refresh_engine().

forge(outcomes = TRUE) when using the XY method

Since I add a default column name, .outcome, to y when it is converted to a tibble, I could just have forge() look for a column named .outcome in new_data. This would require no extra effort on my part; it would already work if I didn't prematurely error out.

This would also allow the user to pass in a data frame for y (where you obviously know the column name for the outcome) and then request outcomes to be processed in forge(). (You currently can't do this because that also goes through the XY method.)

The only extra thing I would do is add a special check for when forge(outcomes = TRUE) is requested and the ".outcome" column doesn't exist in new_data. It would make it very clear that the user passed a vector to y, that the vector was given the name .outcome, and that this is what forge() is looking for.
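
A sketch of that special check (hypothetical helper name):

check_outcome_column <- function(new_data) {
  if (!".outcome" %in% names(new_data)) {
    stop(
      "`forge(outcomes = TRUE)` with the XY method requires a `.outcome` column ",
      "in `new_data`. The vector passed as `y` at fit time was stored under ",
      "that name.",
      call. = FALSE
    )
  }
  invisible(new_data)
}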

Data first functions

Prefer scream(new_data, preprocessor, outcome) over scream(preprocessor, new_data, outcome).
