
jtools


This package consists of a series of functions created by the author (Jacob) to automate otherwise tedious research tasks. At this juncture, the unifying theme is the more efficient presentation of regression analyses. There are a number of functions for other programming and statistical purposes as well. Support for the survey package’s svyglm objects as well as weighted regressions is a common theme throughout.

Notice: As of jtools version 2.0.0, all functions dealing with interactions (e.g., interact_plot(), sim_slopes(), johnson_neyman()) have been moved to a new package, aptly named interactions.

Installation

For the most stable version, simply install from CRAN.

install.packages("jtools")

If you want the latest features and bug fixes, you can install from GitHub. To do that, you will need to have devtools installed if you don’t already:

install.packages("devtools")

Then install the package from GitHub.

devtools::install_github("jacob-long/jtools")

To see what features are on the roadmap, check the issues section of the repository, especially the “enhancement” tag. Closed issues may be of interest, too, since they may be fixed in the GitHub version but not yet submitted to CRAN.

Usage

Here’s a synopsis of the current functions in the package:

Console regression summaries (summ())

summ() is a replacement for summary() that provides the user several options for formatting regression summaries. It supports glm, svyglm, and merMod objects as input as well. It supports calculation and reporting of robust standard errors via the sandwich package.

Basic use:

data(movies)
fit <- lm(metascore ~ budget + us_gross + year, data = movies)
summ(fit)
#> MODEL INFO:
#> Observations: 831 (10 missing obs. deleted)
#> Dependent Variable: metascore
#> Type: OLS linear regression 
#> 
#> MODEL FIT:
#> F(3,827) = 26.23, p = 0.00
#> R² = 0.09
#> Adj. R² = 0.08 
#> 
#> Standard errors: OLS
#> --------------------------------------------------
#>                      Est.     S.E.   t val.      p
#> ----------------- ------- -------- -------- ------
#> (Intercept)         52.06   139.67     0.37   0.71
#> budget              -0.00     0.00    -5.89   0.00
#> us_gross             0.00     0.00     7.61   0.00
#> year                 0.01     0.07     0.08   0.94
#> --------------------------------------------------

It has several conveniences, like re-fitting your model with scaled variables (scale = TRUE). You have the option to leave the outcome variable in its original scale (transform.response = TRUE), which is the default for scaled models. I’m a fan of Andrew Gelman’s 2 SD standardization method, so you can specify by how many standard deviations you would like to rescale (n.sd = 2).
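
For instance, combining these options (a minimal sketch, reusing the fit object from above):

```r
# Refit with predictors rescaled by 2 standard deviations, per Gelman's
# method, leaving the outcome variable in its original scale
summ(fit, scale = TRUE, n.sd = 2)
```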

You can also get variance inflation factors (VIFs) and partial/semipartial (AKA part) correlations. Partial correlations are only available for OLS models. You may also substitute confidence intervals in place of standard errors and you can choose whether to show p values.

summ(fit, scale = TRUE, vifs = TRUE, part.corr = TRUE, confint = TRUE, pvals = FALSE)
#> MODEL INFO:
#> Observations: 831 (10 missing obs. deleted)
#> Dependent Variable: metascore
#> Type: OLS linear regression 
#> 
#> MODEL FIT:
#> F(3,827) = 26.23, p = 0.00
#> R² = 0.09
#> Adj. R² = 0.08 
#> 
#> Standard errors: OLS
#> ------------------------------------------------------------------------------
#>                      Est.    2.5%   97.5%   t val.    VIF   partial.r   part.r
#> ----------------- ------- ------- ------- -------- ------ ----------- --------
#> (Intercept)         63.01   61.91   64.11   112.23                            
#> budget              -3.78   -5.05   -2.52    -5.89   1.31       -0.20    -0.20
#> us_gross             5.28    3.92    6.64     7.61   1.52        0.26     0.25
#> year                 0.05   -1.18    1.28     0.08   1.24        0.00     0.00
#> ------------------------------------------------------------------------------
#> 
#> Continuous predictors are mean-centered and scaled by 1 s.d.

Cluster-robust standard errors:

data("PetersenCL", package = "sandwich")
fit2 <- lm(y ~ x, data = PetersenCL)
summ(fit2, robust = "HC3", cluster = "firm")
#> MODEL INFO:
#> Observations: 5000
#> Dependent Variable: y
#> Type: OLS linear regression 
#> 
#> MODEL FIT:
#> F(1,4998) = 1310.74, p = 0.00
#> R² = 0.21
#> Adj. R² = 0.21 
#> 
#> Standard errors: Cluster-robust, type = HC3
#> -----------------------------------------------
#>                     Est.   S.E.   t val.      p
#> ----------------- ------ ------ -------- ------
#> (Intercept)         0.03   0.07     0.44   0.66
#> x                   1.03   0.05    20.36   0.00
#> -----------------------------------------------

Of course, summ(), like summary(), is best suited for interactive use. When it comes to sharing results with others, you want sharper output and probably graphics. jtools has some options for that, too.

LaTeX-, Word-, and RMarkdown-friendly regression summary tables (export_summs())

For tabular output, export_summs() is an interface to the huxtable package’s huxreg() function that preserves the niceties of summ(), particularly its facilities for robust standard errors and standardization. It also concatenates multiple models into a single table.

fit <- lm(metascore ~ log(budget), data = movies)
fit_b <- lm(metascore ~ log(budget) + log(us_gross), data = movies)
fit_c <- lm(metascore ~ log(budget) + log(us_gross) + runtime, data = movies)
coef_names <- c("Budget" = "log(budget)", "US Gross" = "log(us_gross)",
                "Runtime (Hours)" = "runtime", "Constant" = "(Intercept)")
export_summs(fit, fit_b, fit_c, robust = "HC3", coefs = coef_names)
-----------------------------------------------------------
                     Model 1       Model 2       Model 3   
-----------------------------------------------------------
Budget                -2.43 ***     -5.16 ***     -6.70 ***
                      (0.44)        (0.62)        (0.67)   
US Gross                             3.96 ***      3.85 ***
                                    (0.51)        (0.48)   
Runtime (Hours)                                   14.29 ***
                                                  (1.63)   
Constant             105.29 ***     81.84 ***     83.35 ***
                      (7.65)        (8.66)        (8.82)   
-----------------------------------------------------------
N                    831           831           831       
R2                     0.03          0.09          0.17    
-----------------------------------------------------------
Standard errors are heteroskedasticity robust. *** p < 0.001; ** p < 0.01; * p < 0.05.

In RMarkdown documents, using export_summs() and the chunk option results = 'asis' will give you nice-looking tables in HTML and PDF output. Using the to.word = TRUE argument will create a Microsoft Word document with the table in it.
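
As a sketch, a chunk in your .Rmd source might look like the following (the chunk delimiters are shown as comments, since they belong to the R Markdown document rather than to R itself):

```r
# ```{r, results = 'asis'}
export_summs(fit, fit_b, fit_c, robust = "HC3", coefs = coef_names)
# ```

# Or, using the to.word argument to write the table to a Word document:
export_summs(fit, fit_b, fit_c, to.word = TRUE)
```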

Plotting regression summaries (plot_coefs() and plot_summs())

Another way to get a quick gist of your regression analysis is to plot the values of the coefficients and their corresponding uncertainties with plot_summs() (or the closely related plot_coefs()). Like with export_summs(), you can still get your scaled models and robust standard errors.

coef_names <- coef_names[1:3] # Dropping intercept for plots
plot_summs(fit, fit_b, fit_c, robust = "HC3", coefs = coef_names)

And since you get a ggplot object in return, you can tweak and theme as you wish.
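
For example (a sketch; the theme and title are illustrative choices, not defaults):

```r
library(ggplot2)

# plot_summs() returns a ggplot object, so layers and themes stack as usual
p <- plot_summs(fit, fit_b, fit_c, robust = "HC3", coefs = coef_names)
p + theme_minimal() + labs(title = "Predictors of Metacritic score")
```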

Another way to visualize the uncertainty of your coefficients is via the plot.distributions argument.

plot_summs(fit_c, robust = "HC3", coefs = coef_names, plot.distributions = TRUE)

These show the 95% interval width of a normal distribution for each estimate.

plot_coefs() works much the same way, but without support for summ() arguments like robust and scale. In exchange, it supports a wider range of models: anything the broom package can tidy, even if summ() cannot handle it.

Plotting model predictions (effect_plot())

Sometimes the best way to understand your model is to look at the predictions it generates. Rather than look at coefficients, effect_plot() lets you plot predictions across values of a predictor variable alongside the observed data.

effect_plot(fit_c, pred = runtime, interval = TRUE, plot.points = TRUE)
#> Using data movies from global environment. This could cause incorrect results if movies has been altered since the model was fit.
#> You can manually provide the data to the "data =" argument.

#> Warning: Removed 10 rows containing missing values (geom_point).

And a new feature in version 2.0.0 lets you plot partial residuals instead of the raw observed data, allowing you to assess model quality after accounting for effects of control variables.

effect_plot(fit_c, pred = runtime, interval = TRUE, partial.residuals = TRUE)
#> Using data movies from global environment. This could cause incorrect results if movies has been altered since the model was fit.
#> You can manually provide the data to the "data =" argument.

Categorical predictors, polynomial terms, (G)LM(M)s, weighted data, and much more are supported.

Other stuff

There are several other things that might interest you.

  • gscale(): Scale and/or mean-center data, including svydesign objects
  • scale_mod() and center_mod(): Re-fit models with scaled and/or mean-centered data
  • wgttest() and pf_sv_test(), which are combined in weights_tests(): Test the ignorability of sample weights in regression models
  • svycor(): Generate correlation matrices from svydesign objects
  • theme_apa(): A mostly APA-compliant ggplot2 theme
  • theme_nice(): A nice ggplot2 theme
  • add_gridlines() and drop_gridlines(): ggplot2 theme-changing convenience functions
  • make_predictions(): an easy way to generate hypothetical predicted data from your regression model for plotting or other purposes.
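
As a quick sketch of gscale() (the dataset and variable choices are illustrative; see ?gscale for the full interface):

```r
# Mean-center and scale mpg and hp by 2 standard deviations,
# leaving the other columns untouched
scaled <- gscale(data = mtcars, vars = c("mpg", "hp"), n.sd = 2)
head(scaled[c("mpg", "hp")])
```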

Details on the arguments can be accessed via the R documentation (?functionname). There are now vignettes documenting just about everything you can do as well.

Contributing

I’m happy to receive bug reports, suggestions, questions, and (most of all) contributions to fix problems and add features. I prefer you use the Github issues system over trying to reach out to me in other ways. Pull requests for contributions are encouraged. If you are considering writing up a bug fix or new feature, please check out the contributing guidelines.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

License

This package is licensed under the GPLv3 license or any later version.

Contributors

alanocallaghan, dmurdoch, jacob-long, ngreifer, nickch-k, victorhartman


jtools's Issues

Adjusting axis in johnson neyman plots?

Hello all,

I'm loving jtools and just had a (hopefully) quick question about how to plot a three-way interaction and adjust the axis on the graph. I was using sim_slopes with jnplot = TRUE to depict my three-way interaction and region of significance... but I wondered how to adjust the axes?

I know this can be done in the johnson_neyman function (but from my read, that didn't allow a three-way interaction graph). Is there a simple argument to add to my sim_slopes call to change things? I tried ylim = c(-3, 3) at the end of the call... but that didn't work. Other thoughts?

Any advice is greatly appreciated!

Thanks much!
Jamie.

Facilitate annotating `plot_coefs`

I'd like to make it possible for users to put written annotations on plot_coefs/plot_summs output. I'm thinking about putting the slope, maybe test statistic, standard error, p-value, confidence interval in any combination.

My first thought is to use glue and let users customize by providing a string a la huxtable::huxreg. So as a default the string would be something like "{estimate} ({standard.error}), t = {statistic}". That part will be easier to implement than the actual plotting of the text.

Install had non-zero exit status

Hello, I installed 'jtools' and it worked once, but when installing 'ggstance' to use the plot_summ command, 'jtools' will no longer load and I received error as: "Warning in install.packages : installation of package ‘jtools’ had non-zero exit status"

I have also tried installing with devtools as: devtools::install_github("jacob-long/jtools") and still receive the same error!

Any help you can provide is greatly appreciated!

Jon

Better knitr output for single summ outputs

export_summs is not what I'm thinking here, but instead something with the information density of summ's interactive output with the polish of huxtable/kable/kableExtra in outputted documents.

The main difficulty is figuring out how to marry the model metadata (N, DV, model fit, etc.) to the coefficient table(s) since they will have varying numbers of columns.

Allow scale_mod options in summ

I'd like it if summ() could have the argument binary.inputs that is in scale_mod so I can choose how I want to scale my binary predictors. As of now, I don't think it's possible to scale them in summ, though it is possible in scale_mod, which seems a little asymmetrical.

Johnson-Neyman technique with multilevel equations

Hi,

I'm happy to have found this amazing library, which makes Johnson-Neyman analyses easy to run.

However, I'm having trouble running them with multilevel models.

Here is the pseudo code:

  library('lme4')
  modJN1 <- glmer(IMP ~ ACQ + DEF + (ACQ+DEF|teamid) + RADIC + INCRE +
                    RADIC:ACQ + RADIC:DEF + INCRE:ACQ + INCRE:DEF,
                  data = rawSEM_recursive, REML = T)
  sim_slopes(modJN1, pred = RADIC, modx = ACQ, jnplot = TRUE)

As a result, I got negative Hessian error messages. If I need to run a Johnson-Neyman analysis with multilevel models, what should I do?

Best,
Seongho

Adaptation of `plot_summs` to logistic regression - show odds ratio instead of estimates

Lately, I've used the function to plot logistic regressions, but instead of using the estimates themselves, I wanted to use the odds ratio (i.e., exp(coef)), which in some sense is reasonable when you want to illustrate "the meaning" of the coefficients rather than the coefficients themselves.

This required me to manually change the labels of the x-axis using breaks and labels in scale_x_continuous(...). Perhaps this is a reasonable feature to add.

Argument "at" in effect_plot

Thanks a lot for the package. Super nice graphs with a minimum of effort - that is nice.
I ran a logistic regression and find the effect plot really helpful. As I understand it, the mean for continuous variables and the reference category for categorical variables are used.

I wanted to set the values of some control variables with the argument "at" and created the named list. I get the same results either way, though (the variables I want to change are factors, btw), so it seems "at" is being ignored.

data(bacteria)
l_mod <- glm(y ~ trt + week, data = bacteria, family = binomial)
summ(l_mod)

effect_plot(l_mod, pred = trt, centered = "none", interval = TRUE, y.label = "% testing positive")
ex  <- list(week = 11)
effect_plot(l_mod, pred = trt, centered = "none", at = ex, interval = TRUE, y.label = "% testing positive")

export_summs robust expects logical

I am trying to use export_summs to print out the results of an ordinary linear regression with HC1 robust standard errors...

results = lm(Sepal.Width ~ Species + Sepal.Length, iris)
# summ(results, robust="HC1") # for reference
export_summs(results, robust="HC1")

... which returns the following errors:

Error in if (scale == TRUE) { : missing value where TRUE/FALSE needed
In addition: Warning message:
In all(robust) : coercing argument of type 'character' to logical

When I input export_summs(results, robust=TRUE), it gives me the HC3 errors. I also tried first running set_summ_default(robust="HC1"), but this doesn't give the HC1 errors either.

When I input export_summs(results, robust.type="HC1"), I am able to get HC1 and the following (harmless) warnings:

Warning messages:
1: The robust.type argument is deprecated. Please specify the type as the value for the 'robust' argument instead.
2: The robust.type argument is deprecated. Please specify the type as the value for the 'robust' argument instead.

If it helps, I'm getting this issue with jtools v.2.0.1 on R v.3.6.0 and v3.5.2.

Additional models to support with summ

My general philosophy goes like this:

  • The model's output needs to be relatively predictable (this is why I have not supported lavaan, which can be endlessly complicated and used for very different purposes)
  • The model should be regression or similar — summ will not handle other kinds of input, like data.frames or the like. skimr is a package that does those things well.
  • summ should be able to offer added value above and beyond summary

With that said, models I definitely plan to support are:

  • lme

Still thinking about/auditing:

  • brmsfit — worried about variation in output due to wide variety of options, unsure if summ can add benefit since refitting models isn't feasible.
  • stanreg — less concern about variation than with brmsfit, but "added value" concern remains
  • polr — Need to look more closely at the interface, make sure I know enough to make a good summary. Need to think about how to plot predictions from these models (same goes for ordinal package models), but that isn't essential.

Checklist of models I plan to add barring complications as I implement them (and outside contributors may feel free to do a pull request for one of these):

  • lme (others in nlme?)
  • glmmTMB

Support for quantreg package

Fantastic package, I was wondering how much wrangling would be needed to support quantile regression models generated by the quantreg package?

The output is in a very similar format to the stats regression functions, but there's obviously something stopping it from getting all the way:

library(quantreg)
library(jtools)

qrModel = rq(Income ~ Frost + Illiteracy + Murder, data = as.data.frame(state.x77))

summ(qrModel)

Error in `[.data.frame`(coefs, , c("std.error", "statistic", "p.value")) : 
  undefined columns selected
Error in summ.default(qrModel) : 
  Could not find a way to extract coefficients via broom::tidy or coeftest.

changing the range of x-axis in plot_summs

Dear all,

I'm new to jtools, but really like this package. I'm wondering if we can set the range of the x-axis ourselves when we use plot_summs to report our regression results. We have results for two different samples but would like to compare them visually, so it would be great if the two plots could have the same scale on the x-axis. Is there any way to do it? If not, can any other packages do such things? Thanks!

Implement argument validation

Very few of the functions check to see if the arguments given by the user satisfy the requirements laid out in the documentation. In many cases, the user will only learn by having one of the underlying functions fail and throw an error that may not make the source of the problem clear.

Add sim_slopes to have label functionality

Following the vignette:
library(survey)
library(jtools)
data(api)
dstrat<-svydesign(id=~1,strata=~stype, weights=~pw, data=apistrat, fpc=~fpc)
regmodel3 <- survey::svyglm(api00 ~ avg.ed * growth * enroll, design = dstrat)
sim_slopes(regmodel3, pred = growth, modx = avg.ed, mod2 = enroll,
jnplot = TRUE, mod2.labels=c("test", "test1", "test2"))

Would be great to be able to label those as well! :) Sorry for all the issues, just trying to use your package for some visualizations for publications and want it to be clean looking!

Rename Facets in 3 Way Interactions

Is there a way to rename the facets of interact_plot?

data <- mtcars
model3<- lm(data=data, mpg ~ wt * hp * drat)
jtools::interact_plot(model3, pred = "wt",
modx = "hp", mod2="drat", jnplot="true",
x.label="X Label", y.label="Y Label",
legend.main="Legend Title")

It's kind of clunky to have it say "Mean of drat - 1 SD" "Mean of drat" "Mean of drat + 1 SD"
example

Also - at least for me, trying to add : mod2.labels=c("Test1", "Test2", "Test3") doesn't help.

Adjusted pseudo r squared mcfadden

I suggest including McFadden's adjusted pseudo R-squared in the statistics reported by export_summs. Currently, McFadden's pseudo R-squared is reported if pseudo.r.squared.mcfadden is included in statistics.

Rmarkdown pdf does not compile

Mac OS; latest versions RStudio/R

Poisson glm model.

Call jtools functions summ() and effect_plot().

All chunks returns expected output in Console.

Knitting to PDF fails.

/Applications/RStudio.app/Contents/MacOS/pandoc/pandoc +RTS -K512m -RTS Enrolled.utf8.md --to latex --from markdown+autolink_bare_uris+ascii_identifiers+tex_math_single_backslash --output Enrolled.tex --template /Library/Frameworks/R.framework/Versions/3.5/Resources/library/rmarkdown/rmd/latex/default-1.17.0.2.tex --highlight-style tango --latex-engine pdflatex --variable graphics=yes --variable 'compact-title:yes'
output file: Enrolled.knit.md

This is pdfTeX, Version 3.14159265-2.6-1.40.19 (TeX Live 2018) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
! Package inputenc Error: Unicode character χ (U+3C7)
(inputenc) not set up for use with LaTeX.

Try other LaTeX engines instead (e.g., xelatex) if you are using pdflatex. For R Markdown users, see https://bookdown.org/yihui/rmarkdown/pdf-document.html
Error: Failed to compile Enrolled.tex. See https://yihui.name/tinytex/r/#debugging for debugging tips. See Enrolled.log for more info.
Execution halted

Have done all suggested debugging
Enrolled.Rmd.txt

Knitting successful to MS word.

If chunks with jtools function calls set to eval = F, PDF compiles.

Data files attached.

.Rmd file attached in txt format.

Nathan
ForNathanFig1eligible_20190418.xlsx
ForNathanFig1enrolled_20190418.xlsx

Enrolled.Rmd.txt

Vignette for LaTeX?

I'm trying to produce a regression table for use in creating a paper with LaTeX. My model fits are all inside a list, which export_summs() deals with just fine. Help tells me to look up the documentation for latex tables with the huxtable package, but it is not straightforward to see what to do there at all. Is it possible to have a vignette that illustrates the export of LaTeX regression tables (including those with multiple models)?

Extract r squared?

summ() works great for my svyglm models, but I'd like to extract the R^2 statistics which I see printed to the console but I can't seem to find in the object.

Handle multi-column dependent variables in prediction functions

This seems to be mainly/only relevant for binomial family GLMs where the left side of the formula includes both a trials and successes column. As it is, I believe make_predictions uses one or the other's variable name while the values themselves are probabilities.

Dynamic predictor in effect_plot and interact_plot

It'd be nice if it were possible to pass in the predictor for effect_plot and interact_plot as a string variable, since one might want to look at the effects of each variable in a model without having to explicitly type out each predictor.

Something like

for(var in colnames(df)){
    effect_plot(model,pred=var)
}

Unexpected behaviour writing plot_summs to PDF

I seem to be unable to write plot_summs to PDF in a similar way to how I did pre-1.0

The following works when copy and pasted into the command line, but when run from a script seems to return an empty file.

states <- as.data.frame(state.x77)
fit1 <- lm(Income ~ Frost, data = states)
pdf('test.pdf')
plot_summs(fit1)
dev.off()

panelR summary and tables

I tried to create tables from models generated through the package panelR and it did not work.

Is there plans to integrate this package with the panelR in order to generate regression tables?

Thank you in advance

lmer output

Hi there!

Just saw your package today and have been really enjoying it. I use lmer (from lme4), and the only way I can get summ() to output is if I put "pvals=FALSE". Is there any way you could implement P values into the output (perhaps in the same vein as lmerTest)? Thanks again, this is an amazing package!

Adam

saving johnson neyman plot

Hello! I am trying to save a JN plot in high resolution for a poster using ggsave, but I saw that this error occurs when trying to use ggsave:

Error in UseMethod("grid.draw") :
no applicable method for 'grid.draw' applied to an object of class "johnson_neyman"

Do you recommend another way to save the plot so it is not blurry for the poster?

My other question is whether it is possible to modify the JN plots using theme()? I am trying to make the numbers of the axis in Arial and bolded. I am able to do so for interact_plot but not for the JN plots.

Thank you!

effects from 2 models in 1 plots

Hi Jacob,

Is there a way to plot the effects of two models on one plot? For example, I have one model that is positive_affect ~ age, and one that is positive_affect~age*well-being. Is there a way to overlay the effect of age with and without controlling for well-being in one plot?

Thanks,
Daisy

Handle offsets, transformed data with j_summ standardization/centering

Problem: I extract model frame, which has different names than what are called in the original model call.

Example: formula is output ~ log(input). Model frame has columns output and log(input). When I call update with modified model frame given as data frame, lm or glm looks for a variable called input so it can apply the log function to it.

Formula environment causing error

See this example:

X <- rnorm(100)
Y <- rnorm(100)
F <- function(f) summ(glm(f))
F(Y ~ X)

## Error: object of type 'symbol' is not subsettable 

The traceback is:

10. notnull(x$formula) 
9. formula.default(object, env = baseenv()) 
8. formula(object, env = baseenv()) 
7. as.formula(old) 
6. update.formula(call$formula, formula) 
5. j_update(object, formula = form, weights = .weights, offset = .offset, 
    data = frame) 
4. pR2(model) 
3. summ.glm(glm(f)) 
2. summ(glm(f)) 
1. F(Y ~ X) 

I think this is occurring because the environment in which you are evaluating the formula when decomposing the glm object is not the same environment with the data. Note that this occurs even when a data set is included in glm. This isn't a huge problem but points to a possible underlying issue. The way around it is to pass the glm object to F() rather than just the formula.

summ error with start values in glm

Sorry to be giving you more to work on. summ gives a bad error when starting values are supplied to glm(). This is required when using glm with some links. It seems to require start values of length 1, even when there are two parameters in the model. summary() works fine, though. Setting only one start value (appropriately) yields an error in glm.

data("mpg", package = "ggplot2")
fit <- glm(cty ~ cyl, data = mpg, start = c(1,1))
jtools::summ(fit)
#> Error in glm.fit(x = structure(c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, : length of 'start' should equal 1 and correspond to initial coefs for "(Intercept)"
summary(fit) #works
#> 
#> Call:
#> glm(formula = cty ~ cyl, data = mpg, start = c(1, 1))
#> 
#> Deviance Residuals: 
#>     Min       1Q   Median       3Q      Max  
#> -5.8785  -1.6225   0.1215   1.3775  14.1215  
#> 
#> Coefficients:
#>             Estimate Std. Error t value Pr(>|t|)    
#> (Intercept)  29.3904     0.6268   46.89   <2e-16 ***
#> cyl          -2.1280     0.1027  -20.72   <2e-16 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> (Dispersion parameter for gaussian family taken to be 6.380225)
#> 
#>     Null deviance: 4220.3  on 233  degrees of freedom
#> Residual deviance: 1480.2  on 232  degrees of freedom
#> AIC: 1101.7
#> 
#> Number of Fisher Scoring iterations: 2
fit <- glm(cty ~ cyl, data = mpg, start = c(1))
#> Error in glm.fit(x = structure(c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, : length of 'start' should equal 2 and correspond to initial coefs for c("(Intercept)", "cyl")

Created on 2019-05-31 by the reprex package (v0.3.0)

Effect_plot question

Hello,
I would like to overlay three different plots I created with effect_plot, predicting the effect of three different glms. Is it possible? I am trying to add one effect_plot to the other and I obtain: 'I don't know how to add effect_plot.....to a plot' (me neither my dear R).
Can anybody help?

Error in plot_summs function in v1.0.0

The latest update to jtools (v.1.0.0) now creates error when trying to use the plot_summs function. For instance, this simple example demonstrates the issue:

require('jtools')
set.seed(1234)
y <- rbinom(n = 1000, size = 1, prob = 0.4)  
x <- rbinom(n = 1000, size = 1, prob = 0.5)  
df <- data.frame(x, y)  
model1 <- glm(y ~ x, family = binomial(link = "logit"), data = df)  
plot_summs(model1, exp = TRUE)

> Error in stop_wrap("Install the ggstance package to use the plot_coefs function.") : 
  Install the ggstance package to use the plot_coefs function. 

Installing ggstance solves the problem on my OSX machine but not a colleague's PC running windows, which then errors with the following:

> Error in getS3method("tidy", method_stub, envir = getNamespace("broom")) : 
  unused argument (envir = getNamespace("broom"))

(Un)reinstalling broom does not solve the issue.

Pseudo R2 in export_summs

Hi,

We have tried many permutations of export_summs, statistics="all", lists, etc., to get an export with the pseudo R-squared from a GLM model, but have had no luck. Is including pseudo R-squared possible with export_summs?

Clustered Standard Errors

I am using the interact_plot function. I wish to use clustered standard errors (or other alternative standard errors in general), rather than the default SE in the "lm" function, in computing the confidence intervals. Is this doable?
