This is a collection of tools that the author (Jacob) has written for the purpose of more efficiently understanding and sharing the results of (primarily) regression analyses. There are a number of functions focused specifically on the interpretation and presentation of interactions. Just about everything supports models from the survey package.
This package consists of a series of functions created by the author
(Jacob) to automate otherwise tedious research tasks. At this juncture,
the unifying theme is the more efficient presentation of regression
analyses. There are a number of functions for visualizing and doing
inference for interaction terms. Support for svyglm objects as well as weighted regressions is a common theme throughout.
Note: This is beta software. Bugs are possible, both in terms of code-breaking errors and more pernicious errors of mistaken computation.
For the most stable version, simply install from CRAN.
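That is, a single command:

install.packages("jtools")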
If you want the latest features and bug fixes, you can install from GitHub. To do that you will need to have devtools installed, if you don't already. Then install the package from GitHub.
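For instance (a minimal sketch; the repository path jacob-long/jtools is assumed here):

# install devtools if you don't have it yet
install.packages("devtools")
# then install the development version of jtools
devtools::install_github("jacob-long/jtools")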
You should also check out the
dev branch of this
repository for the latest and greatest changes, but also the latest and
greatest bugs. To see what features are on the roadmap, check the issues
section of the repository, especially the “enhancement” tag.
Here’s a synopsis of the current functions in the package:
summ is a replacement for
summary that provides the user several
options for formatting regression summaries. It supports
merMod objects as input as well. It supports calculation
and reporting of robust standard errors via the sandwich package.
fit <- lm(mpg ~ hp + wt, data = mtcars)
summ(fit)
#> MODEL INFO:
#> Observations: 32
#> Dependent Variable: mpg
#> Type: OLS linear regression
#>
#> MODEL FIT:
#> F(2,29) = 69.21, p = 0.00
#> R² = 0.83
#> Adj. R² = 0.81
#>
#> Standard errors: OLS
#>                Est.   S.E.   t val.      p
#> (Intercept)   37.23   1.60    23.28   0.00 ***
#> hp            -0.03   0.01    -3.52   0.00 **
#> wt            -3.88   0.63    -6.13   0.00 ***
It has several conveniences, like re-fitting your model with scaled predictors (scale = TRUE). You have the option to leave the outcome variable in its original scale (transform.response = FALSE), which is the default for scaled models. I'm a fan of Andrew Gelman's 2 SD standardization method, so you can specify by how many standard deviations you would like to rescale (n.sd = 2).
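For instance, a quick sketch reusing the model from above (output omitted):

summ(fit, scale = TRUE, n.sd = 2)  # rescale predictors by 2 standard deviations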
You can also get variance inflation factors (VIFs) and partial/semipartial (AKA part) correlations. Partial correlations are only available for OLS models. You may also substitute confidence intervals in place of standard errors and you can choose whether to show p values.
summ(fit, scale = TRUE, vifs = TRUE, part.corr = TRUE, confint = TRUE, pvals = FALSE)
#> MODEL INFO:
#> Observations: 32
#> Dependent Variable: mpg
#> Type: OLS linear regression
#>
#> MODEL FIT:
#> F(2,29) = 69.21, p = 0.00
#> R² = 0.83
#> Adj. R² = 0.81
#>
#> Standard errors: OLS
#>                Est.    2.5%   97.5%   t val.    VIF   partial.r   part.r
#> (Intercept)   20.09   19.15   21.03    43.82   <NA>        <NA>     <NA>
#> hp            -2.18   -3.44   -0.91    -3.52   1.77       -0.55    -0.27
#> wt            -3.79   -5.06   -2.53    -6.13   1.77       -0.75    -0.47
#>
#> Continuous predictors are mean-centered and scaled by 1 s.d.
Cluster-robust standard errors:
data("PetersenCL", package = "sandwich")fit2 <- lm(y ~ x, data = PetersenCL)summ(fit2, robust = "HC3", cluster = "firm")
#> MODEL INFO:
#> Observations: 5000
#> Dependent Variable: y
#> Type: OLS linear regression
#>
#> MODEL FIT:
#> F(1,4998) = 1310.74, p = 0.00
#> R² = 0.21
#> Adj. R² = 0.21
#>
#> Standard errors: Cluster-robust, type = HC3
#>                Est.   S.E.   t val.      p
#> (Intercept)   0.03   0.07     0.44   0.66
#> x             1.03   0.05    20.36   0.00 ***
That kind of summary is best-suited for interactive use.
When it comes to sharing results with others, you want sharper output
and probably graphics.
jtools has some options for that, too.
First, for tabular output,
export_summs is an interface to the
huxreg function that preserves the niceties of
summ, particularly its facilities for robust standard errors and
standardization. It also concatenates multiple models into a single
fit <- lm(mpg ~ hp + wt, data = mtcars)
fit_b <- lm(mpg ~ hp + wt + disp, data = mtcars)
fit_c <- lm(mpg ~ hp + wt + disp + drat, data = mtcars)
coef_names <- c("Horsepower" = "hp", "Weight (tons)" = "wt",
                "Displacement" = "disp", "Rear axle ratio" = "drat",
                "Constant" = "(Intercept)")
export_summs(fit, fit_b, fit_c, scale = TRUE,
             transform.response = TRUE, coefs = coef_names)
(The rendered table lists all three models side by side with the named coefficients, including Rear axle ratio, and the significance note: *** p < 0.001; ** p < 0.01; * p < 0.05.)
In RMarkdown documents, using
export_summs and the chunk option
results = 'asis' will give you nice-looking tables in HTML and PDF
output. Using the
to.word = TRUE argument will create a Microsoft Word
document with the table in it.
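For example, a hedged sketch of the Word export; the word.file argument for naming the output file is an assumption about this version's interface:

export_summs(fit, fit_b, fit_c, scale = TRUE,
             to.word = TRUE, word.file = "regression_table.docx")  # word.file is assumed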
Another way to get a quick gist of your regression analysis is to plot the values of the coefficients and their corresponding uncertainties with plot_summs (or the closely related plot_coefs). jtools has made some slight changes to ggplot2 geoms to make everything look nice; and like with export_summs, you can still get your scaled models and robust standard errors.
coef_names <- coef_names[1:4] # Dropping intercept for plots
plot_summs(fit, fit_b, fit_c, scale = TRUE, robust = "HC3", coefs = coef_names)
And since you get a
ggplot object in return, you can tweak and theme
as you wish.
Another way to visualize the uncertainty of your coefficients is via the plot.distributions argument:
plot_summs(fit_c, scale = TRUE, robust = "HC3", coefs = coef_names, plot.distributions = TRUE)
These show the 95% interval width of a normal distribution for each estimate.
plot_coefs works much the same way, but without support for summ features like scale. This enables a wider range of models that have support from the broom package but not from summ.
Unless you have a really keen eye and good familiarity with both the underlying mathematics and the scale of your variables, it can be very difficult to look at the output of a regression model that includes an interaction and actually understand what the model is telling you.
This package contains several means of aiding understanding and doing statistical inference with interactions.
The “classic” way of probing an interaction effect is to calculate the slope of the focal predictor at different values of the moderator. When the moderator is binary, this is especially informative—e.g., what is the slope for men vs. women? But you can also arbitrarily choose points for continuous moderators.
With that said, the more statistically rigorous way to explore these effects is to find the Johnson-Neyman interval, which tells you the range of values of the moderator in which the slope of the predictor is significant vs. nonsignificant at a specified alpha level.
The sim_slopes function will by default find the Johnson-Neyman interval and tell you the predictor's slope at specified values of the moderator; by default, either both values of binary moderators or the mean and the mean +/- one standard deviation for continuous moderators.
fiti <- lm(mpg ~ hp * wt, data = mtcars)
sim_slopes(fiti, pred = hp, modx = wt, jnplot = TRUE)
#> JOHNSON-NEYMAN INTERVAL
#>
#> When wt is OUTSIDE the interval [3.69, 5.90], the slope of hp is p < .05.
#>
#> Note: The range of observed values of wt is [1.51, 5.42]
#> SIMPLE SLOPES ANALYSIS
#>
#> Slope of hp when wt = 4.20 (+ 1 SD):
#>    Est.   S.E.   t val.      p
#>   -0.00   0.01    -0.31   0.76
#>
#> Slope of hp when wt = 3.22 (Mean):
#>    Est.   S.E.   t val.      p
#>   -0.03   0.01    -4.07   0.00
#>
#> Slope of hp when wt = 2.24 (- 1 SD):
#>    Est.   S.E.   t val.      p
#>   -0.06   0.01    -5.66   0.00
The Johnson-Neyman plot can really help you get a handle on what the
interval is telling you, too. Note that you can look at the
Johnson-Neyman interval directly with the johnson_neyman function.
The above all generalize to three-way interactions, too.
The interact_plot function plots two- and three-way interactions using
a similar interface to the aforementioned
sim_slopes function. Users
can customize the appearance with familiar
ggplot2 commands. It
supports several customizations, like confidence intervals.
interact_plot(fiti, pred = hp, modx = wt, interval = TRUE)
You can also plot the observed data for comparison:
interact_plot(fiti, pred = hp, modx = wt, plot.points = TRUE)
The function also supports categorical moderators—plotting observed data in these cases can reveal striking patterns.
fitiris <- lm(Petal.Length ~ Petal.Width * Species, data = iris)
interact_plot(fitiris, pred = Petal.Width, modx = Species, plot.points = TRUE)
You may also combine the plotting and simple slopes functions by using
probe_interaction, which calls both functions simultaneously.
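For instance, a minimal sketch reusing fiti from above; the extra arguments are passed along to the underlying functions:

probe_interaction(fiti, pred = hp, modx = wt, interval = TRUE, jnplot = TRUE)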
Categorical by categorical interactions can be investigated using the cat_plot function.
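A minimal sketch (mtcars2 and fitcat are illustrative names, not from the original):

mtcars2 <- transform(mtcars, cyl = factor(cyl), am = factor(am))  # make factors
fitcat <- lm(mpg ~ cyl * am, data = mtcars2)  # categorical-by-categorical interaction
cat_plot(fitcat, pred = cyl, modx = am)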
There are several other things that might interest you.
effect_plot: Plot predicted lines from regression models without interactions
gscale: Scale and/or mean-center data, including svydesign objects
center_mod: Re-fit models with scaled and/or mean-centered data
weights_tests: Test the ignorability of sample weights in regression models (it combines the wgttest and pf_sv_test procedures)
svycor: Generate correlation matrices from survey design objects
theme_apa: A mostly APA-compliant ggplot2 theme, plus other theme-changing convenience functions
plot_predictions: A direct interface to the internals of effect_plot with some added options
Details on the arguments can be accessed via the R documentation
(?functionname). There are now vignettes documenting just about
everything you can do as well.
I’m happy to receive bug reports, suggestions, questions, and (most of all) contributions to fix problems and add features. I prefer you use the Github issues system over trying to reach out to me in other ways. Pull requests for contributions are encouraged.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
The source code of this package is licensed under the MIT License.
This is a minor release.
plot_predictions had an incorrect default value for interval, causing an error if you used the default arguments with the output of make_predictions. The default is now FALSE.
effect_plot would have errors when the models included covariates (not involved in the interaction, if any) that were non-numeric. That has been corrected. (#41)
Logical variables (TRUE/FALSE) were not handled by the plotting functions appropriately, causing them to be treated as numeric. They are now preserved as logical. (#40)
sim_slopes gave inaccurate results when factor moderators did not have treatment coding ("contr.treatment") but are now recoded to treatment coding.
summ output in RMarkdown documents is now powered by
kableExtra, which (in my opinion) offers more attractive HTML output and seems to have better luck with float placement in PDF documents. Your mileage may vary.
RMarkdown documents using output formats from rmdformats rather than the base formats are now handled properly. S3 methods for generics from suggested packages (e.g., huxtable) will now have conditional namespace registration for users of R 3.6. This shouldn't have much effect on end users.
This release was initially intended to be a bugfix release, but enough other things came up to make it a minor release.
Fixed compatibility with the latest broom update when using plot_coefs.
interact_plot no longer errors if there are missing observations in the original data and quantiles are requested.
For summ.merMod, the default p-value calculation is now via the Satterthwaite method if you have lmerTest installed. The old default, Kenward-Roger, is used by request or when pbkrtest is installed but not lmerTest. Kenward-Roger calculates a different degrees of freedom for each predictor and also a variance-covariance matrix for the model, meaning the standard errors are adjusted as well. It is not the default largely because the computation takes too long for many models.
johnson_neyman now allows you to specify your own critical t value if you are using some alternate method to calculate it.
johnson_neyman now allows you to specify the range of moderator values you want to plot as well as setting a title.
You can now specify moderator values in sim_slopes in a way similar to interact_plot (e.g., modx.values = "plus-minus"). [#31]
plot_summs now supports faceting the coefficients based on user-specified groupings. See the groups argument.
summ variants now have pretty output in RMarkdown documents if you have the huxtable package installed. This can be disabled with the chunk option render = 'normal_print'.
Some arguments have been renamed for clarity, e.g., pred.values in place of predvals. Don't go running to change your code, though; those old argument names will still work, but these new ones are clearer and preferred in new code.
You can now plot sim_slopes objects. Just save your sim_slopes call to an object and call the plot function on that object to see what happens. Basically, it's a coefficient plot of the conditional slopes. If you have huxtable installed, you can now call as_huxtable on a sim_slopes object to get a publication-style table. The interface is comparable to export_summs.
This release has several big changes embedded within, side projects that needed a lot of work to implement and required some user-facing changes. Overall these are improvements, but in some edge cases they could break old code. The following sections are divided by the affected functions. Some of the functions are discussed in more than one section.
These functions no longer re-fit the inputted model to center covariates, impose labels on factors, and so on. This generally has several key positives, including a speed gain (roughly 60% for svyglm and 80% for merMod in my testing, with gains for lm models as well). The speed gains increase as the models become more complicated and the source data become larger. Additionally, if you use functions of variables (e.g., log) in the formula, the plotting functions previously would have a lot of trouble and usually have errors. Now this is supported, provided you input the data used to fit the model via the data argument. You'll receive a warning if the function thinks this is needed to work right.
As noted, there is a new
data argument for these functions. You do not
normally need to use this if your model is fit with a
y ~ x + z type of
formula. But if you start doing things like
y ~ factor(x) + z, then
you need to provide the source data frame. Another benefit is that this
allows for fitting polynomials with
effect_plot or even interactions with
interact_plot. For instance, if my model was fit using
this kind of formula ---
y ~ poly(x, 2) + z --- I could then plot the
predicted curve with
effect_plot(fit, pred = x, data = data) substituting
fit with whatever my model is called and
data with whatever data frame
I used is called.
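To make that concrete, here is a small sketch using mtcars (fit_poly is an illustrative name):

fit_poly <- lm(mpg ~ poly(hp, 2) + wt, data = mtcars)  # polynomial in the formula
effect_plot(fit_poly, pred = hp, data = mtcars)  # data must be supplied here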
There are some possible drawbacks to these changes. One is that factor predictors are no longer supported in interact_plot, even two-level ones. This worked before by coercing them to 0/1 continuous variables and re-fitting the model. Since the model is no longer re-fit, this can't be done. To work around it, either transform the predictor to numeric before fitting the model or use cat_plot. Relatedly, two-level factor covariates are no longer centered and are simply set to their reference value.
Robust confidence intervals: Plotting robust standard errors is supported for compatible models (tested on lm and glm). Just use the robust argument like you would for summ.
Preliminary support for confidence intervals for
merMod models: You may
now get confidence intervals when using
merMod objects as input to the
plotting functions. Of importance, though: the uncertainty is only for
the fixed effects. For now, a warning is printed. See the next section for
another option for
merMod confidence intervals.
Rug plots in the margins: So-called "rug" plots can be included in the
margins of the plots for any of these functions. These show tick marks for
each of the observed data points, giving a non-obtrusive impression of the
distribution of the
pred variable and (optionally) the dependent variable.
See the documentation for effect_plot and the rug argument for details.
Facet by the
modx variable: Some prefer to visualize the predicted lines
on separate panes, so that is now an option available via the facet.modx
argument. You can also use plot.points with this, though the division into
groups is not straightforward if the moderator isn't a factor. See the
documentation for more on how that is done.
make_predictions and plot_predictions: New tools for advanced plotting
To allow for some more flexibility,
jtools now lets users directly
access the (previously internal) functions that make
interact_plot work. This should make it easier to tailor the
outputs for specific needs. Some features may be implemented for these functions
only to keep the
_plot functions from getting any more complicated than they already are.
The simplest use of the two functions is to use make_predictions just like you would interact_plot or cat_plot. The difference is, of course, that make_predictions only makes the data that would be used for plotting. The resulting predictions object has both the predicted and original data as well as some attributes describing the arguments used. If you pass this object to plot_predictions with no further arguments, it should do exactly what the corresponding _plot function would do. However, you might want to do something entirely different using the predicted data, which is part of the reason these functions are separate.
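A minimal sketch of that workflow, reusing fiti from earlier:

preds <- make_predictions(fiti, pred = hp, modx = wt, interval = TRUE)  # data only
plot_predictions(preds)  # plots it just as interact_plot would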
One such feature specific to make_predictions is bootstrap confidence intervals for merMod models.
You may no longer use these tools to scale the models. Use scale_mod instead, save the resulting object, and use that as your input to the functions if you want scaling.
All these tools have a new default
centered argument. They are now set to
centered = "all", but
"all" no longer means what it used to. Now it refers
to all variables not included in the interaction, including the dependent
variable. This means that in effect, the default option does the same thing
that previous versions did. But instead of having that occur when
centered = NULL, that's what
centered = "all" means. There is no
NULL option any longer. Note that with
sim_slopes, the focal predictor
(pred) will now be centered --- this only affects the conditional intercept.
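For example, a sketch of overriding the default (reusing fiti from earlier):

sim_slopes(fiti, pred = hp, modx = wt, centered = "none")  # leave all variables uncentered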
This function now supports categorical (factor) moderators, though there is no option for Johnson-Neyman intervals in these cases. You can use the significance of the interaction term(s) for inference about whether the slopes differ at each level of the factor when the moderator is a factor.
You may now also pass arguments to
summ, which is used internally to calculate
standard errors, p values, etc. This is particularly useful if you are using
a merMod model for which the pbkrtest-based p value calculation is too time-consuming.
The interface has been changed slightly, with the actual numbers always provided via the data argument. There is no x argument; instead, there is a vars argument to which you can provide variable names. The upshot is that it now fits much better into a piping workflow.
The entire function has gotten an extensive reworking, which in some cases should result in significant speed gains. And if that's not enough, just know that the code was an absolute monstrosity before and now it's not.
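A small sketch of that piping workflow (assuming the magrittr pipe):

library(magrittr)
mtcars %>%
  gscale(vars = c("hp", "wt"), n.sd = 2) %>%  # rescale these columns by 2 SD
  lm(mpg ~ hp * wt, data = .)                 # fit the model on the rescaled data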
There are two new functions that are wrappers around gscale: standardize and center, which call gscale but with n.sd = 1 in the first case and center.only = TRUE in the latter case.
Tired of specifying your preferred configuration every time you use summ?
Now, many arguments will by default check your options so you can set your
own defaults. See
?set_summ_defaults for more info.
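For instance, a quick sketch of setting a couple of defaults:

set_summ_defaults(digits = 3, confint = TRUE)  # applies to subsequent summ calls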
Rather than having separate arguments for centering and scaling the response variable, the summ function now uses transform.response to collectively cover those bases. Whether the response is centered or scaled depends on the corresponding arguments for the predictors (center and scale).
The robust.type argument is deprecated. Now, provide the type of robust
estimator directly to
robust. For now, if
robust = TRUE, it defaults to
"HC3" with a warning. Better is to provide the argument directly, e.g.,
robust = "HC3".
robust = FALSE is still fine for using OLS/MLE standard errors.
summ.merMod previously offered an
odds.ratio argument; that has been renamed to
exp (short for exponentiate)
to better express the quantity.
vifs now works when there are factor variables in the model.
One of the first bugs
summ ever had occurred when the function was given
a rank-deficient model. It is not straightforward to detect, especially since
I need to make a space for an almost empty row in the outputted table. At long
last, this release can handle such models gracefully.
Like the rest of R, when
summ rounded your output, items rounded exactly to
zero would be treated as, well, zero. But this can be misleading if the
original value was actually negative. For instance, if
digits = 2 and a coefficient was -0.003, the value printed to the console was 0.00,
suggesting a zero or slightly positive value when in fact it was the
opposite. This is a limitation of the
round (and trunc) functions. I've
now changed it so the zero-rounded value retains its sign.
summ.merMod now calculates pseudo-R^2 much, much faster. For only modestly
complex models, the speed-up is roughly 50x. Because of how much faster
it now is and how much less frequently it throws errors or prints cryptic
messages, it is now calculated by default. The confidence interval calculation
is now "Wald" for these models (see
confint.merMod for details) rather than
"profile", which for many models can take a very long time and sometimes does
not work at all. This can be toggled with the conf.method argument.
summ.svyglm now will calculate pseudo-R^2 for quasibinomial and
quasipoisson families using the value obtained from refitting them as
binomial/poisson. For now, I'm not touching AIC/BIC for such models
because the underlying theory is a bit different and the implementation would be more involved.
summ.lm now uses the t-distribution for finding critical values for
confidence intervals. Previously, a normal approximation was used.
The summ.default method has been removed. It was becoming an absolute terror
to maintain and I doubted anyone found it useful. It's hard to provide the
value added for models of a type that I do not know (robust errors don't
always apply, scaling doesn't always work, model fit statistics may not make
sense, etc.). Bug me if this has really upset things for you.
One new model type has been supported:
rq models from the quantreg package.
Please feel free to provide feedback for the output and support of these models.
To better reflect the capabilities of these functions (they are not restricted to lm objects), they have been renamed: scale_lm is now scale_mod and center_lm is now center_mod. The old names will continue to work to
preserve old code.
The scale.response and center.response arguments now default to FALSE to
reflect the fact that only OLS models can support transformations of the
dependent variable in that way.
There is a new
vars = argument for
scale_mod that allows you to only apply
scaling to whichever variables are included in that character vector.
I've also implemented a neat technical fix that allows the updated model to itself be updated while not also including the actual raw data in the model call.
A variety of fixes and optimizations have been added to these functions.
Now, by default, there are two confidence intervals plotted, a thick line
representing (with default settings) the 90% interval and a thinner line
for the 95% interval. You can set the inner interval to NULL to get rid of
the thicker line.
For plot_summs, you can also set per-model
summ arguments by providing
the argument as a vector (e.g.,
robust = c(TRUE, FALSE)). Length 1 arguments
are applied to all models.
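For example, a sketch applying robust errors to the first model only:

plot_summs(fit, fit_b, scale = TRUE, robust = c(TRUE, FALSE))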
plot_summs will now also support models not supported by summ by just passing those models straight through, without calling summ on them.
Another new option is
point.shape, similar to the model plotting functions.
This is most useful for when you are planning to distribute your output in
grayscale or to colorblind audiences (although the new default color scheme is
meant to be colorblind friendly, it is always best to use another visual cue).
The coolest is the new
plot.distributions argument, which if TRUE will plot
normal distributions to even better convey the uncertainty. Of course, you
should use this judiciously if your modeling or estimation approach doesn't
produce coefficient estimates that are asymptotically normally distributed.
Inspiration comes from https://twitter.com/BenJamesEdwards/status/979751070254747650.
broom's interface for Bayesian methods is inconsistent, so I've
hacked together a few tweaks to make
stanreg models work with plot_coefs.
You'll also notice vertical gridlines on the plots, which I think/hope will
be useful. They are easily removable (see
drop_x_gridlines()) with ggplot2's
built-in theming options.
Changes here are not too major. Like
plot_summs, you can now provide
unsupported model types to
export_summs and they are just passed through
to huxreg. You can also provide different arguments to
summ on a per-model
basis in the way described under the
plot_summs heading above.
There are some tweaks to the model info (provided by
glance). Most prominent is for merMod models, for which there is now a separate N for each grouping factor.
theme_apa plus new functions
New arguments have been added to theme_apa: remove.x.gridlines and remove.y.gridlines, both of which are
TRUE by default. APA hates giving
hard and fast rules, but the norm is that gridlines should be omitted unless
they are crucial for interpretation.
theme_apa is also now a "complete"
theme, which means specifying further options via
theme will not revert
theme_apa's changes to the base theme.
Behind the scenes, the helper functions add_gridlines and drop_gridlines are used, which do what they sound like they do. To avoid using the arguments to those functions, you can also use add_x_gridlines, add_y_gridlines, drop_x_gridlines, and drop_y_gridlines, which are wrappers around the more general functions.
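For instance, a sketch of adding one of the wrappers to an existing plot:

p <- effect_plot(fit, pred = wt)  # any ggplot object will do
p + drop_x_gridlines()            # removes the vertical gridlines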
The sample weight testing functions --- wgttest and pf_sv_test --- now handle missing data in a more sensible and consistent way.
There is a new default qualitative palette, based on Color Universal Design
(designed to be readable by the colorblind) that looks great to all. There are
several other new palette choices as well. These are all documented at ?jtools_colors.
Using the crayon package as a backend, console output is now formatted across jtools functions for better readability on supported systems. Feedback on this is welcome since this might look better or worse in some environments.
This release is limited to dealing with the
huxtable package's temporary
removal from CRAN, which in turn makes this package out of compliance with
CRAN policies regarding dependencies on non-CRAN packages.
Look out for
jtools 1.0.0 coming very soon!
interact_plot and sim_slopes were both encountering errors with merMod input. Thanks to Seongho Bae for reporting these issues and testing out development versions.
The default model names in export_summs had an extra space (e.g., ( 1)) due to changes in huxtable. The defaults are now just single numbers.
The alpha level reported in the johnson_neyman plot legend was wrong when control.fdr = TRUE. It was reporting alpha * 2 in the legend, but now it is accurate again.
johnson_neyman now handles multilevel models from lme4.
Jonas Kunst helpfully pointed out some odd behavior of interact_plot with factor moderators. No longer should there be occasions in which you have two
different legends appear. The linetype and colors also should now be consistent
whether there is a second moderator or not. For continuous moderators, the
darkest line should also be a solid line and it is by default the highest
value of the moderator.
Another bug affected export_summs, but that has been fixed.
You can now provide your own colors in cat_plot by supplying a vector of colors (any format that ggplot2 accepts) for the relevant color argument.
There is a new output style for summ that formats the output in a way that lines up the decimal points. It looks great.
This may be the single biggest update yet. If you downloaded from CRAN, be sure to check the 0.8.1 update as well.
New features are organized by function.
A control.fdr option is added to control the false discovery rate, building on new research. This makes the test more conservative but less likely to commit a Type 1 error.
A line.thickness argument has been added after Heidi Jacobs pointed out that it could not be changed after the fact.
Plotting via sim_slopes for 3-way interactions is much-improved.
Previously, for alpha = .05 the critical test statistic was always 1.96. Now, the residual degrees of freedom are used with the t distribution. You can do it the old way by setting df = "normal" or any arbitrary number.
Expanded support for plot.points (see 0.8.1 for more). You can now plot observed data with 3-way interactions.
A new mod2vals specification has been added:
"terciles". This splits the observed data into 3 equally sized groups and chooses as values the mean of each of those groups. This is especially good for skewed data and for second moderators.
A linearity.check option for two-way interactions. This facets by each level of the moderator and lets you compare the fitted line with a loess smoothed line to ensure that the interaction effect is roughly linear at each level of the (continuous) moderator. It can be combined with plot.points = TRUE.
A jitter argument has been added for those using plot.points. If you don't want the points jittered, you can set jitter = 0. If you want more or less, you can play with the value until it looks right. This applies to the other functions that support plot.points as well.
For summ, you can now skip the p value calculations if the computations via pbkrtest are slowing things down. r.squared is now set to FALSE by default.
plot_summs: A graphic counterpart to
export_summs, which was introduced in
the 0.8.0 release. This plots regression coefficients to help in visualizing
the uncertainty of each estimate and facilitates the plotting of nested models
alongside each other for comparison. This allows you to use
summ features like robust standard errors and scaling with this type of plot that you could
otherwise create with some other packages.
plot_coefs: Just like
plot_summs, but no special
summ features. This
allows you to use models unsupported by
summ, however, and you can provide
summ objects to plot the same model with different
summ arguments alongside each other.
cat_plot: This was a long time coming. It is a complementary function to
interact_plot, but is designed to deal with interactions between
categorical variables. You can use bar plots, line plots, dot plots, and
box and whisker plots to do so. You can also use the function to plot the effect
of a single categorical predictor without an interaction.
Thanks to Kim Henry who reported a bug with
johnson_neyman in the case that
there is an interval, but the entire interval is outside of the plotted area:
When that happened, the legend wrongly described the significance of the plotted line.
Besides that bugfix, some new features:
When johnson_neyman fails to find the interval (because it doesn't exist), it no longer quits with an error. The output will just state the interval was not found and the plot will still be created.
A new point-shading feature for interact_plot has been added. Previously, if the moderator was a factor, you would get very nicely colored plotted points when using plot.points = TRUE. But if the moderator was continuous, the points were just black and it wasn't very informative beyond examining the main effect of the focal predictor. With this update, the plotted points for continuous moderators are shaded along a gradient that matches the colors used for the predicted lines and confidence intervals.
Not many user-facing changes since 0.7.4, but major refactoring internally should speed things up and make future development smoother.
This function outputs regression models supported by summ in table formats useful for RMarkdown output as well as specific options for exporting to Microsoft Word files. This is particularly helpful for those wanting an efficient way to export regressions that are standardized and/or use robust standard errors.
The documentation for j_summ has been reorganized such that each supported
model type has its own, separate documentation.
?j_summ will now just give you
links to each supported model type.
More importantly, j_summ will from now on be referred to as, simply, summ. Your old code is fine; j_summ will now be an alias for summ and will run the same underlying code. Documentation will refer to the summ function, though. That includes the updated vignette.
One new feature for summ.lm: When you use the part.corr = TRUE argument for a linear model, partial and semipartial correlations for each variable are reported.
More tweaks to summ.merMod: By default, the function now checks whether the pbkrtest package is installed. If it is, p values are calculated based on the Kenward-Roger degrees of freedom calculation and printed. Otherwise, p values are not shown by default with lmer models. P values are shown with glmer models, since that is also the default behavior of lme4. There is also an r.squared option, which for now is FALSE by default. It adds runtime since it must fit a null model for comparison and sometimes this also causes convergence issues.
Returning to CRAN!
A very strange bug on CRAN's servers was causing jtools updates to silently fail when I submitted updates; I'd get a confirmation that it passed all tests, but a LaTeX error related to an Indian journal I cited was torpedoing it before it reached CRAN servers.
The only change from 0.7.0 is fixing that problem, but if you're a CRAN user you will want to flip through the past several releases as well to see what you've missed.
Bug fix release: