This is a collection of tools that the author (Jacob) has written for the purpose of more efficiently understanding and sharing the results of (primarily) regression analyses. There are a number of functions focused specifically on the interpretation and presentation of interactions. Just about everything supports models from the survey package.
This package consists of a series of functions created by the author (Jacob) to automate otherwise tedious research tasks. At this juncture, the unifying theme is the more efficient presentation of regression analyses. There are a number of functions for visualizing and doing inference for interaction terms. Support for the `survey` package's `svyglm` objects as well as weighted regressions is a common theme throughout.
Note: This is beta software. Bugs are possible, both in terms of code-breaking errors and more pernicious errors of mistaken computation.
For the most stable version, simply install from CRAN.

```r
install.packages("jtools")
```

If you want the latest features and bug fixes, you can download from GitHub. To do that, you will need to have `devtools` installed if you don't already:

```r
install.packages("devtools")
```

Then install the package from GitHub.

```r
devtools::install_github("jacob-long/jtools")
```

You should also check out the `dev` branch of this repository for the latest and greatest changes, but also the latest and greatest bugs. To see what features are on the roadmap, check the issues section of the repository, especially the "enhancement" tag.
Here's a synopsis of the current functions in the package:

`summ`, `plot_summs`, `export_summs`

`summ` is a replacement for `summary` that provides the user several options for formatting regression summaries. It supports `glm`, `svyglm`, and `merMod` objects as input as well. It supports calculation and reporting of robust standard errors via the `sandwich` package.

Basic use:
```r
fit <- lm(mpg ~ hp + wt, data = mtcars)
summ(fit)
#> MODEL INFO:
#> Observations: 32
#> Dependent Variable: mpg
#> Type: OLS linear regression
#>
#> MODEL FIT:
#> F(2,29) = 69.21, p = 0.00
#> R² = 0.83
#> Adj. R² = 0.81
#>
#> Standard errors: OLS
#>              Est. S.E. t val.    p
#> (Intercept) 37.23 1.60  23.28 0.00 ***
#> hp          -0.03 0.01  -3.52 0.00  **
#> wt          -3.88 0.63  -6.13 0.00 ***
```
It has several conveniences, like refitting your model with scaled variables (`scale = TRUE`). You have the option to leave the outcome variable in its original scale (`transform.response = FALSE`), which is the default for scaled models. I'm a fan of Andrew Gelman's 2-SD standardization method, so you can specify by how many standard deviations you would like to rescale (`n.sd = 2`).
You can also get variance inflation factors (VIFs) and partial/semipartial (AKA part) correlations. Partial correlations are only available for OLS models. You may also substitute confidence intervals in place of standard errors and you can choose whether to show p values.
```r
summ(fit, scale = TRUE, vifs = TRUE, part.corr = TRUE, confint = TRUE, pvals = FALSE)
#> MODEL INFO:
#> Observations: 32
#> Dependent Variable: mpg
#> Type: OLS linear regression
#>
#> MODEL FIT:
#> F(2,29) = 69.21, p = 0.00
#> R² = 0.83
#> Adj. R² = 0.81
#>
#> Standard errors: OLS
#>              Est.  2.5% 97.5% t val.  VIF partial.r part.r
#> (Intercept) 20.09 19.15 21.03  43.82 <NA>      <NA>   <NA>
#> hp          -2.18 -3.44 -0.91  -3.52 1.77     -0.55  -0.27
#> wt          -3.79 -5.06 -2.53  -6.13 1.77     -0.75  -0.47
#>
#> Continuous predictors are mean-centered and scaled by 1 s.d.
```
Cluster-robust standard errors:

```r
data("PetersenCL", package = "sandwich")
fit2 <- lm(y ~ x, data = PetersenCL)
summ(fit2, robust = "HC3", cluster = "firm")
#> MODEL INFO:
#> Observations: 5000
#> Dependent Variable: y
#> Type: OLS linear regression
#>
#> MODEL FIT:
#> F(1,4998) = 1310.74, p = 0.00
#> R² = 0.21
#> Adj. R² = 0.21
#>
#> Standard errors: Cluster-robust, type = HC3
#>             Est. S.E. t val.    p
#> (Intercept) 0.03 0.07   0.44 0.66
#> x           1.03 0.05  20.36 0.00 ***
```
Of course, `summ`, like `summary`, is best-suited for interactive use. When it comes to sharing results with others, you want sharper output and probably graphics. `jtools` has some options for that, too.

First, for tabular output, `export_summs` is an interface to the `huxtable` package's `huxreg` function that preserves the niceties of `summ`, particularly its facilities for robust standard errors and standardization. It also concatenates multiple models into a single table.
```r
fit <- lm(mpg ~ hp + wt, data = mtcars)
fit_b <- lm(mpg ~ hp + wt + disp, data = mtcars)
fit_c <- lm(mpg ~ hp + wt + disp + drat, data = mtcars)
coef_names <- c("Horsepower" = "hp", "Weight (tons)" = "wt",
                "Displacement" = "disp", "Rear axle ratio" = "drat",
                "Constant" = "(Intercept)")
export_summs(fit, fit_b, fit_c, scale = TRUE, transform.response = TRUE, coefs = coef_names)
```
|                 | Model 1   | Model 2   | Model 3   |
|-----------------|-----------|-----------|-----------|
| Horsepower      | -0.36 **  | -0.35 *   | -0.40 **  |
|                 | (0.10)    | (0.13)    | (0.13)    |
| Weight (tons)   | -0.63 *** | -0.62 **  | -0.56 **  |
|                 | (0.10)    | (0.17)    | (0.18)    |
| Displacement    |           | -0.02     | 0.08      |
|                 |           | (0.21)    | (0.22)    |
| Rear axle ratio |           |           | 0.16      |
|                 |           |           | (0.12)    |
| Constant        | 0.00      | 0.00      | 0.00      |
|                 | (0.08)    | (0.08)    | (0.08)    |
| N               | 32        | 32        | 32        |
| R2              | 0.83      | 0.83      | 0.84      |

*** p < 0.001; ** p < 0.01; * p < 0.05.
In RMarkdown documents, using `export_summs` and the chunk option `results = 'asis'` will give you nice-looking tables in HTML and PDF output. Using the `to.word = TRUE` argument will create a Microsoft Word document with the table in it.
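For instance, a minimal RMarkdown chunk might look like the following (the chunk label is arbitrary and the models are the ones fit above):

````markdown
```{r regression-table, results = 'asis'}
export_summs(fit, fit_b, fit_c, scale = TRUE)
```
````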
Another way to get a quick gist of your regression analysis is to plot the values of the coefficients and their corresponding uncertainties with `plot_summs` (or the closely related `plot_coefs`). `jtools` has made some slight changes to `ggplot2` geoms to make everything look nice, and as with `export_summs`, you can still get your scaled models and robust standard errors.
```r
coef_names <- coef_names[1:4] # Dropping intercept for plots
plot_summs(fit, fit_b, fit_c, scale = TRUE, robust = "HC3", coefs = coef_names)
```
And since you get a `ggplot` object in return, you can tweak and theme as you wish.

Another way to visualize the uncertainty of your coefficients is via the `plot.distributions` argument.
```r
plot_summs(fit_c, scale = TRUE, robust = "HC3", coefs = coef_names, plot.distributions = TRUE)
```
These show the 95% interval width of a normal distribution for each estimate.
`plot_coefs` works much the same way, but without support for `summ` arguments like `robust` and `scale`. This enables a wider range of models that have support from the `broom` package but not from `summ`.
Unless you have a really keen eye and good familiarity with both the underlying mathematics and the scale of your variables, it can be very difficult to look at the output of a regression model that includes an interaction and actually understand what the model is telling you.
This package contains several means of aiding understanding and doing statistical inference with interactions.
The "classic" way of probing an interaction effect is to calculate the slope of the focal predictor at different values of the moderator. When the moderator is binary, this is especially informative: e.g., what is the slope for men vs. women? But you can also arbitrarily choose points for continuous moderators.

With that said, the more statistically rigorous way to explore these effects is to find the Johnson-Neyman interval, which tells you the range of values of the moderator in which the slope of the predictor is significant vs. nonsignificant at a specified alpha level.
The `sim_slopes` function will by default find the Johnson-Neyman interval and tell you the predictor's slope at specified values of the moderator; by default, either both values of binary moderators or the mean and the mean +/- one standard deviation for continuous moderators.
```r
fiti <- lm(mpg ~ hp * wt, data = mtcars)
sim_slopes(fiti, pred = hp, modx = wt, jnplot = TRUE)
#> JOHNSON-NEYMAN INTERVAL
#>
#> When wt is OUTSIDE the interval [3.69, 5.90], the slope of hp is p < .05.
#>
#> Note: The range of observed values of wt is [1.51, 5.42]
#>
#> SIMPLE SLOPES ANALYSIS
#>
#> Slope of hp when wt = 4.20 (+ 1 SD):
#>   Est. S.E. t val.    p
#>  -0.00 0.01  -0.31 0.76
#>
#> Slope of hp when wt = 3.22 (Mean):
#>   Est. S.E. t val.    p
#>  -0.03 0.01  -4.07 0.00
#>
#> Slope of hp when wt = 2.24 (- 1 SD):
#>   Est. S.E. t val.    p
#>  -0.06 0.01  -5.66 0.00
```
The Johnson-Neyman plot can really help you get a handle on what the interval is telling you, too. Note that you can look at the Johnson-Neyman interval directly with the `johnson_neyman` function.
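For example, to inspect the interval on its own (a sketch reusing the `fiti` model from above; `johnson_neyman` takes the same `pred`/`modx` arguments as `sim_slopes`):

```r
# Johnson-Neyman interval and plot, without the simple slopes output
johnson_neyman(fiti, pred = hp, modx = wt, alpha = .05)
```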
The above all generalize to three-way interactions, too.
The `interact_plot` function plots two- and three-way interactions using `ggplot2` with a similar interface to the aforementioned `sim_slopes` function. Users can customize the appearance with familiar `ggplot2` commands. It supports several customizations, like confidence intervals.
```r
interact_plot(fiti, pred = hp, modx = wt, interval = TRUE)
```
You can also plot the observed data for comparison:
```r
interact_plot(fiti, pred = hp, modx = wt, plot.points = TRUE)
```
The function also supports categorical moderators. Plotting observed data in these cases can reveal striking patterns.
```r
fitiris <- lm(Petal.Length ~ Petal.Width * Species, data = iris)
interact_plot(fitiris, pred = Petal.Width, modx = Species, plot.points = TRUE)
```
You may also combine the plotting and simple slopes functions by using `probe_interaction`, which calls both functions simultaneously.
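A quick sketch of that combined call, reusing the interaction model from above; extra arguments are passed through to both `sim_slopes` and `interact_plot`:

```r
# Runs the simple slopes analysis and draws the interaction plot in one step
probe_interaction(fiti, pred = hp, modx = wt, interval = TRUE)
```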
Categorical-by-categorical interactions can be investigated using the `cat_plot` function.
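As a sketch (the model and variable choices here are illustrative, not from the original examples; `geom` selects the plot type):

```r
# Treat cylinder count and transmission type as factors
mtcars2 <- mtcars
mtcars2$cyl <- factor(mtcars2$cyl)
mtcars2$am <- factor(mtcars2$am, labels = c("Automatic", "Manual"))
fit_cat <- lm(mpg ~ cyl * am, data = mtcars2)
cat_plot(fit_cat, pred = cyl, modx = am, geom = "line")
```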
There are several other things that might interest you:

- `effect_plot`: Plot predicted lines from regression models without interactions
- `gscale`: Scale and/or mean-center data, including `svydesign` objects
- `scale_mod` and `center_mod`: Refit models with scaled and/or mean-centered data
- `wgttest` and `pf_sv_test`, which are combined in `weights_tests`: Test the ignorability of sample weights in regression models
- `svycor`: Generate correlation matrices from `svydesign` objects
- `theme_apa`: A mostly APA-compliant `ggplot2` theme
- `add_gridlines` and `drop_gridlines`: `ggplot2` theme-changing convenience functions
- `make_predictions` and `plot_predictions`: A direct interface to the internals of `interact_plot`, `cat_plot`, and `effect_plot` with some added options

Details on the arguments can be accessed via the R documentation (`?functionname`). There are now vignettes documenting just about everything you can do as well.
I'm happy to receive bug reports, suggestions, questions, and (most of all) contributions to fix problems and add features. I prefer that you use the GitHub issues system over trying to reach out to me in other ways. Pull requests for contributions are encouraged.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
The source code of this package is licensed under the MIT License.
This is a minor release.
- `plot_predictions` had an incorrect default value for `interval`, causing an error if you used the default arguments with `make_predictions`. The default is now `FALSE`. (#39)
- `interact_plot`, `cat_plot`, and `effect_plot` would have errors when the models included covariates (not involved in the interaction, if any) that were non-numeric. That has been corrected. (#41)
- Logical variables (`TRUE` or `FALSE`) were not handled by the plotting functions appropriately, causing them to be treated as numeric. They are now preserved as logical. (#40)
- `sim_slopes` gave inaccurate results when factor moderators did not have treatment coding (`"contr.treatment"`) but are now recoded to treatment coding.
- `summ` output in RMarkdown documents is now powered by `kableExtra`, which (in my opinion) offers more attractive HTML output and seems to have better luck with float placement in PDF documents. Your mileage may vary.
- Vignettes are now built with `rmdformats` rather than the base `rmarkdown` template.
- Methods for generics from other packages (`tidy` and `glance` from `broom`, `knit_print` from `knitr`, `as_huxtable` from `huxtable`) will now have conditional namespace registration for users of R 3.6. This shouldn't have much effect on end users.

This release was initially intended to be a bug-fix release, but enough other things came up to make it a minor release.
- Fixed errors when using `export_summs` and `plot_coefs` that arose from a `broom` update.
- Fixed an issue with `plot_coefs` arising from the latest update to `ggplot2`.
- Fixed a problem with `export_summs` output for `glm` models. [#36]
- `interact_plot` no longer errors if there are missing observations in the original data and quantiles are requested.
- For `summ.merMod`, the default p-value calculation is now via the Satterthwaite method if you have `lmerTest` installed. The old default, Kenward-Roger, is used by request or when `pbkrtest` is installed but not `lmerTest`. Kenward-Roger calculates a different degrees of freedom for each predictor and also calculates a variance-covariance matrix for the model, meaning the standard errors are adjusted as well. It is not the default largely because the computation takes too long for too many models.
- `johnson_neyman` now allows you to specify your own critical t value if you are using some alternate method to calculate it.
- `johnson_neyman` now allows you to specify the range of moderator values you want to plot as well as setting a title.
- You can now choose moderator values in `sim_slopes` in a way similar to `interact_plot`. [#35]
- Fixed an issue in `interact_plot` (e.g., when `modx.values = "plusminus"`). [#31]
- `plot_coefs`/`plot_summs` now supports faceting the coefficients based on user-specified groupings. See `?plot_summs` for details.
- `summ` variants now have pretty output in RMarkdown documents if you have the `huxtable` package installed. This can be disabled with the chunk option `render = 'normal_print'`.
- You can now use `modx.values`, `mod2.values`, and `pred.values` in place of `modxvals`, `mod2vals`, and `predvals`. Don't go running to change your code, though; those old argument names will still work, but these new ones are clearer and preferred in new code.
- There is now a `plot` method for `sim_slopes` objects. Just save your `sim_slopes` call to an object and call the `plot` function on that object to see what happens. Basically, it's `plot_coefs` for `sim_slopes`.
- If you have `huxtable` installed, you can now call `as_huxtable` on a `sim_slopes` object to get a publication-style table. The interface is comparable to `export_summs`.

This release has several big changes embedded within, side projects that needed a lot of work to implement and required some user-facing changes. Overall these are improvements, but in some edge cases they could break old code. The following sections are divided by the affected functions. Some of the functions are discussed in more than one section.
`interact_plot`, `cat_plot`, and `effect_plot`

These functions no longer refit the inputted model to center covariates, impose labels on factors, and so on. This has several key positives, including:

- Speed gains (… for `lm` models, 60% for `svyglm`, and 80% for `merMod` in my testing). The speed gains increase as the models become more complicated and the source data become larger.
- If the formula applied functions to variables (e.g., `log`), the functions previously would have a lot of trouble and usually have errors. Now this is supported, provided you input the data used to fit the model via the `data` argument. You'll receive a warning if the function thinks this is needed to work right.

As noted, there is a new `data` argument for these functions. You do not normally need to use this if your model is fit with a `y ~ x + z` type of formula. But if you start doing things like `y ~ factor(x) + z`, then you need to provide the source data frame. Another benefit is that this allows for fitting polynomials with `effect_plot` or even interactions with polynomials with `interact_plot`. For instance, if my model was fit using this kind of formula, `y ~ poly(x, 2) + z`, I could then plot the predicted curve with `effect_plot(fit, pred = x, data = data)`, substituting `fit` with whatever my model is called and `data` with whatever data frame I used is called.
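A concrete sketch of that workflow using `mtcars` (illustrative variable choices, not from the original text):

```r
# A quadratic term requires passing the source data so the
# poly() term can be reconstructed for prediction
fit_poly <- lm(mpg ~ poly(hp, 2) + wt, data = mtcars)
effect_plot(fit_poly, pred = hp, data = mtcars, interval = TRUE)
```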
There are some possible drawbacks to these changes. One is that factor predictors are no longer supported in `interact_plot` and `effect_plot`, even two-level ones. This worked before by coercing them to 0/1 continuous variables and refitting the model. Since the model is no longer refit, this can't be done. To work around it, either transform the predictor to numeric before fitting the model or use `cat_plot`. Relatedly, two-level factor covariates are no longer centered and are simply set to their reference value.
- Robust confidence intervals: Plotting robust standard errors for compatible models (tested on `lm`, `glm`). Just use the `robust` argument like you would for `sim_slopes` or `summ`.
- Preliminary support for confidence intervals for `merMod` models: You may now get confidence intervals when using `merMod` objects as input to the plotting functions. Of importance, though, is that the uncertainty is only for the fixed effects. For now, a warning is printed. See the next section for another option for `merMod` confidence intervals.
- Rug plots in the margins: So-called "rug" plots can be included in the margins of the plots for any of these functions. These show tick marks for each of the observed data points, giving a non-obtrusive impression of the distribution of the `pred` variable and (optionally) the dependent variable. See the documentation for `interact_plot` and `effect_plot` and the `rug`/`rug.sides` arguments.
- Facet by the `modx` variable: Some prefer to visualize the predicted lines on separate panes, so that is now an option available via the `facet.modx` argument. You can also use `plot.points` with this, though the division into groups is not straightforward if the moderator isn't a factor. See the documentation for more on how that is done.
`make_predictions` and `plot_predictions`: New tools for advanced plotting

To give users some more flexibility, `jtools` now lets users directly access the (previously internal) functions that make `effect_plot`, `cat_plot`, and `interact_plot` work. This should make it easier to tailor the outputs for specific needs. Some features may be implemented for these functions only, to keep the `_plot` functions from getting any more complicated than they already are.
The simplest use of the two functions is to use `make_predictions` just like you would `effect_plot`/`interact_plot`/`cat_plot`. The difference is, of course, that `make_predictions` only makes the data that would be used for plotting. The resulting `predictions` object has both the predicted and original data as well as some attributes describing the arguments used. If you pass this object to `plot_predictions` with no further arguments, it should do exactly what the corresponding `_plot` function would do. However, you might want to do something entirely different using the predicted data, which is part of the reason these functions are separate.
One such feature specific to `make_predictions` is bootstrap confidence intervals for `merMod` models.
You may no longer use these tools to scale the models. Use `scale_mod`, save the resulting object, and use that as your input to the functions if you want scaling.

All these tools have a new default `centered` argument. They are now set to `centered = "all"`, but `"all"` no longer means what it used to. Now it refers to all variables not included in the interaction, including the dependent variable. This means that, in effect, the default option does the same thing that previous versions did. But instead of having that occur when `centered = NULL`, that's what `centered = "all"` means. There is no `NULL` option any longer. Note that with `sim_slopes`, the focal predictor (`pred`) will now be centered; this only affects the conditional intercept.
`sim_slopes`

This function now supports categorical (factor) moderators, though there is no option for Johnson-Neyman intervals in these cases. When the moderator is a factor, you can use the significance of the interaction term(s) for inference about whether the slopes differ at each level of the factor.
You may now also pass arguments to `summ`, which is used internally to calculate standard errors, p values, etc. This is particularly useful if you are using a `merMod` model for which the `pbkrtest`-based p value calculation is too time-consuming.
`gscale`

The interface has been changed slightly, with the actual numbers always provided as the `data` argument. There is no `x` argument; instead, a `vars` argument lets you provide variable names. The upshot is that it now fits much better into a piping workflow.
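For instance, a piped call might look like this (a sketch based on the interface described above; the column choices are illustrative):

```r
library(magrittr)

# Rescale only hp and wt; other columns pass through untouched
mtcars_scaled <- mtcars %>% gscale(vars = c("hp", "wt"))
```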
The entire function has gotten an extensive reworking, which in some cases should result in significant speed gains. And if that's not enough, just know that the code was an absolute monstrosity before and now it's not.
There are two new functions that are wrappers around `gscale`: `standardize` and `center`, which call `gscale` with `n.sd = 1` in the first case and with `center.only = TRUE` in the latter case.
`summ`

Tired of specifying your preferred configuration every time you use `summ`? Now, many arguments will by default check your options so you can set your own defaults. See `?set_summ_defaults` for more info.
Rather than having separate `scale.response` and `center.response` arguments, each `summ` function now uses `transform.response` to collectively cover those bases. Whether the response is centered or scaled depends on the `scale` and `center` arguments.
The `robust.type` argument is deprecated. Now, provide the type of robust estimator directly to `robust`. For now, if `robust = TRUE`, it defaults to `"HC3"` with a warning. Better is to provide the argument directly, e.g., `robust = "HC3"`. `robust = FALSE` is still fine for using OLS/MLE standard errors.
Whereas `summ.glm`, `summ.svyglm`, and `summ.merMod` previously offered an `odds.ratio` argument, it has been renamed to `exp` (short for exponentiate) to better express the quantity.

`vifs` now works when there are factor variables in the model.
One of the first bugs `summ` ever had occurred when the function was given a rank-deficient model. It is not straightforward to detect, especially since I need to make space for an almost-empty row in the outputted table. At long last, this release handles such models gracefully.
Like the rest of R, when `summ` rounded your output, items rounded exactly to zero would be treated as, well, zero. But this can be misleading if the original value was actually negative. For instance, if `digits = 2` and a coefficient was `-0.003`, the value printed to the console was `0.00`, suggesting a zero or slightly positive value when in fact it was the opposite. This is a limitation of the `round` (and `trunc`) functions. I've now changed it so the zero-rounded value retains its sign.
`summ.merMod` now calculates pseudo-R² much, much faster. For only modestly complex models, the speedup is roughly 50x. Because of how much faster it now is and how much less frequently it throws errors or prints cryptic messages, it is now calculated by default. The confidence interval calculation is now "Wald" for these models (see `confint.merMod` for details) rather than "profile", which for many models can take a very long time and sometimes does not work at all. This can be toggled with the `conf.method` argument.
`summ.glm`/`summ.svyglm` will now calculate pseudo-R² for quasibinomial and quasipoisson families using the value obtained from refitting them as binomial/poisson. For now, I'm not touching AIC/BIC for such models because the underlying theory is a bit different and the implementation more challenging.
`summ.lm` now uses the t-distribution for finding critical values for confidence intervals. Previously, a normal approximation was used.
The `summ.default` method has been removed. It was becoming an absolute terror to maintain, and I doubted anyone found it useful. It's hard to provide value added for models of a type that I do not know (robust errors don't always apply, scaling doesn't always work, model fit statistics may not make sense, etc.). Bug me if this has really upset things for you.

One new model type is now supported: `rq` models from the `quantreg` package. Please feel free to provide feedback on the output and support of these models.
`scale_lm` and `center_lm` are now `scale_mod`/`center_mod`

To better reflect the capabilities of these functions (they are not restricted to `lm` objects), they have been renamed. The old names will continue to work to preserve old code.

However, `scale.response` and `center.response` now default to `FALSE` to reflect the fact that only OLS models can support transformations of the dependent variable in that way.
There is a new `vars =` argument for `scale_mod` that allows you to apply scaling only to the variables included in that character vector.
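For instance (a sketch; the model and variable names are illustrative):

```r
fit <- lm(mpg ~ hp + wt, data = mtcars)
# Scale only hp; wt and the response are left alone
fit_scaled <- scale_mod(fit, vars = "hp")
summ(fit_scaled)
```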
I've also implemented a neat technical fix that allows the updated model to itself be updated while not also including the actual raw data in the model call.
`plot_coefs` and `plot_summs`

A variety of fixes and optimizations have been added to these functions. Now, by default, there are two confidence intervals plotted: a thick line representing (with default settings) the 90% interval and a thinner line for the 95% interval. You can set `inner_ci_level` to `NULL` to get rid of the thicker line.
With `plot_summs`, you can also set per-model `summ` arguments by providing the argument as a vector (e.g., `robust = c(TRUE, FALSE)`). Length-1 arguments are applied to all models. `plot_summs` will now also support models not accepted by `summ` by just passing those models to `plot_coefs` without using `summ` on them.
Another new option is `point.shape`, similar to the model plotting functions. This is most useful when you are planning to distribute your output in grayscale or to colorblind audiences (although the new default color scheme is meant to be colorblind-friendly, it is always best to use another visual cue).

The coolest is the new `plot.distributions` argument, which, if `TRUE`, plots normal distributions to even better convey the uncertainty. Of course, you should use this judiciously if your modeling or estimation approach doesn't produce coefficient estimates that are asymptotically normally distributed. Inspiration comes from https://twitter.com/BenJamesEdwards/status/979751070254747650.
Minor fixes: `broom`'s interface for Bayesian methods is inconsistent, so I've hacked together a few tweaks to make `brmsfit` and `stanreg` models work with `plot_coefs`.

You'll also notice vertical gridlines on the plots, which I think/hope will be useful. They are easily removable (see `drop_x_gridlines()`) with `ggplot2`'s built-in theming options.
`export_summs`

Changes here are not too major. Like `plot_summs`, you can now provide unsupported model types to `export_summs`, and they are just passed through to `huxreg`. You can also provide different arguments to `summ` on a per-model basis in the way described under the `plot_summs` heading above.

There are some tweaks to the model info (provided by `glance`). Most prominent is for `merMod` models, for which there is now a separate N for each grouping factor.
`theme_apa` plus new functions `add_gridlines` and `drop_gridlines`

New arguments have been added to `theme_apa`: `remove.x.gridlines` and `remove.y.gridlines`, both of which are `TRUE` by default. APA hates giving hard-and-fast rules, but the norm is that gridlines should be omitted unless they are crucial for interpretation. `theme_apa` is also now a "complete" theme, which means specifying further options via `theme` will not revert `theme_apa`'s changes to the base theme.

Behind the scenes, the helper functions `add_gridlines` and `drop_gridlines` are used, which do what they sound like they do. To avoid using the arguments to those functions, you can also use `add_x_gridlines`/`add_y_gridlines` or `drop_x_gridlines`/`drop_y_gridlines`, which are wrappers around the more general functions.
`weights_tests` (`wgttest` and `pf_sv_test`) now handle missing data in a more sensible and consistent way.
There is a new default qualitative palette, based on Color Universal Design (designed to be readable by the colorblind), that looks great to all. There are several other new palette choices as well. These are all documented at `?jtools_colors`.
Using the `crayon` package as a backend, console output is now formatted for most `jtools` functions for better readability on supported systems. Feedback on this is welcome since this might look better or worse in certain editors/setups.
This release is limited to dealing with the `huxtable` package's temporary removal from CRAN, which in turn made this package out of compliance with CRAN policies regarding dependencies on non-CRAN packages.

Look out for `jtools` 1.0.0, coming very soon!
Bugfixes:

- `johnson_neyman` and `sim_slopes` were both encountering errors with `merMod` input. Thanks to Seongho Bae for reporting these issues and testing out development versions.
- A bug in `gscale` was fixed.
- Output from `export_summs` had an extra space (e.g., `( 1)`) due to changes in `huxtable`. The defaults are now just single numbers.

Bugfix:

- The `johnson_neyman` plot legend was wrong when `control.fdr` was `TRUE`. It was reporting `alpha * 2` in the legend, but now it is accurate again.

Feature update:

- `johnson_neyman` now handles multilevel models from `lme4`.

Bugfix update:

Jonas Kunst helpfully pointed out some odd behavior of `interact_plot` with factor moderators. No longer should there be occasions in which you have two different legends appear. The linetype and colors also should now be consistent whether there is a second moderator or not. For continuous moderators, the darkest line should also be a solid line, and it is by default the highest value of the moderator.

Other fixes:

- An update to `huxtable` broke `export_summs`, but that has been fixed.

Feature updates:

- You can now choose the colors used in `interact_plot` and `cat_plot` by providing a vector of colors (any format that `ggplot2` accepts) for the `color.class` argument.
- There is new console output for `summ` that formats the output in a way that lines up the decimal points. It looks great.

This may be the single biggest update yet. If you downloaded from CRAN, be sure to check the 0.8.1 update as well.
New features are organized by function.
johnson_neyman:

- A `control.fdr` option is added to control the false discovery rate, building on new research. This makes the test more conservative but less likely to be a Type 1 error.
- A `line.thickness` argument has been added after Heidi Jacobs pointed out that it cannot be changed after the fact.
- Plotting via `sim_slopes` for 3-way interactions is much improved.
- Previously, with `alpha = .05`, the critical test statistic was always 1.96. Now, the residual degrees of freedom are used with the t distribution. You can do it the old way by setting `df = "normal"` or any arbitrary number.

interact_plot:

- Improvements to `plot.points` (see 0.8.1 for more). You can now plot observed data with 3-way interactions.
- A new option for `modxvals` and `mod2vals` specification has been added: `"terciles"`. This splits the observed data into 3 equally sized groups and chooses as values the mean of each of those groups. This is especially good for skewed data and for second moderators.
- A new `linearity.check` option for two-way interactions. This facets by each level of the moderator and lets you compare the fitted line with a loess smoothed line to ensure that the interaction effect is roughly linear at each level of the (continuous) moderator.
- … `plot.points = TRUE`.
- A `jitter` argument was added for those using `plot.points`. If you don't want the points jittered, you can set `jitter = 0`. If you want more or less, you can play with the value until it looks right. This applies to `effect_plot` as well.

summ:

- … `r.squared` or `pbkrtest` are slowing things down. `r.squared` is now set to `FALSE` by default.

New functions!
- `plot_summs`: A graphic counterpart to `export_summs`, which was introduced in the 0.8.0 release. This plots regression coefficients to help in visualizing the uncertainty of each estimate and facilitates the plotting of nested models alongside each other for comparison. This allows you to use `summ` features like robust standard errors and scaling with this type of plot that you could otherwise create with some other packages.
- `plot_coefs`: Just like `plot_summs`, but with no special `summ` features. This allows you to use models unsupported by `summ`, however, and you can provide `summ` objects to plot the same model with different `summ` arguments alongside each other.
- `cat_plot`: This was a long time coming. It is a complementary function to `interact_plot`, but is designed to deal with interactions between categorical variables. You can use bar plots, line plots, dot plots, and box-and-whisker plots to do so. You can also use the function to plot the effect of a single categorical predictor without an interaction.
Thanks to Kim Henry, who reported a bug with `johnson_neyman` in the case that there is an interval, but the entire interval is outside of the plotted area: when that happened, the legend wrongly stated the plotted line was non-significant.

Besides that bugfix, some new features:

- When `johnson_neyman` fails to find the interval (because it doesn't exist), it no longer quits with an error. The output will just state the interval was not found, and the plot will still be created.
- Better support for plotting observed data in `interact_plot` has been added. Previously, if the moderator was a factor, you would get very nicely colored plotted points when using `plot.points = TRUE`. But if the moderator was continuous, the points were just black, and it wasn't very informative beyond examining the main effect of the focal predictor. With this update, the plotted points for continuous moderators are shaded along a gradient that matches the colors used for the predicted lines and confidence intervals.

Not many user-facing changes since 0.7.4, but major refactoring internally should speed things up and make future development smoother.
Bugfixes:
Enhancements:
Important bugfix:
New function: `export_summs`.

This function outputs regression models supported by `summ` in table formats useful for RMarkdown output, as well as specific options for exporting to Microsoft Word files. This is particularly helpful for those wanting an efficient way to export regressions that are standardized and/or use robust standard errors.

The documentation for `j_summ` has been reorganized such that each supported model type has its own separate documentation. `?j_summ` will now just give you links to each supported model type.

More importantly, `j_summ` will from now on be referred to as, simply, `summ`. Your old code is fine; `j_summ` will now be an alias for `summ` and will run the same underlying code. Documentation will refer to the `summ` function, though. That includes the updated vignette.
One new feature for summ.lm:

- With the `part.corr = TRUE` argument for a linear model, partial and semipartial correlations for each variable are reported.

More tweaks to summ.merMod:

- Whether p values are shown now depends on whether the `pbkrtest` package is installed. If it is, p values are calculated based on the Kenward-Roger degrees of freedom calculation and printed. Otherwise, p values are not shown by default with `lmer` models. P values are shown with `glmer` models, since that is also the default behavior of `lme4`.
- There is an `r.squared` option, which for now is `FALSE` by default. It adds runtime since it must fit a null model for comparison, and sometimes this also causes convergence issues.

Returning to CRAN!
A very strange bug on CRAN's servers was causing jtools updates to silently fail when I submitted updates; I'd get a confirmation that it passed all tests, but a LaTeX error related to an Indian journal I cited was torpedoing it before it reached CRAN servers.
The only change from 0.7.0 is fixing that problem, but if you're a CRAN user you will want to flip through the past several releases as well to see what you've missed.
New features:
Bug fix:
Bug fix release:
A lot of changes!
New functions:
Enhancements:
Bug fixes:
More goodies for users of interact_plot:
Other feature changes:
Bug fixes: