Task view: Psychometric Models and Methods

Last updated on 2021-11-08 by Patrick Mair

Psychometrics is concerned with theory and techniques of psychological measurement. Psychometricians have also worked collaboratively with those in the field of statistics and quantitative methods to develop improved ways to organize, analyze, and scale corresponding data. Since much functionality is already contained in base R and there is considerable overlap between tools for psychometrics and tools described in other views, particularly in SocialSciences, we only give a brief overview of packages that are closely related to psychometric methodology.

Please let me know if I have omitted something of importance, or if a new package or function should be mentioned here.

Item Response Theory (IRT):

  • The eRm package fits extended Rasch models, i.e. the ordinary Rasch model for dichotomous data (RM), the linear logistic test model (LLTM), the rating scale model (RSM) and its linear extension (LRSM), the partial credit model (PCM) and its linear extension (LPCM) using conditional ML estimation. Missing values are allowed.
  • The package ltm also fits the simple RM. Additionally, functions for estimating Birnbaum's 2- and 3-parameter models based on a marginal ML approach are implemented as well as the graded response model for polytomous data, and the linear multidimensional logistic model.
  • The mirt package fits unidimensional and multidimensional latent trait models to dichotomous and polytomous response data under the IRT paradigm. Exploratory and confirmatory models can be estimated with quadrature (EM) or stochastic (MHRM) methods. Confirmatory bi-factor and two-tier analyses are available for modeling item testlets. Multiple group analysis and mixed effects designs are also available for detecting differential item functioning and modeling item and person covariates.
  • TAM fits unidimensional and multidimensional item response models and also includes multifaceted models, latent regression models and options for drawing plausible values.
  • PLmixed fits (generalized) linear mixed models (GLMM) with factor structures.
  • MLCIRTwithin provides a flexible framework for the estimation of discrete two-tier IRT models for the analysis of dichotomous and ordinal polytomous item responses.
  • IRTShiny provides an interactive shiny application for IRT analysis.
  • Some additional uni- and multidimensional item response models (especially for locally dependent item responses) and some exploratory methods (DETECT, LSDM, model-based reliability) are included in sirt.
  • The pcIRT package estimates the multidimensional polytomous Rasch model and Mueller's continuous rating scale model.
  • An implementation of the partial credit model with response styles is given in the PCMRS package.
  • MultiLCIRT estimates IRT models under the assumptions of (1) multidimensionality and (2) discreteness of the latent traits, for (3) binary and ordinal polytomous items.
  • Conditional maximum likelihood estimation via the EM algorithm and information-criterion-based model selection in binary mixed Rasch models are implemented in the psychomix package. The mixRasch package estimates mixture Rasch models, including the dichotomous Rasch model, the rating scale model, and the partial credit model.
  • The PP package includes estimation of person parameters (MLE, WLE, MAP, EAP, robust) for the 1-, 2-, 3-, and 4-PL model and the generalized partial credit model (GPCM). The parameters are estimated under the assumption that the item parameters are known and fixed. The package is useful, e.g., when items from an item pool/item bank with known item parameters are administered to a new population of test-takers and an ability estimate is needed for every test-taker.
  • The equateIRT package computes direct, chain and average (bisector) equating coefficients with standard errors using Item Response Theory (IRT) methods for dichotomous items. equateMultiple can be used for equating of multiple forms using IRT methods.
  • kequate implements the kernel method of test equating using the CB, EG, SG, NEAT CE/PSE and NEC designs, supporting Gaussian, logistic and uniform kernels as well as unsmoothed and pre-smoothed input data.
  • The EstCRM package calibrates the parameters of Samejima's continuous IRT model via the EM algorithm and maximum likelihood. It allows the user to compute item fit residual statistics, to draw empirical and theoretical 3D item category response curves, and to generate data under the CRM for simulation studies.
  • The difR package contains several traditional methods to detect DIF in dichotomously scored items. Both uniform and non-uniform DIF effects can be detected, with methods relying upon item response models or not. Some methods deal with more than one focal group.
  • The package lordif provides a logistic regression framework for detecting various types of DIF.
  • DIFplus allows users to implement extensions of the Mantel-Haenszel DIF detection procedures in the presence of multilevel data.
  • DIFlasso implements a penalty approach to differential item functioning in Rasch models. It can handle settings with multiple (metric) covariates.
  • GPCMlasso provides a function to detect DIF in generalized partial credit models (GPCM).
  • DIFtree performs recursive partitioning for simultaneous selection of items and variables that induce DIF in dichotomous or polytomous items.
  • DIFboost can be used for DIF detection in Rasch models by boosting techniques.
  • A set of functions to perform differential item and test functioning (DFIT) analyses is implemented in the DFIT package. It includes functions to use the Monte Carlo item parameter replication (IPR) approach for obtaining cut-off points for the associated statistical significance tests.
  • The difNLR package uses nonlinear regression to estimate DIF.
  • The catR package allows for computerized adaptive testing using IRT methods.
  • The mirtCAT package provides tools to generate an HTML interface for creating adaptive and non-adaptive educational and psychological tests using the shiny package. It is suitable for applying unidimensional and multidimensional computerized adaptive tests using IRT methodology and for creating simple questionnaire forms to collect response data directly in R.
  • xxIRT is an implementation of methods related to IRT and computer-based testing.
  • The package plRasch computes maximum likelihood estimates and pseudo-likelihood estimates of parameters of Rasch models for polytomous (or dichotomous) items and multiple (or single) latent traits. Robust standard errors for the pseudo-likelihood estimates are also computed.
  • The irtplay package fits unidimensional IRT models to mixtures of dichotomous and polytomous items using MML with the EM algorithm.
  • Explicit calculation (not estimation) of Rasch item parameters (dichotomous and polytomous) by means of a pairwise comparison approach can be done using the pairwise package.
  • Multilevel Rasch models can be estimated using lme4, nlme, and MCMCglmm with crossed or partially crossed random effects. GLMMRR adds some flexibility in terms of link functions, whereas ordinal can be used for polytomous models. An infrastructure for estimating tree-structured item response models of the GLMM family using lme4 is provided in irtrees.
  • Nonparametric IRT analysis can be computed by means of the mokken package. It includes an automated item selection algorithm and various checks of model assumptions.
  • Nonparametric IRT for nonmonotonic IRFs of proximity data can be fitted using the mudfold package.
  • RaschSampler allows the construction of exact Rasch model tests by generating random zero-one matrices with given marginals.
  • Statistical power simulation for testing the Rasch model based on a three-way ANOVA design with mixed classification can be carried out using pwrRasch.
  • The irtProb package is designed to estimate multidimensional subject parameters (MLE and MAP) such as personal pseudo-guessing, personal fluctuation, and personal inattention. These supplemental parameters can be used to assess person fit, to identify misfit type, to generate misfitting response patterns, or to make corrections while estimating the proficiency level, taking potential misfit into account at the same time.
  • Tools to assess model fit and identify misfitting items for Rasch models and PCMs are implemented in iarm. It includes item fit statistics, ICCs, item-restscore association, conditional likelihood ratio tests, assessment of measurement error, and estimates of reliability and test targeting.
  • cacIRT computes classification accuracy and consistency under Item Response Theory. Implements total score and latent trait IRT methods as well as total score kernel-smoothed methods.
  • The package irtoys provides a simple common interface to the estimation of item parameters in IRT models for binary responses with three different programs (ICL, BILOG-MG, and ltm), as well as a variety of functions useful with IRT models.
  • The CDM package estimates several cognitive diagnosis models (DINA, DINO, GDINA, RRUM, LCDM, pGDINA, mcDINA), the general diagnostic model (GDM) and structured latent class analysis (SLCA).
  • Gaussian ordination, related to logistic IRT and also approximated as maximum likelihood estimation through canonical correspondence analysis, is implemented in various forms in the package VGAM.
  • emIRT provides various EM-algorithm implementations for IRT models (binary and ordinal responses, along with dynamic and hierarchical models).
  • immer implements some item response models for multiple ratings, including the hierarchical rater model and a wrapper function to the commercial FACETS program.
  • The latdiag package produces commands to drive the dot program from graphviz to produce a graph useful in deciding whether a set of binary items might have a latent scale with non-crossing ICCs.
  • The purpose of the rpf package is to factor out logic and math common to IRT fitting, diagnostics, and analysis. It is envisioned as core support code suitable for more specialized IRT packages to build upon.
  • WrightMap provides graphical tools for plotting item-person maps.
  • irtDemo includes a collection of shiny applications to demonstrate or to explore fundamental IRT concepts. ifaTools is a shiny interface to IRT with OpenMx.
  • IRT utility functions described in the Baker/Kim book are included in birtr.
  • Convenience functions to use and automate IRT modeling for judgement data are implemented in jrt.
  • The conquestr package allows users to call ACER ConQuest from within R.
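
As a hedged illustration of the basic workflow shared by many of these packages, the sketch below simulates dichotomous data and fits the ordinary Rasch model with eRm via conditional ML; the simulated data and seed are arbitrary:

```r
# Minimal sketch (assumptions: eRm installed; simulated data, not real test data)
library(eRm)
set.seed(1)
X <- sim.rasch(persons = rnorm(200), items = rnorm(10))  # 200 x 10 binary matrix
fit <- RM(X)                    # conditional ML estimation of item parameters
pp  <- person.parameter(fit)    # person parameters given the fitted items
summary(fit)
```

The same data could equally be passed to ltm::rasch() or mirt::mirt() for marginal ML estimation.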

Correspondence Analysis (CA), Optimal Scaling:

  • The package ca comprises two parts, one for simple correspondence analysis and one for multiple and joint correspondence analysis.
  • Simple and canonical CA are provided by the package anacor, including confidence ellipsoids. It allows for different scaling methods such as standard scaling, Benzecri scaling, centroid scaling, and Goodman scaling.
  • Homogeneity analysis aka multiple CA and various Gifi extensions can be computed by means of the Gifi package, which replaces homals. This package includes various other optimal scaling methods such as Morals (monotone regression), Princals (nonlinear PCA), Overals (nonlinear canonical correlation analysis), etc.
  • Simple and multiple correspondence analysis can be performed using corresp() and mca() in package MASS.
  • The package ade4 contains an extensive set of functions covering, e.g., principal components and simple, multiple, fuzzy, non-symmetric, and decentered correspondence analysis. Additional functionality is provided at Bioconductor in the package made4.
  • The package cocorresp fits predictive and symmetric co-correspondence analysis (CoCA) models to relate one data matrix to another data matrix.
  • Apart from several factor-analytic methods, FactoMineR performs CA including supplementary row and/or column points, as well as multiple correspondence analysis (MCA) with supplementary individuals, supplementary quantitative variables and supplementary qualitative variables.
  • Package vegan supports all basic ordination methods, including non-metric multidimensional scaling. The constrained ordination methods include constrained analysis of proximities, redundancy analysis, and constrained (canonical) and partially constrained correspondence analysis.
  • cabootcrs computes bootstrap confidence regions for CA.
  • cncaGUI implements a GUI with which users can construct and interact with canonical (non-symmetrical) CA.
  • SVD based multivariate exploratory methods such as PCA, CA, MCA (as well as a Hellinger form of CA), generalized PCA are implemented in ExPosition. The package also allows for supplementary data projection.
  • cds can be used for constrained dual scaling for detecting response styles.
  • CAvariants provides six variants of two-way CA: simple, singly ordered, doubly ordered, non-symmetrical, singly ordered non-symmetrical, and doubly ordered non-symmetrical CA.
  • MCAvariants provides MCA and ordered MCA via orthogonal polynomials.
  • Specific and class specific MCA on survey-like data can be fitted using soc.ca.
  • optiscale provides tools for performing an optimal scaling transformation on a data vector.
  • A general framework of optimal scaling methods is implemented in the aspect package.
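
A brief sketch of simple CA with the ca package, using its bundled smoke data set, may help orient readers:

```r
# Hedged sketch: simple correspondence analysis of a two-way frequency table
library(ca)
data(smoke)          # staff group x smoking category table shipped with ca
res <- ca(smoke)
summary(res)         # principal inertias, row/column contributions
plot(res)            # symmetric map of row and column points
```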

Factor Analysis (FA), Principal Component Analysis (PCA):

  • Exploratory FA is available in the package stats as the function factanal(); fa() and fa.poly() (for ordinal data) are provided in psych.
  • esaBcv estimates the number of latent factors and the factor matrix.
  • SparseFactorAnalysis scales count and binary data with sparse FA.
  • EFAutilities computes robust standard errors and factor correlations under a variety of conditions.
  • faoutlier implements influential case detection methods for FA and SEM.
  • The package psych includes functions such as fa.parallel() and VSS() for estimating the appropriate number of factors/components as well as ICLUST() for item clustering.
  • PCA can be fitted with prcomp() (based on svd(), preferred) as well as princomp() (based on eigen() for compatibility with S-PLUS). Additional rotation methods for FA based on gradient projection algorithms can be found in the package GPArotation. The package nFactors produces a non-graphical solution to the Cattell scree test. Some graphical PCA representations can be found in the psy package. paran implements Horn's test of principal components/factors.
  • FA and PCA with supplementary individuals and supplementary quantitative/qualitative variables can be performed using the FactoMineR package whereas MCMCpack has some options for sampling from the posterior for ordinal and mixed factor models.
  • The Gifi package implements Princals, a PCA version for mixed-scale level input data.
  • nsprcomp and elasticnet fit sparse PCA.
  • Threeway PCA models (Tucker, Parafac/Candecomp) can be fitted using PTAk, ThreeWay, and multiway.
  • Independent component analysis (ICA) can be computed using fastICA, ica, eegkit (designed for EEG data), and AnalyzeFMRI (designed for fMRI data).
  • A desired number of robust principal components can be computed with the pcaPP package.
  • bpca implements 2D and 3D biplots of multivariate data based on PCA and diagnostic tools of the quality of the reduction.
  • missMDA provides imputation of incomplete continuous or categorical datasets in principal component analysis (PCA), multiple correspondence analysis (MCA), or multiple factor analysis (MFA) models.
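
Using only base R, the two workhorse functions mentioned above can be sketched as follows (the choice of the mtcars data and of two factors is arbitrary):

```r
# Exploratory FA via factanal() and PCA via prcomp(), both in package stats
fa <- factanal(mtcars[, 1:7], factors = 2, rotation = "varimax")
print(fa$loadings, cutoff = 0.3)             # suppress small loadings

pca <- prcomp(mtcars[, 1:7], scale. = TRUE)  # svd-based PCA on standardized data
summary(pca)                                 # variance explained per component
```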

Structural Equation Models (SEM):

  • The package lavaan can be used to estimate a large variety of multivariate statistical models, including path analysis, confirmatory factor analysis, structural equation modeling and growth curve models. It includes the lavaan model syntax, which allows users to express their models in a compact way, and allows for ML, GLS, WLS, robust ML using Satorra-Bentler corrections, and FIML for data with missing values. It fully supports mean structures and multiple groups, and reports standardized solutions, fit measures, modification indices and more as output.
  • The OpenMx package allows for the estimation of a wide variety of advanced multivariate statistical models. It consists of a library of functions and optimizers that allow you to quickly and flexibly define an SEM model and estimate parameters given observed data.
  • The sem package fits general (i.e., latent-variable) SEMs by FIML, and structural equations in observed-variable models by 2SLS. Categorical variables in SEMs can be accommodated via the polycor package.
  • lslx fits semi-confirmatory SEM via penalized likelihood with elastic net or minimax concave penalty.
  • The lavaan.survey package allows for complex survey structural equation modeling (SEM). It fits structural equation models, including factor analysis, multivariate regression models with latent variables and many other latent variable models, while correcting estimates, standard errors, and chi-square-derived fit measures for a complex sampling design. It incorporates clustering, stratification, sampling weights, and finite population corrections into an SEM analysis.
  • The nlsem package fits nonlinear structural equation mixture models using the EM algorithm. Three different approaches are implemented: LMS (Latent Moderated Structural Equations), SEMM (Structural Equation Mixture Models), and NSEMM (Nonlinear Structural Equations Mixture Models).
  • A collection of functions for conducting meta-analysis using a structural equation modeling (SEM) approach via OpenMx is provided by the metaSEM package.
  • A general implementation of a computational framework for latent variable models (including structural equation models) is given in lava.
  • The pls package can be used for partial least-squares estimation. The package semPLS fits structural equation models using partial least squares (PLS). The PLS approach is referred to as a soft-modeling technique, requiring no distributional assumptions on the observed data.
  • simsem is a package designed to aid in Monte Carlo simulations using SEM (for methodological investigations, power analyses and much more).
  • Sim.DiffProc provides a framework for parallelized Monte Carlo simulation-estimation in multidimensional continuous-time models, which have been implemented as SEM.
  • semTools is a package of add-on functions that aid in fitting SEMs in R (for example, one function automates imputing missing data, fitting the model to each imputed dataset, and combining the results).
  • semPlot produces path diagrams and visual analysis for outputs of various SEM packages.
  • plotSEMM for graphing nonlinear relations among latent variables from structural equation mixture models.
  • SEMModComp conducts tests of difference in fit for mean and covariance structure models in SEM.
  • semdiag and influence.SEM implement outlier and leverage diagnostics and case influence measures for SEM.
  • gSEM conducts semi-supervised generalized SEM and piecewiseSEM fits piecewise SEM.
  • rsem implements robust SEM with missing data and auxiliary variables.
  • regsem performs regularization for SEM and sparseSEM implements sparse-aware ML for SEM.
  • Recursive partitioning (SEM trees, SEM forests) is implemented in semtree.
  • lsl conducts SEM via penalized likelihood (latent structure learning).
  • MIIVsem contains functions for estimating structural equation models using instrumental variables.
  • The systemfit package implements a wider variety of estimators for observed-variables models, including nonlinear simultaneous-equations models.
  • STARTS contains functions for estimating the STARTS model.
  • Interfaces between R and other SEM software: REQS, MplusAutomation, and lisrelToR.
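
A minimal example of the lavaan model syntax mentioned above, fitting the classical three-factor CFA on the HolzingerSwineford1939 data shipped with lavaan:

```r
library(lavaan)
# Three correlated latent factors measured by nine observed indicators
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
fit <- cfa(model, data = HolzingerSwineford1939)
summary(fit, fit.measures = TRUE, standardized = TRUE)
```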

Multidimensional Scaling (MDS):

  • The smacof package provides many approaches to metric and nonmetric MDS, including extensions for MDS with external constraints, spherical MDS, asymmetric MDS, three-way MDS (INDSCAL/IDIOSCAL), Bentler-Weeks model, unidimensional scaling, Procrustes, inverse MDS.
  • MASS and stats provide functionality for computing classical MDS via the cmdscale() function. Sammon mapping (sammon()) and nonmetric MDS (isoMDS()) are other relevant functions.
  • Nonmetric MDS can also be computed with metaMDS() in vegan. Furthermore, labdsv and ecodist provide the function nmds(), and some routines can be found in xgobi. Also, the ExPosition package implements a function for metric MDS.
  • Principal coordinate analysis can be computed with capscale() in vegan; in labdsv and ecodist using pco() and with dudi.pco() in ade4.
  • INDSCAL is also implemented in the SensoMineR package.
  • The package MLDS allows for the computation of maximum likelihood difference scaling (MLDS).
  • DistatisR implements the DiSTATIS/CovSTATIS 3-way metric MDS approach.
  • Symbolic MDS for interval-valued dissimilarities (hypersphere and hyperbox model) can be fitted with the smds package.
  • Supervised MDS is implemented in superMDS.
  • munfold provides functions for metric unfolding.
  • The asymmetry package implements the slide-vector model for asymmetric MDS.
  • semds fits asymmetric and three-way MDS within an SEM framework.
  • cops performs cluster optimized proximity scaling which refers to MDS methods that aim at pronouncing the clustered appearance of the configuration.
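
The base-R entry points above can be sketched as follows, using the built-in eurodist road distances:

```r
# Classical MDS with cmdscale() and nonmetric MDS with MASS::isoMDS()
library(MASS)
cfg <- cmdscale(eurodist, k = 2)   # classical (Torgerson) scaling
nm  <- isoMDS(eurodist, k = 2)     # Kruskal's nonmetric MDS; stress in nm$stress
plot(cfg, type = "n", xlab = "Dim 1", ylab = "Dim 2")
text(cfg, labels = rownames(cfg), cex = 0.7)
```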

Classical Test Theory (CTT):

  • The CTT package can be used to perform a variety of tasks and analyses associated with classical test theory: score multiple-choice responses, perform reliability analyses, conduct item analyses, and transform scores onto different scales.
  • Functions for correlation theory, meta-analysis (validity generalization), reliability, item analysis, inter-rater reliability, and classical utility are contained in the psychometric package.
  • For ICCs in multilevel models with slope heterogeneity see iccbeta.
  • An interactive shiny application for CTT is provided by CTTShiny.
  • The cocron package provides functions to statistically compare two or more alpha coefficients based on either dependent or independent groups of individuals.
  • The CMC package calculates and plots the step-by-step Cronbach-Mesbach curve, a method based on Cronbach's alpha coefficient of reliability for checking the unidimensionality of a measurement scale.
  • The betafunctions package includes an implementation of the so-called "Livingston and Lewis" approach to classification accuracy and consistency.
  • Cronbach's alpha, kappa coefficients, and intra-class correlation coefficients (ICC) can be found in the psy package. Functions for ICC computation can also be found in the packages psych, psychometric, and ICC.
  • A number of routines for scale construction and reliability analysis useful for personality and experimental psychology are contained in the package psych.
  • subscore can be used for computing subscores in CTT and IRT.
  • The quantifying construct validity procedure is implemented in qcv.
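
As a small sketch, Cronbach's alpha for a five-item scale can be computed with psych::alpha(); this assumes the bfi example data are available (they ship with psych or psychTools, depending on the version), with A1 being reverse-keyed:

```r
library(psych)
agree <- bfi[, c("A1", "A2", "A3", "A4", "A5")]  # agreeableness items
alpha(agree, keys = "A1")    # reverse-key A1 before computing alpha
```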

Knowledge Structure Analysis:

  • DAKS provides functions and example datasets for the psychometric theory of knowledge spaces. This package implements data analysis methods and procedures for simulating data and transforming different formulations in knowledge space theory.
  • The kst package contains basic functionality to generate, handle, and manipulate deterministic knowledge structures based on sets and relations. Functions for fitting probabilistic knowledge structures are included in the pks package.

Latent Class and Profile Analysis:

  • LCA with random effects can be performed with the package randomLCA. In addition, the package e1071 provides the function lca(). Another package is poLCA for polytomous variable latent class analysis. LCA can also be fitted using flexmix which optionally allows for the inclusion of concomitant variables and latent class regression.
  • LCAvarsel implements variable selection for LCA.
  • covLCA fits latent class models with covariate effects on underlying and measured variables.
  • lcda fits latent class discriminant analysis.
  • tidyLPA is a user-friendly implementation of latent profile analysis.
  • ClustVarLV clusters variables around latent variables.
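
A minimal poLCA sketch using its bundled values data (four dichotomous items, two latent classes):

```r
library(poLCA)
data(values)                   # Goodman (1974) example data shipped with poLCA
f <- cbind(A, B, C, D) ~ 1     # four manifest items, no covariates
res <- poLCA(f, values, nclass = 2)
res$P                          # estimated latent class proportions
```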

Paired Comparisons, Rankings, Ratings:

  • Bradley-Terry models for paired comparisons are implemented in the package BradleyTerry2 and in eba. The latter allows for the computation of elimination-by-aspects models.
  • Recursive partitioning trees for Bradley-Terry models are implemented in psychotree.
  • BTLLasso allows one to include subject-specific and object-specific covariates in paired comparison models and shrinks their effects using the lasso.
  • prefmod fits loglinear Bradley-Terry models (LLBT) and pattern models for paired comparisons, rankings, and ratings.
  • pcFactorStan provides convenience functions and pre-programmed Stan models related to the pairwise comparison factor model.
  • PLMIX fits finite mixtures of Plackett-Luce models for partial top rankings/orderings within the Bayesian framework.
  • A variety of unfolding techniques for rankings and ratings are implemented in smacof.
  • Thurstonian IRT models for forced-choice items can be fitted with thurstonianIRT and kcirt.
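
A hedged sketch of a Bradley-Terry fit with BradleyTerry2, using the journal citation data shipped with the package:

```r
library(BradleyTerry2)
data(citations)
cit <- countsToBinomial(citations)  # columns player1, player2, win1, win2
mod <- BTm(cbind(win1, win2), player1, player2, data = cit)
summary(mod)                        # worth parameters on the log scale
```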

Network Psychometrics:

  • Estimation of a sparse inverse covariance matrix using a lasso penalty (graphical lasso) can be achieved using glasso.
  • networktools includes assorted tools for network analysis (bridge centrality, impact, and goldbricker).
  • Bootstrap methods to assess accuracy and stability of estimated network structures and centrality indices are implemented in bootnet.
  • Permutation tests for network comparisons are implemented in NetworkComparisonTest.
  • Model-based recursive partitioning for networks: networktree.
  • Network structures for multilevel and graphical vector autoregression models can be obtained using mlVAR and graphicalVAR.
  • mgm estimates time-varying k-order mixed graphical models.
  • EstimateGroupNetwork can be used to simultaneously estimate networks from different groups or classes via joint graphical lasso.
  • Various implementations for Ising models: IsingSampler, elasticIsing, and IsingFit.
  • lvnet simultaneously estimates factor and network models.
  • Approaches for SEM and Confirmatory Network Analysis are implemented in psychonetrics. This includes multi-group (dynamic) SEM in combination with confirmatory network models from cross-sectional, time-series and panel data.
  • Network models for longitudinal data estimated within an SEM framework: gimme.
  • The qgraph package can be used to visualize data as networks.
  • NetworkToolbox implements network analysis and graph theory measures used in neuroscience, cognitive science, and psychology. Methods include various filtering methods and approaches such as threshold, dependency, information filtering networks, and efficiency-cost optimization.
  • Methods and Measures for semantic network analysis including partial node bootstrapping and significance tests: SemNeT.
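
A short sketch combining the glasso penalty with qgraph's plotting, using the big5 example data shipped with qgraph (the subset of 25 items is an arbitrary choice for readability):

```r
library(qgraph)
data(big5)                    # persons x personality items example data
items <- big5[, 1:25]         # arbitrary subset for a readable plot
net <- EBICglasso(cor(items), n = nrow(items))  # regularized partial correlations
qgraph(net, layout = "spring")
```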

Bayesian Psychometrics:

  • blavaan fits a variety of Bayesian latent variable models, including confirmatory factor analysis, structural equation models, and latent growth curve models.
  • An analytical framework for latent variables with different Bayesian learning methods, including partially confirmatory factor analysis and partially confirmatory IRT, is implemented in LAWBL.
  • Bayesian approaches for estimating item and person parameters by means of Gibbs-Sampling are included in MCMCpack. In addition, the pscl package allows for Bayesian IRT and roll call analysis.
  • LNIRT is a package for log-normal response time IRT modeling for responses and response times, estimated with MCMC.
  • edstan provides convenience functions and preprogrammed Stan models related to IRT.
  • fourPNO can be used for Bayesian 4-PL IRT estimation.
  • Simulation-based Bayesian inference for IRT latent traits can be performed using ltbayes.
  • Gibbs sampling for Bayesian estimation of (exploratory) reduced reparameterized unified models is implemented in rrum and errum.
  • For Bayesian estimation of the (exploratory) DINA (deterministic input, noisy and gate) see dina and edina.
  • Data package containing coded item and q matrices used in various psychometric publications: edmdata.
  • BayesLCA implements Bayesian LCA.
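
A minimal blavaan sketch, mirroring the lavaan syntax (note that MCMC estimation can take a while; the one-factor model below is an arbitrary illustration):

```r
library(blavaan)
model <- ' visual =~ x1 + x2 + x3 '   # one-factor CFA in lavaan syntax
fit <- bcfa(model, data = HolzingerSwineford1939,
            burnin = 500, sample = 1000)  # Bayesian CFA via MCMC
summary(fit)
```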

Other Related Packages:

  • The psychotools package provides an infrastructure for psychometric modeling such as data classes (e.g., for paired comparisons) and basic model fitting functions (e.g., for Rasch and Bradley-Terry models).
  • quickpsy is a package developed to quickly fit and plot psychometric functions for multiple conditions.
  • cNORM provides methods for generating regression based continuous norms. The approach does not rely on prior distribution assumptions and is thus non-parametric, but it can be combined with Box-Cox power transformations for semi-parametrically modelling the data as well.
  • A system for the management, assessment, and psychometric analysis of data from educational and psychological tests is implemented in dexter, with multi-stage test calibration in dexterMST.
  • Psychometric mixture models based on flexmix infrastructure are provided by means of the psychomix package (at the moment Rasch mixture models and Bradley-Terry mixture models).
  • The equate package contains functions for non-IRT equating under both random groups and nonequivalent groups with anchor test designs. Mean, linear, equipercentile and circle-arc equating are supported, as are methods for univariate and bivariate presmoothing of score distributions. Specific equating methods currently supported include Tucker, Levine observed score, Levine true score, Braun/Holland, frequency estimation, and chained equating.
  • The CopyDetect package contains several IRT and non-IRT based statistical indices proposed in the literature for detecting answer copying on multiple-choice examinations.
  • Interactive shiny application for analysis of educational tests and their items are provided by the ShinyItemAnalysis package.
  • Coefficients for interrater reliability and agreement can be computed with the irr package.
  • Psychophysical data can be analyzed with the psyphy package.
  • Functions and example datasets for Fechnerian scaling of discrete object sets are provided by fechner. It computes Fechnerian distances among objects representing subjective dissimilarities, and other related information.
  • The modelfree package provides functions for nonparametric estimation of a psychometric function and for estimation of a derived threshold and slope, and their standard deviations and confidence intervals.
  • Confidence intervals for standardized effect sizes: The MBESS package.
  • The mediation package allows both parametric and nonparametric causal mediation analysis. It also allows researchers to conduct sensitivity analyses for certain parametric models.
  • Functions for data screening, testing moderation, mediation, and estimating power are contained in the QuantPsyc package.
  • The package multiplex is especially designed for social networks with relations at different levels. The program has effective ways to treat multiple-network data sets, with routines that combine algebraic structures such as the partially ordered semigroup with the existing relational bundles found in multiple networks. An algebraic approach for two-mode networks is provided through Galois derivations between families of pairs of subsets.
  • Social Relations Analyses for round robin designs are implemented in the TripleR package. It implements all functionality of the SOREMO software and provides new functions such as the handling of missing values, significance tests for single groups, and the calculation of the self-enhancement index.
  • Fitting and testing multinomial processing tree models, a class of statistical models for categorical data with latent parameters, can be performed using the mpt package. These parameters are the link probabilities of a tree-like graph and represent the cognitive processing steps executed to arrive at observable response categories. The MPTinR package provides a user-friendly way for analysis of multinomial processing tree (MPT) models. The TreeBUGS package provides user-friendly methods to fit Bayesian hierarchical MPT models (beta-MPT and latent-trait MPT) and implements posterior-predictive checks, summary plots, correlations and regressions for person-level MPT parameters.
  • Beta regression for modeling beta-distributed dependent variables, e.g., rates and proportions, is available in betareg.
  • The cocor package provides functions to compare two correlations based on either dependent or independent groups.
  • The profileR package provides a set of tools that implement profile analysis and cross-validation techniques.
  • The TestScorer package provides a GUI for entering test items and obtaining raw and transformed scores. The results are shown on the console and can be saved to a tabular text file for further statistical analysis. Users can define their own tests and scoring procedures through the GUI.
  • wCorr calculates Pearson, Spearman, tetrachoric, polychoric, and polyserial correlation coefficients, in weighted or unweighted form.
  • The gtheory package fits univariate and multivariate generalizability theory (G-theory) models.
  • The GDINA package estimates various cognitive diagnosis models (CDMs) within the generalized deterministic inputs, noisy and gate (G-DINA) and sequential G-DINA model frameworks. It can also be used for Q-matrix validation, item and model fit statistics, model comparison at the test and item levels, and differential item functioning. A graphical user interface is also provided.
  • Simulation routines for the DINA and rRUM cognitive diagnosis models are implemented in simcdm.
  • TestDataImputation imputes missing item responses in test and assessment data.
  • lba performs latent budget analysis for compositional data (a two-way contingency table with an explanatory variable and a response variable).
  • LAM includes some procedures for latent variable modeling with a particular focus on multilevel data.
  • psychTools contains tools to accompany the psych package.
  • ata provides a collection of psychometric methods to process item metadata and use target assessment and measurement blueprint constraints to assemble a test form.
  • TestDesign implements optimal test design approaches for fixed and adaptive test construction.
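The multinomial processing tree entry above describes latent parameters as link probabilities of a tree-like graph. A minimal base-R sketch of such a tree (a one-high-threshold recognition model, written here purely for illustration; the helper below is not part of the mpt, MPTinR, or TreeBUGS APIs):

```r
# Illustrative one-high-threshold tree for an old/new recognition item:
# with probability D the item is detected and classified "old"; otherwise
# the respondent guesses "old" with probability g. D and g are the latent
# link probabilities; the observable categories are "hit" and "miss".
oht_probs <- function(D, g) {
  c(hit  = D + (1 - D) * g,    # detect, or fail to detect but guess "old"
    miss = (1 - D) * (1 - g))  # fail to detect and guess "new"
}
p <- oht_probs(D = 0.7, g = 0.4)
sum(p)  # the category probabilities sum to 1
```

Fitting such a model to observed category counts (e.g., by maximum likelihood or Bayesian hierarchical methods) is exactly what the packages listed above provide.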

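The cocor entry above concerns comparing two correlations. As a sketch of the classical procedure for the independent-groups case, Fisher's z test can be written directly in base R (the helper name below is illustrative, not a cocor function):

```r
# Fisher's z test of H0: rho1 = rho2 for correlations from two independent
# groups -- the textbook procedure underlying independent-groups comparisons.
# (Illustrative helper, not part of the cocor API.)
compare_indep_cors <- function(r1, n1, r2, n2) {
  z1 <- atanh(r1)                        # Fisher z transform of each correlation
  z2 <- atanh(r2)
  se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
  z  <- (z1 - z2) / se
  2 * pnorm(-abs(z))                     # two-sided p value
}
compare_indep_cors(r1 = 0.5, n1 = 100, r2 = 0.5, n2 = 80)  # identical r's give p = 1
```

cocor additionally covers dependent (overlapping and non-overlapping) correlations, which require tests beyond this simple independent-groups case.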

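The wCorr entry above lists weighted correlation coefficients; for the Pearson case, the weighted form can be written directly in base R as a sketch (an illustrative helper, not wCorr's API):

```r
# Weighted Pearson correlation from weighted means and (co)variances;
# with equal weights it reduces to cor(). Illustrative helper, not from wCorr.
weighted_pearson <- function(x, y, w) {
  mx  <- sum(w * x) / sum(w)           # weighted means
  my  <- sum(w * y) / sum(w)
  cxy <- sum(w * (x - mx) * (y - my))  # weighted cross-product
  vx  <- sum(w * (x - mx)^2)           # weighted sums of squares
  vy  <- sum(w * (y - my)^2)
  cxy / sqrt(vx * vy)                  # normalizing constants cancel in the ratio
}
x <- c(1, 2, 3, 4); y <- c(2, 4, 5, 9)
all.equal(weighted_pearson(x, y, w = rep(1, 4)), cor(x, y))  # equal weights recover cor()
```

The tetrachoric, polychoric, and polyserial variants assume underlying latent normal variables and are estimated quite differently, which is where wCorr's implementation matters.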
ade4 — 1.7-18

Analysis of Ecological Data: Exploratory and Euclidean Methods in Environmental Sciences

anacor — 1.1-3

Simple and Canonical Correspondence Analysis

AnalyzeFMRI — 1.1-24

Functions for Analysis of fMRI Datasets Stored in the ANALYZE or NIFTI Format

aspect — 1.0-5

A General Framework for Multivariate Analysis with Optimal Scaling

asymmetry — 2.0.3

Multidimensional Scaling of Asymmetric Data

ata — 1.1.1

Automated Test Assembly

betafunctions — 1.6.1

Functions for Working with Two- And Four-Parameter Beta Probability Distributions

betareg — 3.1-4

Beta Regression

BayesLCA — 1.9

Bayesian Latent Class Analysis

birtr — 1.0.0

The R Package for "The Basics of Item Response Theory Using R"

blavaan — 0.3-18

Bayesian Latent Variable Analysis

bootnet — 1.5

Bootstrap Methods for Various Network Estimation Routines

bpca — 1.3-4

Biplot of Multivariate Data Based on Principal Components Analysis

BradleyTerry2 — 1.1-2

Bradley-Terry Models

BTLLasso — 0.1-11

Modelling Heterogeneity in Paired Comparison Data

ca — 0.71.1

Simple, Multiple and Joint Correspondence Analysis

cabootcrs — 2.0

Bootstrap Confidence Regions for Simple and Multiple Correspondence Analysis

cacIRT — 1.4

Classification Accuracy and Consistency under Item Response Theory

catR — 3.16

Generation of IRT Response Patterns under Computerized Adaptive Testing

CAvariants — 5.6

Correspondence Analysis Variants

CDM — 7.5-15

Cognitive Diagnosis Modeling

cds — 1.0.3

Constrained Dual Scaling for Detecting Response Styles

ClustVarLV — 2.0.1

Clustering of Variables Around Latent Variables

CMC — 1.0

Cronbach-Mesbah Curve

cncaGUI — 1.0

Canonical Non-Symmetrical Correspondence Analysis in R

cNORM — 2.1.0

Continuous Norming

cocor — 1.1-3

Comparing Correlations

cocorresp — 0.4-3

Co-Correspondence Analysis Methods

cocron — 1.0-1

Statistical Comparisons of Two or more Alpha Coefficients

conquestr — 0.8.5

An R Front End for 'ACER ConQuest'

cops — 1.2-0

Cluster Optimized Proximity Scaling

CopyDetect — 1.3

Computing Response Similarity Indices for Multiple-Choice Tests

covLCA — 1.0

Latent Class Models with Covariate Effects on Underlying and Measured Variables

CTT — 2.3.3

Classical Test Theory Functions

CTTShiny — 0.1

Classical Test Theory via Shiny

DAKS — 2.1-3

Data Analysis and Knowledge Spaces

dexter — 1.1.4

Data Management and Analysis of Tests

dexterMST — 0.9.2

CML and Bayesian Calibration of Multistage Tests

DFIT — 1.1

Differential Functioning of Items and Tests

DIFboost — 0.3

Detection of Differential Item Functioning (DIF) in Rasch Models by Boosting Techniques

DIFlasso — 1.0-4

A Penalty Approach to Differential Item Functioning in Rasch Models

DIFplus — 1.1

Multilevel Mantel-Haenszel Statistics for Differential Item Functioning Detection

DIFtree — 3.1.6

Item Focussed Trees for the Identification of Items in Differential Item Functioning

difNLR — 1.3.7

DIF and DDF Detection by Non-Linear Regression Models

difR — 5.1

Collection of Methods to Detect Dichotomous Differential Item Functioning (DIF)

dina — 2.0.0

Bayesian Estimation of DINA Model

DistatisR — 1.0.1

DiSTATIS Three Way Metric Multidimensional Scaling

e1071 — 1.7-9

Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien

eba — 1.10-0

Elimination-by-Aspects Models

ecodist — 2.0.7

Dissimilarity-Based Functions for Ecological Analysis

edina — 0.1.1

Bayesian Estimation of an Exploratory Deterministic Input, Noisy and Gate Model

edmdata — 1.2.0

Data Sets for Psychometric Modeling

edstan — 1.0.6

Stan Models for Item Response Theory

eegkit — 1.0-4

Toolkit for Electroencephalography Data

EFAutilities — 2.1.1

Utility Functions for Exploratory Factor Analysis

elasticIsing — 0.2

Ising Network Estimation using Elastic Net and k-Fold Cross-Validation

elasticnet — 1.3

Elastic-Net for Sparse Estimation and Sparse PCA

emIRT — 0.0.11

EM Algorithms for Estimating Item Response Theory Models

equate — 2.0.7

Observed-Score Linking and Equating

equateIRT — 2.2.1

IRT Equating Methods

equateMultiple — 0.1.0

Equating of Multiple Forms

eRm — 1.0-2

Extended Rasch Modeling

errum — 0.0.3

Exploratory Reduced Reparameterized Unified Model Estimation

esaBcv — 1.2.1

Estimate Number of Latent Factors and Factor Matrix for Factor Analysis

EstCRM — 1.4

Calibrating Parameters for the Samejima's Continuous IRT Model

EstimateGroupNetwork — 0.3.1

Perform the Joint Graphical Lasso and Selects Tuning Parameters

ExPosition — 2.8.23

Exploratory Analysis with the Singular Value Decomposition

FactoMineR — 2.4

Multivariate Exploratory Data Analysis and Data Mining

faoutlier — 0.7.6

Influential Case Detection Methods for Factor Analysis and Structural Equation Models

fastICA — 1.2-3

FastICA Algorithms to Perform ICA and Projection Pursuit

fechner — 1.0-3

Fechnerian Scaling of Discrete Object Sets

flexmix — 2.3-17

Flexible Mixture Modeling

fourPNO — 1.1.0

Bayesian 4 Parameter Item Response Model

GDINA — 2.8.8

The Generalized DINA Model Framework

Gifi — 0.3-9

Multivariate Analysis with Optimal Scaling

gimme — 0.7-7

Group Iterative Multiple Model Estimation

glasso — 1.11

Graphical Lasso: Estimation of Gaussian Graphical Models

GLMMRR — 0.5.0

Generalized Linear Mixed Model (GLMM) for Binary Randomized Response Data

GPArotation — 2014.11-1

GPA Factor Rotation

GPCMlasso — 0.1-5

Differential Item Functioning in Generalized Partial Credit Models

graphicalVAR — 0.3

Graphical VAR for Experience Sampling Data

gSEM —

Semi-Supervised Generalized Structural Equation Modeling

gtheory — 0.1.2

Apply Generalizability Theory with R

homals — 1.0-9

Gifi Methods for Optimal Scaling

iarm — 0.4.2

Item Analysis in Rasch Models

ica — 1.0-2

Independent Component Analysis

ICC — 2.3.0

Facilitating Estimation of the Intraclass Correlation Coefficient

iccbeta — 1.2.0

Multilevel Model Intraclass Correlation for Slope Heterogeneity

ifaTools — 0.23

Toolkit for Item Factor Analysis with 'OpenMx'

immer — 1.1-35

Item Response Models for Multiple Ratings

influence.SEM — 2.2

Case Influence in Structural Equation Models

irr — 0.84.1

Various Coefficients of Interrater Reliability and Agreement

irtDemo — 0.1.4

Item Response Theory Demo Collection

irtoys — 0.2.1

A Collection of Functions Related to Item Response Theory (IRT)

irtplay — 1.6.3

Unidimensional Item Response Theory Modeling

irtProb — 1.2

Utilities and Probability Distributions Related to Multidimensional Person Item Response Models

irtrees — 1.0.0

Estimation of Tree-Based Item Response Models

IRTShiny — 1.2

Item Response Theory via Shiny

IsingFit — 0.3.1

Fitting Ising Models Using the ELasso Method

IsingSampler — 0.2.1

Sampling Methods and Distribution Functions for the Ising Model

jrt — 1.1.0

Item Response Theory Modeling and Scoring for Judgment Data

kcirt — 0.6.0

k-Cube Thurstonian IRT Models

kequate — 1.6.3

The Kernel Method of Test Equating

kst — 0.5-2

Knowledge Space Theory

labdsv — 2.0-1

Ordination and Multivariate Analysis for Ecology

LAM — 0.5-15

Some Latent Variable Models

latdiag — 0.3

Draws Diagrams Useful for Checking Latent Scales

lava — 1.6.10

Latent Variable Models

lavaan — 0.6-10

Latent Variable Analysis

lavaan.survey —

Complex Survey Structural Equation Modeling (SEM)

LAWBL — 1.4.0

Latent (Variable) Analysis with Bayesian Learning

lba — 2.4.4

Latent Budget Analysis for Compositional Data

LCAvarsel — 1.1

Variable Selection for Latent Class Analysis

lcda — 0.3

Latent Class Discriminant Analysis

lisrelToR — 0.1.4

Import output from LISREL into R

lme4 — 1.1-27.1

Linear Mixed-Effects Models using 'Eigen' and S4

LNIRT — 0.5.1

LogNormal Response Time Item Response Theory Models

lordif — 0.3-3

Logistic Ordinal Regression Differential Item Functioning using IRT

lsl — 0.5.6

Latent Structure Learning

lslx — 0.6.10

Semi-Confirmatory Structural Equation Modeling via Penalized Likelihood or Least Squares

ltbayes — 0.4

Simulation-Based Bayesian Inference for Latent Traits of Item Response Models

ltm — 1.1-1

Latent Trait Models under IRT

lvnet — 0.3.5

Latent Variable Network Modeling

MASS — 7.3-55

Support Functions and Datasets for Venables and Ripley's MASS

MBESS — 4.8.1

The MBESS R Package

MCAvariants — 2.6

Multiple Correspondence Analysis Variants

MCMCglmm — 2.33

MCMC Generalised Linear Mixed Models

MCMCpack — 1.6-0

Markov Chain Monte Carlo (MCMC) Package

mediation — 4.5.0

Causal Mediation Analysis

metaSEM —

Meta-Analysis using Structural Equation Modeling

mgm — 1.2-12

Estimating Time-Varying k-Order Mixed Graphical Models

MIIVsem — 0.5.8

Model Implied Instrumental Variable (MIIV) Estimation of Structural Equation Models

mirt — 1.35.1

Multidimensional Item Response Theory

mirtCAT — 1.12

Computerized Adaptive Testing with Multidimensional Item Response Theory

missMDA — 1.18

Handling Missing Values with Multivariate Data Analysis

mixRasch — 1.1

Mixture Rasch Models with JMLE

MLCIRTwithin — 2.1.1

Latent Class Item Response Theory (LC-IRT) Models under Within-Item Multidimensionality

MLDS — 0.4.7

Maximum Likelihood Difference Scaling

mlVAR — 0.5

Multi-Level Vector Autoregression

MplusAutomation — 1.0.0

An R Package for Facilitating Large-Scale Latent Variable Analyses in Mplus

modelfree — 1.1-1

Model-free estimation of a psychometric function

mokken — 3.0.6

Conducts Mokken Scale Analysis

mpt — 0.7-0

Multinomial Processing Tree Models

MPTinR — 1.14.1

Analyze Multinomial Processing Tree Models

mudfold — 1.1.2

Multiple UniDimensional unFOLDing

MultiLCIRT — 2.11

Multidimensional Latent Class Item Response Theory Models

multiplex — 2.9.7

Algebraic Tools for the Analysis of Multiple Social Networks

multiway — 1.0-6

Component Models for Multi-Way Data

munfold — 0.3.5

Metric Unfolding

NetworkComparisonTest — 2.2.1

Statistical Comparison of Two Networks Based on Three Invariance Measures

NetworkToolbox — 1.4.2

Methods and Measures for Brain, Cognitive, and Psychometric Network Analysis

networktools — 1.4.0

Tools for Identifying Important Nodes in Networks

networktree — 1.0.1

Recursive Partitioning of Network Models

nFactors — 2.4.1

Parallel Analysis and Other Non Graphical Solutions to the Cattell Scree Test

nlme — 3.1-155

Linear and Nonlinear Mixed Effects Models

nlsem — 0.8

Fitting Structural Equation Mixture Models

nsprcomp — 0.5.1-2

Non-Negative and Sparse PCA

OpenMx — 2.20.0

Extended Structural Equation Modelling

optiscale — 1.2.2

Optimal Scaling

ordinal — 2019.12-10

Regression Models for Ordinal Data

pairwise — 0.5.0-2

Rasch Model Parameters by Pairwise Algorithm

paran — 1.5.2

Horn's Test of Principal Components/Factors

pcaPP — 1.9-74

Robust PCA by Projection Pursuit

pcFactorStan — 1.5.3

Stan Models for the Paired Comparison Factor Model

pcIRT — 0.2.4

IRT Models for Polytomous and Continuous Item Responses

PCMRS — 0.1-3

Model Response Styles in Partial Credit Models

piecewiseSEM — 2.1.2

Piecewise Structural Equation Modeling

pks — 0.4-1

Probabilistic Knowledge Structures

PLMIX — 2.1.1

Bayesian Analysis of Finite Mixtures of Plackett-Luce Models for Partial Rankings/Orderings

PLmixed — 0.1.5

Estimate (Generalized) Linear Mixed Models with Factor Structures

plotSEMM — 2.4

Graphing Nonlinear Relations Among Latent Variables from Structural Equation Mixture Models

plRasch — 1.0

Log Linear by Linear Association models and Rasch family models by pseudolikelihood estimation

pls — 2.8-0

Partial Least Squares and Principal Component Regression

poLCA — 1.4.1

Polytomous variable Latent Class Analysis

polycor — 0.8-1

Polychoric and Polyserial Correlations

PP — 0.6.3-11

Person Parameter Estimation

prefmod — 0.8-34

Utilities to Fit Paired Comparison Models for Preferences

profileR — 0.3-5

Profile Analysis of Multivariate Data in R

pscl — 1.5.5

Political Science Computational Laboratory

psy — 1.1

Various procedures used in psychometry

psych — 2.1.9

Procedures for Psychological, Psychometric, and Personality Research

psychometric — 2.2

Applied Psychometric Theory

psychomix — 1.1-8

Psychometric Mixture Models

psychonetrics — 0.10

Structural Equation Modeling and Confirmatory Network Analysis

psychotools — 0.7-0

Psychometric Modeling Infrastructure

psychotree — 0.15-4

Recursive Partitioning Based on Psychometric Models

psychTools — 2.1.12

Tools to Accompany the 'psych' Package for Psychological Research

psyphy — 0.2-3

Functions for Analyzing Psychophysical Data in R

PTAk — 1.4-0

Principal Tensor Analysis on k Modes

pwrRasch — 0.1-2

Statistical Power Simulation for Testing the Rasch Model

qcv — 1.0

Quantifying Construct Validity

qgraph — 1.9

Graph Plotting Methods, Psychometric Data Visualization and Graphical Model Estimation

QuantPsyc — 1.5

Quantitative Psychology Tools

quickpsy —

Fits Psychometric Functions for Multiple Groups

RaschSampler — 0.8-8

Rasch Sampler

randomLCA — 1.1-1

Random Effects Latent Class Analysis

regsem — 1.8.0

Regularized Structural Equation Modeling

REQS — 0.8-12

R/EQS Interface

rpf — 1.0.11

Response Probability Functions

rrum — 0.2.0

Bayesian Estimation of the Reduced Reparameterized Unified Model with Gibbs Sampling

rsem — 0.5.0

Robust Structural Equation Modeling with Missing Data and Auxiliary Variables

sem — 3.1-13

Structural Equation Models

semdiag — 0.1.2

Structural equation modeling diagnostics

semds — 0.9-6

Structural Equation Multidimensional Scaling

semPLS — 1.0-10

Structural Equation Modeling Using Partial Least Squares

semPlot — 1.1.2

Path Diagrams and Visual Analysis of Various SEM Packages' Output

SEMModComp — 1.0

Model Comparisons for SEM

SemNeT — 1.4.3

Methods and Measures for Semantic Network Analysis

semTools — 0.5-5

Useful Tools for Structural Equation Modeling

semtree — 0.9.17

Recursive Partitioning for Structural Equation Models

SensoMineR — 1.26

Sensory Data Analysis

ShinyItemAnalysis — 1.4.0

Test and Item Analysis via Shiny

Sim.DiffProc — 4.8

Simulation of Diffusion Processes

simcdm — 0.1.1

Simulate Cognitive Diagnostic Model ('CDM') Data

simsem — 0.5-16

SIMulated Structural Equation Modeling

sirt — 3.11-21

Supplementary Item Response Theory Models

smacof — 2.1-3

Multidimensional Scaling

smds — 1.0

Symbolic Multidimensional Scaling

soc.ca — 0.8.0

Specific Correspondence Analysis for the Social Sciences

SparseFactorAnalysis — 1.0

Scaling Count and Binary Data with Sparse Factor Analysis

sparseSEM — 2.5

Sparse-aware Maximum Likelihood for Structural Equation Models

STARTS — 1.2-35

Functions for the STARTS Model

subscore — 3.1

Computing Subscores in Classical Test Theory and Item Response Theory

superMDS — 1.0.2

Implements the supervised multidimensional scaling (superMDS) proposal of Witten and Tibshirani (2011)

systemfit — 1.1-24

Estimating Systems of Simultaneous Equations

TAM — 3.7-16

Test Analysis Modules

TestDataImputation — 2.3

Missing Item Responses Imputation for Test and Assessment Data

TestDesign — 1.2.6

Optimal Test Design Approach to Fixed and Adaptive Test Construction

TestScorer — 1.7.2

GUI for Entering Test Items and Obtaining Raw and Transformed Scores

ThreeWay — 1.1.3

Three-Way Component Analysis

thurstonianIRT — 0.12.1

Thurstonian IRT Models

tidyLPA — 1.1.0

Easily Carry Out Latent Profile Analysis (LPA) Using Open-Source or Commercial Software

TreeBUGS — 1.4.7

Hierarchical Multinomial Processing Tree Modeling

TripleR — 1.5.3

Social Relation Model (SRM) Analyses for Single or Multiple Groups

vegan — 2.5-7

Community Ecology Package

VGAM — 1.1-5

Vector Generalized Linear and Additive Models

wCorr — 1.9.5

Weighted Correlations

WrightMap — 1.2.3

IRT Item-Person Map with 'ConQuest' Integration

xgobi — 1.2-15

Interface to the XGobi and XGvis programs for graphical data analysis

xxIRT — 2.1.2

Item Response Theory and Computer-Based Testing in R
