Create Validation Tests for Automated Content Analysis

This package creates standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based approaches. It offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion and topic intrusion tests, as described in Chang et al. (2009), as well as functions for generating gold-standard data, which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020).
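The word intrusion test mentioned above can be sketched independently of this package's API: each test item shows a human coder a topic's top words plus one "intruder" drawn from a different topic, and model precision is the share of items on which the coder spots the intruder. The following is a minimal, illustrative Python sketch of that idea; the toy data and function names are hypothetical and are not part of this package.

```python
import random

# Toy topic-word rankings; in practice these come from a fitted topic model.
topics = {
    "economy": ["market", "trade", "price", "bank", "growth", "tax"],
    "sports":  ["game", "team", "score", "coach", "season", "league"],
    "health":  ["virus", "vaccine", "doctor", "hospital", "patient", "drug"],
}

def word_intrusion_item(topic, topics, n_top=5, seed=None):
    """Return (candidate_words, intruder) for one word-intrusion item.

    The candidates are the topic's top words plus one intruder taken
    from another topic's top words; a coherent topic should make the
    intruder easy for a human coder to identify.
    """
    rng = random.Random(seed)
    own = topics[topic][:n_top]
    other = rng.choice([t for t in topics if t != topic])
    intruder = next(w for w in topics[other] if w not in own)
    words = own + [intruder]
    rng.shuffle(words)  # hide the intruder's position
    return words, intruder

def model_precision(answers):
    """Share of items where the coder picked the true intruder."""
    return sum(picked == truth for picked, truth in answers) / len(answers)

words, intruder = word_intrusion_item("economy", topics, seed=1)
# A coder who always spots the intruder yields a model precision of 1.0:
print(model_precision([(intruder, intruder)]))  # -> 1.0
```

A topic intrusion test follows the same logic at the document level: the coder sees a document plus candidate topics and must spot the topic the model ranked lowest for that document.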




0.3.4 by Chung-hong Chan, 3 months ago


Authors: Chung-hong Chan [aut, cre]

Documentation: PDF Manual

LGPL (>= 2.1) license

Imports stm, purrr, tibble, shiny, miniUI, text2vec, digest, R6, quanteda, irr, ggplot2, cowplot, dplyr, stats, utils

Suggests testthat, topicmodels, covr, stringr, knitr, rmarkdown

See at CRAN