Create Validation Tests for Automated Content Analysis

Creates standard human-in-the-loop validity tests for common automated content analysis methods such as topic modeling and dictionary-based approaches. The package offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion and topic intrusion tests, as described in Chang et al. (2009) <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>. It also provides functions for generating gold-standard data, which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020).
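A minimal sketch of that prepare/administer/evaluate workflow, assuming a fitted stm topic model `model` and a character vector of documents `texts` (both hypothetical placeholders); function names follow the package documentation:

library(oolong)

## Prepare: generate a test from a fitted topic model; supplying the
## underlying corpus as well adds topic intrusion test cases.
oolong_test1 <- create_oolong(input_model = model, input_corpus = texts)
oolong_test2 <- clone_oolong(oolong_test1)  # clone before any coding, for a second coder

## Administer: each call launches a Shiny interface for a human coder.
oolong_test1$do_word_intrusion_test()
oolong_test1$do_topic_intrusion_test()
oolong_test1$lock()
## (the second coder repeats the same steps with oolong_test2)

## Evaluate: summarize results across coders, e.g. model precision
## and topic log odds.
summarize_oolong(oolong_test1, oolong_test2)

## Gold-standard generation for dictionary-based methods: with only a
## corpus supplied, create_oolong() produces a gold standard test.
gold <- create_oolong(input_corpus = texts, construct = "positive")
gold$do_gold_standard_test()
gold$lock()
gold_corpus <- gold$turn_gold()  # quanteda corpus carrying the human scores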




install.packages("oolong")

0.3.4 by Chung-hong Chan, 6 months ago


https://github.com/chainsawriot/oolong


Report a bug at https://github.com/chainsawriot/oolong/issues


Browse source code at https://github.com/cran/oolong


Authors: Chung-hong Chan [aut, cre]


Documentation: PDF Manual


LGPL (>= 2.1) license


Imports stm, purrr, tibble, shiny, miniUI, text2vec, digest, R6, quanteda, irr, ggplot2, cowplot, dplyr, stats, utils

Suggests testthat, topicmodels, covr, stringr, knitr, rmarkdown


See at CRAN