Processing Data for Quantitative Language Comparison (QLC)

This is a collection of functions to read, recode, and transcode data for QLC.


Functions for data management in Quantitative Language Comparison (QLC)

The package combines various methods to deal with data in language comparison, and it is intended to grow in the future to allow different datasets to be used and compared.

It consists of various read and write functions to import and produce different kinds of data.

When using external data, there are often various tweaks that one would like to perform before using the data for further research. This package offers assistance with some common recoding problems for nominal data through the function recode. Please see the vignette for a detailed explanation of the intended usage.
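To make the kind of problem concrete, here is a hand-rolled base R illustration of recoding nominal data, i.e. merging fine-grained values into broader categories (this sketch does not use the package's recode function; the variable names are invented for the example):

```r
# Hypothetical nominal data: basic word order of some languages
word_order <- c("SOV", "SVO", "VSO", "VOS", "SOV")

# A "recoding": map the fine-grained values onto a coarser attribute
verb_position <- c(SOV = "verb-final", SVO = "verb-medial",
                   VSO = "verb-initial", VOS = "verb-initial")

# Apply the recoding by named lookup
unname(verb_position[word_order])
# "verb-final" "verb-medial" "verb-initial" "verb-initial" "verb-final"
```

The recode function generalizes this idea: the mapping decisions are stored in a separate recoding profile, so they stay documented and reusable instead of being buried in ad-hoc code.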

To process strings, it is often very useful to tokenize them into graphemes (i.e. functional units of the orthography), and possibly to replace those graphemes by other symbols to harmonize the orthographic representation of different sources ('transcription'). As a quick and easy way to specify, save, and document the decisions taken for the tokenization, we propose using an orthography profile. Functions to read and write orthography profiles are provided in this package. The main function tokenize can check orthography profiles against data, and tokenize data into (tailored) graphemes according to orthography profiles.
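To illustrate the idea behind an orthography profile (this is a simplified sketch in base R, not the package's own implementation, and the helper name tokenize_greedy is invented for the example), a profile can be thought of as a list of tailored graphemes that are matched greedily, longest first:

```r
# A toy "orthography profile": the functional units of the orthography,
# including multi-character graphemes like "sch" and "sh"
profile <- c("sch", "sh", "s", "c", "h", "o", "l", "i", "p")

# Greedy longest-match tokenization against the profile
tokenize_greedy <- function(word, graphemes) {
  graphemes <- graphemes[order(-nchar(graphemes))]  # try longest graphemes first
  out <- character(0)
  while (nchar(word) > 0) {
    hit <- graphemes[startsWith(word, graphemes)][1]
    if (is.na(hit)) hit <- substr(word, 1, 1)  # unknown symbol: keep as-is
    out <- c(out, hit)
    word <- substr(word, nchar(hit) + 1, nchar(word))
  }
  out
}

tokenize_greedy("school", profile)  # "sch" "o" "o" "l"
tokenize_greedy("ship", profile)    # "sh" "i" "p"
```

The tokenize function in this package goes well beyond this sketch: it can also check a profile against the data (reporting unmatched symbols) and transliterate graphemes into other representations.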

This is an early alpha version, but it should work. You can install the package directly from CRAN:

install.packages("qlcData")

Have a look at the examples in the help files and at the vignettes to get an idea of how to use the package.
If you want the latest changes, it is easy to install this package directly from GitHub into R by using:

devtools::install_github("cysouw/qlcData")
There are vignettes explaining the intended usage of this package. Unfortunately, the vignettes are not built by default when you install the package from GitHub. You can try the following, but it might throw an error:

devtools::install_github("cysouw/qlcData", build_vignettes = TRUE)

A few functions are also available through a bash terminal. You will have to manually softlink these interfaces to your PATH; for example, to link the function tokenize to /usr/local/bin/ use something like:

ln -is `Rscript -e 'cat(file.path(find.package("qlcData"), "exec", "tokenize"))'` /usr/local/bin

The available executables are tokenize, writeprofile, and pass_align.

Michael Cysouw [email protected]


qlcData 0.2

  • changed the default recoding profile to empirically attested combinations of features
  • added Glottolog data with the access functions getTree() and asPhylo()
  • added the possibility to use factors in recoding
  • included a Shiny app for tokenization, with a helper launcher launch_shiny()
  • added docopt executables to exec/ with some help on how to softlink these to PATH
  • ensured orthography profiles are treated as character data, not as factors
  • added the function pass_align() to transfer alignments to a string, with a helper join_align()
  • changed vignette naming

qlcData 0.1

  • initial alpha version on CRAN



Version 0.2.1 by Michael Cysouw




GPL-3 license

Imports stringi, yaml, shiny, docopt, data.tree, phytools, ape

Suggests knitr, rmarkdown
