Task view: Natural Language Processing

Last updated on 2021-10-20 by Fridolin Wild, Performance Augmentation Lab (PAL), Oxford Brookes University, UK

Natural language processing has come a long way since its foundations were laid in the 1940s and 50s (for an introduction see, e.g., Jurafsky and Martin (2008): Speech and Language Processing, Pearson Prentice Hall). This CRAN task view collects relevant R packages that support computational linguists in conducting analysis of speech and language on a variety of levels, focusing on words, syntax, semantics, and pragmatics.

In recent years, we have developed a framework to be used in packages dealing with the processing of written material: the package tm. Extension packages in this area are highly recommended to interface with tm's basic routines, and useRs are cordially invited to join the discussion on further developments of this framework package. To get started with natural language processing, the cRunch service and tutorials may be helpful.


Frameworks

  • tm provides a comprehensive text mining framework for R. The Journal of Statistical Software article Text Mining Infrastructure in R gives a detailed overview and presents techniques for count-based analysis methods, text clustering, text classification, and string kernels.
  • tm.plugin.dc allows for distributing corpora across storage devices (local files or Hadoop Distributed File System).
  • tm.plugin.mail helps with importing mail messages from archive files such as those used by Thunderbird (mbox, eml).
  • tm.plugin.alceste allows importing text corpora from files in the Alceste format.
  • tm.plugin.webmining allows importing news feeds in XML (RSS, ATOM) and JSON formats. Currently, the following feeds are implemented: Google Blog Search, Google Finance, Google News, NYTimes Article Search, Reuters News Feed, Yahoo Finance, and Yahoo Inplay.
  • RcmdrPlugin.temis is an R Commander plug-in providing an integrated solution to perform a series of text mining tasks such as importing and cleaning a corpus, and analyses like term and document counts, vocabulary tables, term co-occurrences and document similarity measures, time series analysis, correspondence analysis, and hierarchical clustering.
  • openNLP provides an R interface to OpenNLP, a collection of natural language processing tools including a sentence detector, tokenizer, part-of-speech tagger, shallow and full syntactic parsers, and a named-entity detector, using the Maxent Java package for training and using maximum entropy models.
  • Trained models for English and Spanish to be used with openNLP are available from http://datacube.wu.ac.at/ as packages openNLPmodels.en and openNLPmodels.es, respectively.
  • RWeka is an interface to Weka, a collection of machine learning algorithms for data mining tasks written in Java. Especially useful in the context of natural language processing is its functionality for tokenization and stemming.
  • tidytext provides text mining capabilities for word processing and sentiment analysis using dplyr, ggplot2, and other tidy tools.
  • udpipe provides language-independent tokenization, part-of-speech tagging, lemmatization, dependency parsing, and training of treebank-based annotation models.
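The framework packages above can be combined in a few lines. Below is a minimal sketch of a typical tm workflow; the two example documents are invented for illustration:

```r
library(tm)

# Two illustrative documents (invented for this example)
docs <- c("R provides rich text mining tools.",
          "The tm package offers a text mining framework.")

# Build a volatile in-memory corpus from the character vector
corpus <- VCorpus(VectorSource(docs))

# Standard preprocessing: lower-casing, punctuation and stopword removal
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

# Count-based representation as a term-document matrix
tdm <- TermDocumentMatrix(corpus)
inspect(tdm)
```

The resulting term-document matrix is the common entry point for the count-based analyses, clustering, and classification methods mentioned above.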

Words (lexical DBs, keyword extraction, string manipulation, stemming)

  • R's base package already provides a rich set of character manipulation routines. See help.search(keyword = "character", package = "base") for more information on these capabilities.
  • wordnet provides an R interface to WordNet, a large lexical database of English.
  • RKEA provides an R interface to KEA (Version 5.0). KEA (for Keyphrase Extraction Algorithm) allows for extracting keyphrases from text documents. It can be either used for free indexing or for indexing with a controlled vocabulary.
  • gsubfn can be used for certain parsing tasks such as extracting words from strings by content rather than by delimiters. demo("gsubfn-gries") shows an example of this in a natural language processing context.
  • textreuse provides a set of tools for measuring similarity among documents and helps with detecting passages which have been reused. The package implements shingled n-gram, skip n-gram, and other tokenizers; similarity/dissimilarity functions; pairwise comparisons; minhash and locality sensitive hashing algorithms; and a version of the Smith-Waterman local alignment algorithm suitable for natural language.
  • boilerpipeR helps with the extraction and sanitizing of text content from HTML files: removal of ads, sidebars, and headers using the boilerpipe Java library.
  • tau contains basic string manipulation and analysis routines needed in text processing such as dealing with character encoding, language, pattern counting, and tokenization.
  • SnowballC provides exactly the same API as Rstem, but uses a slightly different design of the C libstemmer library from the Snowball project. It also supports two more languages.
  • stringi provides R language wrappers to the International Components for Unicode (ICU) library and allows for: conversion of text encodings, string searching and collation in any locale, Unicode normalization of text, handling texts with mixed reading direction (e.g., left to right and right to left), and text boundary analysis (for tokenizing on different aggregation levels or to identify suitable line wrapping locations).
  • stringdist implements an approximate string matching version of R's native 'match' function. It can calculate various string distances based on edits (Damerau-Levenshtein, Hamming, Levenshtein, optimal string alignment), q-grams (q-gram, cosine, Jaccard distance), or heuristic metrics (Jaro, Jaro-Winkler). An implementation of Soundex is provided as well. Distances can be computed between character vectors while taking proper care of encoding, or between integer vectors representing generic sequences.
  • Rstem (available from Omegahat) is an alternative interface to a C version of Porter's word stemming algorithm.
  • koRpus is a diverse collection of functions for automatic language detection, hyphenation, several indices of lexical diversity (e.g., type token ratio, HD-D/vocd-D, MTLD) and readability (e.g., Flesch, SMOG, LIX, Dale-Chall). See the web page for more information.
  • ore provides an alternative to R's built-in functionality for handling regular expressions, based on the Onigmo Regular Expression Library. It offers first-class compiled regex objects, partial matching, and function-based substitutions, amongst other features. A benchmark comparing results for ore functions with stringi and the R base implementation is available at jonclayden/regex-performance.
  • languageR provides data sets and functions exemplifying statistical methods, and some facilitatory utility functions used in the book by R. H. Baayen: "Analyzing Linguistic Data: a Practical Introduction to Statistics Using R", Cambridge University Press, 2008.
  • zipfR offers some statistical models for word frequency distributions. The utilities include functions for loading, manipulating and visualizing word frequency data and vocabulary growth curves. The package also implements several statistical models for the distribution of word frequencies in a population. (The name of this library derives from the most famous word frequency distribution, Zipf's law.)
  • wordcloud provides a visualisation similar to the famous wordle ones: it horizontally and vertically distributes features in a pleasing visualisation with the font size scaled by frequency.
  • hunspell is a stemmer and spell-checker library designed for languages with rich morphology and complex word compounding or character encoding. The package can check and analyze individual words as well as search for incorrect words within text, LaTeX, or (R package) manual documents.
  • phonics provides a collection of phonetic algorithms including Soundex, Metaphone, NYSIIS, Caverphone, and others.
  • tesseract is an OCR engine with unicode (UTF-8) support that can recognize over 100 languages out of the box.
  • mscsweblm4r provides an interface to the Microsoft Cognitive Services Web Language Model API and can be used to calculate the probability of a sequence of words appearing together, compute the conditional probability that a specific word will follow an existing sequence of words, retrieve the list of words (completions) most likely to follow a given sequence of words, and insert spaces into a string of words adjoined together without any spaces (hashtags, URLs, etc.).
  • mscstexta4r provides an interface to the Microsoft Cognitive Services Text Analytics API and can be used to perform sentiment analysis, topic detection, language detection, and key phrase extraction.
  • bnosac/sentencepiece (available from github) is an unsupervised tokeniser producing Byte Pair Encoding (BPE), Unigram, Char, or Word models.
  • tokenizers helps split text into tokens, supporting shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, lines, and regular expressions.
  • tokenizers.bpe helps split text into subword tokens, implemented using Byte Pair Encoding and the YouTokenToMe library.
  • crfsuite uses Conditional Random Fields for labelling sequential data.
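As a brief sketch of how several of the word-level packages above interact, the following combines Snowball stemming, an edit-distance computation, and n-gram tokenization (all calls are standard exports of the respective packages):

```r
library(SnowballC)   # Snowball stemmers
library(stringdist)  # string distance measures
library(tokenizers)  # tokenization helpers

# Reduce inflected word forms to their stems (English Snowball stemmer)
wordStem(c("running", "runs", "walked"), language = "english")

# Classic Levenshtein example: three edits turn "kitten" into "sitting"
stringdist("kitten", "sitting", method = "lv")  # 3

# Shingled bigrams; tokenize_ngrams() returns a list, one element per input
tokenize_ngrams("the quick brown fox", n = 2)[[1]]
```

Swapping method = "lv" for "osa", "jw", or "cosine" selects the other distance families listed in the stringdist entry above.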


Semantics

  • lsa provides routines for performing a latent semantic analysis with R. The basic idea of latent semantic analysis (LSA) is that texts have a higher-order (latent semantic) structure which, however, is obscured by word usage (e.g., through the use of synonyms or polysemy). By using conceptual indices derived statistically via a truncated singular value decomposition (a two-mode factor analysis) of a given document-term matrix, this variability problem can be overcome. The article Investigating Unstructured Texts with Latent Semantic Analysis gives a detailed overview and demonstrates the use of the package with examples from the area of technology-enhanced learning. The book Learning Analytics in R with LSA, SNA, and MPIA provides comprehensive package-by-package examples and code samples.
  • topicmodels provides an interface to the C code for Latent Dirichlet Allocation (LDA) models and Correlated Topics Models (CTM) by David M. Blei and co-authors and the C++ code for fitting LDA models using Gibbs sampling by Xuan-Hieu Phan and co-authors.
  • BTM helps identify topics in texts from term-term cooccurrences (hence 'biterm' topic model, BTM).
  • topicdoc provides topic-specific diagnostics for LDA and CTM topic models to assist in evaluating topic quality.
  • lda implements Latent Dirichlet Allocation and related models similar to LSA and topicmodels.
  • stm (Structural Topic Model) implements a topic model derivate that can include document-level meta-data. The package also includes tools for model selection, visualization, and estimation of topic-covariate regressions.
  • kernlab allows creating and computing with string kernels, such as full string, spectrum, or bounded-range string kernels. It can directly use the document format used by tm as input.
  • bnosac/golgotha (not yet on CRAN) provides a wrapper to Bidirectional Encoder Representations from Transformers (BERT) for language modelling and textual entailment in particular.
  • ruimtehol provides a neural network machine learning approach to vector space semantics, implementing an interface to StarSpace and providing means for classification, proximity measurement, and model management (training, predicting, several interfaces for textual entailment of varying granularity).
  • skmeans helps with clustering, providing several algorithms for spherical k-means partitioning.
  • movMF provides another clustering alternative, fitting mixtures of von Mises-Fisher distributions to unit-length vectors.
  • textir is a suite of tools for text and sentiment mining.
  • textcat provides support for n-gram based text categorization.
  • textrank extends the PageRank algorithm to summarize text by calculating how sentences are related to one another.
  • corpora offers utility functions for the statistical analysis of corpus frequency data.
  • text2vec provides tools for text vectorization, topic modeling (LDA, LSA), word embeddings (GloVe), and similarities.
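To illustrate the topic-modelling packages listed above, here is a small example fitting a two-topic LDA model with topicmodels on the AssociatedPress document-term matrix that ships with the package; the choice of k = 2 and the 50-document subset are arbitrary:

```r
library(topicmodels)

# Document-term matrix of Associated Press articles bundled with the package
data("AssociatedPress", package = "topicmodels")

# Fit a two-topic LDA model on a small subset (VEM estimation by default)
lda <- LDA(AssociatedPress[1:50, ], k = 2, control = list(seed = 42))

# Inspect the five most probable terms per topic
terms(lda, 5)
```

The same document-term matrix could equally be passed to the CTM() fitting function from topicmodels, or converted for use with the lda and stm packages.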


Pragmatics

  • qdap helps with quantitative discourse analysis of transcripts.
  • quanteda supports quantitative analysis of textual data.


Corpora

  • corporaexplorer facilitates visual information retrieval over document collections, supporting filtering and both corpus-level and document-level visualisation via interactive web apps built with Shiny.
  • textplot provides various methods for corpus-, document-, and sentence-level visualisation.
  • tm.plugin.factiva, tm.plugin.lexisnexis, tm.plugin.europresse allow importing press and Web corpora from (respectively) Dow Jones Factiva, LexisNexis, and Europresse.


CRAN packages:

boilerpipeR — 1.3.2

Interface to the Boilerpipe Java Library

BTM — 0.3.6

Biterm Topic Models for Short Text

corpora — 0.5

Statistics and Data Sets for Corpus Frequency Data

corporaexplorer — 0.8.4

A 'Shiny' App for Exploration of Text Collections

crfsuite — 0.3.4

Conditional Random Fields for Labelling Sequential Data in Natural Language Processing

gsubfn — 0.7

Utilities for Strings and Function Arguments

hunspell — 3.0.1

High-Performance Stemmer, Tokenizer, and Spell Checker

kernlab — 0.9-29

Kernel-Based Machine Learning Lab

koRpus — 0.13-8

Text Analysis with Emphasis on POS Tagging, Readability, and Lexical Diversity

languageR — 1.5.0

Analyzing Linguistic Data: A Practical Introduction to Statistics

lda — 1.4.2

Collapsed Gibbs Sampling Methods for Topic Models

lsa — 0.73.2

Latent Semantic Analysis

movMF — 0.2-6

Mixtures of von Mises-Fisher Distributions

mscstexta4r — 0.1.2

R Client for the Microsoft Cognitive Services Text Analytics REST API

mscsweblm4r — 0.1.2

R Client for the Microsoft Cognitive Services Web Language Model REST API

openNLP — 0.2-7

Apache OpenNLP Tools Interface

ore — 1.7.0

An R Interface to the Onigmo Regular Expression Library

phonics — 1.3.10

Phonetic Spelling Algorithms

qdap — 2.4.3

Bridging the Gap Between Qualitative Data and Quantitative Analysis

quanteda — 3.1.0

Quantitative Analysis of Textual Data

RcmdrPlugin.temis — 0.7.10

Graphical Integrated Text Mining Solution

RKEA — 0.0-6

R/KEA Interface

ruimtehol — 0.3

Learn Text 'Embeddings' with 'Starspace'

RWeka — 0.4-43

R/Weka Interface

skmeans — 0.2-13

Spherical k-Means Clustering

SnowballC — 0.7.0

Snowball Stemmers Based on the C 'libstemmer' UTF-8 Library

stm — 1.3.6

Estimation of the Structural Topic Model

stringdist — 0.9.8

Approximate String Matching, Fuzzy Text Search, and String Distance Functions

stringi — 1.7.5

Character String Processing Facilities

tau — 0.0-24

Text Analysis Utilities

tesseract — 4.1.2

Open Source OCR Engine

text2vec — 0.6

Modern Text Mining Framework for R

textcat — 1.0-7

N-Gram Based Text Categorization

textir — 2.0-5

Inverse Regression for Text Analysis

textplot — 0.2.1

Text Plots

textrank — 0.3.1

Summarize Text by Ranking Sentences and Finding Keywords

textreuse — 0.1.5

Detect Text Reuse and Document Similarity

tidytext — 0.3.2

Text Mining using 'dplyr', 'ggplot2', and Other Tidy Tools

tm — 0.7-8

Text Mining Package

tm.plugin.alceste — 1.1

Import Texts from Files in the 'Alceste' Format Using the 'tm' Text Mining Framework

tm.plugin.dc — 0.2-10

Text Mining Distributed Corpus Plug-in

tm.plugin.europresse — 1.4

Import Articles from 'Europresse' Using the 'tm' Text Mining Framework

tm.plugin.factiva — 1.8

Import Articles from 'Factiva' Using the 'tm' Text Mining Framework

tm.plugin.lexisnexis — 1.4.1

Import Articles from 'LexisNexis' Using the 'tm' Text Mining Framework

tm.plugin.mail — 0.2-1

Text Mining E-Mail Plug-in

tm.plugin.webmining — 1.3

Retrieve Structured, Textual Data from Various Web Sources

tokenizers — 0.2.1

Fast, Consistent Tokenization of Natural Language Text

tokenizers.bpe — 0.1.0

Byte Pair Encoding Text Tokenization

topicdoc — 0.1.0

Topic-Specific Diagnostics for LDA and CTM Topic Models

topicmodels — 0.2-12

Topic Models

udpipe — 0.8.6

Tokenization, Parts of Speech Tagging, Lemmatization and Dependency Parsing with the 'UDPipe' 'NLP' Toolkit

wordcloud — 2.6

Word Clouds

wordnet — 0.1-15

WordNet Interface

zipfR — 0.6-70

Statistical Models for Word Frequency Distributions
