Task view: Cluster Analysis & Finite Mixture Models

Last updated on 2021-08-15 by Friedrich Leisch, Bettina Gruen

This CRAN Task View contains a list of packages that can be used for finding groups in data and modeling unobserved cross-sectional heterogeneity. Many packages provide functionality for more than one of the topics listed below; the section headings are mainly meant as quick starting points rather than an ultimate categorization. Except for packages stats and cluster (which ship with base R and hence are part of every R installation), each package is listed only once.

Most, but not all, of the packages listed in this CRAN Task View are distributed under the GPL. Please have a look at the DESCRIPTION file of each package to check under which license it is distributed.

Hierarchical Clustering:

  • Functions hclust() from package stats and agnes() from cluster are the primary functions for agglomerative hierarchical clustering; function diana() can be used for divisive hierarchical clustering. Faster alternatives to hclust() are provided by the packages fastcluster and flashClust.
  • Function as.dendrogram() from stats and associated methods can be used for improved visualization of cluster dendrograms.
  • The dendextend package provides functions for easy visualization (coloring labels and branches, etc.), manipulation (rotating, pruning, etc.) and comparison of dendrograms (tanglegrams with heuristics for optimal branch rotations, and tree correlation measures with bootstrap and permutation tests for significance).
  • Package dynamicTreeCut contains methods for detection of clusters in hierarchical clustering dendrograms.
  • Package genieclust implements a fast hierarchical clustering algorithm whose linkage criterion is a variant of the single linkage method combined with the Gini inequality measure; this robustifies the linkage while retaining the computational efficiency needed for larger data sets.
  • Package idendr0 allows interactive exploration of hierarchical clustering dendrograms and the clustered data. The data can be visualized (and interacted with) in a built-in heat map, but also in GGobi dynamic interactive graphics (provided by rggobi), or base R plots.
  • Package isopam uses an algorithm which is based on the classification of ordination scores from isometric feature mapping. The classification is performed either as a hierarchical, divisive method or as non-hierarchical partitioning.
  • Package mdendro provides an alternative implementation of agglomerative hierarchical clustering. The package natively handles similarity matrices, calculates variable-group dendrograms, which solve the non-uniqueness problem that arises when there are ties in the data, and calculates five descriptors for the final dendrogram: cophenetic correlation coefficient, space distortion ratio, agglomerative coefficient, chaining coefficient, and tree balance.
  • The package protoclust implements a form of hierarchical clustering that associates a prototypical element with each interior node of the dendrogram. Using the package's plot() function, one can produce dendrograms that are prototype-labeled and are therefore easier to interpret.
  • pvclust is a package for assessing the uncertainty in hierarchical cluster analysis. It provides approximately unbiased p-values as well as bootstrap p-values.
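As a minimal sketch of the agglomerative workflow described above, using only functions from stats and cluster that ship with every R installation (the data set and the linkage method are arbitrary illustrative choices):

```r
# Agglomerative hierarchical clustering with base R (stats)
d  <- dist(scale(USArrests))          # Euclidean distances on standardized data
hc <- hclust(d, method = "average")   # average linkage; agnes() in cluster is similar
plot(as.dendrogram(hc))               # dendrogram visualization
groups <- cutree(hc, k = 4)           # cut the tree into 4 clusters
table(groups)                         # cluster sizes
```

Replacing hclust() with fastcluster::hclust() gives the same interface with better scaling on large data.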

Partitioning Clustering:

  • Function kmeans() from package stats provides several algorithms for computing partitions with respect to Euclidean distance.
  • Function pam() from package cluster implements partitioning around medoids and can work with arbitrary distances. Function clara() is a wrapper to pam() for larger data sets. Silhouette plots and spanning ellipses can be used for visualization.
  • Package apcluster implements Frey's and Dueck's Affinity Propagation clustering. The algorithms in the package are analogous to the Matlab code published by Frey and Dueck.
  • Package ClusterR implements k-means, mini-batch-kmeans, k-medoids, affinity propagation clustering and Gaussian mixture models with the option to plot, validate, predict (new data) and estimate the optimal number of clusters. The package takes advantage of RcppArmadillo to speed up the computationally intensive parts of the functions.
  • Package clusterSim searches for the optimal clustering procedure for a given dataset.
  • Package clustMixType implements Huang's k-prototypes extension of k-means for mixed type data.
  • Package evclust implements various clustering algorithms that produce a credal partition, i.e., a set of Dempster-Shafer mass functions representing the membership of objects to clusters.
  • Package flexclust provides k-centroid cluster algorithms for arbitrary distance measures, hard competitive learning, neural gas and QT clustering. Neighborhood graphs and image plots of partitions are available for visualization. Some of this functionality is also provided by package cclust.
  • Package kernlab provides a weighted kernel version of the k-means algorithm via kkmeans() and spectral clustering via specc().
  • Package kml provides k-means clustering specifically for longitudinal (joint) data.
  • Package skmeans allows spherical k-means clustering, i.e., k-means clustering with cosine similarity. It features several methods, including a genetic and a simple fixed-point algorithm and an interface to the CLUTO vcluster program for clustering high-dimensional datasets.
  • Package Spectrum implements a self-tuning spectral clustering method for single or multi-view data and uses either the eigengap or multimodality gap heuristics to determine the number of clusters. The method is sufficiently flexible to cluster a wide range of Gaussian and non-Gaussian structures with automatic selection of K.
  • Package tclust allows for trimmed k-means clustering. In addition, other covariance structures can be specified for the clusters.
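A short sketch contrasting kmeans() from stats with pam() from cluster; note that pam() accepts an arbitrary dissimilarity object (the Manhattan distance here is an arbitrary illustrative choice):

```r
# Partitioning clustering: k-means (stats) versus medoids (cluster::pam)
library(cluster)
x  <- scale(iris[, 1:4])
km <- kmeans(x, centers = 3, nstart = 25)        # Euclidean k-means, 25 random starts
pm <- pam(dist(x, method = "manhattan"), k = 3)  # medoids on an arbitrary distance
table(kmeans = km$cluster, pam = pm$clustering)  # cross-tabulate the two partitions
```

For data sets too large for pam(), clara() applies the same idea to subsamples.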

Model-Based Clustering:

  • ML estimation:
    • For semi- or partially supervised problems, where labels are given for part of the observations with certainty or with some probability, package bgmm provides belief-based and soft-label mixture modeling for mixtures of Gaussians with the EM algorithm.
    • EMCluster provides EM algorithms and several efficient initialization methods for model-based clustering of finite mixtures of Gaussian distributions with unstructured dispersion in unsupervised as well as semi-supervised learning situations.
    • Packages funHDDC and funFEM implement model-based clustering of functional data. The funFEM package implements the funFEM algorithm, which clusters time series or, more generally, functional data. It is based on a discriminative functional mixture model which clusters the data in a single discriminative functional subspace; this model has the advantage of being parsimonious and can therefore handle long time series. The funHDDC package implements the funHDDC algorithm, which clusters functional data within group-specific functional subspaces based on a functional mixture model. Looking at the group-specific functional curves afterwards allows meaningful interpretations.
    • Package GLDEX fits mixtures of generalized lambda distributions.
    • Package GMCM fits Gaussian mixture copula models for unsupervised clustering and meta-analysis.
    • Package HDclassif provides function hddc to fit Gaussian mixture models to high-dimensional data, where it is assumed that the data live in a subspace of lower dimension than the original space.
    • Package teigen fits multivariate t-distribution mixture models (with eigen-decomposed covariance structure) from a clustering or classification point of view.
    • Package mclust fits mixtures of Gaussians using the EM algorithm. It allows fine control of volume and shape of covariance matrices and agglomerative hierarchical clustering based on maximum likelihood. It provides comprehensive strategies using hierarchical clustering, EM and the Bayesian Information Criterion (BIC) for clustering, density estimation, and discriminant analysis. Package Rmixmod provides tools for fitting mixture models of multivariate Gaussian or multinomial components to a given data set with either a clustering, a density estimation or a discriminant analysis point of view. Package mclust as well as packages mixture and Rmixmod provide all 14 possible variance-covariance structures based on the eigenvalue decomposition.
    • Package MetabolAnalyze fits mixtures of probabilistic principal component analysis with the EM algorithm.
    • For grouped conditional data package mixdist can be used.
    • Package MixAll provides EM estimation of mixtures of diagonal Gaussian, gamma, Poisson and categorical distributions, combined under the conditional independence assumption, using different EM variants and allowing for missing observations. The package accesses the clustering part of the Statistical ToolKit STK++.
    • Package mixR performs maximum likelihood estimation of finite mixture models for raw or binned data for families including Normal, Weibull, Gamma and Lognormal using the EM algorithm, together with the Newton-Raphson algorithm or the bisection method when necessary. The package also provides information criteria or the bootstrap likelihood ratio test for model selection and the model fitting process is accelerated using package Rcpp.
    • mixtools provides fitting with the EM algorithm for parametric and non-parametric (multivariate) mixtures. Parametric mixtures include mixtures of multinomials, multivariate normals, normals with repeated measures, Poisson regressions and Gaussian regressions (with random effects). Non-parametric mixtures include the univariate semi-parametric case where symmetry is imposed for identifiability and multivariate non-parametric mixtures with a conditional independence assumption. In addition, mixtures of Gaussian regressions can be fitted with the Metropolis-Hastings algorithm.
    • Fitting finite mixtures of uni- and multivariate scale mixtures of skew-normal distributions with the EM algorithm is provided by package mixsmsn.
    • Package MoEClust fits parsimonious finite multivariate Gaussian mixtures of experts models via the EM algorithm. Covariates may influence the mixing proportions and/or component densities and all 14 constrained covariance parameterizations from package mclust are implemented.
    • Package movMF fits finite mixtures of von Mises-Fisher distributions with the EM algorithm.
    • mritc provides tools for classification using normal mixture models and (higher resolution) hidden Markov normal mixture models fitted by various methods.
    • prabclus clusters a presence-absence matrix object by calculating an MDS from the distances, and applying maximum likelihood Gaussian mixtures clustering to the MDS points.
    • Package psychomix estimates mixtures of the dichotomous Rasch model (via conditional ML) and the Bradley-Terry model. Package mixRasch estimates mixture Rasch models, including the dichotomous Rasch model, the rating scale model, and the partial credit model with joint maximum likelihood estimation.
    • Package rebmix implements the REBMIX algorithm to fit mixtures of conditionally independent normal, lognormal, Weibull, gamma, binomial, Poisson, Dirac or von Mises component densities as well as mixtures of multivariate normal component densities with unrestricted variance-covariance matrices.
  • Bayesian estimation:
    • Bayesian estimation of finite mixtures of multivariate Gaussians is possible using package bayesm. The package provides functionality for sampling from such a mixture as well as estimating the model using Gibbs sampling. Additional functionality for analyzing the MCMC chains is available for averaging the moments over MCMC draws, for determining the marginal densities, for clustering observations and for plotting the uni- and bivariate marginal densities.
    • Package bayesmix provides Bayesian estimation using JAGS.
    • Package Bmix provides Bayesian Sampling for stick-breaking mixtures.
    • Package bmixture provides Bayesian estimation of finite mixtures of univariate Gamma and normal distributions.
    • Package GSM fits mixtures of gamma distributions.
    • Package IMIFA fits Infinite Mixtures of Infinite Factor Analyzers and a flexible suite of related models for clustering high-dimensional data. The number of clusters and/or number of cluster-specific latent factors can be non-parametrically inferred, without recourse to model selection criteria.
    • Package mcclust implements methods for processing a sample of (hard) clusterings, e.g. the MCMC output of a Bayesian clustering model. Among them are methods that find a single best clustering to represent the sample, which are based on the posterior similarity matrix or a relabeling algorithm.
    • Package mixAK contains a mixture of statistical methods, including MCMC methods to analyze normal mixtures with possibly censored data.
    • Package NPflow fits Dirichlet process mixtures of multivariate normal, skew-normal or skew t-distributions. The package was developed with flow-cytometry data preprocessing applications in mind.
    • Package PReMiuM is a package for profile regression, which is a Dirichlet process Bayesian clustering where the response is linked non-parametrically to the covariate profile.
    • Package rjags provides an interface to the JAGS MCMC library which includes a module for mixture modelling.
  • Other estimation methods:
    • Package AdMit fits an adaptive mixture of Student-t distributions to approximate a target density through its kernel function.
    • Circular and orthogonal regression clustering using redescending M-estimators is provided by package edci.
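As a hedged sketch of the model-based approach: with package mclust installed, a single call selects both the covariance parameterization and the number of components by BIC (the data set is an arbitrary example):

```r
# Model-based clustering with mclust: EM estimation plus BIC model selection
library(mclust)
fit <- Mclust(iris[, 1:4])   # tries covariance parameterizations and several G
summary(fit)                 # chosen model, number of components, log-likelihood
fit$G                        # selected number of mixture components
head(fit$classification)     # hard (MAP) cluster assignments
```

Packages Rmixmod and mixture offer comparable interfaces covering the same 14 covariance structures.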

Other Cluster Algorithms and Clustering Suites:

  • Package ADPclust clusters high-dimensional data based on a two-dimensional decision plot, which shows for each data point the local density against the shortest distance to all observations with a higher local density value. The cluster centroids of this non-iterative procedure can be selected using an interactive or automatic selection mode.
  • Package amap provides alternative implementations of k-means and agglomerative hierarchical clustering.
  • Package biclust provides several algorithms to find biclusters in two-dimensional data.
  • Package cba implements clustering techniques for business analytics like "rock" and "proximus".
  • Package CHsharp clusters 3-dimensional data into their local modes based on a convergent form of Choi and Hall's (1999) data sharpening method.
  • Package clue implements ensemble methods for both hierarchical and partitioning cluster methods.
  • Package CoClust implements a cluster algorithm that is based on copula functions and therefore allows grouping observations according to the multivariate dependence structure of the generating process without any assumptions on the margins.
  • Package compHclust provides complementary hierarchical clustering, which was especially designed for microarray data to uncover structures present in the data that arise from 'weak' genes.
  • Package DatabionicSwarm implements a swarm system called Databionic swarm (DBS) for self-organized clustering. This method is able to adapt itself to structures of high-dimensional data such as natural clusters characterized by distance and/or density based structures in the data space.
  • Package dbscan provides a fast reimplementation of the DBSCAN (density-based spatial clustering of applications with noise) algorithm using a kd-tree.
  • Fuzzy clustering and bagged clustering are available in package e1071. Further and more extensive tools for fuzzy clustering are available in package fclust.
  • Package FactoClass performs a combination of factorial methods and cluster analysis.
  • Package FCPS provides many conventional clustering algorithms with consistent input and output, several statistical approaches for estimating the number of clusters, and the mirrored density plot (MD-plot) of clusterability. It also offers a variety of clustering challenges that any algorithm should be able to handle when facing real-world data.
  • The hopach algorithm is a hybrid between hierarchical methods and PAM and builds a tree by recursively partitioning a data set.
  • For graphs and networks model-based clustering approaches are implemented in latentnet.
  • Package pdfCluster provides tools to perform cluster analysis via kernel density estimation. Clusters are associated to the maximally connected components with estimated density above a threshold. In addition a tree structure associated with the connected components is obtained.
  • Package prcr implements a two-step cluster analysis in which hierarchical clustering is first performed to determine the initial partition for the subsequent k-means clustering procedure.
  • Package ProjectionBasedClustering implements projection-based clustering (PBC) for high-dimensional datasets in which clusters are formed by both distance and density structures (DDS).
  • Package randomLCA provides the fitting of latent class models which optionally also include a random effect. Package poLCA allows for polytomous variable latent class analysis and regression. Package BayesLCA fits Bayesian LCA models using the EM algorithm, Gibbs sampling or variational Bayes methods.
  • Package RPMM fits recursively partitioned mixture models for Beta and Gaussian Mixtures. This is a model-based clustering algorithm that returns a hierarchy of classes, similar to hierarchical clustering, but also similar to finite mixture models.
  • Self-organizing maps are available in package som.
  • Several packages provide cluster algorithms which have been developed for bioinformatics applications. These packages include FunCluster for profiling microarray expression data and ORIClust for order-restricted information-based clustering.
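A minimal sketch of density-based clustering with package dbscan (assuming it is installed; the eps and minPts values below are illustrative choices, not recommendations):

```r
# DBSCAN: clusters are maximally connected dense regions; label 0 marks noise
library(dbscan)
x  <- as.matrix(iris[, 1:4])
db <- dbscan(x, eps = 0.6, minPts = 5)
table(db$cluster)   # cluster sizes, including the noise "cluster" 0
```

Unlike k-means, the number of clusters is not fixed in advance but emerges from the density parameters.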

Cluster-wise Regression:

  • Package crimCV fits finite mixtures of zero-inflated Poisson models for longitudinal data with time as covariate.
  • Multigroup mixtures of latent Markov models on mixed categorical and continuous data (including time series) can be fitted using depmix or depmixS4. The parameters are optimized using a general purpose optimization routine given linear and nonlinear constraints on the parameters.
  • Package flexmix implements a user-extensible framework for EM-estimation of mixtures of regression models, including mixtures of (generalized) linear models.
  • Package fpc provides fixed-point methods both for model-based clustering and linear regression. A collection of asymmetric projection methods can be used to plot various aspects of a clustering.
  • Package lcmm fits a latent class linear mixed model which is also known as growth mixture model or heterogeneous linear mixed model using a maximum likelihood method.
  • Package mixreg fits mixtures of one-variable regressions and provides the bootstrap test for the number of components.
  • mixPHM fits mixtures of proportional hazard models with the EM algorithm.
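A sketch of cluster-wise regression with flexmix (assuming the package is installed), on simulated data drawn from two regression lines:

```r
# Mixture of linear regressions fitted by EM with flexmix
library(flexmix)
set.seed(1)
x <- runif(200)
y <- ifelse(runif(200) < 0.5, 2 + 5 * x, 8 - 3 * x) + rnorm(200, sd = 0.3)
fit <- flexmix(y ~ x, k = 2)   # 2-component mixture of Gaussian regressions
parameters(fit)                # component-wise intercepts, slopes and sigmas
table(clusters(fit))           # hard assignments of observations to components
```

The framework extends to GLMs via the model argument (e.g. FLXMRglm with a Poisson family).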

Additional Functionality:

  • Package clusterGeneration contains functions for generating random clusters and random covariance/correlation matrices, calculating a separation index (data and population version) for pairs of clusters or cluster distributions, and 1-D and 2-D projection plots to visualize clusters. Alternatively, package MixSim generates a finite mixture model with Gaussian components for prespecified levels of maximum and/or average overlap. This model can be used to simulate data for studying the performance of cluster algorithms.
  • Package clusterCrit computes various clustering validation or quality criteria and partition comparison indices.
  • For cluster validation, package clusterRepro tests the reproducibility of a cluster. Package clv contains popular internal and external cluster validation methods ready to use for most of the outputs produced by functions from package cluster, and package clValid calculates several stability measures.
  • Package clustvarsel provides variable selection for Gaussian model-based clustering. Variable selection for latent class analysis for clustering multivariate categorical data is implemented in package LCAvarsel. Package VarSelLCM provides variable selection for model-based clustering of continuous, count, categorical or mixed-type data with missing values where the models used impose a conditional independence assumption given group membership.
  • Package factoextra provides easy-to-use functions to extract and visualize the output of multivariate data analyses, including heuristic and model-based cluster analysis. The package also contains functions for simplifying some cluster analysis steps and provides ggplot2-based visualization.
  • Functionality to compare the similarity between two cluster solutions is provided by cluster.stats() in package fpc.
  • The stability of k-centroid clustering solutions fitted using functions from package flexclust can also be validated via bootFlexclust() using bootstrap methods.
  • Package MOCCA provides methods to analyze cluster alternatives based on multi-objective optimization of cluster validation indices.
  • Package NbClust implements 30 different indices which evaluate the cluster structure and help determine a suitable number of clusters.
  • Mixtures of univariate normal distributions can be printed and plotted using package nor1mix.
  • Package seriation provides dissplot() for visualizing dissimilarity matrices using seriation and matrix shading. This also allows inspecting cluster quality by displaying objects belonging to the same cluster in consecutive order.
  • Package sigclust provides a statistical method for testing the significance of clustering results.
  • Package treeClust calculates dissimilarities between data points based on their leaf memberships in regression or classification trees for each variable. It also performs the cluster analysis using the resulting dissimilarity matrix with available heuristic clustering algorithms in R.
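Several of the validation tools above build on average silhouette widths; a minimal base-R sketch for choosing the number of clusters with silhouette() from cluster:

```r
# Average silhouette width as a heuristic for the number of clusters
library(cluster)
x <- scale(iris[, 1:4])
d <- dist(x)
avg_sil <- sapply(2:6, function(k) {
  cl <- kmeans(x, centers = k, nstart = 20)$cluster
  mean(silhouette(cl, d)[, "sil_width"])   # mean silhouette width for this k
})
names(avg_sil) <- 2:6
avg_sil   # larger values indicate better-separated partitions
```

Packages NbClust and clusterCrit automate this kind of comparison across many more indices.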


AdMit — 2.1.8

Adaptive Mixture of Student-t Distributions

ADPclust — 0.7

Fast Clustering Using Adaptive Density Peak Detection

amap — 0.8-18

Another Multidimensional Analysis Package

apcluster — 1.4.9

Affinity Propagation Clustering

BayesLCA — 1.9

Bayesian Latent Class Analysis

bayesm — 3.1-4

Bayesian Inference for Marketing/Micro-Econometrics

bayesmix — 0.7-5

Bayesian Mixture Models with JAGS

bgmm — 1.8.5

Gaussian Mixture Modeling Algorithms and the Belief-Based Mixture Modeling

biclust — 2.0.3

BiCluster Algorithms

Bmix — 0.6

Bayesian Sampling for Stick-Breaking Mixtures

bmixture — 1.7

Bayesian Estimation for Finite Mixture of Distributions

cba — 0.2-21

Clustering for Business Analytics

cclust — 0.6-23

Convex Clustering Methods and Clustering Indexes

CHsharp — 0.4

Choi and Hall Style Data Sharpening

clue — 0.3-60

Cluster Ensembles

cluster — 2.1.2

"Finding Groups in Data": Cluster Analysis Extended Rousseeuw et al.

clusterCrit — 1.2.8

Clustering Indices

clusterGeneration — 1.3.7

Random Cluster Generation (with Specified Degree of Separation)

ClusterR — 1.2.5

Gaussian Mixture Models, K-Means, Mini-Batch-Kmeans, K-Medoids and Affinity Propagation Clustering

clusterRepro — 0.9

Reproducibility of Gene Expression Clusters

clusterSim — 0.49-2

Searching for Optimal Clustering Procedure for a Data Set

clustMixType — 0.2-15

k-Prototypes Clustering for Mixed Variable-Type Data

clustvarsel — 2.3.4

Variable Selection for Gaussian Model-Based Clustering

clv — 0.3-2.2

Cluster Validation Techniques

clValid — 0.7

Validation of Clustering Results

CoClust — 0.3-2

Copula Based Cluster Analysis

compHclust — 1.0-3

Complementary Hierarchical Clustering

crimCV — 0.9.6

Group-Based Modelling of Longitudinal Data

DatabionicSwarm — 1.1.5

Swarm Intelligence for Self-Organized Clustering

dbscan — 1.1-10

Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Related Algorithms

dendextend — 1.15.2

Extending 'dendrogram' Functionality in R

depmix — 0.9.16

Dependent Mixture Models

depmixS4 — 1.5-0

Dependent Mixture Models - Hidden Markov Models of GLMs and Other Distributions in S4

dynamicTreeCut — 1.63-1

Methods for Detection of Clusters in Hierarchical Clustering Dendrograms

e1071 — 1.7-9

Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien

edci — 1.1-3

Edge Detection and Clustering in Images

EMCluster — 0.2-13

EM Algorithm for Model-Based Clustering of Finite Mixture Gaussian Distribution

evclust — 2.0.2

Evidential Clustering

FactoClass — 1.2.7

Combination of Factorial Methods and Cluster Analysis

factoextra — 1.0.7

Extract and Visualize the Results of Multivariate Data Analyses

fastcluster — 1.2.3

Fast Hierarchical Clustering Routines for R and 'Python'

fclust — 2.1.1

Fuzzy Clustering

FCPS — 1.3.0

Fundamental Clustering Problems Suite

flashClust — 1.01-2

Implementation of optimal hierarchical clustering

flexclust — 1.4-0

Flexible Cluster Algorithms

flexmix — 2.3-17

Flexible Mixture Modeling

fpc — 2.2-9

Flexible Procedures for Clustering

FunCluster — 1.09

Functional Profiling of Microarray Expression Data

funFEM — 1.2

Clustering in the Discriminative Functional Subspace

funHDDC — 2.3.1

Univariate and Multivariate Model-Based Clustering in Group-Specific Functional Subspaces

genieclust — 1.0.0

The Genie++ Hierarchical Clustering Algorithm with Noise Points Detection


GLDEX —

Fitting Single and Mixture of Generalised Lambda Distributions (RS and FMKL) using Various Methods

GMCM — 1.4

Fast Estimation of Gaussian Mixture Copula Models

GSM — 1.3.2

Gamma Shape Mixture

HDclassif — 2.2.0

High Dimensional Supervised Classification and Clustering

idendr0 — 1.5.3

Interactive Dendrograms

IMIFA — 2.1.8

Infinite Mixtures of Infinite Factor Analysers and Related Models

isopam — 0.9-13

Isopam (Clustering)

kernlab — 0.9-29

Kernel-Based Machine Learning Lab

kml — 2.4.1

K-Means for Longitudinal Data

latentnet — 2.10.5

Latent Position and Cluster Models for Statistical Networks

LCAvarsel — 1.1

Variable Selection for Latent Class Analysis

lcmm — 1.9.4

Extended Mixed Models Using Latent Classes and Latent Processes

mcclust — 1.0

Process an MCMC Sample of Clusterings

mclust — 5.4.9

Gaussian Mixture Modelling for Model-Based Clustering, Classification, and Density Estimation

mdendro — 2.1.0

Extended Agglomerative Hierarchical Clustering

MetabolAnalyze — 1.3

Probabilistic latent variable models for metabolomic data.

mixsmsn — 1.1-10

Fitting Finite Mixture of Scale Mixture of Skew-Normal Distributions

mixAK — 5.3

Multivariate Normal Mixture Models and Mixtures of Generalized Linear Mixed Models Including Model Based Clustering

MixAll — 1.5.1

Clustering and Classification using Model-Based Mixture Models

mixdist — 0.5-5

Finite Mixture Distribution Models

mixPHM — 0.7-2

Mixtures of Proportional Hazard Models

mixR — 0.2.0

Finite Mixture Modeling for Raw and Binned Data

mixRasch — 1.1

Mixture Rasch Models with JMLE

mixreg — 2.0-10

Functions to Fit Mixtures of Regressions

MixSim — 1.1-5

Simulating Data to Study Performance of Clustering Algorithms

mixtools — 1.2.0

Tools for Analyzing Finite Mixture Models

mixture — 2.0.4

Mixture Models for Clustering and Classification

MOCCA — 1.4

Multi-Objective Optimization for Collecting Cluster Alternatives

MoEClust — 1.4.2

Gaussian Parsimonious Clustering Models with Covariates and a Noise Component

movMF — 0.2-6

Mixtures of von Mises-Fisher Distributions

mritc — 0.5-2

MRI Tissue Classification

NbClust — 3.0

Determining the Best Number of Clusters in a Data Set

nor1mix — 1.3-0

Normal aka Gaussian (1-d) Mixture Models (S3 Classes and Methods)

NPflow — 0.13.3

Bayesian Nonparametrics for Automatic Gating of Flow-Cytometry Data

ORIClust — 1.0-1

Order-restricted Information Criterion-based Clustering Algorithm

pdfCluster — 1.0-3

Cluster Analysis via Nonparametric Density Estimation

poLCA — 1.4.1

Polytomous variable Latent Class Analysis

prabclus — 2.3-2

Functions for Clustering and Testing of Presence-Absence, Abundance and Multilocus Genetic Data

prcr — 0.2.1

Person-Centered Analysis

PReMiuM — 3.2.7

Dirichlet Process Bayesian Clustering, Profile Regression

ProjectionBasedClustering — 1.1.6

Projection Based Clustering

protoclust — 1.6.3

Hierarchical Clustering with Prototypes

psychomix — 1.1-8

Psychometric Mixture Models

pvclust — 2.2-0

Hierarchical Clustering with P-Values via Multiscale Bootstrap Resampling

randomLCA — 1.1-1

Random Effects Latent Class Analysis

rebmix — 2.13.1

Finite Mixture Modeling, Clustering & Classification

rjags — 4-12

Bayesian Graphical Models using MCMC

Rmixmod — 2.1.6

Classification with Mixture Modelling

RPMM — 1.25

Recursively Partitioned Mixture Model

seriation — 1.3.1

Infrastructure for Ordering Objects Using Seriation

sigclust — 1.1.0

Statistical Significance of Clustering

skmeans — 0.2-14

Spherical k-Means Clustering

som — 0.3-5.1

Self-Organizing Map

Spectrum — 1.1

Fast Adaptive Spectral Clustering for Single and Multi-View Data

tclust — 1.4-2

Robust Trimmed Clustering

teigen — 2.2.2

Model-Based Clustering and Classification with the Multivariate t Distribution

treeClust — 1.1-7

Cluster Distances Through Trees

VarSelLCM —

Variable Selection for Model-Based Clustering of Mixed-Type Data Set with Missing Values
