# Randomer Forest

R-RerF (aka Randomer Forest (RerF) or Random Projection Forests) is an algorithm developed by Tomita (2016) that is similar to Random Forest - Random Combination (Forest-RC), developed by Breiman (2001). Random Forests create axis-parallel, or orthogonal, trees: the feature space is recursively split along directions parallel to the coordinate axes. Thus, in cases in which the classes seem inseparable along any single dimension, Random Forests may be suboptimal. To address this, Breiman also proposed and characterized Forest-RC, which splits along linear combinations of coordinates rather than individual coordinates. This package, 'rerf', implements RerF, which is similar to Forest-RC. The difference between the two algorithms is where the random linear combinations are formed: Forest-RC combines features once per tree, whereas RerF samples new linear combinations of coordinates at every node in the tree.

## Repo Contents

• R: R building blocks for the user-interface code; called internally by the user interface.
• docs: usage examples of the R-RerF package on real data.
• man: package documentation.
• src: C++ functions called from within R.
• travisTest: Travis CI tests.

## Description

Randomer Forest (RerF) is a generalization of the Random Forest (RF) algorithm. RF recursively partitions the input (feature) space with a series of binary splits, each defined by a hyperplane. These hyperplanes are constrained to be axis-aligned: each partition is a test of the form Xi > t, where t is a threshold and Xi is one of the p inputs (features) {X1, ..., Xp}. The best axis-aligned split is found by sampling a random subset of the p inputs and choosing the one that best partitions the observed data according to a specified split criterion. RerF relaxes the constraint that the splitting hyperplanes be axis-aligned: each partition in RerF is a test of the form w1X1 + ... + wpXp > t. The orientations of the hyperplanes are sampled randomly via a user-specified distribution on the coefficients wi, and an empirically validated default distribution is provided. Currently only classification is supported; regression and unsupervised learning will be supported in the future.
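To make the splitting rule concrete, here is a minimal illustrative sketch (not the package's internal code) of a single RerF-style candidate split, assuming sparse coefficients drawn from {-1, 1}:

```r
# One candidate oblique split: sample a sparse coefficient vector w,
# project the data onto it, and test the projection against a threshold.
set.seed(1)
X <- as.matrix(iris[, 1:4])
p <- ncol(X)
w <- numeric(p)
nnz <- sample(1:p, 1)                 # number of nonzero coefficients
idx <- sample(p, nnz)                 # which features participate
w[idx] <- sample(c(-1, 1), nnz, replace = TRUE)
proj <- X %*% w                       # w1*X1 + ... + wp*Xp for each sample
thresh <- median(proj)                # one candidate threshold t
left <- proj <= thresh                # the partition induced by this split
table(left)
```

In the package itself, many such candidate projections and thresholds are scored at each node, and the best one under the split criterion is kept.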

## Tested on

• Mac OS X: 10.11 (El Capitan), 10.12 (Sierra)
• Linux: Ubuntu 16.04, CentOS 6
• Windows: 10

## Hardware Requirements

Any machine with >= 2 GB RAM

## Software Dependencies

• R
• R packages:
  • dummies
  • compiler
  • RcppZiggurat
  • parallel
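The compiler and parallel packages ship with base R; the remaining dependencies are on CRAN and are pulled in automatically when installing rerf. To install them manually from within R:

```r
# dummies and RcppZiggurat are CRAN packages; compiler and parallel are
# already part of a standard R installation.
install.packages(c("dummies", "RcppZiggurat"))
```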

## Installation

• Installation normally takes ~5-10 minutes.
• Non-Windows users: install the GNU Scientific Library (libgsl0-dev).
• Windows users: install Rtools (https://cran.r-project.org/bin/windows/Rtools/).

### Stable Release from CRAN:

From within R:

```r
install.packages("rerf")
```

### Development Version from GitHub:

First install the devtools package if it is not already installed. From within R:

```r
install.packages("devtools")
```

Next install rerf from GitHub. From within R:

```r
devtools::install_github("neurodata/R-RerF")
```

## How to Use

Runtime for the following examples should be < 1 sec on any machine.

```r
library(rerf)
```

### Create a forest:

To create a forest, the minimum data needed are an n by p input matrix (X) and a length-n vector of corresponding class labels (Y). Rows correspond to samples and columns correspond to features.

```r
X <- as.matrix(iris[, 1:4])
Y <- iris[[5L]]
forest <- RerF(X, Y, seed = 1L)
```


Expected output:

```
$treeMap
 [1]   1   2 -17   3   4  -1  -2   5   8  -3   6   7  -6  -4  -5   9 -16  10 -15
[20]  -7  11  12 -14  13  14  -8  -9 -10  15 -11  16 -12 -13

$CutPoint
 [1]  -0.80  -6.85  -1.90   4.35  -2.75  -5.90   7.55  -2.85 -10.75  -3.35
[11]   3.45  -3.15   4.90   4.60  -3.05   6.40

$ClassProb
      [,1]      [,2]      [,3]
 [1,]    0 1.0000000 0.0000000
 [2,]    0 0.0000000 1.0000000
 [3,]    0 1.0000000 0.0000000
 [4,]    0 0.3333333 0.6666667
 [5,]    0 1.0000000 0.0000000
 [6,]    0 1.0000000 0.0000000
 [7,]    0 0.0000000 1.0000000
 [8,]    0 1.0000000 0.0000000
 [9,]    0 0.0000000 1.0000000
[10,]    0 1.0000000 0.0000000
[11,]    0 0.0000000 1.0000000
[12,]    0 0.0000000 1.0000000
[13,]    0 0.6666667 0.3333333
[14,]    0 0.0000000 1.0000000
[15,]    0 1.0000000 0.0000000
[16,]    0 0.0000000 1.0000000
[17,]    1 0.0000000 0.0000000

$matAstore
 [1]  4 -1  1 -1  1 -1  3  1  2  1  4  1  2 -1  1 -1  1  1  4  1  2 -1  1 -1  3
[26] -1  2 -1  3  1  4 -1  2 -1  3  1  3  1  2 -1  1  1

$matAindex
 [1]  0  2  4  8 12 14 16 20 22 26 28 32 34 36 38 40 42

$ind
NULL

$rotmat
NULL

$rotdims
NULL

[1] 0.9413333

$rho
[1] 0.8451606
```

### Compute feature (projection) importance (this feature is not available in the current CRAN release):

FeatureImportance computes the Gini importance for all of the unique projections used to split the data. The returned value is a list with members imp and proj. The member imp is a numeric vector of feature importances sorted in decreasing order. The member proj is a list of the same length as imp, each element a vector specifying the split projection corresponding to the value in imp. A projection is encoded so that the odd-numbered positions hold the canonical feature indices and the even-numbered positions hold the corresponding linear coefficients. For example, the vector (1, -1, 4, 1, 5, -1) represents the projection -X1 + X4 - X5. Note: it is highly advised to run this only when the splitting features (projections) have unweighted coefficients, such as with the default setting or with RF.

```r
X <- as.matrix(iris[, 1:4]) # feature matrix
Y <- iris$Species # class labels
p <- ncol(X) # number of features in the data
d <- ceiling(sqrt(p)) # number of features to sample at each split

# Run the standard random forest algorithm and store the decrease in
# impurity at each split node. The latter option is required in order to
# compute Gini feature importance.
forest <- RerF(X, Y, mat.options = list(p, d, "rf", NULL), num.cores = 1L, store.impurity = TRUE, seed = 1L)
feature.imp <- FeatureImportance(forest, num.cores = 1L)
```


Expected output:

```
> feature.imp
$imp
[1] 4455.7292 4257.6306  861.6474  178.5267

$proj
$proj[[1]]
[1] 3 1

$proj[[2]]
[1] 4 1

$proj[[3]]
[1] 1 1

$proj[[4]]
[1] 2 1
```
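The encoding described above can be unpacked with a small helper function (a hypothetical convenience, not part of the package): odd positions are feature indices, even positions are coefficients.

```r
# Hypothetical helper: render a projection vector in readable form,
# e.g. c(3, 1) -> "+X3" and c(1, -1, 4, 1, 5, -1) -> "-X1 +X4 -X5".
decode.proj <- function(v) {
  idx <- v[seq(1, length(v), by = 2)]   # feature indices (odd positions)
  coef <- v[seq(2, length(v), by = 2)]  # linear coefficients (even positions)
  paste0(ifelse(coef < 0, "-", "+"), "X", idx, collapse = " ")
}
sapply(feature.imp$proj, decode.proj)
```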


### Train Structured RerF (S-RerF) for image classification:

S-RerF samples and evaluates a set of random features at each split node, where each feature is defined as a random linear combination of intensities of pixels contained in a contiguous patch within an image. Thus, the generated features exploit local structure inherent in images.
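As a concrete illustration of how such a feature might be formed (a minimal sketch under assumed parameters, not the package's internal code), the following samples one random patch feature from a flattened 28 x 28 image:

```r
# Illustrative sketch: one "image-patch"-style feature. A square patch
# location and width are sampled, and the feature is a random signed sum
# of the pixel intensities inside that patch. The 28 x 28 size and the
# {-1, 1} coefficients are assumptions for illustration.
set.seed(1)
iw <- 28L; ih <- 28L
w <- sample(1:5, 1)                  # patch width between patch.min and patch.max
top <- sample(1:(ih - w + 1), 1)     # top row of the patch
left <- sample(1:(iw - w + 1), 1)    # left column of the patch
rows <- top:(top + w - 1)
cols <- left:(left + w - 1)
pix <- as.vector(outer(rows, cols, function(r, c) (c - 1) * ih + r)) # column-major pixel indices
x <- runif(iw * ih)                  # stand-in for one flattened image
feature <- sum(sample(c(-1, 1), length(pix), replace = TRUE) * x[pix])
```

The actual S-RerF training call on MNIST follows.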

```r
data(mnist)

# p is the number of dimensions, d is the number of random features to
# evaluate, iw is the image width, ih is the image height, patch.min is
# the minimum width of a square patch to sample pixels from, and patch.max
# is the maximum patch width
p <- ncol(mnist$Xtrain)
d <- ceiling(sqrt(p))
iw <- sqrt(p)
ih <- iw
patch.min <- 1L
patch.max <- 5L
forest <- RerF(mnist$Xtrain, mnist$Ytrain, num.cores = 1L, mat.options = list(p, d, "image-patch", iw, ih, patch.min, patch.max), seed = 1L)
predictions <- Predict(mnist$Xtest, forest, num.cores = 1L)
error.rate <- mean(predictions != mnist$Ytest)
```

Expected output:

```
> error.rate
[1] 0.0544
```

### Train Structured RerF (S-RerF) for spike train inference:

Similar to S-RerF for image classification, except now in the spike train setting. 500 samples were simulated from the following AR(2) model:

$$c_t = \sum_{i=1}^{2} \gamma_i c_{t-i} + s_t, \quad s_t \sim \mathrm{Poisson}(0.01)$$

$$y_t = a \, c_t + \epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, 1)$$

where $\gamma_1 = 1.7$, $\gamma_2 = -0.712$, and $a = 1$. We sampled such that there were an equal number of spikes and non-spikes in the datasets. S-RerF was trained on these samples by computing local feature patches across the time series windows.

```r
ts.train <- read.csv('calcium-spike_train.csv', header = FALSE)
ts.test <- read.csv('calcium-spike_test.csv', header = FALSE)
ts.train$X <- ts.train[, 1:(ncol(ts.train) - 1)]
ts.train$Y <- ts.train[, ncol(ts.train)]
ts.test$X <- ts.test[, 1:(ncol(ts.test) - 1)]
ts.test$Y <- ts.test[, ncol(ts.test)]

# p is the number of dimensions, d is the number of random features to
# evaluate, patch.min is the minimum width of a time series patch to
# sample, and patch.max is the maximum patch width
p <- ncol(ts.train$X)
d <- ceiling(sqrt(p))
patch.min <- 1L
patch.max <- 5L
forest <- RerF(ts.train$X, ts.train$Y, num.cores = 1L, mat.options = list(p, d, "ts-patch", patch.min, patch.max), seed = 1L)
predictions <- Predict(ts.test$X, forest, num.cores = 1L)
error.rate <- mean(predictions != ts.test$Y)
```


Expected output:

```
> error.rate
[1] 0.262
```
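For reference, the AR(2) calcium model described above can be simulated as follows (an illustrative sketch; the CSV files used in the example were generated separately):

```r
# Simulate the AR(2) calcium model:
#   c_t = gamma1 * c_{t-1} + gamma2 * c_{t-2} + s_t,  s_t ~ Poisson(0.01)
#   y_t = a * c_t + eps_t,                            eps_t ~ N(0, 1)
set.seed(1)
n <- 500
gamma <- c(1.7, -0.712)
a <- 1
s <- rpois(n, 0.01)                  # latent spike train
calcium <- numeric(n)
for (i in 3:n) {
  calcium[i] <- gamma[1] * calcium[i - 1] + gamma[2] * calcium[i - 2] + s[i]
}
y <- a * calcium + rnorm(n)          # observed fluorescence trace
```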

