Implementation of various kernel adaptive methods in nonparametric curve estimation, such as density estimation, as introduced in Srihera and Stute (2011).
The goal of kader is to supply functions to compute nonparametric kernel estimators for density estimation and regression. The functions are based on the theory introduced in Srihera & Stute (2011).
A very brief summary of the theory, serving as a sort of vignette, is presented in Eichner, G. (2017): Kader - An R package for nonparametric kernel adjusted density estimation and regression. In: Ferger, D., et al. (eds.): From Statistics to Mathematical Finance, Festschrift in Honour of Winfried Stute. Springer International Publishing. To appear in Oct. 2017.
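As background (a hedged sketch of the underlying idea, not the exact estimator from the paper): the classical fixed-kernel density estimator is

$$\hat f_n(x) = \frac{1}{n h} \sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h}\right),$$

and the adaptive approach replaces the fixed kernel K by a data-adjusted kernel that depends on a location parameter θ (set to the arithmetic mean of the data when not supplied, as the example output below shows) and a scale parameter σ, which is chosen by minimizing an estimated MSE.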
You can install kader from CRAN with:
install.packages("kader")
or from GitHub with:
devtools::install_github("GerritEichner/kader")
This example shows you how to estimate, at x0 = 2, the value of the density function of the probability distribution underlying Old Faithful's eruptions data using the (nonrobust) method of Srihera & Stute (2011). The initial grid (given to Sigma) on which the minimization of the estimated MSE as a function of the (kernel-adjusting) scale parameter σ is started is rather coarse here to save computing time.
library(kader)
x0 <- 2
sigma <- seq(0.01, 10, length = 21)
fit <- kade(x = x0, data = faithful$eruptions, method = "nonrobust",
            Sigma = sigma, ticker = TRUE)
#> h set to n^(-1/5) with n = 272.
#> theta set to arithmetic mean of data in faithful$eruptions.
#> Using the adaptive method of Srihera & Stute (2011)
#> For each element in x: Computing estimated values of
#> bias and scaled variance on the sigma-grid.
#> Note: x has 1 element(s) and the sigma-grid 21.
#>
#> As a little distraction, the 'ticker' documents the
#> computational progress (if you have set ticker = TRUE).
#> x[1]:sigma:1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21.
#> Minimizing MSEHat:
#> Step 1: Search smallest maximizer of VarHat.scaled on sigma-grid.
#> Step 2: Search smallest minimizer of MSEHat on sigma-grid to the
#>         LEFT of just found smallest maximizer of VarHat.scaled.
#> Step 3: Finer search for 'true' minimum of MSEHat using
#>         numerical minimization. (May take a while.)
#> sigma:1.sigma:1.sigma:1.sigma:1.sigma:1.sigma:1.sigma:1.sigma:1.sigma:1.sigma:1.sigma:1.sigma:1.sigma:1.
#> Step 4: Check if numerically determined minimum is smaller
#>         than discrete one.
#>         Yes, optimize() was 'better' than grid search.
#>
#> .

print(fit)
#>   x         y sigma.adap  msehat.min discr.min.smaller sig.range.adj
#> 1 2 0.5478784   1.996629 0.003822793             FALSE             0
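As a quick follow-up, the estimate can be put next to the data. This is a minimal sketch, not taken from the package documentation; it only assumes that kade() returns a data frame with columns x and y, as the print() output above suggests.

# Hedged sketch: overlay the adaptive density estimate at x0 = 2 on a
# density-scaled histogram of the eruption durations.
hist(faithful$eruptions, freq = FALSE, breaks = 20,
     main = "Old Faithful eruption durations", xlab = "Duration (min)")
points(fit$x, fit$y, pch = 19, col = "red")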