Tree Taper Curves and Sorting Based on 'TapeR'
Provides new Germany-wide 'TapeR' models and functions for their evaluation. Included are the most common tree species in Germany (Norway spruce, Scots pine, European larch, Douglas fir, Silver fir, as well as European beech, Common/Sessile oak, and Red oak). Many other species are mapped to these, so that 36 tree species or species groups can be processed. Single trees are defined by species code, one or more diameters at arbitrary measuring heights, and tree height. The functions then provide information on diameters along the stem, bark thickness, heights of given diameters, volume of the whole or parts of the trunk, and total and component above-ground biomass. Assortments can also be calculated from the taper curves. For diameter and volume estimation, uncertainty information is given.
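A minimal usage sketch of the workflow described above (the constructor and accessor names follow the 'TapeS'-style interface built on 'TapeR'; the exact argument names are assumptions and should be checked against the package manual):

    library(TapeS)

    # one Norway spruce: species code 1, a 30 cm diameter measured at 1.3 m,
    # total height 27 m (argument names assumed)
    tree <- tprTrees(spp = 1, Dm = 30, Hm = 1.3, Ht = 27)

    tprDiameter(tree, Hx = c(1.3, 5, 10))   # diameters along the stem [cm]
    tprVolume(tree)                         # stem volume [m^3]
    tprBiomass(tree)                        # above-ground biomass [kg]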
Automatic Fixed Rank Kriging
Automatic fixed rank kriging for (irregularly located) spatial data, using a class of basis functions with multi-resolution features that are ordered by resolution. The model parameters are estimated by maximum likelihood (ML), and the number of basis functions is determined by Akaike's information criterion (AIC). For spatial data with either a single realization or independent replicates, the ML estimates and AIC are computed efficiently from their closed-form expressions when no values are missing. Details regarding the basis function construction, parameter estimation, and AIC calculation can be found in Tzeng and Huang (2018)
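A minimal usage sketch under the assumption that the package's main entry point is autoFRK() with a data matrix 'Data' and a location matrix 'loc', and that fitted objects have a predict() method with a 'newloc' argument (treat these names as assumptions):

    library(autoFRK)

    set.seed(1)
    n   <- 200
    loc <- cbind(runif(n), runif(n))                        # irregular 2-D locations
    y   <- sin(6 * loc[, 1]) + cos(6 * loc[, 2]) + rnorm(n, sd = 0.1)

    # number of multi-resolution basis functions chosen by AIC,
    # parameters estimated by ML (closed form when nothing is missing)
    fit <- autoFRK(Data = matrix(y, ncol = 1), loc = loc)

    # predict at new locations
    newloc <- cbind(runif(50), runif(50))
    pred   <- predict(fit, newloc = newloc)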
Wrapper for MUMPS Library
Some basic features of 'MUMPS' (MUltifrontal Massively Parallel sparse direct Solver) are wrapped in a class whose methods can be used for sequentially solving a sparse linear system (symmetric or not) with one or many right-hand sides (dense or sparse). Symbolic analysis, LU (or LDL^t) factorization, and system solving can also be carried out as separate steps. Third-party ordering libraries are included and can be used: 'PORD', 'METIS', 'SCOTCH'. The 'MUMPS' method was first described in Amestoy et al. (2001)
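A minimal sketch of the intended use (the Rmumps reference-class constructor and the solve() method appear in the package's examples; whether a 'Matrix' sparse matrix can be passed directly is an assumption here):

    library(rmumps)
    library(Matrix)

    # small sparse, non-symmetric system A x = b
    A <- sparseMatrix(i = c(1, 2, 2, 3), j = c(1, 1, 2, 3), x = c(4, 1, 3, 2))
    b <- c(8, 7, 6)

    am <- Rmumps$new(A)   # analysis and factorization are kept inside the object
    x  <- solve(am, b)    # further right-hand sides reuse the factorization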
Higher Order Likelihood Inference
Performs likelihood-based inference for a wide range of regression models. Provides higher-order approximations for inference based on extensions of saddlepoint type arguments as discussed in the book Applied Asymptotics: Case Studies in Small-Sample Statistics by Brazzale, Davison, and Reid (2007).
The Nonparametric Classification Methods for Cognitive Diagnosis
Statistical tools for analyzing cognitive diagnosis (CD) data collected in small-scale settings with the nonparametric classification (NPCD) framework. The core methods of the NPCD framework include the nonparametric classification (NPC) method developed by Chiu and Douglas (2013)
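The NPC idea itself is easy to sketch without the package: every candidate attribute profile implies an ideal (conjunctive) response pattern under the Q-matrix, and an examinee is assigned to the profile whose ideal pattern is closest in Hamming distance to the observed responses. A generic illustration of that rule (not the package's API):

    # generic sketch of nonparametric classification in the spirit of Chiu & Douglas (2013)
    npc_classify <- function(y, Q) {
      K <- ncol(Q)
      profiles <- as.matrix(expand.grid(rep(list(0:1), K)))   # all 2^K attribute profiles
      # conjunctive ideal response: an item is solved only if all required attributes are mastered
      eta <- apply(profiles, 1, function(a) as.integer(Q %*% a == rowSums(Q)))
      d <- colSums(abs(eta - y))                               # Hamming distance to observed pattern
      profiles[which.min(d), ]
    }

    Q <- matrix(c(1, 0,
                  0, 1,
                  1, 1), ncol = 2, byrow = TRUE)   # 3 items, 2 attributes
    npc_classify(y = c(1, 0, 0), Q = Q)            # returns profile (1, 0)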
Top-Down Time Ratio Segmentation for Coordinate Trajectories
Data on movement behavior often take the form of time-stamped latitude/longitude coordinates sampled from the underlying movement path. These data can be compressed into a set of segments via the Top-Down Time Ratio Segmentation method described in Meratnia and de By (2004)
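The underlying method is easy to sketch: approximate the whole trajectory by the straight segment between its endpoints, place each intermediate fix on that segment according to its time ratio, and split recursively at the fix whose synchronized distance exceeds a tolerance. A generic illustration (not the package's API):

    # generic top-down time-ratio split: returns indices of segment break points
    tdtr_breaks <- function(t, x, y, tol) {
      n <- length(t)
      if (n <= 2) return(c(1, n))
      r  <- (t - t[1]) / (t[n] - t[1])        # time ratio of each fix
      xs <- x[1] + r * (x[n] - x[1])          # synchronized position on the
      ys <- y[1] + r * (y[n] - y[1])          #   straight start-to-end segment
      d  <- sqrt((x - xs)^2 + (y - ys)^2)     # synchronized Euclidean distance
      if (max(d) <= tol) return(c(1, n))
      k <- which.max(d)                       # split at the worst fix and recurse
      left  <- tdtr_breaks(t[1:k], x[1:k], y[1:k], tol)
      right <- tdtr_breaks(t[k:n], x[k:n], y[k:n], tol)
      union(left, right + k - 1)
    }

    t <- 0:6
    x <- c(0, 1, 2, 3, 3, 3, 3)
    y <- c(0, 0, 0, 0, 1, 2, 3)
    tdtr_breaks(t, x, y, tol = 0.5)   # 1 4 7: the corner at index 4 is kept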
Production Function Output Gap Estimation
The output gap indicates the percentage difference between the actual output of an economy and its potential. Since potential output is a latent process, the estimation of the output gap poses a challenge and numerous filtering techniques have been proposed. 'RGAP' facilitates the estimation of a Cobb-Douglas production function type output gap, as suggested by the European Commission (Havik et al. 2014) <https://ideas.repec.org/p/euf/ecopap/0535.html>. To that end, the non-accelerating wage rate of unemployment (NAWRU) and the trend of total factor productivity (TFP) can be estimated in two bivariate unobserved component models by means of Kalman filtering and smoothing. 'RGAP' features a flexible modeling framework for the appropriate state-space models and offers frequentist as well as Bayesian estimation techniques. Additional functionalities include direct access to the 'AMECO' <https://economy-finance.ec.europa.eu/economic-research-and-databases/economic-databases/ameco-database_en> database and automated model selection procedures. See the paper by Streicher (2022) <http://hdl.handle.net/20.500.11850/552089> for details.
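The quantity being estimated is straightforward once potential output is available: with a Cobb-Douglas production function, potential output combines trend TFP, potential labour, and the capital stock, and the output gap is the percentage deviation of actual from potential output. A toy computation (the labour share of 0.65 is an assumption about the Commission's methodology; all numbers are invented):

    alpha  <- 0.65    # assumed labour share
    tfp_tr <- 1.02    # trend total factor productivity
    L_pot  <- 100     # potential labour input (NAWRU-consistent)
    K      <- 300     # capital stock

    Y_pot <- tfp_tr * L_pot^alpha * K^(1 - alpha)   # Cobb-Douglas potential output
    Y     <- 148                                    # actual output
    gap   <- 100 * (Y / Y_pot - 1)                  # output gap in percent, here about -1.2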
Interactive, Complex Heatmaps
Make complex, interactive heatmaps. 'iheatmapr' includes a modular system for iteratively building up complex heatmaps, as well as the iheatmap() function for making relatively standard heatmaps.
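A minimal sketch of both styles (the one-call iheatmap() interface mentioned above and the modular main_heatmap() builder; the add_* builder names are taken from the package's documentation but should be checked against the manual):

    library(iheatmapr)
    library(magrittr)   # for %>%

    mat <- scale(as.matrix(mtcars))

    # one-call interface
    iheatmap(mat)

    # modular interface: build the figure up component by component
    main_heatmap(mat, name = "value") %>%
      add_col_labels() %>%
      add_row_labels() %>%
      add_row_dendro(hclust(dist(mat)))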
Processing of Model Parameters
Utilities for processing the parameters of various statistical models. Beyond computing p values, CIs, and other indices for a wide variety of models (see the list of supported models using the function 'insight::supported_models()'), this package implements features like bootstrapping or simulation of parameters and models, feature reduction (feature extraction and variable selection), as well as functions to describe data and variable characteristics (e.g. skewness, kurtosis, smoothness, or distribution).
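A minimal example of the core workflow (model_parameters() is the package's main function; bootstrap and iterations are arguments of that function):

    library(parameters)

    model <- lm(mpg ~ wt + cyl, data = mtcars)

    # estimates, confidence intervals, and p values in one table
    model_parameters(model)

    # bootstrapped parameters instead of Wald-type inference
    model_parameters(model, bootstrap = TRUE, iterations = 500)

    # model classes covered by the ecosystem
    head(insight::supported_models())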
A Convenient Way of Descriptive Statistics
Descriptive statistics are essential for publishing articles. This package performs descriptive statistics according to the data type. If the data are a continuous variable, the mean and standard deviation or the median and quartiles are output automatically; if the data are a categorical variable, counts and percentages are output. In addition, if two variables are entered, both are described and the relationship between them is tested automatically according to their data types. For example, if one of the two input variables is categorical, the other variable is described within each of its categories and the differences between groups are compared using appropriate statistical methods; for more than two groups, post hoc tests are applied. For more information on the methods used, please see the following references:
Libiseller, C. and Grimvall, A. (2002)
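The package's own interface is not reproduced here; as a generic illustration of the type-driven logic described above (continuous variables summarized by mean/SD, categorical variables by counts and percentages, and a test chosen from the types of a variable pair), one could write:

    # generic sketch, not the package's API
    describe_var <- function(x) {
      if (is.numeric(x)) {
        sprintf("mean = %.2f, sd = %.2f", mean(x, na.rm = TRUE), sd(x, na.rm = TRUE))
      } else {
        tab <- table(x)
        paste(sprintf("%s: %d (%.1f%%)", names(tab), tab, 100 * prop.table(tab)),
              collapse = "; ")
      }
    }

    compare_two <- function(x, group) {
      # 'group' is assumed categorical; 'x' may be continuous or categorical
      group <- factor(group)
      if (is.numeric(x)) {
        if (nlevels(group) == 2) t.test(x ~ group) else summary(aov(x ~ group))
        # for more than two groups a post hoc step could follow, e.g. pairwise.t.test(x, group)
      } else {
        chisq.test(table(x, group))
      }
    }

    describe_var(mtcars$mpg)
    describe_var(factor(mtcars$cyl))
    compare_two(mtcars$mpg, factor(mtcars$am))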