Dose Rate Modelling of Carbonate-Rich Samples
Translation of the 'MATLAB' program 'Carb' (Nathan and Mauz 2008) for dose rate modelling of carbonate-rich samples.
Improved Methods for Constructing Prediction Intervals for Network Meta-Analysis
Improved methods to construct prediction intervals for network meta-analysis. Implements the parametric bootstrap and the Kenward-Roger-type adjustment of Noma et al. (2022).
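As a rough illustration of the parametric-bootstrap idea behind a prediction interval (a generic random-effects sketch with made-up numbers, not the Noma et al. (2022) adjustment or this package's interface):

    # Parametric-bootstrap 95% prediction interval for the effect in a new study
    # (illustrative only; mu_hat, se_mu and tau_hat are made-up estimates).
    set.seed(1)
    mu_hat  <- 0.30   # pooled treatment effect
    se_mu   <- 0.08   # standard error of the pooled effect
    tau_hat <- 0.15   # between-study standard deviation

    B <- 10000
    theta_new <- rnorm(B, mean = rnorm(B, mu_hat, se_mu), sd = tau_hat)
    quantile(theta_new, c(0.025, 0.975))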
Generalized Linear Models Extended
Extended techniques for generalized linear models (GLMs), especially for binary responses, including parametric links and heteroscedastic latent variables.
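For orientation, the baseline such extensions build on is the binary-response GLM with a fixed link, which base R already fits (this is plain stats::glm, not the extended interface; parametric link families go further by estimating the link's shape parameters):

    # Base-R binary GLMs with two fixed link functions on a constructed response.
    y <- as.numeric(ToothGrowth$len > median(ToothGrowth$len))
    fit_logit   <- glm(y ~ dose + supp, data = ToothGrowth, family = binomial("logit"))
    fit_cloglog <- glm(y ~ dose + supp, data = ToothGrowth, family = binomial("cloglog"))
    AIC(fit_logit, fit_cloglog)   # fixed links can only be compared; a parametric link is estimated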
Power and Sample Size for Health Researchers via Shiny
Power and Sample Size for Health Researchers is a Shiny application that brings together a series of functions related to sample size and power calculations for common analyses in the healthcare field. It provides functionality to calculate power and sample size to estimate or test hypotheses for means and proportions (including tests for correlated groups, equivalence, non-inferiority, and superiority), associations, correlation coefficients, regression coefficients (linear, logistic, gamma, and Cox), linear mixed models, Cronbach's alpha, interobserver agreement, intraclass correlation coefficients, limits of agreement on Bland-Altman plots, area under the curve, and sensitivity and specificity incorporating the prevalence of disease. You can also use the online version at <https://hcpa-unidade-bioestatistica.shinyapps.io/PSS_Health/>.
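The flavour of calculation the app automates can be reproduced for the simplest designs with base R's stats functions (inputs below are made up for illustration; the app covers many more scenarios):

    # Sample size per group to detect a difference between two proportions
    power.prop.test(p1 = 0.30, p2 = 0.45, power = 0.80, sig.level = 0.05)

    # Power of a two-sample t-test with n = 40 per group and a 0.5-SD effect
    power.t.test(n = 40, delta = 0.5, sd = 1, sig.level = 0.05)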
Highlight Conserved Edits Across Versions of a Document
Input multiple versions of a source document, and receive HTML code for a highlighted version of the source document indicating the frequency of occurrence of phrases in the different versions. This method is described in Chapter 3 of Rogers (2024) <https://digitalcommons.unl.edu/dissertations/AAI31240449/>.
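A toy sketch of the underlying idea, counting how many versions contain each three-word phrase of the source (purely illustrative, not the package's implementation or output format):

    # Count, for each 3-word phrase of the source version, how many versions contain it.
    versions <- c("the quick brown fox jumps over the lazy dog",
                  "the quick brown fox leaps over the lazy dog",
                  "a quick brown fox jumps over a sleeping dog")

    trigrams <- function(txt) {
      w <- strsplit(txt, "\\s+")[[1]]
      vapply(seq_len(length(w) - 2), function(i) paste(w[i:(i + 2)], collapse = " "), "")
    }

    source_phrases <- trigrams(versions[1])
    conserved <- vapply(source_phrases,
                        function(p) sum(vapply(versions, grepl, logical(1),
                                               pattern = p, fixed = TRUE)),
                        integer(1))
    conserved   # higher counts = phrase conserved across more versions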
Ordination Methods for the Analysis of Beta-Diversity Indices
The analysis of different aspects of biodiversity requires specific algorithms. For example, in regionalisation analyses, the high frequency of ties and zero values in dissimilarity matrices produced by beta-diversity turnover produces hierarchical cluster dendrograms whose topology and bootstrap supports are affected by the order of rows in the original matrix. Moreover, visualisation of biogeographical regionalisation can be facilitated by a combination of hierarchical clustering and multi-dimensional scaling. The recluster package provides robust techniques to visualise and analyse patterns of biodiversity and to improve occurrence data for cryptic taxa. Other functions related to recluster (e.g. the biodecrypt family) are currently available on GitHub at <https://github.com/leondap/recluster>.
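The clustering-plus-ordination combination mentioned above can be pictured with base R on a presence/absence dissimilarity matrix (a generic sketch, not the recluster functions and not its tie-aware approach):

    # Generic base-R sketch: cluster and ordinate the same dissimilarity matrix.
    set.seed(42)
    sites <- matrix(rpois(10 * 20, lambda = 2), nrow = 10,
                    dimnames = list(paste0("site", 1:10), NULL))
    d <- dist(sites > 0, method = "binary")   # Jaccard-type dissimilarity on presence/absence

    hc  <- hclust(d, method = "average")      # hierarchical clustering (UPGMA)
    mds <- cmdscale(d, k = 2)                 # classical multidimensional scaling

    plot(mds, type = "n", xlab = "MDS 1", ylab = "MDS 2")
    text(mds, labels = cutree(hc, k = 3))     # label ordination points by cluster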
Koeppen-Geiger Climatic Zones
Aids in identifying the Koeppen-Geiger (KG) climatic zone for a given location. The Koeppen-Geiger climate zones were first published in 1884 as a system to classify regions of the earth by their relative heat and humidity through the year, for the benefit of human health, plant and agricultural science, and other human activity [1]. This climate zone classification system, applicable to all of the earth's surface, has continued to be developed by scientists up to the present day. Recently, one of us (FZ) published updated, higher-accuracy KG climate zone definitions [2]. In this package we use these updated high-resolution maps as the data source [3]. We provide functions that return the KG climate zone for a given longitude and latitude, or for a given United States zip code. In addition, the CZUncertainty() function will check nearby climate zones to determine whether the given location is near a climate zone boundary. An interactive Shiny app is also provided to determine the KG climate zone for a given longitude and latitude, or United States zip code. Digital data, as well as animated maps showing the shift of the climate zones, are provided on the following website <http://koeppen-geiger.vu-wien.ac.at>. This work was supported by the DOE-EERE SunShot award DE-EE-0007140.
[1] W. Koeppen (2011)
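Usage is roughly along the following lines; CZUncertainty() is named above, while RoundCoordinates() and LookupCZ() are helper names recalled from the package and should be checked against its documentation (the coordinates are arbitrary):

    # Sketch: look up the KG climate zone for one site, then check whether it
    # lies near a zone boundary (helper names assumed; see the package docs).
    library(kgc)

    site <- data.frame(Site = "Golden, CO", Longitude = -105.22, Latitude = 39.75)
    site <- data.frame(site,
                       rndCoord.lon = RoundCoordinates(site$Longitude),
                       rndCoord.lat = RoundCoordinates(site$Latitude))
    site$ClimateZ <- LookupCZ(site)
    site$ClimateZ

    CZUncertainty(site)   # flags locations close to a climate-zone boundary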
Large Amplitude Oscillatory Shear (LAOS)
The Sequence of Physical Processes (SPP) framework is a way of interpreting the transient data derived from oscillatory rheological tests. It is designed to allow both the linear and non-linear deformation regimes to be understood within a single unified framework. This code provides a convenient way to determine the SPP framework metrics for a given sample of oscillatory data. It will produce a text file containing the SPP metrics, which the user can then plot using their software of choice. It can also produce a second text file with additional derived data (components of the tangent, normal, and binormal vectors), as well as pre-plotted figures if so desired. It is the R version of the SPP package by the Simon Rogers Group for Soft Matter (Simon A. Rogers, Brian M. Erwin, Dimitris Vlassopoulos, and Michel Cloitre, 2011).
Ontology Tools with Data FAIRification in Development
Translates several CSV files with ontological terms and corresponding data into RDF triples. These RDF triples are stored in OWL and JSON-LD files, facilitating data accessibility, interoperability, and knowledge unification. The triples are also visualized in a graph saved as an SVG. The input CSVs must be formatted with a template from a public Google Sheet; see the README or vignette for more information. This tool is used by the SDLE Research Center at Case Western Reserve University to create and visualize material science ontologies, and it includes example ontologies to demonstrate its capabilities. This work was supported by the U.S. Department of Energy's Office of Energy Efficiency and Renewable Energy (EERE) under Solar Energy Technologies Office (SETO) Agreement Numbers DE-EE0009353 and DE-EE0009347, Department of Energy (National Nuclear Security Administration) under Award Number DE-NA0004104 and Contract Number B647887, and U.S. National Science Foundation Award under Award Number 2133576.
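The CSV-to-triples step can be pictured with the generic 'rdflib' R package (a minimal sketch with made-up column names and a made-up base IRI, not this package's template or workflow):

    # Minimal sketch: convert rows of a small term table into RDF triples and
    # serialize them as JSON-LD (generic 'rdflib' usage, illustrative IRIs).
    library(rdflib)

    terms <- data.frame(
      term  = c("SolarCell", "SolarCell"),
      prop  = c("http://www.w3.org/2000/01/rdf-schema#subClassOf",
                "http://www.w3.org/2000/01/rdf-schema#label"),
      value = c("http://example.org/ontology/Device", "solar cell"),
      stringsAsFactors = FALSE)

    base <- "http://example.org/ontology/"
    g <- rdf()
    for (i in seq_len(nrow(terms))) {
      rdf_add(g,
              subject   = paste0(base, terms$term[i]),
              predicate = terms$prop[i],
              object    = terms$value[i])
    }
    rdf_serialize(g, "terms.jsonld", format = "jsonld")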
Import and Export Data
Import and export data from the most common statistical formats using R functions that minimise loss of data information, with special attention to date and labelled variables.
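For comparison, the kind of round trip involved, keeping value labels and dates intact, looks like this with the widely used 'haven' package (not necessarily this package's own functions; the file name is made up):

    # Round-trip an SPSS file while preserving a labelled variable and a date.
    library(haven)

    df <- data.frame(id    = 1:3,
                     visit = as.Date(c("2021-01-05", "2021-02-10", "2021-03-15")))
    df$group <- labelled(c(1, 2, 1), labels = c(Control = 1, Treatment = 2))

    write_sav(df, "example.sav")     # export to SPSS format
    back <- read_sav("example.sav")  # re-import
    str(back)                        # labelled and date columns survive the round trip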