UK National River Flow Archive Data from R

Utility functions to retrieve data from the UK National River Flow Archive (<>). The package contains R wrappers to the UK NRFA temporary API. There are functions to retrieve stations falling within a bounding box, to generate a map, and to extract time series and general information.

rnrfa: An R package to Retrieve, Filter and Visualize Data from the UK National River Flow Archive


The UK National River Flow Archive serves daily streamflow data, spatial rainfall averages and information regarding elevation, geology, land cover and FEH related catchment descriptors.

An API is currently under development that should in future provide access to the following services: a metadata catalogue; catalogue filters based on a geographical bounding box; catalogue filters based on metadata entries; and gauged daily data for about 400 stations, available in WaterML2 format (the OGC standard used to describe hydrological time series).

The information returned by the first three services is in JSON format, while the last one is an XML variant.

The rnrfa package aims to provide simpler and more efficient access to these data through wrapper functions that send the HTTP requests and interpret the XML/JSON responses.

The rnrfa package depends on the gdal library; make sure it is installed on your system before attempting to install this package.

R package dependencies can be installed running the following code:

install.packages(c("cowplot", "plyr", "httr", "xml2", "stringr", "xts",
                   "rjson", "ggmap", "ggplot2", "sp", "rgdal"))
# Note: parallel ships with base R and does not need to be installed

This demo also makes use of external libraries. To install and load them, run the following commands:

packs <- c("devtools", "DT", "leaflet")
new.packages <- packs[!(packs %in% installed.packages()[, "Package"])]
if (length(new.packages) > 0) install.packages(new.packages)
lapply(packs, require, character.only = TRUE)


The stable version of the rnrfa package is available from CRAN:
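For example:

```r
install.packages("rnrfa")
```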


Or you can install the development version from Github with devtools:
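A sketch, assuming the development repository is cvitolo/rnrfa on GitHub:

```r
# install.packages("devtools")  # if devtools is not yet installed
devtools::install_github("cvitolo/rnrfa")
```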


Now, load the rnrfa package:
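```r
library(rnrfa)
```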



List of monitoring stations

The R function that interrogates the NRFA catalogue to retrieve the full list of monitoring stations is called catalogue(). Used with no inputs, the function requests the full list of gauging stations with associated metadata. The output is a data frame containing one record for each station and as many columns as there are metadata entries available.

# Retrieve information for all the stations in the catalogue:
allStations <- catalogue()

Those entries are briefly described as follows:

  • id = Station identification number
  • name = Name of the station
  • location = Area in which the station is located
  • river = River catchment
  • stationDescription = General station description, containing information on weirs, ratings, etc.
  • catchmentDescription = Information on topography, geology, land cover, etc.
  • hydrometricArea = UK hydrometric area identification number
  • operator = UK measuring authorities
  • haName = Hydrometric Area name
  • gridReference = OS Grid Reference number
  • stationType = Type of station (e.g. flume, weir, etc.)
  • catchmentArea = Catchment area (in km^2)
  • gdfStart = Year in which recordings started
  • gdfEnd = Year in which recordings ended
  • farText = Information on the regime (e.g. natural, regulated, etc.)
  • categories = various tags (e.g. FEH_POOLING, FEH_QMED, HIFLOWS_INCLUDED)
  • altitude = Altitude measured in metres above Ordnance Datum or, in Northern Ireland, Malin Head.
  • sensitivity = Sensitivity index calculated as the percentage change in flow associated with a 10 mm increase in stage at the Q95 flow.
  • lat = a numeric vector of latitude coordinates.
  • lon = a numeric vector of longitude coordinates.
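The metadata entries above can be inspected by selecting columns of the returned data frame, e.g. (column selection illustrative):

```r
# Peek at a few metadata columns for the first stations in the catalogue
head(allStations[, c("id", "name", "river", "catchmentArea")])
```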

Station filtering

The same function catalogue() can be used to filter stations based on a bounding box or any of the metadata entries.

# Define a bounding box:
bbox <- list(lonMin=-3.82, lonMax=-3.63, latMin=52.43, latMax=52.52)
# Filter stations based on bounding box
someStations <- catalogue(bbox)
# Filter stations belonging to a certain hydrometric area
someStations <- catalogue(columnName = "haName",
                          columnValue = "Wye (Hereford)")
# Filter based on bounding box & metadata strings
someStations <- catalogue(bbox,
                          columnName = "haName",
                          columnValue = "Wye (Hereford)")
# Filter stations based on a threshold (values are illustrative)
someStations <- catalogue(bbox,
                          columnName = "catchmentArea",
                          columnValue = ">1")
# Filter based on minimum recording years
someStations <- catalogue(bbox,
                          minRec = 30)
# Filter stations based on identification number
someStations <- catalogue(columnName = "id",
                          columnValue = c(3001, 3002, 3003))
# Other combined filtering
someStations <- catalogue(bbox,
                          columnName = "id",
                          columnValue = c(3001, 3002, 3003),
                          minRec = 30)


The only geospatial information contained in the list of stations in the catalogue is the OS grid reference (column "gridReference"). The rnrfa package allows convenient conversion to more standard coordinate systems. The function osg_parse(), for example, converts the string to easting and northing in the British National Grid (BNG) coordinate system (EPSG code: 27700), as in the example below:

# Where is the first catchment located?
# Convert its OS grid reference to BNG easting and northing
osg_parse("SN853872")

The same function can also convert a grid reference to latitude and longitude in the WGS84 coordinate system (EPSG code: 4326), as in the example below.

# Convert the grid reference to WGS84 latitude and longitude
osg_parse("SN853872", CoordSystem = "WGS84")

osg_parse() also works with multiple references:
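For instance (the second grid reference here is illustrative):

```r
# Convert several OS grid references in a single call
osg_parse(c("SN853872", "TQ722133"))
```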


Get time series data

The first column of the table "someStations" contains the station id number. This can be used to retrieve time series data, converting the WaterML2 files to time series objects (of class zoo).

The National River Flow Archive serves two types of time series data: gauged daily flow and catchment mean rainfall.

These time series can be obtained using the functions gdf() and cmr(), respectively. Both functions accept three inputs:

  • id, the station identification number(s) (single string or character vector).

  • metadata, a logical variable (FALSE by default). If TRUE, the result for a single station is a list of two elements: data (the time series) and meta (the metadata).

  • cl, a cluster object created by the parallel package. This is NULL by default, which sends sequential requests to the server.

Here is how to retrieve monthly mean rainfall data for the Shin at Lairg (id = 3001) catchment.

# Fetch only time series data from the waterml2 service
info <- cmr(id = "3001")
# Fetch time series data and metadata from the waterml2 service
info <- cmr(id = "3001", metadata = TRUE)
plot(info$data, main = paste("Monthly rainfall data for the",
     info$meta$stationName, "catchment"),
     xlab = "", ylab = info$meta$units)

Here is how to retrieve daily gauged flow data for the Shin at Lairg (id = 3001) catchment.

# Fetch only time series data from the waterml2 service
info <- gdf(id = "3001")
# Fetch time series data and metadata from the waterml2 service
info <- gdf(id = "3001", metadata = TRUE)
plot(info$data, main = paste("Daily flow data for the",
     info$meta$stationName, "catchment"),
     xlab = "", ylab = info$meta$units)

Multiple sites

By default, the functions gdf() and cmr() can be used to fetch time series data from multiple sites sequentially (using a single core):

# Search data/metadata in the waterml2 service
s <- cmr(c(3002,3003), metadata = TRUE)
# s is a list of 2 objects (one object for each site)
plot(s[[1]]$data, col = "blue",
     main = paste(s[[1]]$meta$stationName, "and", s[[2]]$meta$stationName))
lines(s[[2]]$data, col = "green")


Upgrade your data.frame to an interactive table (using the DT package):
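A minimal sketch, assuming DT was loaded at the start of the demo and someStations comes from the filtering examples above:

```r
# Render the filtered station catalogue as an interactive table
datatable(someStations[, c("id", "name", "river", "catchmentArea")])
```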


Create interactive maps using leaflet:

leaflet(data = someStations) %>% addTiles() %>%
  addMarkers(~lon, ~lat, popup = ~as.character(paste(id,name)))

Interactive plots using dygraphs:

dygraph(info$data) %>% dyRangeSelector()

Sequential vs Concurrent requests: a simple benchmark test

# Use detectCores() to find out how many cores are available on your machine
cl <- makeCluster(getOption("cl.cores", detectCores()))
# Filter all the stations within the above bounding box
someStations <- catalogue(bbox)
# Get flow data with a sequential approach
system.time( s1 <- gdf(someStations$id, cl = NULL) )
# Get flow data with a concurrent approach (using `parLapply()`)
system.time( s2 <- gdf(id = someStations$id, cl = cl) )  

The measured flows are expected to increase with catchment area. Let's show this simple regression in a plot:

# Calculate the mean flow for each catchment
someStations$meangdf <- unlist( lapply(s2, mean) )
# Linear model
ggplot(someStations, aes(x = as.numeric(catchmentArea), y = meangdf)) +
  geom_point() +
  stat_smooth(method = "lm", col = "red") +
  xlab("Catchment area [km^2]") +
  ylab("Mean flow [m^3/s]")

Terms and Conditions

Please refer to the NRFA Terms and Conditions for use of NRFA data and the accompanying disclaimer.

This package uses a non-public API which is likely to change. Package and functions herein are provided as is, without any guarantee.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.



Updated to v2.0 and submitted to CRAN.

Major changes:

  1. Developed new functions to interface with the new API
  2. Updated existing functions to work with the new API

Updated to v1.5 and submitted to CRAN.

Major changes:

  1. osg_parse no longer fails when gridRefs contains a mixture of upper and lower case, thanks to Christoph Kratz (@bogsnork on GitHub)!
  2. Fixed tests for get_ts
  3. Automatic deployment of the documentation website on GitHub

Updated to v1.4 and submitted to CRAN.

Major changes:

  1. osg_parse is now vectorised, thanks to Tobias Gauster!

Updated to v1.3 and submitted to CRAN.

Major changes:

  1. Removed dependency on the cowplot package

Updated to v1.2 and submitted a paper to the R Journal.

Major changes:

  1. Added some utility functions (e.g. plot_trend) to generate plots in the paper

Updated to v1.1 and submitted to CRAN.

Major changes:

  1. testthat framework for unit tests
  2. Travis CI for continuous integration on Linux
  3. AppVeyor for continuous integration on Windows
  4. Added code of conduct
  5. Renamed functions to follow best practice
  6. Moved package to root directory to follow best practice

Updated to v0.5.4 and submitted to CRAN.

Major changes:

  1. Michael Spencer (contributor) updated the function OSGparse to work with grid references of different lengths.

  2. Added testthat framework for unit tests



2.0 by Claudia Vitolo, 11 days ago


Authors: Claudia Vitolo [aut, cre], Matthew Fry [ctb] (Matthew supervised the unofficial API integration), Wouter Buytaert [ctb] (this package is part of Claudia Vitolo's PhD work and Wouter is the supervisor), Michael Spencer [ctb] (Michael updated the function osg_parse to work with grid references of different lengths), Tobias Gauster [ctb] (Tobias improved the function osg_parse by introducing vectorisation)


Task views: Hydrological Data and Modeling

GPL-3 license

Imports rgdal, dplyr, curl, jsonlite, lubridate, graphics, stats, httr, xts, ggmap, ggplot2, sp, parallel, tibble

Suggests testthat, knitr, covr, lintr, rmarkdown

Imported by hddtools.
