Read Rectangular Text Data



The goal of readr is to provide a fast and friendly way to read rectangular data (like csv, tsv, and fwf). It is designed to flexibly parse many types of data found in the wild, while still cleanly failing when data unexpectedly changes. If you are new to readr, the best place to start is the data import chapter in R for data science.


# The easiest way to get readr is to install the whole tidyverse:
install.packages("tidyverse")

# Alternatively, install just readr:
install.packages("readr")

# Or the development version from GitHub:
# install.packages("devtools")
devtools::install_github("tidyverse/readr")



readr is part of the core tidyverse, so load it with:

library(tidyverse)

#> ── Attaching packages ────────────────────────────────── tidyverse 1.2.1 ──
#> ✔ ggplot2 3.1.0     ✔ purrr   0.2.5
#> ✔ tibble  1.4.2     ✔ dplyr   0.7.7
#> ✔ tidyr   0.8.2     ✔ stringr 1.3.1
#> ✔ readr   1.2.0     ✔ forcats 0.3.0
#> ── Conflicts ───────────────────────────────────── tidyverse_conflicts() ──
#> ✖ dplyr::filter() masks stats::filter()
#> ✖ dplyr::lag()    masks stats::lag()

To accurately read a rectangular dataset with readr you combine two pieces: a function that parses the overall file, and a column specification. The column specification describes how each column should be converted from a character vector to the most appropriate data type, and in most cases it’s not necessary because readr will guess it for you automatically.

readr supports seven file formats with seven read_ functions:

  • read_csv(): comma separated (CSV) files
  • read_csv2(): semicolon separated files (common in countries where , is used as the decimal place)
  • read_tsv(): tab separated files
  • read_delim(): general delimited files
  • read_fwf(): fixed width files
  • read_table(): tabular files where columns are separated by white-space
  • read_log(): web log files

In many cases, these functions will just work: you supply the path to a file and you get a tibble back. The following example loads a sample file bundled with readr:

mtcars <- read_csv(readr_example("mtcars.csv"))
#> Parsed with column specification:
#> cols(
#>   mpg = col_double(),
#>   cyl = col_double(),
#>   disp = col_double(),
#>   hp = col_double(),
#>   drat = col_double(),
#>   wt = col_double(),
#>   qsec = col_double(),
#>   vs = col_double(),
#>   am = col_double(),
#>   gear = col_double(),
#>   carb = col_double()
#> )

Note that readr prints the column specification. This is useful because it allows you to check that the columns have been read in as you expect, and if they haven’t, you can easily copy and paste into a new call:

mtcars <- read_csv(readr_example("mtcars.csv"), col_types =
  cols(
    mpg = col_double(),
    cyl = col_integer(),
    disp = col_double(),
    hp = col_integer(),
    drat = col_double(),
    vs = col_integer(),
    wt = col_double(),
    qsec = col_double(),
    am = col_integer(),
    gear = col_integer(),
    carb = col_integer()
  )
)

vignette("readr") gives more detail on how readr guesses the column types, how you can override the defaults, and provides some useful tools for debugging parsing problems.


There are two main alternatives to readr: base R and data.table’s fread(). The most important differences are discussed below.

Base R

Compared to the corresponding base functions, readr functions:

  • Use a consistent naming scheme for the parameters (e.g. col_names and col_types not header and colClasses).

  • Are much faster (up to 10x).

  • Leave strings as is by default, and automatically parse common date/time formats.

  • Have a helpful progress bar if loading is going to take a while.

  • All functions work exactly the same way regardless of the current locale. To override the US-centric defaults, use locale().
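For example, a minimal sketch of the locale() override mentioned above, using parse_number() on a made-up European-formatted string:

```r
library(readr)

# Parse a European-formatted number by overriding the decimal and grouping marks
eu <- locale(decimal_mark = ",", grouping_mark = ".")
parse_number("1.234,56", locale = eu)
#> [1] 1234.56
```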

data.table and fread()

data.table has a function similar to read_csv() called fread(). Compared to fread(), readr functions:

  • Are slower. If you want absolutely the best performance, use data.table::fread().

  • Use a slightly more sophisticated parser, recognising both doubled ("""") and backslash escapes ("\""), and can produce factors and date/times directly.

  • Force you to supply all parameters, where fread() saves you work by automatically guessing the delimiter, whether or not the file has a header, and how many lines to skip.

  • Are built on a different underlying infrastructure. readr functions are designed to be quite general, which makes it easier to add support for new rectangular data formats. fread() is designed to be as fast as possible.


Thanks to:

  • Joe Cheng for showing me the beauty of deterministic finite automata for parsing, and for teaching me why I should write a tokenizer.

  • JJ Allaire for helping me come up with a design that makes very few copies, and is easy to extend.

  • Dirk Eddelbuettel for coming up with the name!

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.


readr (development version)

  • Column specifications are now coloured when printed. This makes it easy to see at a glance when a column is input as a different type than the rest. Colouring can be disabled by setting options(crayon.enabled = FALSE).

  • as.col_spec() can now use named character vectors, which makes read_csv("file.csv", col_types = c(xyz = "c")) equivalent to read_csv("file.csv", col_types = cols(xyz = col_character())).

readr 1.3.1

  • Fix skipping when single quotes are embedded in double quoted strings, and single quotes in skipped or commented lines (#944, #945).

  • Fix for compilation using custom architectures on macOS (#919).

  • Fix for valgrind errors (#941).

readr 1.3.0

Breaking Changes

Blank line skipping

readr's blank line skipping has been modified to be more consistent and to avoid edge cases that affected the behavior in 1.2.0. The skip parameter now behaves more similarly to how it worked prior to readr 1.2.0, and the new skip_empty_rows parameter can be used to control whether fully blank lines are skipped. (#923)

tibble data frame subclass

readr 1.3.0 returns results with a spec_tbl_df subclass. This differs from a regular tibble only in that the spec attribute (which holds the column specification) is lost as soon as the object is subset (and a normal tbl_df object is returned).

Historically tbl_dfs lost their attributes once they were subset. However, recent versions of tibble retain the attributes when subsetting, so the spec_tbl_df subclass is needed to preserve the previous behavior.

This should only break compatibility if you are explicitly checking the class of the returned object. A way to get backwards-compatible behavior is to call subset with no arguments on your object, e.g. x[].
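A minimal sketch of this behaviour (assuming readr >= 1.3.0 and the bundled mtcars.csv example):

```r
library(readr)

df <- read_csv(readr_example("mtcars.csv"))
inherits(df, "spec_tbl_df")
#> [1] TRUE

# Subsetting with no arguments demotes the result to a regular tbl_df
inherits(df[], "spec_tbl_df")
#> [1] FALSE
```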


  • hms objects with NA values are now written without whitespace padding (#930).
  • read_*() functions now return spec_tbl_df objects, which differ from regular tbl_df objects only in that the spec attribute is removed (and they are demoted to regular tbl_df objects) as soon as they are subset (#934).
  • write_csv2() now properly respects the na argument (#928).
  • Fixes compilation with multiple architectures on Linux (#922).
  • Fixes compilation with R < 3.3.0.

readr 1.2.1

This release skips the clipboard tests on CRAN servers.

readr 1.2.0

Breaking Changes

Integer column guessing

readr functions no longer guess columns are of type integer; instead these columns are guessed as numeric. Because R uses 32-bit integers and 64-bit doubles, all integers can be stored in doubles, guaranteeing no loss of information. This change was made to remove errors when numeric columns were incorrectly guessed as integers. If you know a certain column is an integer and would like to read it as such, you can do so by specifying the column type explicitly with the col_types argument.
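For instance, a sketch of requesting an integer column explicitly (the inline CSV string is made-up data):

```r
library(readr)

# Without col_types, both columns would now be guessed as double
df <- read_csv("x,y\n1,10\n2,20\n",
               col_types = cols(x = col_integer(), y = col_double()))
class(df$x)
#> [1] "integer"
```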

Blank line skipping

readr now always skips blank lines automatically when parsing, which may change the number of lines you need to pass to the skip parameter. For instance, if your file had one blank line followed by two more lines you want to skip, previously you would pass skip = 3; now you only need to pass skip = 2.

New features

Melt functions

There is now a family of melt_*() functions in readr. These functions store data in 'long' or 'melted' form, where each row corresponds to a single value in the dataset. This form is useful when your data is ragged and not rectangular.

data <- "a,b,c
1,2
w,x,y,z"
melt_csv(data)
#> # A tibble: 9 x 4
#>     row   col data_type value
#>   <dbl> <dbl> <chr>     <chr>
#> 1     1     1 character a    
#> 2     1     2 character b    
#> 3     1     3 character c    
#> 4     2     1 integer   1    
#> 5     2     2 integer   2    
#> 6     3     1 character w    
#> 7     3     2 character x    
#> 8     3     3 character y    
#> 9     3     4 character z

Thanks to Duncan Garmonsway (@nacnudus) for great work on the idea and implementation of the melt_*() functions!

Connection improvements

readr 1.2.0 changes how R connections are parsed by readr. In previous versions of readr the connections were read into an in-memory raw vector, then passed to the readr functions. This made reading connections from small to medium datasets fast, but also meant that the dataset had to fit into memory at least twice (once for the raw data, once for the parsed data). It also meant that reading could not begin until the full vector was read through the connection.

Now we instead write the connection to a temporary file (in the R temporary directory), then parse that temporary file. This means connections may take a little longer to be read, but they will no longer need to fit into memory. It also allows the use of the chunked readers to process the data in parts.

Future improvements to readr may allow it to parse data from connections in a streaming fashion, which would avoid many of the drawbacks of either method.

Additional new features

  • melt_*() functions added for reading ragged data (#760, @nacnudus).
  • AccumulateCallback R6 class added to provide an example of accumulating values in a single result (#689, @blakeboswell).
  • read_fwf() can now accept overlapping field specifications (#692, @gergness)
  • type_convert() now allows character column specifications and also silently skips non-character columns (#369, #699)
  • The parse_*() functions and read_fwf() gain a trim_ws argument to control whether the fields should be trimmed before parsing (#636, #735).
  • parse_number() now parses numbers in scientific notation using e and E (#684, @sambrady3).
  • Add write_excel_csv2() function to allow writing csv files with comma as a decimal separator and semicolon as a column separator (#753, @olgamie).
  • read_*() files now support reading from the clipboard by using clipboard() (#656).
  • write_file() gains a sep argument, to specify the line separator (#665).
  • Allow files to be read via FTP over SSH by recognising sftp as a URL protocol (#707, @jdeboer).
  • parse_date*() accepts %a for local day of week (#763, @tigertoes).
  • Added function read_lines_raw_chunked() (#710, @gergness)
  • write_csv2() added to complement write_excel_csv2() and allow writing csv file readable by read_csv2() (#870, @cderv).
  • as.col_spec() is now exported (#517).
  • write*() functions gain a quote_escape argument to control how quotes are escaped in the output (#854).
  • read*() functions now have a more informative error when trying to read a remote bz2 file (#891).
  • spec_table2() function added to correspond to read_table2() (#778, @mawds).
  • parse_factor() now has levels = NULL by default (#862, @mikmart).
  • "f" can now be used as a shortcode for col_factor() in cols() and the col_types argument to read_delim() and friends (#810, @mikmart).
  • Functions now read connections to a temporary file rather than to an in-memory object (#610, #76).
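As a sketch, the compact string form of col_types (including the new "f" shortcode) looks like this; the inline data is made up for illustration:

```r
library(readr)

# c = character, i = integer, f = factor
df <- read_csv("name,count,grade\nfoo,1,a\nbar,2,b\n", col_types = "cif")
levels(df$grade)
#> [1] "a" "b"
```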

Bug Fixes

  • standardise_path() now uses a case-insensitive comparison for the file extensions (#794).
  • parse_guess() now guesses logical types when given (lowercase) 'true' and 'false' inputs (#818).
  • read_*() functions no longer print a progress bar when running inside an RStudio notebook chunk (#793).
  • read_table2() now skips comments anywhere in the file (#908).
  • parse_factor() now handles the case of empty strings separately, so you can have a factor level that is an empty string (#864).
  • read_delim() now correctly reads quoted headers with embedded newlines (#784).
  • fwf_positions() now always returns col_names as a character (#797).
  • format_*() now explicitly marks its output encoding as UTF-8 (#697).
  • read_delim() now ignores whitespace between the delimiter and quoted fields (#668).
  • read_table2() now properly ignores blank lines at the end of a file like read_table() and read_delim() (#657).
  • read_delim(), read_table() and read_table2() now skip blank lines at the start of a file (#680, #747).
  • guess_parser() now guesses a logical type for columns which are all missing. This is useful when binding multiple files together where some files have missing columns. (#662).
  • Column guessing will now never guess an integer type. This avoids issues where double columns are incorrectly guessed as integers if they have only integer values in the first 1000 (#645, #652).
  • read_*() now converts string files to UTF-8 before parsing, which is convenient for non-UTF-8 platforms in most cases (#730, @yutannihilation).
  • write_csv() writes integers up to 10^15 without scientific notation (#765, @zeehio)
  • read_*() no longer throws a "length of NULL cannot be changed" warning when trying to resize a skipped column (#750, #833).
  • read_*() now handles non-ASCII paths properly with R >=3.5.0 on Windows (#838, @yutannihilation).
  • read*()'s trim_ws parameter now trims both spaces and tabs (#767)

readr 1.1.1

  • Point release for test compatibility with tibble v1.3.1.
  • Fixed undefined behavior in localtime.c when using locale(tz = "") after loading a timezone due to incomplete reinitialization of the global locale.

readr 1.1.0

New features

Parser improvements

  • parse_factor() gains a include_na argument, to include NA in the factor levels (#541).
  • parse_factor() can now accept levels = NULL, which allows one to generate factor levels based on the data (like stringsAsFactors = TRUE) (#497).
  • parse_numeric() now returns the full string if it contains no numbers (#548).
  • parse_time() now correctly handles 12 AM/PM (#579).
  • problems() now returns the file path in addition to the location of the error in the file (#581).
  • read_csv2() gives a message if it updates the default locale (#443, @krlmlr).
  • read_delim() now signals an error if given an empty delimiter (#557).
  • write_*() functions no longer write whole number doubles with a trailing .0 (#526).

Whitespace / fixed width improvements

  • fwf_cols() allows for specifying the col_positions argument of read_fwf() with named arguments of either column positions or widths (#616, @jrnold).
  • fwf_empty() gains an n argument to control how many lines are read for whitespace to determine column structure (#518, @Yeedle).
  • read_fwf() gives error message if specifications have overlapping columns (#534, @gergness)
  • read_table() can now handle pipe() connections (#552).
  • read_table() can now handle files with many lines of leading comments (#563).
  • read_table2() added; it allows any number of whitespace characters as delimiters, making it a more exact replacement for utils::read.table() (#608).

Writing to connections

  • write_*() functions now support writing to binary connections. In addition, output filenames ending in .gz, .bz2 or .xz will automatically open the appropriate connection and write the compressed file. (#348)
  • write_lines() now accepts a list of raw vectors (#542).

Miscellaneous features

  • col_euro_double(), parse_euro_double(), col_numeric(), and parse_numeric() have been removed.
  • guess_encoding() returns a tibble, and works better with lists of raw vectors (as returned by read_lines_raw()).
  • ListCallback R6 Class to provide a more flexible return type for callback functions (#568, @mmuurr)
  • tibble::as.tibble() now used to construct tibbles (#538).
  • read_csv(), read_csv2(), and read_tsv() gain a quote argument (#631, @noamross).


Bug fixes

  • parse_factor() now converts data to UTF-8 based on the supplied locale (#615).
  • read_*() functions with the guess_max argument now throw errors on inappropriate inputs (#588).
  • read_*_chunked() functions now properly end the stream if FALSE is returned from the callback.
  • read_delim() and read_fwf() when columns are skipped using col_types now report the correct column name (#573, @cb4ds).
  • spec() declarations that are long now print properly (#597).
  • read_table() does not print spec when col_types is not NULL (#630, @jrnold).
  • guess_encoding() now returns a tibble for all ASCII input as well (#641).

readr 1.0.0

Column guessing

The process by which readr guesses the types of columns has received a substantial overhaul to make it easier to fix problems when the initial guesses aren't correct, and to make it easier to generate reproducible code. Column specifications are now printed by default when you read from a file:

challenge <- read_csv(readr_example("challenge.csv"))
#> Parsed with column specification:
#> cols(
#>   x = col_integer(),
#>   y = col_character()
#> )

And you can extract those values after the fact with spec():

spec(challenge)
#> cols(
#>   x = col_integer(),
#>   y = col_character()
#> )

This makes it easier to quickly identify parsing problems and fix them (#314). If the column specification is long, the new cols_condense() is used to condense the spec by identifying the most common type and setting it as the default. This is particularly useful when only a handful of columns have a different type (#466).

You can also generate an initial specification without parsing the file using spec_csv(), spec_tsv(), etc.

Once you have figured out the correct column types for a file, it's often useful to make the parsing strict. You can do this either by copying and pasting the printed output, or for very long specs, saving the spec to disk with write_rds(). In production scripts, combine this with stop_for_problems() (#465): if the input data changes form, you'll fail fast with an error.
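A minimal production-style sketch combining an explicit spec with stop_for_problems(), using the bundled challenge.csv example:

```r
library(readr)

challenge <- read_csv(
  readr_example("challenge.csv"),
  col_types = cols(x = col_double(), y = col_date())
)
# Throws an error if any parsing problems occurred; silent otherwise
stop_for_problems(challenge)
```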

You can now also adjust the number of rows that readr uses to guess the column types with guess_max:

challenge <- read_csv(readr_example("challenge.csv"), guess_max = 1500)
#> Parsed with column specification:
#> cols(
#>   x = col_double(),
#>   y = col_date(format = "")
#> )

You can now access the guessing algorithm from R. guess_parser() will tell you which parser readr will select for a character vector (#377). We've made a number of fixes to the guessing algorithm:

  • New example extdata/challenge.csv which is carefully created to cause problems with the default column type guessing heuristics.

  • Blank lines and lines with only comments are now skipped automatically without warning (#381, #321).

  • Single '-' or '.' are now parsed as characters, not numbers (#297).

  • Numbers followed by a single trailing character are parsed as character, not numbers (#316).

  • We now guess at times using the time_format specified in the locale().

We have made a number of improvements to the reification of the col_types, col_names and the actual data:

  • If col_types is too long, it is subsetted correctly (#372, @jennybc).

  • If col_names is too short, the added names are numbered correctly (#374, @jennybc).

  • Missing column names are now given a default name (X2, X7 etc) (#318). Duplicated column names are now deduplicated. Both changes generate a warning; to suppress it supply an explicit col_names (setting skip = 1 if there's an existing ill-formed header).

  • col_types() accepts a named list as input (#401).

Column parsing

The date time parsers recognise three new format strings:

  • %I for 12 hour time format (#340).

  • %AD and %AT are "automatic" date and time parsers. They are both slightly less flexible than previous defaults. The automatic date parser requires a four digit year, and only accepts - and / as separators (#442). The flexible time parser now requires colons between hours and minutes and optional seconds (#424).

  • %y and %Y are now strict and require 2 or 4 characters respectively.

Date and time parsing functions received a number of small enhancements:

  • parse_time() returns hms objects rather than a custom time class (#409). It now correctly parses missing values (#398).

  • parse_date() returns a numeric vector (instead of an integer vector) (#357).

  • parse_date(), parse_time() and parse_datetime() gain an na argument to match all other parsers (#413).

  • If the format argument is omitted in parse_date() or parse_time(), the date and time formats specified in the locale will be used. These now default to %AD and %AT respectively.

  • You can now parse partial dates with parse_date() and parse_datetime(), e.g. parse_date("2001", "%Y") returns 2001-01-01.

parse_number() is slightly more flexible - it now parses numbers up to the first ill-formed character. For example parse_number("-3-") and parse_number("...3...") now return -3 and 3 respectively. We also fixed a major bug where parsing negative numbers yielded positive values (#308).

parse_logical() now accepts 0, 1 as well as lowercase t, f, true, false.
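The parsing behaviours described above can be seen directly:

```r
library(readr)

parse_number("-3-")
#> [1] -3
parse_number("...3...")
#> [1] 3
parse_date("2001", "%Y")
#> [1] "2001-01-01"
parse_logical(c("0", "1", "t", "false"))
#> [1] FALSE  TRUE  TRUE FALSE
```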

New readers and writers

  • read_file_raw() reads a complete file into a single raw vector (#451).

  • read_*() functions gain a quoted_na argument to control whether missing values within quotes are treated as missing values or as strings (#295).

  • write_excel_csv() can be used to write a csv file with a UTF-8 BOM at the start, which forces Excel to read it as UTF-8 encoded (#375).

  • write_lines() writes a character vector to a file (#302).

  • write_file() to write a single character or raw vector to a file (#474).

  • Experimental support for chunked reading and writing (read_*_chunked() functions). The API is unstable and subject to change in the future (#427).

Minor features and bug fixes

  • Printing double values now uses an implementation of the grisu3 algorithm which speeds up writing of large numeric data frames by ~10X. (#432) '.0' is appended to whole number doubles, to ensure they will be read as doubles as well. (#483)

  • readr imports tibble so that you get consistent tbl_df behaviour (#317, #385).

  • default_locale() now sets the default locale in readr.default_locale rather than regenerating it for each call. (#416).

  • locale() now automatically sets decimal mark if you set the grouping mark. It throws an error if you accidentally set decimal and grouping marks to the same character (#450).

  • All read_*() can read into long vectors, substantially increasing the number of rows you can read (#309).

  • All read_*() functions return empty objects rather than signaling an error when run on an empty file (#356, #441).

  • read_delim() gains a trim_ws argument (#312, @noamross).

  • read_fwf() received a number of improvements:

    • read_fwf() can now reliably read only a partial set of columns (#322, #353, #469).

    • fwf_widths() accepts negative column widths for compatibility with the widths argument in read.fwf() (#380, @leeper).

    • You can now read fixed width files with ragged final columns, by setting the final end position in fwf_positions() or final width in fwf_widths() to NA (#353, @ghaarsma). fwf_empty() does this automatically.

    • read_fwf() and fwf_empty() can now skip commented lines by setting a comment argument (#334).

  • read_lines() ignores embedded nulls in strings (#338) and gains an na argument (#479).

  • readr_example() makes it easy to access example files bundled with readr.

  • type_convert() now accepts only NULL or a cols specification for col_types (#369).

  • write_delim() and write_csv() now invisibly return the input data frame (as documented, #363).

  • Doubles are parsed with boost::spirit::qi::long_double to work around a bug in the spirit library when parsing large numbers (#412).

  • Fix bug when detecting column types for single row files without headers (#333).

readr 0.2.2

  • Fix bug when checking empty values for missingness (caused valgrind issue and random crashes).

readr 0.2.1

  • Fixes so that readr works on Solaris.

readr 0.2.0


readr now has a strategy for dealing with settings that vary from place to place: locales. The default locale is still US-centric (because R itself is), but you can now easily override the default timezone, decimal separator, grouping mark, day & month names, date format, and encoding. This has led to a number of changes:

  • read_csv(), read_tsv(), read_fwf(), read_table(), read_lines(), read_file(), type_convert(), parse_vector() all gain a locale argument.

  • locale() controls all the input settings that vary from place-to-place.

  • col_euro_double() and parse_euro_double() have been deprecated. Use the decimal_mark parameter to locale() instead.

  • The default encoding is now UTF-8. To load files that are not in UTF-8, set the encoding parameter of the locale() (#40). New guess_encoding() function uses stringi to help you figure out the encoding of a file.

  • parse_datetime() and parse_date() with %B and %b use the month names (full and abbreviated) defined in the locale (#242). They also inherit the tz from the locale, rather than using an explicit tz parameter.

See vignette("locales") for more details.

File parsing improvements

  • cols() lets you pick the default column type for columns not otherwise explicitly named (#148). You can refer to parsers either with their full name (e.g. col_character()) or their one letter abbreviation (e.g. c).

  • cols_only() allows you to load only named columns. You can also choose to override the default column type in cols() (#72).

  • read_fwf() is now much more careful with new lines. If a line is too short, you'll get a warning instead of a silent mistake (#166, #254). Additionally, the last column can now be ragged: the width of the last field is silently extended until it hits the next line break (#146). This appears to be a common feature of "fixed" width files in the wild.

  • In read_csv(), read_tsv(), read_delim() etc:

    • comment argument allows you to ignore comments (#68).

    • trim_ws argument controls whether leading and trailing whitespace is removed. It defaults to TRUE (#137).

    • Specifying the wrong number of column names, or having rows with an unexpected number of columns, generates a warning, rather than an error (#189).

    • Multiple NA values can be specified by passing a character vector to na (#125). The default has been changed to na = c("", "NA"). Specifying na = "" now works as expected with character columns (#114).

Column parsing improvements

Readr gains vignette("column-types") which describes how the defaults work and how to override them (#122).

  • parse_character() gains better support for embedded nulls: any characters after the first null are dropped with a warning (#202).

  • parse_integer() and parse_double() no longer silently ignore trailing letters after the number (#221).

  • New parse_time() and col_time() allows you to parse times (hours, minutes, seconds) into number of seconds since midnight. If the format is omitted, it uses a flexible parser that looks for hours, then optional colon, then minutes, then optional colon, then optional seconds, then optional am/pm (#249).

  • parse_date() and parse_datetime():

    • parse_datetime() no longer incorrectly reads partial dates (e.g. 19, 1900, 1900-01) (#136). These triggered common false positives and after re-reading the ISO8601 spec, I believe they actually refer to periods of time, and should not be translated in to a specific instant (#228).

    • Compound formats "%D", "%F", "%R", "%X", "%T", "%x" are now parsed correctly, instead of using the ISO8601 parser (#178, @kmillar).

    • "%." now requires a non-digit. New "%+" skips one or more non-digits.

    • You can now use %p to refer to AM/PM (and am/pm) (#126).

    • %b and %B formats (month and abbreviated month name) ignore case when matching (#219).

    • Local (non-UTC) times with and without daylight savings are now parsed correctly (#120, @andres-s).

  • parse_number() is a somewhat flexible numeric parser designed to read currencies and percentages. It only reads the first number from a string (using the grouping mark defined by the locale).

  • parse_numeric() has been deprecated because the name is confusing - it's a flexible number parser, not a parser of "numerics", as R collectively calls doubles and integers. Use parse_number() instead.
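For example, parse_number() on illustrative currency and percentage strings:

```r
library(readr)

# parse_number() extracts the first number, honouring the locale's grouping mark
parse_number("$1,234.56")
#> [1] 1234.56
parse_number("90%")
#> [1] 90
```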

As well as improvements to the parser, I've also made a number of tweaks to the heuristics that readr uses to guess column types:

  • New parse_guess() and col_guess() to explicitly guess column type.

  • Bumped up row inspection for column type guessing from 100 to 1000.

  • The heuristics for guessing col_integer() and col_double() are stricter. Numbers with leading zeros now default to being parsed as text, rather than as integers/doubles (#266).

  • A column is guessed as col_number() only if it parses as a regular number when you ignore the grouping marks.

Minor improvements and bug fixes

  • Now use R's platform independent iconv wrapper, thanks to BDR (#149).

  • Pathological zero row inputs (due to empty input, skip or n_max) now return zero row data frames (#119).

  • When guessing field types, and there's no information to go on, use character instead of logical (#124, #128).

  • Concise col_types specification now understands ? (guess) and - (skip) (#188).

  • count_fields() starts counting from 1, not 0 (#200).

  • format_csv() and format_delim() make it easy to render a csv or delimited file into a string.

  • fwf_empty() now works correctly when col_names supplied (#186, #222).

  • parse_*() gains a na argument that allows you to specify which values should be converted to missing.

  • problems() now reports column names rather than column numbers (#143). Whenever there is a problem, the first five problems are printed out in a warning message, so you can more easily see what's wrong.

  • read_*() throws a warning instead of an error if col_types specifies a non-existent column (#145, @alyst).

  • read_*() can read from a remote gz compressed file (#163).

  • read_delim() defaults to escape_backslash = FALSE and escape_double = TRUE for consistency. n_max also affects the number of rows read to guess the column types (#224).

  • read_lines() gains a progress bar. It now also correctly checks for interrupts every 500,000 lines so you can interrupt long running jobs. It also correctly estimates the number of lines in the file, considerably speeding up the reading of large files (60s -> 15s for a 1.5 Gb file).

  • read_lines_raw() allows you to read a file into a list of raw vectors, one element for each line.

  • type_convert() gains NA and trim_ws arguments, and removes missing values before determining column types.

  • write_csv(), write_delim(), and write_rds() all invisibly return their input so you can use them in a pipe (#290).

  • write_delim() generalises write_csv() to write any delimited format (#135). write_tsv() is a helpful wrapper for tab separated files.

    • Quotes are only used when they're needed (#116): when the string contains a quote, the delimiter, a new line or NA.

    • Double vectors are saved using the same amount of precision as as.character() (#117).

    • New na argument that specifies how missing values should be written (#187).

    • POSIXt vectors are saved in an ISO 8601-compatible format (#134).

    • No longer fails silently if it can't open the target for writing (#193, #172).

  • write_rds() and read_rds() wrap around readRDS() and saveRDS(), defaulting to no compression (#140, @nicolasCoutin).
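Several of the additions above compose nicely. A minimal sketch (assuming a readr version that includes these changes; column names and values are made up for illustration):

```r
library(readr)

# Concise col_types spec: one letter per column; "?" = guess, "-" = skip
df <- read_csv("x,y,z\n1,a,3\n4,b,6\n", col_types = "?-d")
# -> columns x (guessed) and z (double); y is skipped

# parse_*() na argument: values listed in `na` become missing
parse_number(c("1", ".", "3"), na = ".")  # -> 1 NA 3

# format_csv() renders a data frame into a CSV string
format_csv(data.frame(a = 1:2, b = c("x", "y")))
```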


2.0.2 by Jim Hester, a month ago.


Authors: Hadley Wickham [aut], Jim Hester [aut, cre], Romain Francois [ctb], RStudio [cph, fnd], [cph] (mio library), Jukka Jylänki [ctb, cph] (grisu3 implementation), Mikkel Jørgensen [ctb, cph] (grisu3 implementation)


MIT + file LICENSE license

Imports cli, clipr, crayon, hms, methods, rlang, R6, tibble, vroom, utils, lifecycle

Suggests covr, curl, dplyr, knitr, rmarkdown, spelling, stringi, testthat, tzdb, waldo, withr, xml2

Linking to cpp11, tzdb

System requirements: C++11

Imported by ARPALData, AirSensor, ArchaeoPhases, BALCONY, BED, BENMMI, BIS, BMRSr, BOJ, BasketballAnalyzeR, CB2, CCWeights, CEDARS, CKMRpop, COVIDIBGE, DDIwR, DMwR2, DSSAT, DiagrammeR, DramaAnalysis, ECOTOXr, EventStudy, FedData, GCalignR, GeodesiCL, GetBCBData, GetDFPData, GetDFPData2, GetFREData, GetLattesData, GetQuandlData, GillespieSSA2, HaDeX, IBMPopSim, IMD, JBrowseR, LilRhino, MIMSunit, MPTmultiverse, MazamaLocationUtils, MetaIntegrator, PAMpal, PL94171, PNADcIBGE, PNSIBGE, POFIBGE, PWFSLSmoke, RALSA, REDCapR, REddyProc, RTD, RTL, RchivalTag, RcppEigenAD, ReDaMoR, Rilostat, SEERaBomb, SanFranBeachWater, ShinyTester, Statsomat, SurvHiDim, SurviMChd, SwimmeR, TKCat, TUFLOWR, TailClassifier, UCSCXenaTools, VancouvR, WebGestaltR, WikidataQueryServiceR, WikidataR, abbyyR, abcrf, accucor, acroname, actel, actogrammr, adepro, admixr, adventr, aire.zmvm, airr, alakazam, aliases2entrez, alphavantager, amanida, anyflights, aquodom, archetyper, asciiSetupReader, audrex, basedosdados, bcdata, beastier, benthos, bibliometrix, biomartr, bioseq, bjscrapeR, blaise, breathtestcore, buildr, campfin, cansim, card, cdcfluview, cder, cgmanalysis, chronicle, chronochrt, climaemet, clustDRM, cms, codified, compstatr, covid19france, cpsvote, crimedata, crimeutils, crosswalkr, crsra, cspp, csvwr, cuperdec, czechrates, czso, damr, daqapo, dataRetrieval, dataonderivatives, datapackage.r, datapasta, dataspice, dataverse, datazoom.amazonia, dbparser, dccvalidator, ddpcr, deckgl, discoverableresearch, distribglm, downloadthis, dragon, duawranglr, dynwrap, echor, ecochange, educationdata, electionsBR, emba, emuR, encryptedRmd, encryptr, entrymodels, eph, epidata, esmisc, estatapi, etl, eurlex, eurostat, evaluator, exampletestr, exoplanets, eyelinker, farff, farrell, fastqcr, fec16, fgdr, fgeo.tool, fipe, fitzRoy, flowr, framecleaner, frenchdata, gbfs, gde, genio, genogeographer, geobr, geodimension, geojsonio, geomander, geoviz, getlandsat, geysertimes, ggPMX, ggbuildr, ggcleveland, 
ggplotgui, giftwrap, googlenlp, googlesheets, gtfs2gps, gutenbergr, hakaiApi, haven, hdd, healthfinance, hlaR, hockeystick, homologene, htsr, hydroscoper, iNZightTools, iheiddown, ijtiff, immunarch, ipumsr, isaeditor, isoreader, itraxR, jenga, jsmodule, kindisperse, lambdaTS, lazytrade, lfmm, libr, lineartestr, litteR, macleish, manifestoR, matahari, mdbr, metaboData, metacoder, metajam, metsyn, miRetrieve, microsamplingDesign, mipplot, mixl, moexer, molnet, mosaic, mudata2, myTAI, nasapower, ncappc, nesRdata, netmhc2pan, neuroim, nflseedR, njtr1, nlrx, nomisr, nser, obfuscatoR, oncrawlR, ondisc, onsr, openadds, openair, openintro, owidR, params, parlitools, parseRPDR, parsermd, pdfetch, pedquant, pestr, pguIMP, photobiologyInOut, piwikproR, povcalnetR, prism, projects, proteus, protti, prozor, pubtatordb, puzzle, qsub, qualtRics, quantable, quickerstats, rPraat, radous, radsafer, rai, rapbase, ratematrix, rbedrock, rdflib, rdfp, readODS, readit, readroper, readtextgrid, redcapAPI, redist, repana, reproducer, rerddapXtracto, rfishbase, rgeopat2, rgho, rglobi, rgnparser, ricu, ringostat, rmapshaper, rmsfuns, romic, ropercenter, rrefine, rrtable, rticulate, rubias, rwebstat, ryandexdirect, salesforcer, saqgetr, sasMap, scopr, secuTrialR, sense, sequenza, sergeant, shiny.pwa, shinyobjects, simpleMLP, simplevis, snap, speakr, spiR, spotifyr, staRdom, starschemar, starter, starvz, stationaRy, statnipokladna, stlcsb, stminsights, studentlife, suddengains, swfscAirDAS, swfscDAS, swirlify, swmmr, taxadb, taxizedb, textdata, thinkr, threesixtygiving, tidyBdE, tidycensus, tidyhydat, tidypmc, tidyquant, tidystats, tidytransit, tidytreatment, tidytuesdayR, tidyverse, timetk, tongfen, tor, traits, truthiness, ukbtools, upstartr, usdarnass, utr.annotation, valr, visdat, visxhclust, vpc, wbstats, wcde, webTRISr, webreadr, wiesbaden, worldmet, xgxr, xmlconvert, xplain, xpose, xpose4, zoltr.

Depended on by MOQA, MicroDatosEs, efreadr, sim1000G.

Suggested by AzureStor, BayesMallows, DOPE, EstimationTools, INQC, ManagedCloudProvider, MazamaSpatialUtils, NHSRdatasets, RSocrata, ReviewR, Rmagic, SIPDIBGE, SPOT, SimplyAgree, TeXCheckR, Tplyr, VarBundle, WASP, altair, ambiorix, argoFloats, arkdb, auk, beautier, bigrquery, brolgar, canvasXpress, convergEU, coronavirus, covid19italy, cytominer, disk.frame, econullnetr, ecotox, eechidna, enc, epicontacts, epubr, europop, fastR2, faux, fgeo.analyze, finalfit, fivethirtyeight, forcats, fuzzyjoin, geckor, gemma2, geofi, ggrepel, googleCloudStorageR, googlesheets4, gravitas, gsheet, guardianapi, here, hipread, httr, hutils, kibior, leaflet.extras, mailmerge, mcmcr, meltr, metaconfoundr, moderndive, mopac, noctua, nsrr, patentr, pccc, pdxTrees, pharmaRTF, piggyback, plumber, podr, pointblank, processR, prophet, psfmi, rakeR, raw, rdwd, read.gt3x, reporter, resourcer, rio, rmonad, robservable, rprime, rrr, rtweet, rvest, sift, spatialEco, spup, ssdtools, stylo, sugrrants, sweep, tabshiftr, tabulog, tidyr, tidytext, torchdatasets, trackdf, trajr, unpivotr, usefun, vegawidget, wiad, xplorerr, zipcodeR.
