Discover Probable Duplicates in Plant Genetic Resources Collections

Provides functions to aid the identification of probable/possible duplicates in Plant Genetic Resources (PGR) collections using 'passport databases' comprising information records of each constituent sample. These include methods for cleaning the data, creating a searchable Key Word in Context (KWIC) index of keywords associated with sample records, and identifying nearly identical records by fuzzy, phonetic and semantic matching of keywords.

PGRdup: Discover Probable Duplicates in Plant Genetic Resources Collections
---------------------------------------------------------------------------

Copyright (C) 2014, ICAR-NBPGR; License: GPL-2 | GPL-3

The R package PGRdup was developed as a tool to aid genebank managers in the identification of probable duplicate accessions from plant genetic resources (PGR) passport databases.

This package primarily implements a workflow designed to fetch groups or sets of germplasm accessions with similar passport data, particularly in fields associated with accession names, within or across PGR passport databases.

The functions in this package are primarily built using the data.table, igraph, stringdist and stringi R packages.

The package can be installed from CRAN as follows:

install.packages('PGRdup', dependencies=TRUE)

The series of steps involved in the workflow, along with the associated functions, are illustrated below:

Function(s):

  • DataClean
  • MergeKW
  • MergePrefix
  • MergeSuffix

Use these functions for appropriate data standardisation of the relevant fields in the passport databases, harmonizing punctuation, leading zeros, prefixes, suffixes etc. associated with accession names.
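A minimal sketch of this cleaning step, using the GN1000 example dataset bundled with the package; the keyword pairs, prefixes and suffixes shown here are illustrative, and argument names follow the package documentation:

```r
library(PGRdup)
data(GN1000)
GN <- GN1000

# Standardise punctuation, spacing, leading zeros etc. in an accession-name field
GN$CollNo <- DataClean(GN$CollNo)

# Merge keyword pairs that may appear fused or separated (illustrative pairs)
GN$CollNo <- MergeKW(GN$CollNo, list(c("MOTA", "COMPANY"), c("GUJARAT", "DWARF")),
                     delim = c("space", "dash"))

# Merge prefixes/suffixes commonly attached to accession numbers
GN$CollNo <- MergePrefix(GN$CollNo, c("IC", "EC"), delim = c("space", "dash"))
GN$CollNo <- MergeSuffix(GN$CollNo, c("A", "B"), delim = c("space", "dash"))
```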

Function(s):

  • KWIC

Use this function to extract the information in the relevant fields as keywords or text strings in the form of a searchable Keyword in Context (KWIC) index.
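The KWIC index can then be built from the relevant fields; in this sketch (following the GN1000 example in the package vignette), the first element of fields is assumed to serve as the primary key:

```r
library(PGRdup)
data(GN1000)

# Build a searchable KWIC index from the ID fields;
# "NationalID" is the primary key, the rest hold accession names
GNKWIC <- KWIC(GN1000, fields = c("NationalID", "CollNo", "DonorID",
                                  "OtherID1", "OtherID2"))
print(GNKWIC)
```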

Function(s):

  • ProbDup

Execute fuzzy, phonetic and semantic matching of keywords to identify probable duplicate sets, either within a single KWIC index or between two indexes, using this function. For fuzzy matching, the Levenshtein edit distance is used; for phonetic matching, the Double Metaphone algorithm is used. For semantic matching, synonym sets or ‘synsets’ of accession names can be supplied as input, and the text strings in such sets are treated as identical for matching. Various options to tweak the matching strategies are also available in this function.
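A sketch of the matching step on the bundled GN1000 data; the exception list (excep, keywords to be ignored) and the synsets (syn) below are illustrative values in the style of the package vignette:

```r
library(PGRdup)
data(GN1000)
GNKWIC <- KWIC(GN1000, fields = c("NationalID", "CollNo", "DonorID",
                                  "OtherID1", "OtherID2"))

exep <- c("A", "B", "BIG", "BOLD", "BUNCH", "NO")       # keywords to ignore
syn  <- list(c("CHANDRA", "AH114"), c("TG1", "VIKRAM")) # synonym sets

# method "a" searches within a single KWIC index
GNdup <- ProbDup(kwic1 = GNKWIC, method = "a", excep = exep,
                 fuzzy = TRUE, phonetic = TRUE, semantic = TRUE, syn = syn)
print(GNdup)
```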

Function(s):

  • DisProbDup
  • ReviewProbDup
  • ReconstructProbDup

Inspect, revise and improve the retrieved sets using these functions. If considerable intersections exist between the initially identified sets, DisProbDup may be used to obtain disjoint sets. The identified sets may be subjected to clerical review after transforming them with ReviewProbDup into an appropriate spreadsheet format containing the raw data from the original database(s), and subsequently converted back using ReconstructProbDup.
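The review cycle might be sketched as follows, assuming GNdup is a previously retrieved ProbDup object and GN1000 the source database; the CSV file name is a placeholder:

```r
# Merge overlapping sets into disjoint ones
disGNdup <- DisProbDup(GNdup, combine = NULL)

# Export the sets along with the raw passport data for clerical review
RevGNdup <- ReviewProbDup(pdup = disGNdup, db1 = GN1000,
                          max.count = 30, insert.blanks = TRUE)
write.csv(RevGNdup, file = "Review_PGRdup.csv")

# ...after manual review/editing of the spreadsheet, read it back
# and reconstruct the corrected sets
RevGNdup <- read.csv("Review_PGRdup.csv")
GNdup2 <- ReconstructProbDup(RevGNdup)
```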

Function(s):

  • ValidatePrimKey
  • DoubleMetaphone
  • ParseProbDup
  • AddProbDup
  • SplitProbDup
  • MergeProbDup
  • ViewProbDup
  • KWCounts
  • read.genesys

Use these helper functions as needed. ValidatePrimKey can be used to check whether a column chosen as the primary key/ID in a data.frame conforms to the constraints of absence of duplicates and NULL values.
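A quick check on the bundled example data:

```r
library(PGRdup)
data(GN1000)

# Check that "NationalID" is free of duplicates and NULL/missing values
ValidatePrimKey(x = GN1000, prim.key = "NationalID")
```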

DoubleMetaphone is an implementation of the Double Metaphone phonetic algorithm in R and is used for phonetic matching.
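For example, similar-sounding names receive the same phonetic encoding:

```r
library(PGRdup)

# Returns a list with 'primary' and 'alternate' encodings for each string
DoubleMetaphone(c("Jacob", "Jakob", "Smith", "Schmidt"))
```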

ParseProbDup and AddProbDup work with objects of class ProbDup. The former can be used to parse the probable duplicate sets in a ProbDup object into a data.frame, while the latter can be used to add the set information as data fields to the passport databases. SplitProbDup can be used to split an object of class ProbDup according to set counts, and MergeProbDup can be used to merge together two objects of class ProbDup. ViewProbDup can be used to plot summary visualizations of the probable duplicate sets retrieved in an object of class ProbDup.
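A sketch of these utilities, assuming GNdup is a previously retrieved ProbDup object and GN1000 the source database; argument values are illustrative:

```r
# Flatten the probable duplicate sets to a data.frame
GNdupParsed <- ParseProbDup(GNdup, max.count = 30, insert.blanks = TRUE)

# Append the set information as fields to the original passport database
GNwithdup <- AddProbDup(pdup = GNdup, db = GN1000, addto = "I")

# Split by set counts, then merge two of the resulting objects back together
GNdupSplit <- SplitProbDup(GNdup, splitat = c(10, 10, 10))
GNdupMerged <- MergeProbDup(GNdupSplit[[1]], GNdupSplit[[3]])
```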

KWCounts can be used to compute keyword counts from PGR passport database fields (columns), which can give a rough indication of the completeness of the data.
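For instance, on the bundled example data (the exception list here is illustrative):

```r
library(PGRdup)
data(GN1000)

exep <- c("A", "B", "BIG", "BOLD", "BUNCH", "NO")  # keywords to ignore
GNKWCounts <- KWCounts(GN1000,
                       fields = c("NationalID", "CollNo", "DonorID",
                                  "OtherID1", "OtherID2"),
                       excep = exep)
head(GNKWCounts)
```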

read.genesys can be used to import PGR data in a 'Darwin Core - Germplasm' zip archive downloaded from the Genesys database into the R environment.
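A minimal sketch; the archive path is a placeholder for a file downloaded from Genesys, and argument names follow the package documentation:

```r
library(PGRdup)

# Convert the downloaded Darwin Core - Germplasm archive to a flat data.frame
PGRgenesys <- read.genesys("genesys-dwca.zip",
                           scrub.names.space = TRUE, readme = TRUE)
```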

  • Use data.table::fread to rapidly read large files instead of read.csv or read.table from base R.
  • In case the PGR passport data is in any DBMS, use the appropriate R-database interface packages to get the required table as a data.frame in R.
  • The ProbDup function can be memory hungry with large passport databases. In such cases, ensure that the system has sufficient memory for smooth functioning (See ?ProbDup).
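The fread tip above might look like this; the file name is a placeholder:

```r
library(data.table)

# fread autodetects separators and is considerably faster than read.csv
# on large files; data.table = FALSE returns a plain data.frame
passport <- fread("passport_db.csv", data.table = FALSE)
```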
#> To cite the R package 'PGRdup' in publications use:
#>   Aravind, J., J. Radhamani, Kalyani Srinivasan, B. Ananda
#>   Subhash, and R. K. Tyagi ().  PGRdup: Discover Probable
#>   Duplicates in Plant Genetic Resources Collections. R package
#>   version 0.2.2.
#> A BibTeX entry for LaTeX users is
#>   @Manual{,
#>     title = {PGRdup: Discover Probable Duplicates in Plant Genetic Resources Collections},
#>     author = {{Aravind J} and {Radhamani J} and {Kalyani Srinivasan} and {Ananda Subhash B} and Rishi Kumar Tyagi},
#>     note = {R package version 0.2.2},
#>   }



  • read.genesys - Convert 'Darwin Core - Germplasm' zip archive to a flat file.
  • ViewProbDup - Visualize the probable duplicate sets retrieved in a ProbDup object.


  • ReconstructProbDup - Fixed bug regarding failure to retrieve db2 fields when method "c" is used.
  • ProbDup - Updated code after bugfix in stringdist package (stringdistmatrix: output was transposed when length(a)==1).


  • Changed the contact email addresses of four authors (including maintainer) in DESCRIPTION.
  • Updated the vignette with the details of new functions.


  • SplitProbDup - Splits an object of class ProbDup.
  • MergeProbDup - Merges two objects of class ProbDup.
  • KWCounts - Generates keyword counts from database fields.
  • print.KWIC - Prints summary of an object of class KWIC to console.
  • print.ProbDup - Prints summary of an object of class ProbDup to console.


  • ProbDup - Modified the phonetic matching for better handling of strings with digits.
  • ProbDup - Fixed error thrown when no duplicate sets are retrieved.
  • ProbDup - Fixed out-of-memory error occurring when a large number of exceptions are present.
  • ProbDup - Further converted code to use data.table package for greater efficiency and speed.
  • ProbDup - Fixed bug regarding inconsistent matching when method "b" is used.
  • ProbDup - Reduced the dimensions of the string matching matrices produced for greater efficiency and speed.
  • MergeKW - Modified for better handling of regex special characters.
  • ReconstructProbDup - Modified to ignore sets with counts less than 2 after reconstruction.


  • Edited formatting.
  • Added diagram, microbenchmark and wordcloud (required for vignette) to the Suggests field in DESCRIPTION.
  • Added imports to functions from methods, stats and utils as R CMD check --as-cran now checks code usage (via codetools) with only the base package attached.
  • Dropped the abbreviation PGR in the title in DESCRIPTION as it is mentioned in the description text.


  • Added vignette "An Introduction to PGRdup package".

  • First release


Authors: J. Aravind [aut, cre] (0000-0002-4791-442X), J. Radhamani [aut], Kalyani Srinivasan [aut], B. Ananda Subhash [aut], R. K. Tyagi [aut], ICAR-NBPGR [cph], Maurice Aubrey [ctb] (Double Metaphone), Kevin Atkinson [ctb] (Double Metaphone), Lawrence Philips [ctb] (Double Metaphone)


Imports data.table, igraph, stringdist, stringi, ggplot2, grid, gridExtra, methods, utils, stats

Suggests diagram, wordcloud, microbenchmark, XML, knitr, rmarkdown
