Wrapper to the 'spaCy' 'NLP' Library

An R wrapper to the 'Python' 'spaCy' 'NLP' library, from <https://spacy.io>.



An R wrapper to the spaCy “industrial strength natural language processing” Python library from https://spacy.io.

Installing the package

  1. Install miniconda

    The easiest way to install spaCy and spacyr is through the spacyr function spacy_install(). By default, this function creates a new conda environment called spacy_condaenv, as long as some version of conda is installed on the user’s system. You can install miniconda from https://conda.io/miniconda.html. (Choose the 64-bit version, or alternatively, run to the computer store now and purchase a 64-bit system to replace your ancient 32-bit platform.)

    If you already have any version of conda, you can skip this step. You can check whether conda is installed by entering conda --version in the Terminal.

    For a Windows-based system, Visual C++ Build Tools or Visual Studio Express must be installed so that spaCy can be compiled for pip installation. The version of Visual Studio required for the installation of spaCy is documented on the spaCy website; the default Python version used in our installation method is 3.6.x.

  2. Install the spacyr R package:

    • From GitHub:

      To install the latest package from source, you can simply run the following:

      devtools::install_github("quanteda/spacyr", build_vignettes = FALSE)

    • From CRAN:

      install.packages("spacyr")
  3. Install spaCy in a conda environment

    • For Windows, you need to run R as an administrator to make installation work properly. To do so, right click the RStudio icon (or R desktop icon) and select “Run as administrator” when launching R.

    • To install spaCy, you can simply run

    library("spacyr")
    spacy_install()

    This will create a stand-alone conda environment, including a Python executable separate from your system Python (or Anaconda Python), install the latest version of spaCy (and its required packages), and download the English language model. After installation, you can initialize spaCy in R with

    spacy_initialize()

    This will return the following message if spaCy was installed with this method.

    ## Found 'spacy_condaenv'. spacyr will use this environment
    ## successfully initialized (spaCy Version: 2.0.18, language model: en)
    ## (python options: type = "condaenv", value = "spacy_condaenv")
  4. (optional) Add more language models

    For spaCy installed by spacy_install(), spacyr provides a useful helper function to install additional language models. For instance, to install the German language model:

    spacy_download_langmodel("de")

    (Again, Windows users have to run this command as an administrator. Otherwise, the symlink (alias) to the language model will fail.)
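
With the installation steps above complete, parsing text needs only an initialized spaCy session and a call to spacy_parse(). A minimal sketch (the example text below is made up for illustration):

library("spacyr")
spacy_initialize(model = "en")

txt <- c(doc1 = "spacyr provides a convenient R wrapper around the spaCy library.",
         doc2 = "It returns the parse as a token-level data.frame.")

# tokenize, tag, lemmatize, and recognize entities in one pass
parsedtxt <- spacy_parse(txt)
head(parsedtxt)

# release the background spaCy (Python) process when finished
spacy_finalize()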

Permanently setting the default Python

If you use the same settings for spaCy every time (e.g. the conda environment or Python path) and want to reduce the initialization time, you can make the setting permanent by specifying it in an R startup file (for Mac/Linux, the file is ~/.Rprofile), which is read every time a new R session is launched. You can set the option permanently when you call spacy_initialize():

spacy_initialize(save_profile = TRUE)

Once this is appropriately set up, the message from spacy_initialize() changes to something like:

## spacy python option is already set, spacyr will use:
##  condaenv = "spacy_condaenv"
## successfully initialized (spaCy Version: 2.0.18, language model: en)
## (python options: type = "condaenv", value = "spacy_condaenv")

To ignore the permanently set options, you can initialize spaCy with refresh_settings = TRUE, as shown below.
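
For example, the saved options can be bypassed with:

spacy_initialize(refresh_settings = TRUE)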

Comments and feedback

We welcome your comments and feedback. Please file issues on the issues page, and/or send us comments at [email protected] and [email protected].

News

v1.0

  • Added new commands spacy_tokenize(), spacy_extract_entity(), spacy_extract_nounphrases(), nounphrase_extract(), and nounphrase_consolidate() for direct extraction of tokens, entities, and noun phrases, and for extraction of noun phrases from spacyr parsed texts (see the sketch after this list).
  • Added a new argument additional_attributes to spacy_parse() allowing the return of any tokens-level attribute available from https://spacy.io/api/token#attributes.
  • Added a vignette and significantly improved the documentation site https://spacyr.quanteda.io.
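
A brief sketch of these additions (the sample sentence and requested attributes are only illustrative; the attribute names come from the spaCy token API):

library("spacyr")
spacy_initialize()

txt <- "Pierre Vinken, 61 years old, will join the board on Nov. 29."

spacy_tokenize(txt)              # tokens only, returned as a list by default
spacy_extract_entity(txt)        # named entities only
spacy_extract_nounphrases(txt)   # noun phrases only

# return extra token-level attributes listed at https://spacy.io/api/token#attributes
spacy_parse(txt, additional_attributes = c("like_num", "is_punct"))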

v0.9.9

  • Added spacy_install(), spacy_install_virtualenv(), and spacy_upgrade() to make installing or upgrading spaCy (and Python itself) easy and automatic.
  • Added support for multithreading in spacy_parse() via the multithreading argument. This uses the "pipes" functionality in spaCy for improved performance.

v0.9.6

  • Created an option to permanently set the default Python through .Rprofile
  • Performance enhancement through spacy_initialize(entity = FALSE) (#91)
  • Now looks for Python settings from .bash_profile.

v0.9.3

  • Updated for the newer spaCy 2.0 release and new language models.
  • Added ask = FALSE to spacy_initialize(), to find spaCy installations automatically.

v0.9.2

  • Fixed a bug caused by zero-token "sentences" in spacy_parse(), by changing 1:length() to seq_along().
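
As a reminder of why seq_along() is the safer idiom here (this snippet is generic base R, not spacyr code):

x <- character(0)   # a zero-token "sentence"
1:length(x)         # 1 0        -- iterates twice over an empty vector
seq_along(x)        # integer(0) -- correctly iterates zero times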

v0.9.1

  • Fixed a bug causing non-ASCII characters to be dropped when using Python 2.7.x (#58).
  • Fixed issue with automatic detection of python3 when both python and python3 exist, but only python3 has spaCy installed (#62).

v0.9.0

  • Initial CRAN release.


spacyr 1.0, by Kenneth Benoit


https://spacyr.quanteda.io


Report a bug at https://github.com/quanteda/spacyr/issues


Browse source code at https://github.com/cran/spacyr


Authors: Kenneth Benoit [cre, aut, cph], Akitaka Matsuo [aut], European Research Council [fnd] (ERC-2011-StG 283794-QUANTESS)


Documentation: PDF manual


GPL-3 license


Imports data.table, reticulate

Depends on methods

Suggests knitr, quanteda, R.rsp, rmarkdown, spelling, testthat


Imported by politeness, quanteda.

Enhanced by NLP.

