Natural Language Understanding is an active area of research and development, so there are many different tools and technologies catering to different use cases. The comparison below covers a few popular libraries (spaCy, NLTK, AllenNLP, StanfordNLP and TensorFlow) to help you get a feel for how things fit together. Among the dimensions compared are the implementation language (Python for spaCy and NLTK, Java / Python for StanfordNLP), support for neural network models, and integrated word vectors.
The comparison is framed around common questions; a short example for the first one follows the list.

- “I’m a beginner and just getting started with NLP.”
- “I want to build an end-to-end production application.”
- “I want to try out different neural network architectures for NLP.”
- “I want to try the latest models with state-of-the-art accuracy.”
- “I want to train models from my own data.”
- “I want my application to be efficient on CPU.”
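For instance, if you’re just getting started, a minimal spaCy pipeline gives you tokenization, part-of-speech tags, dependency parses and named entities in a few lines. This sketch assumes the small English model `en_core_web_sm` has been installed:

```python
import spacy

# Assumes the small English model is installed, e.g. via:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Apple is looking at buying U.K. startup for $1 billion.")

# Tokens come annotated with part-of-speech tags and dependency arcs.
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Named entities predicted by the statistical model.
for ent in doc.ents:
    print(ent.text, ent.label_)
```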
Two peer-reviewed papers in 2015 confirmed that spaCy offers the fastest syntactic parser in the world and that its accuracy is within 1% of the best available. The few systems that are more accurate are 20× slower or more.
| System | Year | Language | Accuracy | Speed (wps) |
| --- | --- | --- | --- | --- |
| spaCy v2.x | 2017 | Python / Cython | 92.6 | n/a |
| spaCy v1.x | 2015 | Python / Cython | 91.8 | 13,963 |
In this section, we compare spaCy’s algorithms to recently published systems, using some of the most popular benchmarks. These benchmarks are designed to help isolate the contributions of specific algorithmic decisions, so they promote slightly “idealized” conditions. Specifically, the text comes pre-processed with “gold standard” token and sentence boundaries. The data sets also tend to be fairly small, to help researchers iterate quickly. These conditions mean the models trained on these data sets are not always useful for practical purposes.
This is the “classic” evaluation, so it’s the number parsing researchers are most easily able to put in context. However, it’s quite far removed from actual usage: it uses sentences with gold-standard segmentation and tokenization, from a pretty specific type of text (articles from a single newspaper, the Wall Street Journal, 1984–1989).
| System | Year | Type | Accuracy |
| --- | --- | --- | --- |
| Dozat and Manning | 2017 | neural | 95.75 |
| Andor et al. | 2016 | neural | 94.44 |
| SyntaxNet Parsey McParseface | 2016 | neural | 94.15 |
| Weiss et al. | 2015 | neural | 93.91 |
| Zhang and McDonald | 2014 | linear | 93.32 |
| Martins et al. | 2013 | linear | 93.10 |
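Dependency parsers are conventionally scored by attachment accuracy: the percentage of tokens whose predicted syntactic head matches the gold standard (unlabeled attachment score, UAS), optionally also requiring the dependency label to match (labeled attachment score, LAS). A minimal illustrative sketch of the metric, not spaCy’s internal scorer:

```python
def attachment_scores(gold_heads, gold_labels, pred_heads, pred_labels):
    """Compute unlabeled and labeled attachment scores (UAS / LAS).

    Each argument is a list with one entry per token: the index of the
    token's syntactic head, and its dependency label. Illustrative only.
    """
    assert len(gold_heads) == len(pred_heads)
    total = len(gold_heads)
    uas_hits = sum(g == p for g, p in zip(gold_heads, pred_heads))
    las_hits = sum(
        g == p and gl == pl
        for g, p, gl, pl in zip(gold_heads, pred_heads, gold_labels, pred_labels)
    )
    return uas_hits / total, las_hits / total

# Toy example: a 5-token sentence where one head is predicted wrong.
uas, las = attachment_scores(
    gold_heads=[1, 1, 3, 1, 3],
    gold_labels=["nsubj", "ROOT", "det", "dobj", "pobj"],
    pred_heads=[1, 1, 3, 1, 1],
    pred_labels=["nsubj", "ROOT", "det", "dobj", "pobj"],
)
print(f"UAS={uas:.2%} LAS={las:.2%}")  # UAS=80.00% LAS=80.00%
```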
This is the named entity recognition evaluation (OntoNotes 5) we use to tune spaCy’s parameters and decide which algorithms are better than others. It’s reasonably close to actual usage, because it requires the predictions to be produced from raw text, without any pre-processing.
| System | Year | Type | Accuracy |
| --- | --- | --- | --- |
| Strubell et al. | 2017 | neural | 86.81 |
| Chiu and Nichols | 2016 | neural | 86.19 |
| Durrett and Klein | 2014 | neural | 84.04 |
| Ratinov and Roth | 2009 | linear | 83.45 |
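NER accuracy figures like these are conventionally entity-level F-scores: a predicted entity only counts as correct if both its span boundaries and its label exactly match a gold entity. A small illustrative helper, not spaCy’s internal scorer:

```python
def ner_prf(gold_entities, pred_entities):
    """Entity-level precision, recall and F1 over (start, end, label) tuples."""
    gold, pred = set(gold_entities), set(pred_entities)
    tp = len(gold & pred)  # exact span-and-label matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: two gold entities; the model finds one exactly
# and mislabels the other, so both precision and recall are 0.5.
gold = [(0, 2, "ORG"), (5, 7, "GPE")]
pred = [(0, 2, "ORG"), (5, 7, "LOC")]
print(ner_prf(gold, pred))  # (0.5, 0.5, 0.5)
```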
In this section, we provide benchmark accuracies for the pre-trained model pipelines we distribute with spaCy. Evaluations are conducted end-to-end from raw text, with no “gold standard” pre-processing, over text from a mix of genres where possible.
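To see the figures reported for a pipeline you actually have installed, you can inspect the model’s packaged metadata. This is a sketch for v2.x models, whose meta typically includes an "accuracy" section; the exact keys vary by model version:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# v2.x model packages ship their evaluation results in the metadata.
# Treat the key names as an assumption; they vary across model versions.
accuracy = nlp.meta.get("accuracy", {})
for metric in ("uas", "las", "ents_f", "tags_acc"):
    if metric in accuracy:
        print(f"{metric}: {accuracy[metric]:.2f}")
```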
Here we compare the per-document processing time of various spaCy functionalities against other NLP libraries. We show both absolute timings (in ms) and relative performance (normalized to spaCy). Lower is better.
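To reproduce this kind of measurement on your own hardware and documents, a simple timing sketch (the corpus and batch size here are placeholders, substitute your own):

```python
import time
import spacy

nlp = spacy.load("en_core_web_sm")

# Placeholder corpus: replace with your own documents.
texts = ["This is a sentence to process."] * 1000

start = time.perf_counter()
# nlp.pipe streams the texts through the pipeline, batching for efficiency.
docs = list(nlp.pipe(texts, batch_size=50))
elapsed = time.perf_counter() - start

print(f"{elapsed / len(texts) * 1000:.3f} ms per doc")
```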