What’s New in v3.3

New features and how to upgrade

spaCy v3.3 improves the speed of core pipeline components, adds a new trainable lemmatizer, and introduces trained pipelines for Finnish, Korean and Swedish.

Speed improvements

v3.3 includes a slew of speed improvements:

  • Speed up parser and NER by using constant-time head lookups.
  • Support unnormalized softmax probabilities in spacy.Tagger.v2 to speed up inference for tagger, morphologizer, senter and trainable lemmatizer.
  • Speed up parser projectivization functions.
  • Replace Ragged with faster AlignmentArray in Example for training.
  • Improve Matcher speed.
  • Improve serialization speed for empty Doc.spans.

For longer texts, prediction speed for the trained pipelines improves by 15% or more. We benchmarked en_core_web_md (same components as in v3.2) and de_core_news_md (with the new trainable lemmatizer) across a range of text sizes on Linux (Intel Xeon W-2265) and OS X (M1) to compare spaCy v3.2 vs. v3.3:

Intel Xeon W-2265

| Model | Avg. Words/Doc | v3.2 Words/Sec | v3.3 Words/Sec | Diff |
| --- | --- | --- | --- | --- |
| en_core_web_md (=same components) | 100 | 17292 | 17441 | 0.86% |
| | 1000 | 15408 | 16024 | 4.00% |
| | 10000 | 12798 | 15346 | 19.91% |
| de_core_news_md (+v3.3 trainable lemmatizer) | 100 | 20221 | 19321 | -4.45% |
| | 1000 | 17480 | 17345 | -0.77% |
| | 10000 | 14513 | 17036 | 17.38% |

Apple M1

| Model | Avg. Words/Doc | v3.2 Words/Sec | v3.3 Words/Sec | Diff |
| --- | --- | --- | --- | --- |
| en_core_web_md (=same components) | 100 | 18272 | 18408 | 0.74% |
| | 1000 | 18794 | 19248 | 2.42% |
| | 10000 | 15144 | 17513 | 15.64% |
| de_core_news_md (+v3.3 trainable lemmatizer) | 100 | 19227 | 19591 | 1.89% |
| | 1000 | 20047 | 20628 | 2.90% |
| | 10000 | 15921 | 18546 | 16.49% |

Trainable lemmatizer

The new trainable lemmatizer component uses edit trees to transform tokens into lemmas. Try out the trainable lemmatizer with the training quickstart!
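The core idea behind edit trees can be illustrated with their simplest special case, a suffix-replacement rule, in a few lines of plain Python. The helper names below are hypothetical and this is only a conceptual sketch, not the spaCy implementation, which is more general and also handles prefix and interior edits:

```python
# Conceptual sketch: derive a (strip_suffix, append_suffix) rule from a
# (form, lemma) training pair, then reuse that rule on unseen forms.

def suffix_rule(form: str, lemma: str) -> tuple[str, str]:
    """Derive a rule mapping form -> lemma via suffix replacement."""
    # Find the longest common prefix of form and lemma.
    i = 0
    while i < min(len(form), len(lemma)) and form[i] == lemma[i]:
        i += 1
    # Strip the remainder of the form, append the remainder of the lemma.
    return form[i:], lemma[i:]

def apply_rule(form: str, rule: tuple[str, str]) -> str:
    """Apply a (strip, append) rule to a token's form."""
    strip, append = rule
    if strip and not form.endswith(strip):
        return form  # rule doesn't apply; fall back to the form itself
    return form[: len(form) - len(strip)] + append

rule = suffix_rule("walking", "walk")  # -> ("ing", "")
print(apply_rule("talking", rule))     # -> talk
```

The trainable lemmatizer learns which of these induced edit operations to apply for each token in context, rather than relying on lookup tables or hand-written rules.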

displaCy support for overlapping spans and arcs

displaCy now supports overlapping spans with a new span style and multiple arcs with different labels between the same tokens for dep visualizations.

Overlapping spans can be visualized for any spans key in doc.spans:

import spacy
from spacy import displacy
from spacy.tokens import Span

nlp = spacy.blank("en")
text = "Welcome to the Bank of China."
doc = nlp(text)
doc.spans["custom"] = [Span(doc, 3, 6, "ORG"), Span(doc, 5, 6, "GPE")]
displacy.serve(doc, style="span", options={"spans_key": "custom"})

Additional features and improvements

  • Config comparisons with spacy debug diff-config.
  • Span suggester debugging with SpanCategorizer.set_candidates.
  • Big endian support with thinc-bigendian-ops and updates to make floret, murmurhash, Thinc and spaCy endian neutral.
  • Initial support for Lower Sorbian and Upper Sorbian.
  • Language updates for English, French, Italian, Japanese, Korean, Norwegian, Russian, Slovenian, Spanish, Turkish, Ukrainian and Vietnamese.
  • New noun chunks for Finnish.

Trained pipelines

New trained pipelines

v3.3 introduces new CPU/CNN pipelines for Finnish, Korean and Swedish, which use the new trainable lemmatizer and floret vectors. Due to the use of Bloom embeddings and subwords, the pipelines have compact vectors with no out-of-vocabulary words.

| Package | Language | UPOS | Parser LAS | NER F |
| --- | --- | --- | --- | --- |
| fi_core_news_sm | Finnish | 92.5 | 71.9 | 75.9 |
| fi_core_news_md | Finnish | 95.9 | 78.6 | 80.6 |
| fi_core_news_lg | Finnish | 96.2 | 79.4 | 82.4 |
| ko_core_news_sm | Korean | 86.1 | 65.6 | 71.3 |
| ko_core_news_md | Korean | 94.7 | 80.9 | 83.1 |
| ko_core_news_lg | Korean | 94.7 | 81.3 | 85.3 |
| sv_core_news_sm | Swedish | 95.0 | 75.9 | 74.7 |
| sv_core_news_md | Swedish | 96.3 | 78.5 | 79.3 |
| sv_core_news_lg | Swedish | 96.3 | 79.1 | 81.1 |

Pipeline updates

The following languages switch from lookup or rule-based lemmatizers to the new trainable lemmatizer: Danish, Dutch, German, Greek, Italian, Lithuanian, Norwegian, Polish, Portuguese and Romanian. The overall lemmatizer accuracy improves for all of these pipelines, but be aware that the types of errors may look quite different from the lookup-based lemmatizers. If you’d prefer to continue using the previous lemmatizer, you can switch from the trainable lemmatizer to a non-trainable lemmatizer.
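One way to switch back is to override the component in your training config so it uses the non-trainable lemmatizer factory instead. This is a sketch; the available modes (e.g. "rule" vs. "lookup") depend on the language, so check which mode your language supported in v3.2:

```ini
[components.lemmatizer]
factory = "lemmatizer"
mode = "rule"
```

After editing the config, retrain or re-assemble the pipeline so the replacement component is initialized with the lookup tables for your language.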

| Model | v3.2 Lemma Acc | v3.3 Lemma Acc |
| --- | --- | --- |
| da_core_news_md | 84.9 | 94.8 |
| de_core_news_md | 73.4 | 97.7 |
| el_core_news_md | 56.5 | 88.9 |
| fi_core_news_md | - | 86.2 |
| it_core_news_md | 86.6 | 97.2 |
| ko_core_news_md | - | 90.0 |
| lt_core_news_md | 71.1 | 84.8 |
| nb_core_news_md | 76.7 | 97.1 |
| nl_core_news_md | 81.5 | 94.0 |
| pl_core_news_md | 87.1 | 93.7 |
| pt_core_news_md | 76.7 | 96.9 |
| ro_core_news_md | 81.8 | 95.5 |
| sv_core_news_md | - | 95.5 |

In addition, the vectors in the English pipelines are deduplicated to improve the pruned vectors in the md models and reduce the lg model size.

Notes about upgrading from v3.2

Span comparisons

Span comparisons involving ordering (<, <=, >, >=) now take all span attributes into account (start, end, label, and KB ID) so spans may be sorted in a slightly different order.
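The effect of the new ordering can be sketched in plain Python. SpanKey below is a hypothetical stand-in for Span, which supports these comparisons directly; the point is that ties on start and end are now broken by label and KB ID:

```python
# Sketch: spans compare like (start, end, label, kb_id) tuples, so two
# spans with identical offsets but different labels have a defined order.
from collections import namedtuple

SpanKey = namedtuple("SpanKey", ["start", "end", "label", "kb_id"])

spans = [
    SpanKey(3, 6, "ORG", ""),
    SpanKey(3, 6, "GPE", ""),
    SpanKey(1, 2, "PERSON", ""),
]

# Sorted order: (1, 2, "PERSON"), then (3, 6, "GPE"), then (3, 6, "ORG"),
# because the start/end tie is broken alphabetically by label.
ordered = sorted(spans)
```

If your code relied on spans with equal offsets keeping their insertion order after sorting, re-check that assumption when upgrading.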

Whitespace annotation

During training, annotation on whitespace tokens is handled in the same way as annotation on non-whitespace tokens in order to allow custom whitespace annotation.

Doc.from_docs

Doc.from_docs now includes Doc.tensor by default and supports excluding fields with an exclude argument in the same format as Doc.to_bytes. The supported exclude fields are spans, tensor and user_data.

Docs including Doc.tensor may be quite a bit larger in RAM, so to exclude Doc.tensor as in v3.2:

-merged_doc = Doc.from_docs(docs)
+merged_doc = Doc.from_docs(docs, exclude=["tensor"])

Using trained pipelines with floret vectors

If you’re running a new trained pipeline for Finnish, Korean or Swedish on new texts and working with Doc objects, you shouldn’t notice any difference with floret vectors vs. default vectors.

If you use vectors for similarity comparisons, there are a few differences, mainly because a floret pipeline doesn’t include any kind of frequency-based word list similar to the list of in-vocabulary vector keys with default vectors.

  • If your workflow iterates over the vector keys, you should use an external word list instead:

    - lexemes = [nlp.vocab[orth] for orth in nlp.vocab.vectors]
    + lexemes = [nlp.vocab[word] for word in external_word_list]
    
  • Vectors.most_similar is not supported because there’s no fixed list of vectors to compare your vectors to.

Pipeline package version compatibility

When you’re loading a pipeline package trained with an earlier version of spaCy v3, you will see a warning telling you that the pipeline may be incompatible. This doesn’t necessarily mean the pipeline is broken, but we recommend running your pipelines against your test suite or evaluation data to make sure there are no unexpected results.

If you’re using one of the trained pipelines we provide, you should run spacy download to update to the latest version. To see an overview of all installed packages and their compatibility, you can run spacy validate.

If you’ve trained your own custom pipeline and you’ve confirmed that it’s still working as expected, you can update the spaCy version requirements in the meta.json:

- "spacy_version": ">=3.2.0,<3.3.0",
+ "spacy_version": ">=3.2.0,<3.4.0",

Updating v3.2 configs

To update a config from spaCy v3.2 with the new v3.3 settings, run init fill-config:

python -m spacy init fill-config config-v3.2.cfg config-v3.3.cfg

In many cases (spacy train, spacy.load), the new defaults will be filled in automatically, but you’ll need to fill in the new settings to run debug config and debug data.

To see the speed improvements for the Tagger architecture, edit your config to switch from spacy.Tagger.v1 to spacy.Tagger.v2 and then run init fill-config.
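The change is a one-line edit to the @architectures setting in the model block of each affected component (tagger, and likewise morphologizer, senter or trainable lemmatizer if present); the block name below assumes a component called "tagger":

```diff
[components.tagger.model]
- @architectures = "spacy.Tagger.v1"
+ @architectures = "spacy.Tagger.v2"
```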