Lemmatizer

class · v3 · String name: lemmatizer · Trainable: no
Pipeline component for lemmatization

Component for assigning base forms to tokens, using either rules based on part-of-speech tags or lookup tables. Different Language subclasses can implement their own lemmatizer components via language-specific factories. The default data used is provided by the spacy-lookups-data extension package.

For a trainable lemmatizer, see EditTreeLemmatizer.

Assigned Attributes

Lemmas generated by rules or lookup tables are saved to Token.lemma.

| Location | Value |
| --- | --- |
| Token.lemma | The lemma (hash). int |
| Token.lemma_ | The lemma. str |

Config and implementation

The default config is defined by the pipeline component factory and describes how the component should be configured. You can override its settings via the config argument on nlp.add_pipe or in your config.cfg for training. For examples of the lookups data format used by the lookup and rule-based lemmatizers, see spacy-lookups-data.
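
For example, here is a minimal sketch of overriding the defaults when adding the component to a blank pipeline (the values shown are arbitrary illustrations):

```python
import spacy

nlp = spacy.blank("en")
# The same keys can be set under [components.lemmatizer] in config.cfg.
# Note that "rule" mode additionally needs POS tags at runtime (see below).
lemmatizer = nlp.add_pipe("lemmatizer", config={"mode": "rule", "overwrite": True})
```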

| Setting | Description |
| --- | --- |
| mode | The lemmatizer mode, e.g. "lookup" or "rule". Defaults to "lookup" if no language-specific lemmatizer is available (see the following table). str |
| overwrite | Whether to overwrite existing lemmas. Defaults to False. bool |
| model | Not yet implemented: the model to use. Model |
| _keyword-only_ | |
| scorer | The scoring method. Defaults to Scorer.score_token_attr for the attribute "lemma". Optional[Callable] |

Many languages specify a default lemmatizer mode other than lookup if a better lemmatizer is available. The lemmatizer modes rule and pos_lookup require token.pos from a previous pipeline component (see the example pipeline configurations in the pretrained pipeline design details), while the pymorphy3 mode relies on the third-party pymorphy3 library.

| Language | Default Mode |
| --- | --- |
| bn | rule |
| ca | pos_lookup |
| el | rule |
| en | rule |
| es | rule |
| fa | rule |
| fr | rule |
| it | pos_lookup |
| mk | rule |
| nb | rule |
| nl | rule |
| pl | pos_lookup |
| ru | pymorphy3 |
| sv | rule |
| uk | pymorphy3 |

Source: https://github.com/explosion/spaCy/blob/master/spacy/pipeline/lemmatizer.py

Lemmatizer.__init__ method

Create a new pipeline instance. In your application, you would normally use a shortcut for this and instantiate the component using its string name and nlp.add_pipe.
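
A minimal sketch of both construction styles; the direct call mirrors the parameters documented below and passes model=None, since the model is not yet implemented:

```python
import spacy
from spacy.pipeline import Lemmatizer

nlp = spacy.blank("en")

# Usual shortcut: construction via the string name and nlp.add_pipe.
lemmatizer = nlp.add_pipe("lemmatizer")

# Direct construction, mirroring the parameters documented below.
lemmatizer = Lemmatizer(nlp.vocab, model=None, name="lemmatizer", mode="lookup")
```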

| Name | Description |
| --- | --- |
| vocab | The shared vocabulary. Vocab |
| model | Not yet implemented: the model to use. Model |
| name | String name of the component instance. Used to add entries to the losses during training. str |
| _keyword-only_ | |
| mode | The lemmatizer mode, e.g. "lookup" or "rule". Defaults to "lookup". str |
| overwrite | Whether to overwrite existing lemmas. bool |

Lemmatizer.__call__ method

Apply the pipe to one document. The document is modified in place, and returned. This usually happens under the hood when the nlp object is called on a text and all pipeline components are applied to the Doc in order.
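
A short sketch, assuming the en_core_web_sm pipeline is installed; calling nlp applies the component implicitly, and the component can also be called on a Doc directly:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this pipeline is installed
doc = nlp("The mice were running.")  # lemmatizer applied under the hood
print([token.lemma_ for token in doc])

# Applying the component directly to an already tagged Doc also works:
lemmatizer = nlp.get_pipe("lemmatizer")
doc = lemmatizer(doc)
```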

| Name | Description |
| --- | --- |
| doc | The document to process. Doc |

Lemmatizer.pipe method

Apply the pipe to a stream of documents. This usually happens under the hood when the nlp object is called on a text and all pipeline components are applied to the Doc in order.
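
A sketch of streaming documents through the component directly, again assuming the en_core_web_sm pipeline is installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this pipeline is installed
lemmatizer = nlp.get_pipe("lemmatizer")

docs = nlp.pipe(["I was reading.", "She has left."])
for doc in lemmatizer.pipe(docs, batch_size=50):
    print([token.lemma_ for token in doc])
```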

| Name | Description |
| --- | --- |
| stream | A stream of documents. Iterable[Doc] |
| _keyword-only_ | |
| batch_size | The number of documents to buffer. Defaults to 128. int |

Lemmatizer.initialize method

Initialize the lemmatizer and load any data resources. This method is typically called by Language.initialize and lets you customize arguments it receives via the [initialize.components] block in the config. The loading only happens during initialization, typically before training. At runtime, all data is loaded from disk.
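
A sketch of initializing with a custom lookups object instead of the default tables; the tiny table below is a hypothetical stand-in for real data from spacy-lookups-data:

```python
import spacy
from spacy.lookups import Lookups

nlp = spacy.blank("en")
lemmatizer = nlp.add_pipe("lemmatizer", config={"mode": "lookup"})

# Hypothetical miniature table standing in for spacy-lookups-data.
lookups = Lookups()
lookups.add_table("lemma_lookup", {"was": "be", "mice": "mouse"})
lemmatizer.initialize(lookups=lookups)
```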

| Name | Description |
| --- | --- |
| get_examples | Function that returns gold-standard annotations in the form of Example objects. Defaults to None. Optional[Callable[[], Iterable[Example]]] |
| _keyword-only_ | |
| nlp | The current nlp object. Defaults to None. Optional[Language] |
| lookups | The lookups object containing the tables such as "lemma_rules", "lemma_index", "lemma_exc" and "lemma_lookup". If None, default tables are loaded from spacy-lookups-data. Defaults to None. Optional[Lookups] |

Lemmatizer.lookup_lemmatize method

Lemmatize a token using a lookup-based approach. If no lemma is found, the original string is returned.
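
A sketch using a hypothetical one-entry table; the method returns a list of lemma strings:

```python
import spacy
from spacy.lookups import Lookups

nlp = spacy.blank("en")
lemmatizer = nlp.add_pipe("lemmatizer", config={"mode": "lookup"})

lookups = Lookups()
lookups.add_table("lemma_lookup", {"mice": "mouse"})  # hypothetical table
lemmatizer.initialize(lookups=lookups)

doc = nlp.make_doc("Three mice ran.")
print(lemmatizer.lookup_lemmatize(doc[1]))  # ["mouse"]
print(lemmatizer.lookup_lemmatize(doc[2]))  # no entry, falls back: ["ran"]
```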

| Name | Description |
| --- | --- |
| token | The token to lemmatize. Token |

Lemmatizer.rule_lemmatize method

Lemmatize a token using a rule-based approach. Typically relies on POS tags.
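
A sketch, assuming the en_core_web_sm pipeline so that token.pos is set before the rules run:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this pipeline is installed
lemmatizer = nlp.get_pipe("lemmatizer")

doc = nlp("The mice were running.")
# The rules branch on token.pos, so the tagger must have run first.
print(lemmatizer.rule_lemmatize(doc[3]))  # e.g. ["run"]
```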

| Name | Description |
| --- | --- |
| token | The token to lemmatize. Token |

Lemmatizer.is_base_form method

Check whether we’re dealing with an uninflected paradigm, so we can avoid lemmatization entirely.
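
A sketch, again assuming en_core_web_sm; note that languages without their own implementation inherit a base version that simply returns False:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this pipeline is installed
lemmatizer = nlp.get_pipe("lemmatizer")

doc = nlp("She runs daily.")
# English checks the morphology to skip already uninflected tokens.
print([lemmatizer.is_base_form(token) for token in doc])
```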

| Name | Description |
| --- | --- |
| token | The token to analyze. Token |

Lemmatizer.get_lookups_config classmethod

Returns the lookups configuration settings for a given mode for use in Lemmatizer.load_lookups.
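
A sketch; the method returns the required and optional lookup table names for a mode, with the exact contents depending on the mode:

```python
from spacy.pipeline import Lemmatizer

# Required and optional lookup table names for the given mode.
required, optional = Lemmatizer.get_lookups_config("rule")
print(required, optional)
```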

| Name | Description |
| --- | --- |
| mode | The lemmatizer mode. str |

Lemmatizer.to_disk method

Serialize the pipe to disk.
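
A minimal sketch with a placeholder path:

```python
import spacy

nlp = spacy.blank("en")
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.to_disk("/path/to/lemmatizer")  # placeholder path
```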

| Name | Description |
| --- | --- |
| path | A path to a directory, which will be created if it doesn't exist. Paths may be either strings or Path-like objects. Union[str, Path] |
| _keyword-only_ | |
| exclude | String names of serialization fields to exclude. Iterable[str] |

Lemmatizer.from_disk method

Load the pipe from disk. Modifies the object in place and returns it.
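
A minimal sketch with a placeholder path, assuming a lemmatizer was previously saved there:

```python
import spacy

nlp = spacy.blank("en")
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer.from_disk("/path/to/lemmatizer")  # placeholder path
```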

| Name | Description |
| --- | --- |
| path | A path to a directory. Paths may be either strings or Path-like objects. Union[str, Path] |
| _keyword-only_ | |
| exclude | String names of serialization fields to exclude. Iterable[str] |

Lemmatizer.to_bytes method

Serialize the pipe to a bytestring.
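
A minimal sketch:

```python
import spacy

nlp = spacy.blank("en")
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer_bytes = lemmatizer.to_bytes()
```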

| Name | Description |
| --- | --- |
| _keyword-only_ | |
| exclude | String names of serialization fields to exclude. Iterable[str] |

Lemmatizer.from_bytes method

Load the pipe from a bytestring. Modifies the object in place and returns it.
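
A minimal sketch restoring a component from bytes produced by to_bytes:

```python
import spacy

nlp = spacy.blank("en")
lemmatizer = nlp.add_pipe("lemmatizer")
lemmatizer_bytes = lemmatizer.to_bytes()
lemmatizer.from_bytes(lemmatizer_bytes)
```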

| Name | Description |
| --- | --- |
| bytes_data | The data to load from. bytes |
| _keyword-only_ | |
| exclude | String names of serialization fields to exclude. Iterable[str] |

Attributes

| Name | Description |
| --- | --- |
| vocab | The shared Vocab. Vocab |
| lookups | The lookups object. Lookups |
| mode | The lemmatizer mode. str |

Serialization fields

During serialization, spaCy will export several data fields used to restore different aspects of the object. If needed, you can exclude them from serialization by passing in the string names via the exclude argument.
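
For example, a sketch excluding the lookups tables when serializing:

```python
import spacy

nlp = spacy.blank("en")
lemmatizer = nlp.add_pipe("lemmatizer")
data = lemmatizer.to_bytes(exclude=["lookups"])
```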

| Name | Description |
| --- | --- |
| vocab | The shared Vocab. |
| lookups | The lookups. You usually don't want to exclude this. |