The spacy-llm package integrates Large Language Models (LLMs) into spaCy pipelines, featuring a modular system for fast prototyping and prompting, and turning unstructured responses into robust outputs for various NLP tasks, no training data required.
- Serializable llm component to integrate prompts into your pipeline
- Modular functions to define the task (prompting and parsing) and model (model to use)
- Support for hosted APIs and self-hosted open-source models
- Integration with LangChain
- Access to OpenAI API, including GPT-4 and various GPT-3 models
- Built-in support for various open-source models hosted on Hugging Face
- Usage examples for standard NLP tasks such as Named Entity Recognition and Text Classification
- Easy implementation of your own functions via the spaCy registry for custom prompting, parsing and model integrations
Large Language Models (LLMs) feature powerful natural language understanding capabilities. With only a few (and sometimes no) examples, an LLM can be prompted to perform custom NLP tasks such as text categorization, named entity recognition, coreference resolution, information extraction and more.
Supervised learning is much worse than LLM prompting for prototyping, but for many tasks it’s much better for production. A transformer model that runs comfortably on a single GPU is extremely powerful, and it’s likely to be a better choice for any task for which you have a well-defined output. You train the model with anything from a few hundred to a few thousand labelled examples, and it will learn to do exactly that. Efficiency, reliability and control are all better with supervised learning, and accuracy will generally be higher than LLM prompting as well.
spacy-llm lets you have the best of both worlds. You can quickly
initialize a pipeline with components powered by LLM prompts, and freely mix in
components powered by other approaches. As your project progresses, you can look
at replacing some or all of the LLM-powered components as you require.
Of course, there can be components in your system for which the power of an LLM is fully justified. If you want a system that can synthesize information from multiple documents in subtle ways and generate a nuanced summary for you, bigger is better. However, even if your production system needs an LLM for some of the task, that doesn’t mean you need an LLM for all of it. Maybe you want to use a cheap text classification model to help you find the texts to summarize, or maybe you want to add a rule-based system to sanity check the output of the summary. These before-and-after tasks are much easier with a mature and well-thought-out library, which is exactly what spaCy provides.
spacy-llm will be installed automatically in future spaCy versions. For now,
you can run the following in the same virtual environment where you already have spaCy installed:
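```bash
python -m pip install spacy-llm
```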
The task and the model have to be supplied to the
llm pipeline component using
the config system. This package provides various
built-in functionality, as detailed in the API documentation.
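For example, a rough sketch in Python (the registered names spacy.TextCat.v3 and spacy.GPT-4.v2 follow the documented examples, but the exact versions available depend on your spacy-llm release) could look like this:

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        # The task: what to ask the LLM and how to parse its answer.
        "task": {
            "@llm_tasks": "spacy.TextCat.v3",
            "labels": ["COMPLIMENT", "INSULT"],
        },
        # The model: which LLM to query and how to reach it.
        # Using an OpenAI model requires the OPENAI_API_KEY environment variable.
        "model": {"@llm_models": "spacy.GPT-4.v2"},
    },
)

doc = nlp("You look gorgeous!")
print(doc.cats)  # category scores assigned by the TextCat task
```

The same settings can also be expressed in a config file and loaded with spacy_llm.util.assemble.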
Note that Hugging Face will download the
"databricks/dolly-v2-3b" model the
first time you use it. You can define the cache directory by setting the environment variable HF_HOME. You can also upgrade to the larger model "databricks/dolly-v2-12b" for better performance.
Note that for efficient usage of resources, typically you would use
nlp.pipe(docs) with a batch of documents, instead of calling
nlp(doc) with a single document.
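For example, continuing from the NER configuration sketched above:

```python
texts = [
    "Jack and Jill went up the hill.",
    "The Eiffel Tower is located in Paris.",
]

# One batched request per group of documents instead of one call per document.
docs = list(nlp.pipe(texts))
for doc in docs:
    print([(ent.text, ent.label_) for ent in doc.ents])
```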
To write a
task, you need to implement two functions:
generate_prompts that takes a list of
Doc objects and transforms
them into a list of prompts, and
parse_responses that transforms the LLM
outputs into annotations on the
Doc, e.g. entity spans, text
categories and more.
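A minimal sketch of such a task (the class, the prompt wording and the custom has_quote attribute are invented for illustration; only the two method names and their rough signatures are prescribed by spacy-llm, and the exact signatures may differ slightly between versions):

```python
from typing import Iterable

from spacy.tokens import Doc

# Custom attribute used by this toy task to store its result.
if not Doc.has_extension("has_quote"):
    Doc.set_extension("has_quote", default=False)


class QuoteDetectionTask:
    """Toy task: ask the LLM whether each text contains a direct quotation."""

    def generate_prompts(self, docs: Iterable[Doc]) -> Iterable[str]:
        for doc in docs:
            yield (
                "Reply with YES or NO: does the following text contain "
                f"a direct quotation?\n\nText: {doc.text}"
            )

    def parse_responses(
        self, docs: Iterable[Doc], responses: Iterable[str]
    ) -> Iterable[Doc]:
        for doc, response in zip(docs, responses):
            # A real task would set entity spans, text categories etc. here.
            doc._.has_quote = str(response).strip().upper().startswith("YES")
            yield doc
```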
To register your custom task, decorate a factory function using the
spacy_llm.registry.llm_tasks decorator with a custom name that you can refer
to in your config.
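Reusing the toy task from the sketch above, registration could look roughly like this (the name my_namespace.QuoteDetection.v1 is arbitrary):

```python
from spacy_llm.registry import registry


@registry.llm_tasks("my_namespace.QuoteDetection.v1")
def make_quote_detection_task() -> QuoteDetectionTask:
    # The factory can accept config parameters (labels, prompt templates, ...)
    # and pass them on to the task object.
    return QuoteDetectionTask()


# In a config file the task can then be referenced as:
#
# [components.llm.task]
# @llm_tasks = "my_namespace.QuoteDetection.v1"
```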
spacy-llm has a built-in logger that can log the prompt sent to the LLM as well
as its raw response. This logger uses the debug level and by default has a NullHandler configured. In order to use this logger, you can set up a simple handler like this:
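For example (this assumes the package-level spacy_llm.logger described in the documentation):

```python
import logging

import spacy_llm

# Attach a handler that writes to stderr and lower the level to DEBUG so that
# prompts and raw responses are emitted.
spacy_llm.logger.addHandler(logging.StreamHandler())
spacy_llm.logger.setLevel(logging.DEBUG)
```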
Then when using the pipeline you’ll be able to view the prompt and response.
E.g. with the config and code from Example 1 above, you will see logging output containing the rendered prompt and the raw LLM response, while the output of print(doc.cats) to standard output should contain the predicted text categories.
The llm component is defined by two main settings:
- A task, defining the prompt to send to the LLM as well as the functionality to parse the resulting response back into structured fields on the Doc objects.
- A model defining the model to use and how to connect to it.
spacy-llm supports both access to external APIs (such as OpenAI) and access to self-hosted open-source LLMs (such as Dolly through Hugging Face).
spacy-llm exposes a customizable caching functionality
to avoid running the same document through an LLM service (be it local or
through a REST API) more than once.
Finally, you can choose to save a stringified version of LLM prompts/responses within the Doc.user_data["llm_io"] attribute by setting save_io to True.
Doc.user_data["llm_io"] is a dictionary containing one entry for every LLM
component within the
nlp pipeline. Each entry is itself a dictionary, with two keys: prompt and response.
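A rough sketch of how this is used (the spacy.Sentiment.v1 task and spacy.GPT-4.v2 model are stand-ins; the prompt/response keys follow the description above):

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {"@llm_tasks": "spacy.Sentiment.v1"},
        "model": {"@llm_models": "spacy.GPT-4.v2"},  # requires OPENAI_API_KEY
        "save_io": True,  # keep stringified prompts/responses on the Doc
    },
)

doc = nlp("This movie was a delight.")
# One entry per llm component in the pipeline, keyed by component name.
llm_io = doc.user_data["llm_io"]["llm"]
print(llm_io["prompt"])
print(llm_io["response"])
```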
A note on
validate_types: by default,
spacy-llm checks whether the
signatures of the
model and task callables are consistent with each other
and emits a warning if they are not.
validate_types can be set to
False if you
want to disable this behavior.
A task defines an NLP problem or question that will be sent to the LLM via a
prompt. Further, the task defines how to parse the LLM’s responses back into
structured information. All tasks are registered in the llm_tasks registry.
Practically speaking, a task should adhere to the Protocol defined in spacy-llm's ty.py. It needs to define a generate_prompts function and a parse_responses function.
| Function | Description |
| --- | --- |
| generate_prompts | Takes a collection of documents, and returns a collection of "prompts", which can be of type Any. |
| parse_responses | Takes a collection of LLM responses and the original documents, parses the responses into structured information, and sets the annotations on the documents. |
Moreover, the task may define an optional scorer method.
It should accept an iterable of
Example objects as input and return a score
dictionary. If the
scorer method is defined,
spacy-llm will call it to
evaluate the component.
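For the toy task sketched earlier, a scorer might look like this (the score name quote_acc is arbitrary; only the Example-based signature is prescribed):

```python
from typing import Any, Dict, Iterable

from spacy.training import Example


class QuoteDetectionTask:
    # generate_prompts / parse_responses as in the earlier sketch ...

    def scorer(self, examples: Iterable[Example]) -> Dict[str, Any]:
        examples = list(examples)
        correct = sum(
            ex.predicted._.has_quote == ex.reference._.has_quote
            for ex in examples
        )
        # Return a flat score dictionary; spacy-llm uses it when evaluating
        # the component.
        return {"quote_acc": correct / max(len(examples), 1)}
```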
The following built-in tasks are available:

| Task | Description |
| --- | --- |
| spacy.Summarization.v1 | The summarization task prompts the model for a concise summary of the provided text. |
| spacy.NER.v3 | Implements Chain-of-Thought reasoning for NER extraction, which obtains higher accuracy than v1 or v2. |
| spacy.NER.v2 | Builds on v1 and additionally supports defining the provided labels with explicit descriptions. |
| spacy.NER.v1 | The original version of the built-in NER task; supports both zero-shot and few-shot prompting. |
| spacy.SpanCat.v3 | Adaptation of the v3 NER task to support overlapping entities and store its annotations in Doc.spans. |
| spacy.SpanCat.v2 | Adaptation of the v2 NER task to support overlapping entities and store its annotations in Doc.spans. |
| spacy.SpanCat.v1 | Adaptation of the v1 NER task to support overlapping entities and store its annotations in Doc.spans. |
| spacy.REL.v1 | Relation Extraction task supporting both zero-shot and few-shot prompting. |
| spacy.TextCat.v3 | Version 3 builds on v2 and allows setting definitions of labels. |
| spacy.TextCat.v2 | Version 2 builds on v1 and includes an improved prompt template. |
| spacy.TextCat.v1 | Version 1 of the built-in TextCat task; supports both zero-shot and few-shot prompting. |
| spacy.Lemma.v1 | Lemmatizes the provided text and updates the lemma_ attributes of the tokens. |
| spacy.Sentiment.v1 | Performs sentiment analysis on provided texts. |
| spacy.NoOp.v1 | This task is only useful for testing: it tells the LLM to do nothing, and does not set any fields on the Doc. |
All built-in tasks support few-shot prompts, i.e. including examples in a
prompt. Examples can be supplied in two ways: (1) as a separate file containing
only examples or (2) by initializing
llm with a get_examples() callback
(like any other pipeline component).
(1) Few-shot example file
A file containing examples for few-shot prompting can be configured like this:
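Expressed as the dict config passed to nlp.add_pipe, a sketch might look like this (the reader name spacy.FewShotReader.v1 and the parameter names follow the documented examples; treat them as version-dependent):

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {
            "@llm_tasks": "spacy.NER.v3",
            "labels": ["PERSON", "ORGANISATION", "LOCATION"],
            "examples": {
                # Reads few-shot examples for the task from a file on disk.
                "@misc": "spacy.FewShotReader.v1",
                "path": "ner_examples.yml",
            },
        },
        "model": {"@llm_models": "spacy.GPT-4.v2"},  # requires OPENAI_API_KEY
    },
)
```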
The supplied file has to conform to the format expected by the required task (see the task documentation further down).
(2) Initializing the
llm component with a get_examples() callback
Alternatively, you can initialize your
nlp pipeline by providing a
get_examples callback for nlp.initialize and setting
n_prompt_examples to a positive number to automatically fetch a few
examples for few-shot learning. Set n_prompt_examples to -1 to use all
examples as part of the few-shot learning prompt.
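A rough sketch of that flow (how n_prompt_examples is routed through the config's [initialize] section is an assumption here; consult the documentation of your spacy-llm version for the exact initialization settings):

```python
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {"@llm_tasks": "spacy.TextCat.v3", "labels": ["COMPLIMENT", "INSULT"]},
        "model": {"@llm_models": "spacy.GPT-4.v2"},  # requires OPENAI_API_KEY
    },
)

train_examples = [
    Example.from_dict(
        nlp.make_doc("You look gorgeous!"),
        {"cats": {"COMPLIMENT": 1.0, "INSULT": 0.0}},
    ),
    Example.from_dict(
        nlp.make_doc("You are an idiot."),
        {"cats": {"COMPLIMENT": 0.0, "INSULT": 1.0}},
    ),
]

# Assumption: per-component initialization settings live under
# [initialize.components.llm] in the config.
nlp.config["initialize"]["components"]["llm"] = {"n_prompt_examples": 2}
nlp.initialize(lambda: train_examples)
```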
A model defines which LLM to query, and how to query it. It can be a
simple function taking a collection of prompts (consistent with the output type of
task.generate_prompts()) and returning a collection of responses
(consistent with the expected input of
parse_responses). Generally speaking,
it’s a function of type
Callable[[Iterable[Any]], Iterable[Any]], but specific
implementations can have other signatures.
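As an illustration, a trivial custom model registered under an invented name (useful e.g. for tests, since it never contacts an actual LLM; the llm_models registry it uses is described just below) might look like this:

```python
from typing import Callable, Iterable

from spacy_llm.registry import registry


@registry.llm_models("my_namespace.CannedModel.v1")
def canned_model(response: str = "NO") -> Callable[[Iterable[str]], Iterable[str]]:
    """A 'model' that answers every prompt with the same canned string."""

    def _call(prompts: Iterable[str]) -> Iterable[str]:
        return [response for _ in prompts]

    return _call
```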
All built-in models are registered in
llm_models. If no model is specified,
the repo currently connects to the
OpenAI API by default using REST.
Currently, three different approaches to using LLMs are supported:

- spacy-llm's native REST interface. This is the default for all hosted models (e.g. OpenAI, Cohere, Anthropic, ...).
- A HuggingFace integration that allows running a limited set of HF models locally.
- A LangChain integration that allows running any model supported by LangChain (hosted or local).

Approaches (1) and (2) are the default for hosted models and local models, respectively. Alternatively, you can use LangChain to access hosted or local models by specifying one of the models registered via the LangChain integration.
The built-in model support includes:

- Azure's OpenAI models.
- Dolly models through HuggingFace.
- Falcon models through HuggingFace.
- Mistral models through HuggingFace.
- Llama2 models through HuggingFace.
- StableLM models through HuggingFace.
- OpenLLaMA models through HuggingFace.
- LangChain models for API retrieval.
Note that the chat model variants of Llama 2 are currently not supported. This
is because they need a particular prompting setup and don’t add any discernible
benefits in the use case of
spacy-llm (i.e. no interactive chat) compared to
the completion model variants.
Interacting with LLMs, either through an external API or a local instance, is
costly. Since developing an NLP pipeline generally means a lot of exploration and prototyping, spacy-llm implements a built-in cache that keeps batches of documents stored on disk, so that the same documents are not reprocessed at each run.
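Configured roughly like this (the spacy.BatchCache.v1 name and its parameters are taken from the documented caching setup; treat the exact values as version-dependent):

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {
            "@llm_tasks": "spacy.NER.v2",
            "labels": ["PERSON", "ORGANISATION", "LOCATION"],
        },
        "model": {"@llm_models": "spacy.GPT-4.v2"},  # requires OPENAI_API_KEY
        "cache": {
            "@llm_misc": "spacy.BatchCache.v1",
            "path": "local-cache",       # directory where cached batches are written
            "batch_size": 64,            # number of Docs per cached batch
            "max_batches_in_mem": 4,     # batches kept in memory before writing to disk
        },
    },
)
```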
A few helper functions are registered as well:

| Function | Description |
| --- | --- |
| spacy.FewShotReader.v1 | This function is registered in spaCy's misc registry and reads few-shot examples for a task from a file. |
| spacy.FileReader.v1 | This function is registered in spaCy's misc registry and reads the contents of a file, e.g. a prompt template. |
| Normalizer functions | These functions provide simple normalizations for string comparisons, e.g. between a list of specified labels and a label given in the raw text of the LLM response. |