Large Language Models
The spacy-llm package integrates Large Language Models (LLMs) into spaCy pipelines, featuring a modular system for fast prototyping and prompting, and turning unstructured responses into robust outputs for various NLP tasks, no training data required.
- Serializable llm component to integrate prompts into your pipeline
- Modular functions to define the task (prompting and parsing) and model (model to use)
- Support for hosted APIs and self-hosted open-source models
- Integration with LangChain
- Access to the OpenAI API, including GPT-4 and various GPT-3 models
- Built-in support for various open-source models hosted on Hugging Face
- Usage examples for standard NLP tasks such as Named Entity Recognition and Text Classification
- Easy implementation of your own functions via the registry for custom prompting, parsing and model integrations
Motivation
Large Language Models (LLMs) feature powerful natural language understanding capabilities. With only a few (and sometimes no) examples, an LLM can be prompted to perform custom NLP tasks such as text categorization, named entity recognition, coreference resolution, information extraction and more.
Supervised learning is much worse than LLM prompting for prototyping, but for many tasks it’s much better for production. A transformer model that runs comfortably on a single GPU is extremely powerful, and it’s likely to be a better choice for any task for which you have a well-defined output. You train the model with anything from a few hundred to a few thousand labelled examples, and it will learn to do exactly that. Efficiency, reliability and control are all better with supervised learning, and accuracy will generally be higher than LLM prompting as well.
spacy-llm lets you have the best of both worlds. You can quickly initialize a pipeline with components powered by LLM prompts, and freely mix in components powered by other approaches. As your project progresses, you can look at replacing some or all of the LLM-powered components as you require.
Of course, there can be components in your system for which the power of an LLM is fully justified. If you want a system that can synthesize information from multiple documents in subtle ways and generate a nuanced summary for you, bigger is better. However, even if your production system needs an LLM for part of the task, that doesn’t mean you need an LLM for all of it. Maybe you want to use a cheap text classification model to help you find the texts to summarize, or maybe you want to add a rule-based system to sanity check the output of the summary. These before-and-after tasks are much easier with a mature and well-thought-out library, which is exactly what spaCy provides.
Install
spacy-llm will be installed automatically in future spaCy versions. For now, you can run the following in the same virtual environment where you already have spacy installed.
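```bash
python -m pip install spacy-llm
```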
Usage
The task and the model have to be supplied to the llm pipeline component using the config system. This package provides various built-in functionality, as detailed in the API documentation.
Example 1: Add a text classifier using a GPT-3 model from OpenAI
Create a new API key from openai.com or fetch an existing one, and ensure the keys are set as environment variables. For more background information, see the OpenAI section.
Create a config file config.cfg containing at least the following (or see the full example here):
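For example, a minimal text classification setup could look like this (the labels shown are illustrative):

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.TextCat.v2"
labels = ["COMPLIMENT", "INSULT"]

[components.llm.model]
@llm_models = "spacy.GPT-3-5.v2"
config = {"temperature": 0.0}
```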
Now run:
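```python
# Minimal sketch: assemble the pipeline from the config and classify an example text.
from spacy_llm.util import assemble

nlp = assemble("config.cfg")
doc = nlp("You look gorgeous!")
print(doc.cats)
```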
Example 2: Add NER using an open-source model through Hugging Face
To run this example, ensure that you have a GPU enabled, and transformers, torch and CUDA installed. For more background information, see the DollyHF section.
Create a config file config.cfg containing at least the following (or see the full example here):
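For example, a minimal NER setup could look like this (the labels shown are illustrative):

```ini
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
@llm_tasks = "spacy.NER.v3"
labels = ["PERSON", "ORGANISATION", "LOCATION"]

[components.llm.model]
@llm_models = "spacy.Dolly.v1"
name = "dolly-v2-3b"
```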
Now run:
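```python
# Minimal sketch: assemble the pipeline from the config and extract entities from an example text.
from spacy_llm.util import assemble

nlp = assemble("config.cfg")
doc = nlp("Jack and Jill rode up the hill in Les Deux Alpes")
print([(ent.text, ent.label_) for ent in doc.ents])
```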
Note that Hugging Face will download the "databricks/dolly-v2-3b" model the first time you use it. You can define the cache directory by setting the environment variable HF_HOME. Also, you can upgrade the model to "databricks/dolly-v2-12b" for better performance.
Example 3: Create the component directly in Python
The llm component behaves as any other component does, and there are task-specific components defined to help you hit the ground running with a reasonable built-in task implementation. Note that for efficient usage of resources, typically you would use nlp.pipe(docs) with a batch, instead of calling nlp(doc) with a single document.
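A sketch of building such a pipeline directly in Python (the NER label set is illustrative, and the model entry is omitted so the default OpenAI model is used):

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe(
    "llm",
    config={
        "task": {
            "@llm_tasks": "spacy.NER.v3",
            "labels": "PERSON,ORGANISATION,LOCATION",  # illustrative label set
        },
        # "model" omitted: the default OpenAI model is used via REST
    },
)
nlp.initialize()
doc = nlp("Jack and Jill rode up the hill in Les Deux Alpes")
print([(ent.text, ent.label_) for ent in doc.ents])
```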
Example 4: Implement your own custom task
To write a task, you need to implement two functions: generate_prompts that takes a list of Doc objects and transforms them into a list of prompts, and parse_responses that transforms the LLM outputs into annotations on the Doc, e.g. entity spans, text categories and more.
To register your custom task, decorate a factory function using the spacy_llm.registry.llm_tasks decorator with a custom name that you can refer to in your config.
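A minimal sketch of such a custom task (the registered name, prompt wording and single-label parsing are illustrative, not part of the library):

```python
from typing import Iterable, List

from spacy.tokens import Doc
from spacy_llm.registry import registry


@registry.llm_tasks("my_namespace.SimpleTextCat.v1")  # hypothetical name
def make_simple_textcat(labels: str) -> "SimpleTextCatTask":
    return SimpleTextCatTask(labels=labels.split(","))


class SimpleTextCatTask:
    def __init__(self, labels: List[str]):
        self._labels = labels

    def generate_prompts(self, docs: Iterable[Doc]) -> Iterable[str]:
        # Turn each Doc into a prompt string.
        for doc in docs:
            yield (
                f"Classify the text below with exactly one of these labels: "
                f"{', '.join(self._labels)}.\nText: {doc.text}"
            )

    def parse_responses(
        self, docs: Iterable[Doc], responses: Iterable[str]
    ) -> Iterable[Doc]:
        # Map each raw LLM response back onto the corresponding Doc.
        for doc, response in zip(docs, responses):
            label = response.strip()
            if label in self._labels:
                doc.cats[label] = 1.0
            yield doc
```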
Logging
spacy-llm has a built-in logger that can log the prompt sent to the LLM as well as its raw response. This logger uses the debug level and by default has a logging.NullHandler() configured.
In order to use this logger, you can set up a simple handler like this:
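```python
import logging

import spacy_llm

# Send debug-level messages (prompts and responses) to the console.
spacy_llm.logger.addHandler(logging.StreamHandler())
spacy_llm.logger.setLevel(logging.DEBUG)
```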
Then, when using the pipeline, you’ll be able to view the prompt and response. For example, with the config and code from Example 1 above, the debug output shows the full prompt sent to the LLM together with its raw response, and print(doc.cats) writes the parsed text categories to standard output.
API
spacy-llm exposes an llm factory with configurable settings.
An llm component is defined by two main settings:
- A task, defining the prompt to send to the LLM as well as the functionality to parse the resulting response back into structured fields on the Doc objects.
- A model defining the model to use and how to connect to it.
Note that spacy-llm supports both access to external APIs (such as OpenAI) and access to self-hosted open-source LLMs (such as using Dolly through Hugging Face).
Moreover, spacy-llm exposes a customizable caching functionality to avoid running the same document through an LLM service (be it local or through a REST API) more than once.
Finally, you can choose to save a stringified version of LLM prompts/responses within the Doc.user_data["llm_io"] attribute by setting save_io to True. Doc.user_data["llm_io"] is a dictionary containing one entry for every LLM component within the nlp pipeline. Each entry is itself a dictionary, with two keys: prompt and response.
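For instance, assuming a pipeline with a single component named llm and save_io set to True, the stored strings can be inspected like this:

```python
# nlp assembled as in the examples above, with save_io = True in the llm component config
doc = nlp("An example text.")

llm_io = doc.user_data["llm_io"]["llm"]  # keyed by component name
print(llm_io["prompt"])
print(llm_io["response"])
```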
A note on validate_types: by default, spacy-llm checks whether the signatures of the model and task callables are consistent with each other and emits a warning if they aren’t. validate_types can be set to False if you want to disable this behavior.
Tasks
A task defines an NLP problem or question that will be sent to the LLM via a prompt. Further, the task defines how to parse the LLM’s responses back into structured information. All tasks are registered in the llm_tasks registry.
Practically speaking, a task should adhere to the Protocol named LLMTask defined in ty.py. It needs to define a generate_prompts function and a parse_responses function.
Tasks may support prompt sharding (for more info see the API docs on sharding and non-sharding tasks). The function signatures for generate_prompts and parse_responses depend on whether they do.
For tasks not supporting sharding:
Task | Description
---|---
task.generate_prompts | Takes a collection of documents and returns a collection of prompts, which can be of type Any.
task.parse_responses | Takes a collection of LLM responses and the original documents, parses the responses into structured information, and sets the annotations on the documents.
For tasks supporting sharding:
Task | Description
---|---
task.generate_prompts | Takes a collection of documents and returns a collection of collections of prompt shards, which can be of type Any.
task.parse_responses | Takes a collection of collections of LLM responses (one per prompt shard) and the original documents, parses the responses into structured information, sets the annotations on the doc shards, and merges those doc shards back into a single doc instance.
Moreover, the task may define an optional scorer method. It should accept an iterable of Example objects as input and return a score dictionary. If the scorer method is defined, spacy-llm will call it to evaluate the component.
Component | Description |
---|---|
spacy.EntityLinker.v1 | The entity linking task prompts the model to link all entities in a given text to entries in a knowledge base. |
spacy.Summarization.v1 | The summarization task prompts the model for a concise summary of the provided text. |
spacy.NER.v3 | Implements Chain-of-Thought reasoning for NER extraction - obtains higher accuracy than v1 or v2. |
spacy.NER.v2 | Builds on v1 and additionally supports defining the provided labels with explicit descriptions. |
spacy.NER.v1 | The original version of the built-in NER task supports both zero-shot and few-shot prompting. |
spacy.SpanCat.v3 | Adaptation of the v3 NER task to support overlapping entities and store its annotations in doc.spans. |
spacy.SpanCat.v2 | Adaptation of the v2 NER task to support overlapping entities and store its annotations in doc.spans. |
spacy.SpanCat.v1 | Adaptation of the v1 NER task to support overlapping entities and store its annotations in doc.spans. |
spacy.REL.v1 | Relation Extraction task supporting both zero-shot and few-shot prompting. |
spacy.TextCat.v3 | Version 3 builds on v2 and allows setting definitions of labels. |
spacy.TextCat.v2 | Version 2 builds on v1 and includes an improved prompt template. |
spacy.TextCat.v1 | Version 1 of the built-in TextCat task supports both zero-shot and few-shot prompting. |
spacy.Lemma.v1 | Lemmatizes the provided text and updates the lemma_ attribute of the tokens accordingly. |
spacy.Raw.v1 | Executes raw doc content as prompt to LLM. |
spacy.Sentiment.v1 | Performs sentiment analysis on provided texts. |
spacy.Translation.v1 | Translates doc content into the specified target language. |
spacy.NoOp.v1 | This task is only useful for testing - it tells the LLM to do nothing, and does not set any fields on the docs. |
Providing examples for few-shot prompts
All built-in tasks support few-shot prompts, i.e. including examples in a prompt. Examples can be supplied in two ways: (1) as a separate file containing only examples or (2) by initializing llm with a get_examples() callback (like any other pipeline component).
(1) Few-shot example file
A file containing examples for few-shot prompting can be configured like this:
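```ini
[components.llm.task]
@llm_tasks = "spacy.NER.v3"
# Illustrative labels and example file name
labels = ["PERSON", "ORGANISATION", "LOCATION"]

[components.llm.task.examples]
@misc = "spacy.FewShotReader.v1"
path = "ner_examples.yml"
```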
The supplied file has to conform to the format expected by the required task (see the task documentation further down).
(2) Initializing the llm component with a get_examples() callback
Alternatively, you can initialize your nlp pipeline by providing a get_examples callback for nlp.initialize and setting n_prompt_examples to a positive number to automatically fetch a few examples for few-shot learning. Set n_prompt_examples to -1 to use all examples as part of the few-shot learning prompt.
Model
A model defines which LLM to query and how to query it. It can be a simple function taking a collection of prompts (consistent with the output type of task.generate_prompts()) and returning a collection of responses (consistent with the expected input of parse_responses). Generally speaking, it’s a function of type Callable[[Iterable[Any]], Iterable[Any]], but specific implementations can have other signatures, like Callable[[Iterable[str]], Iterable[str]].
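As a sketch, a trivial custom model registered in llm_models could look like this, assuming the simple Callable[[Iterable[str]], Iterable[str]] signature described above (the registered name and the canned response are illustrative; a real implementation would call an API or a local LLM):

```python
from typing import Callable, Iterable

from spacy_llm.registry import registry


@registry.llm_models("my_namespace.CannedModel.v1")  # hypothetical name
def canned_model(response: str = "POSITIVE") -> Callable[[Iterable[str]], Iterable[str]]:
    def _call(prompts: Iterable[str]) -> Iterable[str]:
        # Return the same canned response for every prompt.
        return [response for _ in prompts]

    return _call
```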
All built-in models are registered in llm_models. If no model is specified, the repo currently connects to the OpenAI API by default using REST, and accesses the "gpt-3.5-turbo" model.
Currently, three different approaches to using LLMs are supported:

1. spacy-llm's native REST interface. This is the default for all hosted models (e.g. OpenAI, Cohere, Anthropic, …).
2. A Hugging Face integration that allows running a limited set of HF models locally.
3. A LangChain integration that allows running any model supported by LangChain (hosted or local).

Approaches 1 and 2 are the defaults for hosted and local models, respectively. Alternatively, you can use LangChain to access hosted or local models by specifying one of the models registered with the langchain. prefix.
Model | Description |
---|---|
spacy.GPT-4.v2 | OpenAI’s gpt-4 model family. |
spacy.GPT-3-5.v2 | OpenAI’s gpt-3-5 model family. |
spacy.Text-Davinci.v2 | OpenAI’s text-davinci model family. |
spacy.Code-Davinci.v2 | OpenAI’s code-davinci model family. |
spacy.Text-Curie.v2 | OpenAI’s text-curie model family. |
spacy.Text-Babbage.v2 | OpenAI’s text-babbage model family. |
spacy.Text-Ada.v2 | OpenAI’s text-ada model family. |
spacy.Davinci.v2 | OpenAI’s davinci model family. |
spacy.Curie.v2 | OpenAI’s curie model family. |
spacy.Babbage.v2 | OpenAI’s babbage model family. |
spacy.Ada.v2 | OpenAI’s ada model family. |
spacy.Azure.v1 | Azure’s OpenAI models. |
spacy.Command.v1 | Cohere’s command model family. |
spacy.Claude-2.v1 | Anthropic’s claude-2 model family. |
spacy.Claude-1.v1 | Anthropic’s claude-1 model family. |
spacy.Claude-instant-1.v1 | Anthropic’s claude-instant-1 model family. |
spacy.Claude-instant-1-1.v1 | Anthropic’s claude-instant-1.1 model family. |
spacy.Claude-1-0.v1 | Anthropic’s claude-1.0 model family. |
spacy.Claude-1-2.v1 | Anthropic’s claude-1.2 model family. |
spacy.Claude-1-3.v1 | Anthropic’s claude-1.3 model family. |
spacy.PaLM.v1 | Google’s PaLM model family. |
spacy.Dolly.v1 | Dolly models through HuggingFace. |
spacy.Falcon.v1 | Falcon models through HuggingFace. |
spacy.Mistral.v1 | Mistral models through HuggingFace. |
spacy.Llama2.v1 | Llama2 models through HuggingFace. |
spacy.StableLM.v1 | StableLM models through HuggingFace. |
spacy.OpenLLaMA.v1 | OpenLLaMA models through HuggingFace. |
LangChain models | LangChain models for API retrieval. |
Note that the chat model variants of Llama 2 are currently not supported. This is because they need a particular prompting setup and don’t add any discernible benefits in the use case of spacy-llm (i.e. no interactive chat) compared to the completion model variants.
Cache
Interacting with LLMs, either through an external API or a local instance, is costly. Since developing an NLP pipeline generally means a lot of exploration and prototyping, spacy-llm implements a built-in cache that stores batches of documents on disk to avoid reprocessing the same documents on each run.
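A sketch of enabling the cache in the component config (path and batch settings are illustrative):

```ini
[components.llm.cache]
@llm_misc = "spacy.BatchCache.v1"
path = "local-cache"
batch_size = 64
max_batches_in_mem = 4
```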
Various functions
Function | Description |
---|---|
spacy.FewShotReader.v1 | This function is registered in spaCy’s misc registry, and reads in examples from a .yml , .yaml , .json or .jsonl file. It uses srsly to read in these files and parses them depending on the file extension. |
spacy.FileReader.v1 | This function is registered in spaCy’s misc registry, and reads a file provided via path to return a str representation of its contents. This function is typically used to read Jinja files containing the prompt template. |
Normalizer functions | These functions provide simple normalizations for string comparisons, e.g. between a list of specified labels and a label given in the raw text of the LLM response. |