
tokenwiser
Example
```python
import spacy
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from tokenwiser.component import attach_sklearn_categoriser

X = [
    'i really like this post',
    'thanks for that comment',
    'i enjoy this friendly forum',
    'this is a bad post',
    'i dislike this article',
    'this is not well written',
]
y = ['pos', 'pos', 'pos', 'neg', 'neg', 'neg']

# Note that we're training a pipeline here via a single-batch `.fit()` method.
pipe = make_pipeline(CountVectorizer(), LogisticRegression()).fit(X, y)

nlp = spacy.load('en_core_web_sm')
# This is where we attach our pre-trained model as a pipeline step.
attach_sklearn_categoriser(nlp, pipe_name='silly_sentiment', estimator=pipe)
```
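To make the idea concrete, here is a minimal sketch of how a fitted scikit-learn pipeline can be wrapped as a custom spaCy component that writes its prediction to a `Doc` extension. This is a hypothetical re-implementation of the pattern, not tokenwiser's actual code; it uses a blank English pipeline so no model download is needed, and the component/extension name `silly_sentiment` mirrors the example above.

```python
# Hypothetical sketch of the attach-a-classifier pattern, not tokenwiser's
# actual implementation.
import spacy
from spacy.language import Language
from spacy.tokens import Doc
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

X = [
    'i really like this post',
    'thanks for that comment',
    'i enjoy this friendly forum',
    'this is a bad post',
    'i dislike this article',
    'this is not well written',
]
y = ['pos', 'pos', 'pos', 'neg', 'neg', 'neg']

# Train the scikit-learn pipeline on the whole batch at once.
pipe = make_pipeline(CountVectorizer(), LogisticRegression()).fit(X, y)

# Register a Doc extension to hold the prediction.
Doc.set_extension('silly_sentiment', default=None)

# Register a stateless component that runs the fitted pipeline on the doc text.
@Language.component('silly_sentiment')
def categorise(doc):
    doc._.silly_sentiment = pipe.predict([doc.text])[0]
    return doc

# A blank pipeline is enough for this sketch; no pretrained model required.
nlp = spacy.blank('en')
nlp.add_pipe('silly_sentiment')

doc = nlp('i enjoy this post')
print(doc._.silly_sentiment)
```

Because the component only reads `doc.text`, it can sit anywhere in the pipeline; attaching it last keeps it from interfering with tokenization or tagging.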
GitHub: koaning/tokenwiser