spaCy is a library for advanced natural language processing in Python and Cython.
Documentation and details: https://spacy.io/
spaCy is built on the very latest research, but it isn't researchware. It was designed from day 1 to be used in real products. It's commercial open-source software, released under the MIT license.
spaCy alternatives and similar packages
Based on the "Natural Language Processing" category.
Alternatively, view spaCy alternatives based on common mentions on social networks and blogs.
- funNLP (9.9, 4.8): A huge curated collection of Chinese NLP resources: sensitive-word and language detection, phone/ID-card/email extraction, name, synonym and antonym dictionaries, stopword and sentiment lexicons, Chinese word vectors and corpora, question-answering datasets, pretrained models (including a spaCy Chinese model), and many related tools, datasets and tutorials.
- Jieba (9.8, 0.0, L5): "Jieba" Chinese word segmentation
- NLTK (9.4, 9.0, L2): NLTK source
- Pattern (9.1, 0.0, L2): Web mining module for Python, with tools for scraping, natural language processing, machine learning, network analysis and visualization
- TextBlob (8.9, 0.0, L3): Simple, Pythonic text processing: sentiment analysis, part-of-speech tagging, noun phrase extraction, translation and more
- SnowNLP (8.7, 0.0, L4): Python library for processing Chinese text
- Stanza (8.5, 0.0): Official Stanford NLP Python library for many human languages
- pkuseg-python (8.5, 2.1): The pkuseg toolkit for multi-domain Chinese word segmentation
- pytext (8.5, 7.8): A natural language modeling framework based on PyTorch
- polyglot (6.5, 0.0): Multilingual text (NLP) processing toolkit
- PyTorch-NLP (6.4, 0.0): Basic utilities for PyTorch natural language processing (NLP)
- langid.py (6.4, 0.0, L3): Stand-alone language identification system
- textacy (6.3, 0.0, L3): NLP, before and after spaCy
- quepy (5.7, 0.0, L5): A Python framework to transform natural language questions into queries in a database query language
- IEPY (4.9, 0.0, L5): Information extraction in Python
- TextGrocery (4.5, 0.0, L1): A simple short-text classification tool based on LibLinear
- Lineflow (2.3, 1.3): A lightweight NLP data loader for all deep learning frameworks in Python
- stanfordnlp (2.1, 0.0): [Deprecated] This library has been renamed to "Stanza". Latest development at https://github.com/stanfordnlp/stanza
- Simplemma (1.6, 8.0): Simple multilingual lemmatizer for Python, especially useful for speed and efficiency
- pntl (0.9, 2.0): Practical Natural Language Processing Tools for Humans, built on top of Senna NLP predictions: part-of-speech (POS) tagging, chunking (CHK), named entity recognition (NER), semantic role labeling (SRL) and syntactic parsing (PSG) with skip-gram, all in Python, with more features planned. The website provides downloads for the Senna tool
- py3langid (0.7, 3.9): Faster, modernized fork of the language identification tool langid.py
* Code Quality Rankings and insights are calculated and provided by Lumnify.
They vary from L1 to L5 with "L5" being the highest.
Do you think we are missing an alternative of spaCy or a related project?
spaCy: Industrial-strength NLP
spaCy is a library for advanced Natural Language Processing in Python and Cython. It's built on the very latest research, and was designed from day one to be used in real products.
spaCy comes with pretrained pipelines and currently supports tokenization and training for 70+ languages. It features state-of-the-art speed and neural network models for tagging, parsing, named entity recognition, text classification and more, multi-task learning with pretrained transformers like BERT, as well as a production-ready training system and easy model packaging, deployment and workflow management. spaCy is commercial open-source software, released under the MIT license.
💫 Version 3.4 out now! Check out the release notes here.
| ⭐️ spaCy 101 | New to spaCy? Here's everything you need to know! |
| 📚 Usage Guides | How to use spaCy and its features. |
| 🚀 New in v3.0 | New features, backwards incompatibilities and migration guide. |
| 🪐 Project Templates | End-to-end workflows you can clone, modify and run. |
| 🎛 API Reference | The detailed reference for spaCy's API. |
| 📦 Models | Download trained pipelines for spaCy. |
| 🌌 Universe | Plugins, extensions, demos and books from the spaCy ecosystem. |
| 👩‍🏫 Online Course | Learn spaCy in this free and interactive online course. |
| 📺 Videos | Our YouTube channel with video tutorials, talks and more. |
| 🛠 Changelog | Changes and version history. |
| 💝 Contribute | How to contribute to the spaCy project and code base. |
|Get a custom spaCy pipeline, tailor-made for your NLP problem by spaCy's core developers. Streamlined, production-ready, predictable and maintainable. Start by completing our 5-minute questionnaire to tell us what you need and we'll be in touch! Learn more →|
💬 Where to ask questions
The spaCy project is maintained by the spaCy team. Please understand that we won't be able to provide individual support via email. We also believe that help is much more valuable if it's shared publicly, so that more people can benefit from it.
| 🚨 Bug Reports | GitHub Issue Tracker |
| 🎁 Feature Requests & Ideas | GitHub Discussions |
| 👩‍💻 Usage Questions | GitHub Discussions · Stack Overflow |
| 🗯 General Discussion | GitHub Discussions |
- Support for 70+ languages
- Trained pipelines for different languages and tasks
- Multi-task learning with pretrained transformers like BERT
- Support for pretrained word vectors and embeddings
- State-of-the-art speed
- Production-ready training system
- Linguistically-motivated tokenization
- Components for named entity recognition, part-of-speech tagging, dependency parsing, sentence segmentation, text classification, lemmatization, morphological analysis, entity linking and more
- Easily extensible with custom components and attributes
- Support for custom models in PyTorch, TensorFlow and other frameworks
- Built in visualizers for syntax and NER
- Easy model packaging, deployment and workflow management
- Robust, rigorously evaluated accuracy
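The "easily extensible with custom components" point above can be sketched with a blank pipeline, so no model download is needed. The component name `token_counter` and what it stores are illustrative choices, not part of spaCy's API:

```python
import spacy
from spacy.language import Language

# A toy custom component: counts tokens and stashes the result
# in Doc.user_data so later components (or callers) can read it.
@Language.component("token_counter")
def token_counter(doc):
    doc.user_data["n_tokens"] = len(doc)
    return doc

# spacy.blank() builds a pipeline with tokenization rules only,
# no trained model required.
nlp = spacy.blank("en")
nlp.add_pipe("token_counter")

doc = nlp("spaCy pipelines are easily extensible.")
print(doc.user_data["n_tokens"])
```

Components registered this way can be added to any pipeline by name, which is also how spaCy's own built-in components are wired in.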
📖 For more details, see the facts, figures and benchmarks.
⏳ Install spaCy
For detailed installation instructions, see the documentation.
- Operating system: macOS / OS X · Linux · Windows (Cygwin, MinGW, Visual Studio)
- Python version: Python 3.6+ (only 64 bit)
- Package managers: pip · conda (via conda-forge)
Using pip, spaCy releases are available as source packages and binary wheels.
Before you install spaCy and its dependencies, make sure that your pip, setuptools and wheel are up to date.

```bash
pip install -U pip setuptools wheel
pip install spacy
```
To install additional data tables for lemmatization and normalization, you can run `pip install spacy[lookups]` or install `spacy-lookups-data` separately. The lookups package is needed to create blank models with lemmatization data, and to lemmatize in languages that don't yet come with pretrained models and aren't powered by third-party libraries.
When using pip it is generally recommended to install packages in a virtual environment to avoid modifying system state:
```bash
python -m venv .env
source .env/bin/activate
pip install -U pip setuptools wheel
pip install spacy
```
You can also install spaCy from conda via the conda-forge channel. For the feedstock including the build recipe and configuration, check out the spacy-feedstock repository on conda-forge.

```bash
conda install -c conda-forge spacy
```
Some updates to spaCy may require downloading new statistical models. If you're running spaCy v2.0 or higher, you can use the validate command to check whether your installed models are compatible and, if not, print details on how to update them:

```bash
pip install -U spacy
python -m spacy validate
```
If you've trained your own models, keep in mind that your training and runtime inputs must match. After updating spaCy, we recommend retraining your models with the new version.
📖 For details on upgrading from spaCy 2.x to spaCy 3.x, see the migration guide.
📦 Download model packages
Trained pipelines for spaCy can be installed as Python packages. This
means that they're a component of your application, just like any other module.
Models can be installed using spaCy's download command, or manually by pointing pip to a path or URL.
| Available Pipelines | Detailed pipeline descriptions, accuracy figures and benchmarks. |
| Models Documentation | Detailed usage and installation instructions. |
| Training | How to train your own pipelines on your data. |
```bash
# Download best-matching version of specific model for your spaCy installation
python -m spacy download en_core_web_sm

# pip install .tar.gz archive or .whl from path or URL
pip install /Users/you/en_core_web_sm-3.0.0.tar.gz
pip install /Users/you/en_core_web_sm-3.0.0-py3-none-any.whl
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz
```
Loading and using models
To load a model, use spacy.load() with the model name or a path to the model data directory.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("This is a sentence.")
```
You can also import a model directly via its full name and then call its load() method with no arguments.

```python
import spacy
import en_core_web_sm

nlp = en_core_web_sm.load()
doc = nlp("This is a sentence.")
```
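If you don't have a trained pipeline installed yet, you can still experiment with spaCy's language-specific tokenization via a blank pipeline. A minimal sketch, assuming only that the spacy package itself is installed:

```python
import spacy

# spacy.blank() creates a pipeline with English tokenization rules
# but no trained components, so no model download is required.
nlp = spacy.blank("en")
doc = nlp("Don't forget: spaCy tokenizes punctuation, too.")
print([token.text for token in doc])
```

Note that a blank pipeline produces no linguistic annotations (no POS tags or entities); for those you need a trained pipeline such as en_core_web_sm.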
📖 For more info and examples, check out the models documentation.
⚒ Compile from source
The other way to install spaCy is to clone its GitHub repository and build it from source. That is the common way if you want to make changes to the code base. You'll need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, virtualenv and git installed. The compiler part is the trickiest. How to do that depends on your system.
| Ubuntu | Install system-level dependencies via apt-get: `sudo apt-get install build-essential python-dev git` |
| Mac | Install a recent version of XCode, including the so-called "Command Line Tools". macOS and OS X ship with Python and git preinstalled. |
| Windows | Install a version of the Visual C++ Build Tools or Visual Studio Express that matches the version that was used to compile your Python interpreter. |
For more details and instructions, see the documentation on compiling spaCy from source and the quickstart widget to get the right commands for your platform and Python version.
```bash
git clone https://github.com/explosion/spaCy
cd spaCy

python -m venv .env
source .env/bin/activate

# make sure you are using the latest pip
python -m pip install -U pip setuptools wheel

pip install -r requirements.txt
pip install --no-build-isolation --editable .
```
To install with extras:
```bash
pip install --no-build-isolation --editable .[lookups,cuda102]
```
🚦 Run tests
spaCy comes with an [extensive test suite](spacy/tests). In order to run the tests, you'll usually want to clone the repository and build spaCy from source. This will also install the required development dependencies and test utilities defined in the requirements.txt.

Alternatively, you can run pytest on the tests from within the installed spacy package. Don't forget to also install the test utilities via spaCy's requirements.txt:

```bash
pip install -r requirements.txt
python -m pytest --pyargs spacy
```
*Note that all licence references and agreements mentioned in the spaCy README section above are relevant to that project's source code only.