Changelog History
v2.1.8 Changes

August 08, 2019

New features and improvements

- NEW: Alpha tokenization support for Serbian.
- Improve language data for Urdu.
- Support installing and loading model packages in the same session.

Bug fixes
- Fix issue #4002: Make `PhraseMatcher` work as expected for the `NORM` attribute (see the sketch after this list).
- Fix issue #4063: Improve docs on `Matcher` attributes.
- Fix issue #4068: Make Korean work as expected on Python 2.7.
- Fix issue #4069: Add `validate` option to `EntityRuler`.
- Fix issue #4074: Raise error if annotation dict in simple training style has unexpected keys.
- Fix issue #4081: Fix typo in `pyproject.toml`.
- Fix handling of keyword arguments in `Language.evaluate`.
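A minimal sketch of the two rule-based changes above (the `NORM` attribute on `PhraseMatcher` and the new `validate` option on `EntityRuler`), assuming a blank English pipeline; the example text and patterns are invented for illustration:

```python
import spacy
from spacy.matcher import PhraseMatcher
from spacy.pipeline import EntityRuler

nlp = spacy.blank("en")

# Issue #4002: match on the NORM attribute instead of the verbatim text.
matcher = PhraseMatcher(nlp.vocab, attr="NORM")
matcher.add("SEARCH_ENGINE", None, nlp("google"))

# Issue #4069: validate=True checks patterns when they are added.
ruler = EntityRuler(nlp, validate=True)
ruler.add_patterns([{"label": "ORG", "pattern": "Google"}])
nlp.add_pipe(ruler)

doc = nlp("I found it on Google.")
print([doc[start:end].text for _, start, end in matcher(doc)])
print([(ent.text, ent.label_) for ent in doc.ents])
```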
Documentation and examples
- Improve `Matcher` attribute docs.
- Fix various typos and inconsistencies.
Contributors
Thanks to @akornilo, @mirfan899, @veer-bains, @seppeljordan, @Pavle992, @svlandeg, @jenojp and @adrianeboyd for the pull requests and contributions.
v2.1.7 Changes

August 01, 2019

New features and improvements
- Add `Token.tensor` and `Span.tensor` attributes.
- Support simple training format of `(text, annotations)` instead of only `(doc, gold)` for `nlp.evaluate` (see the sketch after this list).
- Add support for `"lang_factory"` setting in model `meta.json` (see #4031).
- Also support `"requirements"` in `meta.json` to define packages for setup's `install_requires`.
- Improve `Pipe` base class methods and make them less presumptuous.
- Improve Danish and Korean tokenization.
- Improve error messages when deserializing a model fails.
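A minimal sketch of the simpler evaluation format and the new tensor attributes, assuming a small English model is installed; the example texts and entity offsets are invented for illustration:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# (text, annotations) pairs can now be passed to nlp.evaluate directly,
# instead of pre-built (Doc, GoldParse) pairs.
eval_data = [
    ("Apple is looking at buying a U.K. startup", {"entities": [(0, 5, "ORG")]}),
    ("Berlin is a nice city", {"entities": [(0, 6, "GPE")]}),
]
scorer = nlp.evaluate(eval_data)
print(scorer.ents_f)  # NER F-score

# Token.tensor and Span.tensor expose the context-sensitive vectors.
doc = nlp("Apple is looking at buying a U.K. startup")
print(doc[0].tensor.shape, doc[0:3].tensor.shape)
```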
Bug fixes
- Fix issue #3669, #3962: Fix dependency copy in `Span.as_doc` that could cause a segfault.
- Fix issue #3968: Fix bug in per-entity scores.
- Fix issue #4000: Improve entity linking API.
- Fix issue #4022: Fix error when Korean text contains special characters.
- Fix issue #4030: Handle edge case when calling `TextCategorizer.predict` with an empty `Doc`.
- Fix issue #4045: Correct `Span.sent` docs.
- Fix issue #4048: Fix `init-model` command if there's no vocab.
- Fix issue #4052: Improve per-type scoring of NER.
- Fix issue #4054: Ensure the `lang` of `nlp` and `nlp.vocab` stay consistent.
- Fix bugs in `Token.similarity` and `Span.similarity` when called via hook.
Documentation and examples
- Add documentation for the `gold.align` helper.
- Add a more explicit section on processing text.
- Improve documentation on disabling pipeline components.
- Fix various typos and inconsistencies.
Contributors
Thanks to @sorenlind, @pmbaumgartner, @svlandeg, @FallakAsad, @BreakBB, @adrianeboyd, @polm, @b1uec0in, @mdaudali and @ejarkm for the pull requests and contributions.
v2.1.6 Changes

July 12, 2019

Bug fixes
- Fix issue #3958: Fix order of symbols that caused tag maps to be out of sync.
v2.1.5 Changes

July 12, 2019

New features and improvements
- NEW: Base language data for Marathi and Korean (via `mecab-ko`, `mecab-ko-dic` and `natto-py`).
- Improve language data for Lithuanian, Spanish, Kannada, French, Norwegian and Hindi.
- Add evaluation metrics per entity type.
- Add resume logic to `spacy pretrain`.
- Add optional `id` property to `EntityRuler` patterns (see the sketch after this list).
- Better introspection and IDE autocomplete for custom extension attributes.
- Make `Doc.is_sentenced` always return `True` for single-token docs.
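A minimal sketch of the optional `id` property on `EntityRuler` patterns, assuming a blank English pipeline; the patterns and the shared identifier are invented for illustration:

```python
import spacy
from spacy.pipeline import EntityRuler

nlp = spacy.blank("en")
ruler = EntityRuler(nlp)

# Several surface forms can share one "id", so matched entities can be
# traced back to a common identifier later on.
ruler.add_patterns([
    {"label": "GPE", "pattern": "San Francisco", "id": "san-francisco"},
    {"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "fran"}], "id": "san-francisco"},
])
nlp.add_pipe(ruler)

doc = nlp("I moved from San Fran to San Francisco.")
print([(ent.text, ent.label_) for ent in doc.ents])
```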
Bug fixes
- Fix issue #3490: Add evaluation metrics per entity type to `Scorer`.
- Fix issue #3526: Serialize `EntityRuler` settings correctly.
- Fix issue #3558: Improve `E024` error message for incorrect `GoldParse`.
- Fix issue #3611: Fix bug when setting `ngram` parameter in text classifier.
- Fix issue #3625: Improve default punctuation rules for Hindi.
- Fix issue #3707: Improve introspection of custom attributes.
- Fix issue #3737: Check if component is callable in `Language.replace_pipe`.
- Fix issue #3743: Fix documentation of `lex_id`.
- Fix issue #3749: Change vector training script to work with latest Gensim.
- Fix issue #3762, #3934: Make `Doc.is_sentenced` default to `True` for single-token `Doc`s.
- Fix issue #3802: Fix typo in docs example.
- Fix issue #3811: Fix type of `--seed` option in `spacy pretrain`.
- Fix issue #3822: Allow passing `PhraseMatcher` arguments to `EntityRuler`.
- Fix issue #3839: Ensure the `Matcher` returns correct match IDs when used with operators.
- Fix issue #3840: Improve error messages in `spacy pretrain`.
- Fix issue #3853: Rename vectors if multiple models are loaded to prevent clashes.
- Fix issue #3859: Update `pretrain` to prevent unintended overwriting of weight files.
- Fix issue #3862: Fix matcher callback example.
- Fix issue #3868: Add `"v.s."` to English tokenizer exceptions.
- Fix issue #3869: Make `Doc.count_by` work as expected.
- Fix issue #3880: Fix unflatten padding in Thinc when last element is empty.
- Fix issue #3882: Exclude `user_data` when copying doc in displaCy.
- Fix issue #3892: Update `Tokenizer` initialization docs.
- Fix issue #3912: Make text classifier raise more friendly errors.
Documentation and examples
- Add documentation for `Scorer`, `Language.evaluate` and `gold.docs_to_json`.
- Fix various typos and inconsistencies.
Contributors
Thanks to @BreakBB, @ujwal-narayan, @estr4ng7d, @maknotavailable, @ramananbalakrishnan, @nipunsadvilkar, @NirantK, @munozbravo, @intrafindBreno, @Azagh3l, @jarib, @tokestermw, @polm, @skrcode, @kabirkhan, @demongolem, @elbaulp, @clarus, @BramVanroy, @rokasramas, @askhogan, @khellan, @kognate, @cedar101 and @yash1994 for the pull requests and contributions.
v2.1.4 Changes

May 11, 2019

New features and improvements
- NEW: `util.filter_spans` helper to filter duplicates and overlaps from a list of `Span` objects (see the sketch after this list).
- Improve language data for Thai, Japanese, Indonesian and Dutch.
- Add `--n-save-every` to `spacy pretrain` and rename `--nr-iter` to `--n-iter` for consistency.
- Add `--return-scores` flag to `spacy evaluate` to return a dict.
- Add `--n-early-stopping` option to `spacy train` to define the maximum number of iterations without dev accuracy improvements.
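A minimal sketch of the new `util.filter_spans` helper, assuming a blank English pipeline; the overlapping spans are constructed by hand for illustration:

```python
import spacy
from spacy.util import filter_spans

nlp = spacy.blank("en")
doc = nlp("The New York City subway never sleeps")

# Two overlapping candidate spans: "New York" and "New York City".
spans = [doc[1:3], doc[1:4]]

# filter_spans keeps the longest spans and drops duplicates and overlaps.
print([span.text for span in filter_spans(spans)])
```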
Bug fixes
- Fix issue #3307: Fix symlink creation to show error on Windows.
- Fix issue #3473: Fix GPU training for text classification.
- Fix issue #3475: Change favicon.
- Fix issue #3482: Add Estonian base support to documentation.
- Fix issue #3484: Ensure lemmatization is always consistent between sessions.
- Fix issue #3521: Add variations of contractions to English stop words.
- Fix issue #3523: Make `spacy convert` correctly default to `json`.
- Fix issue #3525, #3551, #3572: Fix problem that'd cause lemmas to not be lowercase.
- Fix issue #3531: Don't make `"settings"` or `"title"` required in displaCy data.
- Fix issue #3533: Remove non-existent example from docs.
- Fix issue #3546: Make sure path in `GoldParse.__del__` is a string.
- Fix issue #3549: Ensure match pattern error isn't raised on empty errors list.
- Fix issue #3561: Fix `DependencyParser.predict` docs.
- Fix issue #3598: Allow `jupyter=False` to override Jupyter mode in `displacy`.
- Fix issue #3620: Fix bug in `.iob` converter.
- Fix issue #3628: Relax `jsonschema` pin.
- Fix issue #3667: Fix offset bug in loading pre-trained word2vec.
- Fix issue #3679: Update glossary to include missing labels in `spacy.explain`.
- Fix issue #3680: Re-add missing universe README.
- Fix issue #3681: Rewrite information extraction example to use `Doc.retokenize`.
- Fix issue #3692: Fix return value in `Language.update` docs.
- Fix issue #3694: Make `"text"` in `spacy pretrain` optional when `"tokens"` is provided.
- Fix issue #3701: Improve `Token.prob` and `Lexeme.prob` docs.
- Fix issue #3708: Fix error in regex matcher examples.
- Fix issue #3713: Call `rmtree` and `copytree` with strings in `spacy train`.
- Fix issue #3720: Add version tag to `--base-model` argument in `spacy train` docs.
Documentation and examples
- Add free interactive spaCy course.
- Fix various typos and inconsistencies.
- Add new projects to the spaCy universe.
Contributors
Thanks to @svlandeg, @wannaphongcom, @Bharat123rox, @DuyguA, @SamuelLKane, @graus, @HiromuHota, @jeannefukumaru, @ivigamberdiev, @socool, @yvespeirsman, @lemontheme, @Dobita21, @w4nderlust, @pierremonico, @bryant1410, @celikomer, @xssChauhan, @kowaalczyk, @BreakBB, @fizban99, @tokestermw, @bjascob, @pickfire, @yaph, @amitness, @henry860916, @d5555, @BramVanroy, @F0rge1cE, @richardpaulhudson, @ldorigo, @aaronkub and @devforfu for the pull requests and contributions.
v2.1.3 Changes

March 23, 2019

New features and improvements
- Allow customizing punctuation characters in the sentencizer and make it serializable.
- Add new `"bow"` architecture for `TextCategorizer` to do faster bag-of-words text classification (see the sketch after this list).
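A minimal sketch of selecting the new `"bow"` architecture, assuming a blank English pipeline; the labels and example text are invented for illustration:

```python
import spacy

nlp = spacy.blank("en")

# The "bow" architecture trades a little accuracy for much faster
# bag-of-words text classification.
textcat = nlp.create_pipe(
    "textcat", config={"architecture": "bow", "exclusive_classes": True}
)
textcat.add_label("POSITIVE")
textcat.add_label("NEGATIVE")
nlp.add_pipe(textcat)

nlp.begin_training()
print(nlp("This is really great!").cats)
```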
Bug fixes
- Fix issue #3433, #3458: Fix mismatch of classes in parser after serialization.
- Fix issue #3464: Fix training loop in `train_textcat.py` example.
- Fix issue #3468: Make sentencizer set `Token.is_sent_start` correctly.
- Fix bug in the `"ensemble"` `TextCategorizer` architecture that prevented the unigram bag-of-words submodel from working properly.
Contributors
Thanks to @chkoar for the pull request!
v2.1.2 Changes

March 22, 2019

Bug fixes
- Fix issue #3356: Fix handling of unicode ranges in regular expressions on Python 2.
- Fix issue #3432: Update `wasabi` to better handle non-UTF-8 terminals.
- Fix issue #3445: Update docs on `label` argument in `Span.__init__`.
- Fix issue #3455: Bring English `tag_map` in line with UD Treebank.
Documentation and examples
- Add `--init-tok2vec` argument to `train_textcat.py` example.
- Fix various typos and inconsistencies.
v2.1.1 Changes

March 20, 2019

New features and improvements
- Raise error if user is running a narrow unicode build.
- Move `ud_train`, `ud_evaluate` and other UD scripts from the CLI to `/bin` in the repo only.
- Improve accuracy of `spacy pretrain` by implementing cosine loss.
Bug fixes
- Fix issue #3421: Update docs and raise error for narrow unicode builds.
- Fix issue #3427: Correct mistake in French lemmatizer.
- Fix issue #3431: Make `Doc.vector` and `Doc.vector_norm` work as expected on GPU.
- Fix issue #3437: Fix installation problem on GPU.
- Fix issue #3439, #3446: Don't include UD scripts in `spacy.cli`.
Contributors
Thanks to @mhham and @Bharat123rox for the pull requests!
v2.1.0 Changes

March 18, 2019

> This version of spaCy requires downloading new models. You can use the `spacy validate` command to find out which models need updating, and print update instructions. If you've been training your own models, you'll need to retrain them with the new version.

New features and improvements
Tagger, Parser, NER and Text Categorizer
- NEW: Experimental ULMFit/BERT/Elmo-like pretraining (see #2931) via the new `spacy pretrain` command. This pre-trains the CNN using BERT's cloze task. A new trick we're calling Language Modelling with Approximate Outputs is used to apply the pre-training to smaller models. The pre-training outputs CNN and embedding weights that can be used in `spacy train`, using the new `-t2v` argument.
- NEW: Allow parser to do joint word segmentation and parsing. If you pass in data where the tokenizer over-segments, the parser now learns to merge the tokens.
- Make parser, tagger and NER faster, through better hyperparameters.
- Add simpler, GPU-friendly option to `TextCategorizer`, and allow setting `exclusive_classes` and `architecture` arguments on initialization (see the sketch after this list).
- Add `EntityRecognizer.labels` property.
- Remove document length limit during training, by implementing faster Levenshtein alignment.
- Use Thinc v7.0, which defaults to single-thread with the fast `blis` kernel for matrix multiplication. Parallelisation should be performed at the task level, e.g. by running more containers.
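A minimal sketch of the new `EntityRecognizer.labels` property and the `TextCategorizer` initialization arguments, assuming a small English model is installed; the `"simple_cnn"` architecture name reflects the documented config options:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# New read-only property listing the entity types the NER model predicts.
print(nlp.get_pipe("ner").labels)

# The GPU-friendly text classifier and exclusive classes can now be chosen
# when the component is created.
textcat = nlp.create_pipe(
    "textcat", config={"exclusive_classes": True, "architecture": "simple_cnn"}
)
nlp.add_pipe(textcat, last=True)
```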
Models & Language Data
- NEW: 2-3 times faster tokenization across all languages at the same accuracy!
- NEW: Small accuracy improvements for parsing, tagging and NER for 6+ languages.
- NEW: The English and German models are now available under the MIT license.
- NEW: Statistical models for Greek.
- NEW: Alpha support for Tamil, Ukrainian and Kannada, and base language classes for Afrikaans, Bulgarian, Czech, Icelandic, Lithuanian, Latvian, Slovak, Slovenian and Albanian.
- Improve loading time of `French` by ~30%.
- Add `Vocab.writing_system` (populated via the language data) to expose settings like writing direction (see the sketch after this list).
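A minimal sketch of the new `Vocab.writing_system` property, assuming blank English and Arabic pipelines; the exact keys of the returned dict are shown as an assumption:

```python
import spacy

# writing_system is populated from each language's shared language data.
en = spacy.blank("en")
ar = spacy.blank("ar")

print(en.vocab.writing_system)  # e.g. {"direction": "ltr", ...}
print(ar.vocab.writing_system)  # e.g. {"direction": "rtl", ...}
```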
CLI
- NEW: `pretrain` command for ULMFit/BERT/Elmo-like pretraining (see #2931).
- NEW: New `ud-train` command, to train and evaluate using the CoNLL 2017 shared task data.
- Check if model is already installed before downloading it via `spacy download`.
- Pass additional arguments of the `download` command to `pip` to customise installation.
- Improve `train` command by letting `GoldCorpus` stream data, instead of loading into memory.
- Improve `init-model` command, including support for lexical attributes and word vectors, using a variety of formats. This replaces the `spacy vocab` command, which is now deprecated.
- Add support for multi-task objectives to `train` command.
- Add support for data augmentation to `train` command.
Other
- NEW: Enhanced pattern API for the rule-based `Matcher` (see #1971).
- NEW: `Doc.retokenize` context manager for merging and splitting tokens more efficiently.
- NEW: Add support for custom pipeline component factories via entry points (#2348).
- NEW: Implement fastText vectors with subword features.
- NEW: Built-in rule-based NER component to add entities based on match patterns (see #2513).
- NEW: Allow `PhraseMatcher` to match on token attributes other than `ORTH`, e.g. `LOWER` (for case-insensitive matching) or even `POS` or `TAG` (see the sketch after this list).
- NEW: Replace `ujson`, `msgpack`, `msgpack-numpy`, `pickle`, `cloudpickle` and `dill` with our own package `srsly` to centralise dependencies and allow binary wheels.
- NEW: `Doc.to_json()` method which outputs data in spaCy's training format. This will be the only place where the format is hard-coded (see #2932).
- NEW: Built-in `EntityRuler` component to make it easier to build rule-based NER and combinations of statistical and rule-based systems.
- NEW: `gold.spans_from_biluo_tags` helper that returns `Span` objects, e.g. to overwrite the `doc.ents`.
- Add warnings if the `.similarity` method is called with empty vectors or without word vectors.
- Improve the rule-based `Matcher` and add a `return_matches` keyword argument to `Matcher.pipe` to yield `(doc, matches)` tuples instead of only `Doc` objects, and `as_tuples` to add context to the `Doc` objects.
- Make stop words via `Token.is_stop` and `Lexeme.is_stop` case-insensitive.
- Accept `"TEXT"` as an alternative to `"ORTH"` in `Matcher` patterns.
- Use `black` for auto-formatting `.py` source and optimise the codebase using `flake8`. You can now run `flake8 spacy` and it should return no errors or warnings. See `CONTRIBUTING.md` for details.
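A minimal sketch of case-insensitive phrase matching and the enhanced token-pattern API, assuming a blank English pipeline; the terms and patterns are invented for illustration:

```python
import spacy
from spacy.matcher import Matcher, PhraseMatcher

nlp = spacy.blank("en")

# Match on LOWER instead of the verbatim text, so casing doesn't matter.
phrase_matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
phrase_matcher.add("OBAMA", None, nlp("Barack Obama"))

# The enhanced pattern API supports set membership and rich comparison,
# e.g. an IN set on LOWER and a LENGTH constraint.
matcher = Matcher(nlp.vocab)
matcher.add("HELLO_WORLD", None, [
    {"LOWER": {"IN": ["hello", "hi"]}},
    {"IS_PUNCT": True, "OP": "?"},
    {"LOWER": "world", "LENGTH": {">=": 5}},
])

doc = nlp("Hello, world! I saw BARACK OBAMA yesterday.")
print([doc[s:e].text for _, s, e in phrase_matcher(doc)])
print([doc[s:e].text for _, s, e in matcher(doc)])
```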
Bug fixes
- Fix issue #795: Fix behaviour of `Token.conjuncts`.
- Fix issue #1487: Add `Doc.retokenize()` context manager.
- Fix issue #1537: Make `Span.as_doc` return a copy, not a view.
- Fix issue #1574: Make sure stop words are available in medium and large English models.
- Fix issue #1585: Prevent parser from predicting unseen classes.
- Fix issue #1642: Replace `regex` with `re` and speed up tokenization.
- Fix issue #1665: Correct typos in symbol `Animacy_inan` and add `Animacy_nhum`.
- Fix issue #1748, #1798, #2756, #2934: Add simpler GPU-friendly option to `TextCategorizer`.
- Fix issue #1773: Prevent tokenizer exceptions from setting `POS` but not `TAG`.
- Fix issue #1782, #2343: Fix training on GPU.
- Fix issue #1816: Allow custom `Language` subclasses via entry points.
- Fix issue #1865: Correct licensing of `it_core_news_sm` model.
- Fix issue #1889: Make stop words case-insensitive.
- Fix issue #1903: Add `relcl` dependency label to symbols.
- Fix issue #1963: Resize `Doc.tensor` when merging spans.
- Fix issue #1971: Update `Matcher` engine to support regex, extension attributes and rich comparison.
- Fix issue #2014: Make `Token.pos_` writeable.
- Fix issue #2091: Fix `displacy` support for RTL languages.
- Fix issue #2203, #3268: Prevent bad interaction of lemmatizer and tokenizer exceptions.
- Fix issue #2329: Correct `TextCategorizer` and `GoldParse` API docs.
- Fix issue #2369: Respect pre-defined warning filters.
- Fix issue #2390: Support setting lexical attributes during retokenization.
- Fix issue #2396: Fix `Doc.get_lca_matrix`.
- Fix issue #2464, #3009: Fix behaviour of `Matcher`'s `?` quantifier.
- Fix issue #2482: Fix serialization when parser model is empty.
- Fix issue #2512, #2153: Fix issue with deserialization into non-empty vocab.
- Fix issue #2603: Improve handling of missing NER tags.
- Fix issue #2644: Add table explaining training metrics to docs.
- Fix issue #2648: Fix `KeyError` in `Vectors.most_similar`.
- Fix issue #2671, #2675: Fix incorrect match ID on some patterns.
- Fix issue #2693: Only use `'sentencizer'` as built-in sentence boundary component name.
- Fix issue #2728: Fix HTML escaping in `displacy` NER visualization and correct API docs.
- Fix issue #2740: Add ability to pass additional arguments to pipeline components.
- Fix issue #2754, #3028: Make `NORM` a `Token` attribute instead of a `Lexeme` attribute to allow setting context-specific norms in tokenizer exceptions.
- Fix issue #2769: Fix issue that'd cause segmentation fault when calling `EntityRecognizer.add_label`.
- Fix issue #2772: Fix bug in sentence starts for non-projective parses.
- Fix issue #2779: Fix handling of pre-set entities.
- Fix issue #2782: Make `like_num` work with prefixed numbers.
- Fix issue #2833: Raise better error if `Token` or `Span` are pickled.
- Fix issue #2838: Add `Retokenizer.split` method to split one token into several.
- Fix issue #2869: Make `doc[0].is_sent_start == True`.
- Fix issue #2870: Make it illegal for the entity recognizer to predict whitespace tokens as `B`, `L` or `U`.
- Fix issue #2871: Fix vectors for reserved words.
- Fix issue #2901: Fix issue with first call of `nlp` in Japanese (MeCab).
- Fix issue #2924: Make IDs of displaCy arcs more unique to avoid clashes.
- Fix issue #3012: Fix clobber of `Doc.is_tagged` in `Doc.from_array`.
- Fix issue #3027: Allow `Span` to take unicode value for `label` argument.
- Fix issue #3036: Support mutable default arguments in extension attributes.
- Fix issue #3048: Raise better errors for uninitialized pipeline components.
- Fix issue #3064: Allow single string attributes in `Doc.to_array`.
- Fix issue #3093, #3067: Set `vectors.name` correctly when exporting model via CLI.
- Fix issue #3112: Make sure entity types are added correctly on GPU.
- Fix issue #3191: Fix pickling of `Japanese`.
- Fix issue #3122: Correct docs of `Token.subtree` and `Span.subtree`.
- Fix issue #3128: Improve error handling in converters.
- Fix issue #3248: Fix `PhraseMatcher` pickling and make `__len__` consistent.
- Fix issue #3274: Make `Token.sent` work as expected without the parser.
- Fix issue #3277: Add en/em dash to tokenizer prefixes and suffixes.
- Fix issue #3346: Expose Japanese stop words in language class.
- Fix issue #3357: Update displaCy examples in docs to correctly show `Token.pos_`.
- Fix issue #3345: Fix NER when preset entities cross sentence boundaries.
- Fix issue #3348: Don't use `numpy` directly for similarity.
- Fix issue #3366: Improve converters, training data formats and docs.
- Fix issue #3369: Fix `#egg` fragments in direct downloads.
- Fix issue #3382: Make `Doc.from_array` consistent with `Doc.to_array`.
- Fix issue #3398: Don't set extension attributes in language classes.
- Fix issue #3373: Merge and improve `conllu` converters.
- Fix serialization of custom tokenizer if not all functions are defined.
- Fix bugs in beam-search training objective.
- Fix problems with model pickling.
Backwards incompatibilities
- This version of spaCy requires downloading new models. You can use the `spacy validate` command to find out which models need updating, and print update instructions.
- If you've been training your own models, you'll need to retrain them with the new version.
- Due to difficulties linking our new `blis` for faster platform-independent matrix multiplication, v2.1.x currently doesn't work on Python 2.7 on Windows. We expect this to be corrected in the future.

While the `Matcher` API is fully backwards compatible, its algorithm has changed to fix a number of bugs and performance issues. This means that the `Matcher` in v2.1.x may produce different results compared to the `Matcher` in v2.0.x.

The deprecated `Doc.merge` and `Span.merge` methods still work, but you may notice that they now run slower when merging many objects in a row. That's because the merging engine was rewritten to be more reliable and to support more efficient merging in bulk. To take advantage of this, you should rewrite your logic to use the `Doc.retokenize` context manager and perform as many merges as possible together in the `with` block.

```diff
- doc[1:5].merge()
- doc[6:8].merge()
+ with doc.retokenize() as retokenizer:
+     retokenizer.merge(doc[1:5])
+     retokenizer.merge(doc[6:8])
```
The serialization methods `to_disk`, `from_disk`, `to_bytes` and `from_bytes` now support a single `exclude` argument to provide a list of string names to exclude. The docs have been updated to list the available serialization fields for each class. The `disable` argument on the `Language` serialization methods has been renamed to `exclude` for consistency.

```diff
- nlp.to_disk("/path", disable=["parser", "ner"])
+ nlp.to_disk("/path", exclude=["parser", "ner"])
- data = nlp.tokenizer.to_bytes(vocab=False)
+ data = nlp.tokenizer.to_bytes(exclude=["vocab"])
```
The `.pos` value for several common English words has changed, due to corrections to long-standing mistakes in the English tag map (see #593, #3311).

For better compatibility with the Universal Dependencies data, the lemmatizer now preserves capitalization, e.g. for proper nouns (see #3256).
The keyword argument `n_threads` on the `.pipe` methods is now deprecated, as the v2.x models cannot release the global interpreter lock. (Future versions may introduce an `n_process` argument for parallel inference via multiprocessing.)

The `Doc.print_tree` method is now deprecated in favour of a unified `Doc.to_json` method, which outputs data in the same format as the expected JSON training data.

The built-in rule-based sentence boundary detector is now only called `'sentencizer'`; the name `'sbd'` is deprecated.

```diff
- sentence_splitter = nlp.create_pipe('sbd')
+ sentence_splitter = nlp.create_pipe('sentencizer')
```
The `is_sent_start` attribute of the first token in a `Doc` now correctly defaults to `True`. It previously defaulted to `None`.

The `spacy train` command now lets you specify a comma-separated list of pipeline component names, instead of separate flags like `--no-parser` to disable components. This is more flexible and also handles custom components out-of-the-box.

```diff
- $ spacy train en /output train_data.json dev_data.json --no-parser
+ $ spacy train en /output train_data.json dev_data.json --pipeline tagger,ner
```
The `spacy init-model` command now uses a `--jsonl-loc` argument to pass in a newline-delimited JSON (JSONL) file containing one lexical entry per line, instead of separate `--freqs-loc` and `--clusters-loc` arguments.

```diff
- $ spacy init-model en ./model --freqs-loc ./freqs.txt --clusters-loc ./clusters.txt
+ $ spacy init-model en ./model --jsonl-loc ./vocab.jsonl
```

Also note that some of the model licenses have changed: `it_core_news_sm` is now correctly licensed under CC BY-NC-SA 3.0, and all English and German models are now published under the MIT license.
Benchmarks
| Model | Language | Version | UAS | LAS | POS | NER F | Vec | Size |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| `en_core_web_sm` | English | 2.1.0 | 91.5 | 89.7 | 96.8 | 85.9 | - | 10 MB |
| `en_core_web_md` | English | 2.1.0 | 91.8 | 90.0 | 96.9 | 86.6 | ✓ | 90 MB |
| `en_core_web_lg` | English | 2.1.0 | 91.8 | 90.1 | 97.0 | 86.6 | ✓ | 788 MB |
| `de_core_news_sm` | German | 2.1.0 | 90.7 | 88.6 | 96.3 | 83.1 | - | 10 MB |
| `de_core_news_md` | German | 2.1.0 | 91.2 | 89.4 | 96.6 | 83.8 | ✓ | 210 MB |
| `es_core_news_sm` | Spanish | 2.1.0 | 90.4 | 87.3 | 96.9 | 89.5 | - | 10 MB |
| `es_core_news_md` | Spanish | 2.1.0 | 91.0 | 88.2 | 97.2 | 89.7 | ✓ | 69 MB |
| `pt_core_news_sm` | Portuguese | 2.1.0 | 89.1 | 85.9 | 80.4 | 88.9 | - | 12 MB |
| `fr_core_news_sm` | French | 2.1.0 | 87.6 | 84.7 | 94.5 | 82.6 | - | 14 MB |
| `fr_core_news_md` | French | 2.1.0 | 89.1 | 86.4 | 95.3 | 83.1 | ✓ | 82 MB |
| `it_core_news_sm` | Italian | 2.1.0 | 91.0 | 87.3 | 95.8 | 86.1 | - | 10 MB |
| `nl_core_news_sm` | Dutch | 2.1.0 | 83.7 | 77.6 | 91.6 | 87.0 | - | 10 MB |
| `el_core_news_sm` | Greek | 2.1.0 | 84.4 | 80.6 | 94.6 | 71.6 | - | 10 MB |
| `el_core_news_md` | Greek | 2.1.0 | 88.3 | 85.0 | 96.6 | 81.1 | ✓ | 126 MB |
| `xx_ent_wiki_sm` | Multi | 2.1.0 | - | - | - | 81.3 | - | 3 MB |

> UAS: Unlabelled dependencies (parser). LAS: Labelled dependencies (parser). POS: Part-of-speech tags (fine-grained tags, i.e. `Token.tag_`). NER F: Named entities (F-score). Vec: Model contains word vectors. Size: Model file size (zipped archive).

Documentation and examples
Although it looks pretty much the same, we've rebuilt the entire documentation using Gatsby and MDX. It's now an even faster progressive web app and allows us to write all content entirely in Markdown, without having to compromise on easy-to-use custom UI components. We're hoping that the Markdown source will make it even easier to contribute to the documentation. For more details, check out the styleguide and source.
While converting the pages to Markdown, we've also fixed a bunch of typos, improved the existing pages and added some new content:
- Usage Guide: Rule-based Matching. How to use the `Matcher`, `PhraseMatcher` and the new `EntityRuler`, and write powerful components to combine statistical models and rules.
- Usage Guide: Saving and Loading. Everything you need to know about serialization, and how to save and load pipeline components, package your spaCy models as Python modules and use entry points.
- Usage Guide: Merging and Splitting. How to retokenize a `Doc` using the new `retokenize` context manager, merge spans into single tokens and split single tokens into multiple.
- Universe: Videos and Podcasts
- API: `EntityRuler`
- API: `SentenceSegmenter`
- API: Pipeline functions
Contributors
Thanks to @DuyguA, @giannisdaras, @mgogoulos, @louridas, @skrcode, @gavrieltal, @svlandeg, @jarib, @alvaroabascar, @kbulygin, @moreymat, @mirfan899, @ozcankasal, @willprice, @alvations, @amperinet, @retnuh, @Loghijiaha, @DeNeutoy, @gavrieltal, @boena, @BramVanroy, @pganssle, @foufaster, @adrianeboyd, @maknotavailable, @pierremonico, @lauraBaakman, @juliamakogon, @Gizzio, @Abhijit-2592, @akki2825, @grivaz, @roshni-b, @mpuig, @mikelibg, @danielkingai2, @adrienball and @Poluglottos for the pull requests and contributions.
v2.1.0.a13 Changes

March 12, 2019

This is an alpha pre-release of spaCy v2.1.0, available on pip as `spacy-nightly`. It's not intended for production use. See here for the updated nightly docs.

```
pip install -U spacy-nightly
```

If you want to test the new version, we recommend using a new virtual environment. Also make sure to download the new models; see below for details and benchmarks.
> This nightly release currently doesn't work on Python 2.7 on Windows, due to difficulties compiling our new matrix multiplication dependency `blis` in that environment. We expect this can be corrected in future.

New features and improvements
Tagger, Parser, NER and Text Categorizer
- NEW: Experimental ULMFit/BERT/Elmo-like pretraining (see #2931) via the new `spacy pretrain` command. This pre-trains the CNN using BERT's cloze task. A new trick we're calling Language Modelling with Approximate Outputs is used to apply the pre-training to smaller models. The pre-training outputs CNN and embedding weights that can be used in `spacy train`, using the new `-t2v` argument.
- NEW: Allow parser to do joint word segmentation and parsing. If you pass in data where the tokenizer over-segments, the parser now learns to merge the tokens.
- Make parser, tagger and NER faster, through better hyperparameters.
- Add simpler, GPU-friendly option to `TextCategorizer`, and allow setting `exclusive_classes` and `architecture` arguments on initialization.
- Add `EntityRecognizer.labels` property.
- Remove document length limit during training, by implementing faster Levenshtein alignment.
- Use Thinc v7.0, which defaults to single-thread with the fast `blis` kernel for matrix multiplication. Parallelisation should be performed at the task level, e.g. by running more containers.
Models & Language Data
- NEW: 2-3 times faster tokenization across all languages at the same accuracy!
- NEW: Small accuracy improvements for parsing, tagging and NER for 6+ languages.
- NEW: The English and German models are now available under the MIT license.
- NEW: Statistical models for Greek.
- NEW: Alpha support for Tamil, Ukrainian and Kannada, and base language classes for Afrikaans, Bulgarian, Czech, Icelandic, Lithuanian, Latvian, Slovak, Slovenian and Albanian.
- Improve loading time of `French` by ~30%.
- Add `Vocab.writing_system` (populated via the language data) to expose settings like writing direction.
CLI
- NEW: `pretrain` command for ULMFit/BERT/Elmo-like pretraining (see #2931).
- NEW: New `ud-train` command, to train and evaluate using the CoNLL 2017 shared task data.
- Check if model is already installed before downloading it via `spacy download`.
- Pass additional arguments of the `download` command to `pip` to customise installation.
- Improve `train` command by letting `GoldCorpus` stream data, instead of loading into memory.
- Improve `init-model` command, including support for lexical attributes and word vectors, using a variety of formats. This replaces the `spacy vocab` command, which is now deprecated.
- Add support for multi-task objectives to `train` command.
- Add support for data augmentation to `train` command.
Other
- NEW: Enhanced pattern API for the rule-based `Matcher` (see #1971).
- NEW: `Doc.retokenize` context manager for merging and splitting tokens more efficiently.
- NEW: Add support for custom pipeline component factories via entry points (#2348).
- NEW: Implement fastText vectors with subword features.
- NEW: Built-in rule-based NER component to add entities based on match patterns (see #2513).
- NEW: Allow `PhraseMatcher` to match on token attributes other than `ORTH`, e.g. `LOWER` (for case-insensitive matching) or even `POS` or `TAG`.
- NEW: Replace `ujson`, `msgpack`, `msgpack-numpy`, `pickle`, `cloudpickle` and `dill` with our own package `srsly` to centralise dependencies and allow binary wheels.
- NEW: `Doc.to_json()` method which outputs data in spaCy's training format. This will be the only place where the format is hard-coded (see #2932).
- NEW: Built-in `EntityRuler` component to make it easier to build rule-based NER and combinations of statistical and rule-based systems.
- NEW: `gold.spans_from_biluo_tags` helper that returns `Span` objects, e.g. to overwrite the `doc.ents` (see the sketch after this list).
- Add warnings if the `.similarity` method is called with empty vectors or without word vectors.
- Improve the rule-based `Matcher` and add a `return_matches` keyword argument to `Matcher.pipe` to yield `(doc, matches)` tuples instead of only `Doc` objects, and `as_tuples` to add context to the `Doc` objects.
- Make stop words via `Token.is_stop` and `Lexeme.is_stop` case-insensitive.
- Accept `"TEXT"` as an alternative to `"ORTH"` in `Matcher` patterns.
- Use `black` for auto-formatting `.py` source and optimise the codebase using `flake8`. You can now run `flake8 spacy` and it should return no errors or warnings. See `CONTRIBUTING.md` for details.
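A minimal sketch of the `gold.spans_from_biluo_tags` helper and `Doc.to_json()`, assuming a blank English pipeline; the sentence and BILUO tags are invented for illustration:

```python
import spacy
from spacy.gold import spans_from_biluo_tags

nlp = spacy.blank("en")
doc = nlp("I like London and Berlin")

# Turn BILUO tags back into Span objects and attach them as entities.
tags = ["O", "O", "U-LOC", "O", "U-LOC"]
doc.ents = spans_from_biluo_tags(doc, tags)
print([(ent.text, ent.label_) for ent in doc.ents])

# Doc.to_json() serializes the annotations in spaCy's JSON training format.
print(doc.to_json())
```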
Bug fixes
- Fix issue #795: Fix behaviour of `Token.conjuncts`.
- Fix issue #1487: Add `Doc.retokenize()` context manager.
- Fix issue #1537: Make `Span.as_doc` return a copy, not a view.
- Fix issue #1574: Make sure stop words are available in medium and large English models.
- Fix issue #1585: Prevent parser from predicting unseen classes.
- Fix issue #1642: Replace `regex` with `re` and speed up tokenization.
- Fix issue #1665: Correct typos in symbol `Animacy_inan` and add `Animacy_nhum`.
- Fix issue #1748, #1798, #2756, #2934: Add simpler GPU-friendly option to `TextCategorizer`.
- Fix issue #1773: Prevent tokenizer exceptions from setting `POS` but not `TAG`.
- Fix issue #1782, #2343: Fix training on GPU.
- Fix issue #1816: Allow custom `Language` subclasses via entry points.
- Fix issue #1865: Correct licensing of `it_core_news_sm` model.
- Fix issue #1889: Make stop words case-insensitive.
- Fix issue #1903: Add `relcl` dependency label to symbols.
- Fix issue #1963: Resize `Doc.tensor` when merging spans.
- Fix issue #1971: Update `Matcher` engine to support regex, extension attributes and rich comparison.
- Fix issue #2014: Make `Token.pos_` writeable.
- Fix issue #2091: Fix `displacy` support for RTL languages.
- Fix issue #2203, #3268: Prevent bad interaction of lemmatizer and tokenizer exceptions.
- Fix issue #2329: Correct `TextCategorizer` and `GoldParse` API docs.
- Fix issue #2369: Respect pre-defined warning filters.
- Fix issue #2390: Support setting lexical attributes during retokenization.
- Fix issue #2396: Fix `Doc.get_lca_matrix`.
- Fix issue #2464, #3009: Fix behaviour of `Matcher`'s `?` quantifier.
- Fix issue #2482: Fix serialization when parser model is empty.
- Fix issue #2512, #2153: Fix issue with deserialization into non-empty vocab.
- Fix issue #2603: Improve handling of missing NER tags.
- Fix issue #2644: Add table explaining training metrics to docs.
- Fix issue #2648: Fix `KeyError` in `Vectors.most_similar`.
- Fix issue #2671, #2675: Fix incorrect match ID on some patterns.
- Fix issue #2693: Only use `'sentencizer'` as built-in sentence boundary component name.
- Fix issue #2728: Fix HTML escaping in `displacy` NER visualization and correct API docs.
- Fix issue #2740: Add ability to pass additional arguments to pipeline components.
- Fix issue #2754, #3028: Make `NORM` a `Token` attribute instead of a `Lexeme` attribute to allow setting context-specific norms in tokenizer exceptions.
- Fix issue #2769: Fix issue that'd cause segmentation fault when calling `EntityRecognizer.add_label`.
- Fix issue #2772: Fix bug in sentence starts for non-projective parses.
- Fix issue #2779: Fix handling of pre-set entities.
- Fix issue #2782: Make `like_num` work with prefixed numbers.
- Fix issue #2833: Raise better error if `Token` or `Span` are pickled.
- Fix issue #2838: Add `Retokenizer.split` method to split one token into several.
- Fix issue #2869: Make `doc[0].is_sent_start == True`.
- Fix issue #2870: Make it illegal for the entity recognizer to predict whitespace tokens as `B`, `L` or `U`.
- Fix issue #2871: Fix vectors for reserved words.
- Fix issue #2901: Fix issue with first call of `nlp` in Japanese (MeCab).
- Fix issue #2924: Make IDs of displaCy arcs more unique to avoid clashes.
- Fix issue #3012: Fix clobber of `Doc.is_tagged` in `Doc.from_array`.
- Fix issue #3027: Allow `Span` to take unicode value for `label` argument.
- Fix issue #3036: Support mutable default arguments in extension attributes.
- Fix issue #3048: Raise better errors for uninitialized pipeline components.
- Fix issue #3064: Allow single string attributes in `Doc.to_array`.
- Fix issue #3093, #3067: Set `vectors.name` correctly when exporting model via CLI.
- Fix issue #3112: Make sure entity types are added correctly on GPU.
- Fix issue #3191: Fix pickling of `Japanese`.
- Fix issue #3122: Correct docs of `Token.subtree` and `Span.subtree`.
- Fix issue #3128: Improve error handling in converters.
- Fix issue #3248: Fix `PhraseMatcher` pickling and make `__len__` consistent.
- Fix issue #3274: Make `Token.sent` work as expected without the parser.
- Fix issue #3277: Add en/em dash to tokenizer prefixes and suffixes.
- Fix issue #3346: Expose Japanese stop words in language class.
- Fix issue #3357: Update displaCy examples in docs to correctly show `Token.pos_`.
- Fix issue #3345: Fix NER when preset entities cross sentence boundaries.
- Fix issue #3348: Don't use `numpy` directly for similarity.
- Fix issue #3366: Improve converters, training data formats and docs.
- Fix issue #3369: Fix `#egg` fragments in direct downloads.
- Fix issue #3382: Make `Doc.from_array` consistent with `Doc.to_array`.
- Fix issue #3398: Don't set extension attributes in language classes.
- Fix serialization of custom tokenizer if not all functions are defined.
- Fix bugs in beam-search training objective.
- Fix problems with model pickling.
Backwards incompatibilities
- This version of spaCy requires downloading new models. You can use the `spacy validate` command to find out which models need updating, and print update instructions.
- If you've been training your own models, you'll need to retrain them with the new version.
- Due to difficulties linking our new `blis` for faster platform-independent matrix multiplication, v2.1.x currently doesn't work on Python 2.7 on Windows. We expect this to be corrected in the future.

While the `Matcher` API is fully backwards compatible, its algorithm has changed to fix a number of bugs and performance issues. This means that the `Matcher` in v2.1.x may produce different results compared to the `Matcher` in v2.0.x.

The deprecated `Doc.merge` and `Span.merge` methods still work, but you may notice that they now run slower when merging many objects in a row. That's because the merging engine was rewritten to be more reliable and to support more efficient merging in bulk. To take advantage of this, you should rewrite your logic to use the `Doc.retokenize` context manager and perform as many merges as possible together in the `with` block.

```diff
- doc[1:5].merge()
- doc[6:8].merge()
+ with doc.retokenize() as retokenizer:
+     retokenizer.merge(doc[1:5])
+     retokenizer.merge(doc[6:8])
```
The serialization methods `to_disk`, `from_disk`, `to_bytes` and `from_bytes` now support a single `exclude` argument to provide a list of string names to exclude. The docs have been updated to list the available serialization fields for each class. The `disable` argument on the `Language` serialization methods has been renamed to `exclude` for consistency.

```diff
- nlp.to_disk("/path", disable=["parser", "ner"])
+ nlp.to_disk("/path", exclude=["parser", "ner"])
- data = nlp.tokenizer.to_bytes(vocab=False)
+ data = nlp.tokenizer.to_bytes(exclude=["vocab"])
```
For better compatibility with the Universal Dependencies data, the lemmatizer now preserves capitalization, e.g. for proper nouns (see #3256).
The keyword argument `n_threads` on the `.pipe` methods is now deprecated, as the v2.x models cannot release the global interpreter lock. (Future versions may introduce an `n_process` argument for parallel inference via multiprocessing.)

The `Doc.print_tree` method is now deprecated in favour of a unified `Doc.to_json` method, which outputs data in the same format as the expected JSON training data.

The built-in rule-based sentence boundary detector is now only called `'sentencizer'`; the name `'sbd'` is deprecated.

```diff
- sentence_splitter = nlp.create_pipe('sbd')
+ sentence_splitter = nlp.create_pipe('sentencizer')
```
The `is_sent_start` attribute of the first token in a `Doc` now correctly defaults to `True`. It previously defaulted to `None`.

The `spacy train` command now lets you specify a comma-separated list of pipeline component names, instead of separate flags like `--no-parser` to disable components. This is more flexible and also handles custom components out-of-the-box.

```diff
- $ spacy train en /output train_data.json dev_data.json --no-parser
+ $ spacy train en /output train_data.json dev_data.json --pipeline tagger,ner
```
The `spacy init-model` command now uses a `--jsonl-loc` argument to pass in a newline-delimited JSON (JSONL) file containing one lexical entry per line, instead of separate `--freqs-loc` and `--clusters-loc` arguments.

```diff
- $ spacy init-model en ./model --freqs-loc ./freqs.txt --clusters-loc ./clusters.txt
+ $ spacy init-model en ./model --jsonl-loc ./vocab.jsonl
```

Also note that some of the model licenses have changed: `it_core_news_sm` is now correctly licensed under CC BY-NC-SA 3.0, and all English and German models are now published under the MIT license.
Benchmarks
| Model | Language | Version | UAS | LAS | POS | NER F | Vec | Size |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| `en_core_web_sm` | English | 2.1.0a7 | 91.6 | 89.7 | 96.8 | 85.5 | - | 10 MB |
| `en_core_web_md` | English | 2.1.0a7 | 91.8 | 90.0 | 96.9 | 86.3 | ✓ | 90 MB |
| `en_core_web_lg` | English | 2.1.0a7 | 91.9 | 90.1 | 97.0 | 86.6 | ✓ | 788 MB |
| `de_core_news_sm` | German | 2.1.0a7 | 91.7 | 89.5 | 97.3 | 83.4 | - | 10 MB |
| `de_core_news_md` | German | 2.1.0a7 | 92.3 | 90.4 | 97.4 | 83.8 | ✓ | 210 MB |
| `es_core_news_sm` | Spanish | 2.1.0a7 | 90.2 | 87.1 | 97.0 | 89.1 | - | 10 MB |
| `es_core_news_md` | Spanish | 2.1.0a7 | 91.2 | 88.4 | 97.2 | 89.4 | ✓ | 69 MB |
| `pt_core_news_sm` | Portuguese | 2.1.0a7 | 89.5 | 86.2 | 80.1 | 89.0 | - | 12 MB |
| `fr_core_news_sm` | French | 2.1.0a7 | 87.3 | 84.4 | 94.7 | 83.0 | - | 14 MB |
| `fr_core_news_md` | French | 2.1.0a7 | 89.1 | 86.2 | 95.3 | 83.3 | ✓ | 82 MB |
| `it_core_news_sm` | Italian | 2.1.0a7 | 91.1 | 87.2 | 96.0 | 86.3 | - | 10 MB |
| `nl_core_news_sm` | Dutch | 2.1.0a7 | 83.9 | 77.6 | 91.5 | 87.0 | - | 10 MB |
| `el_core_news_sm` | Greek | 2.1.0a7 | 85.1 | 81.5 | 94.5 | 73.3 | - | 10 MB |
| `el_core_news_md` | Greek | 2.1.0a7 | 88.2 | 85.1 | 96.7 | 78.1 | ✓ | 126 MB |
| `xx_ent_wiki_sm` | Multi | 2.1.0a7 | - | - | - | 81.6 | - | 3 MB |

> UAS: Unlabelled dependencies (parser). LAS: Labelled dependencies (parser). POS: Part-of-speech tags (fine-grained tags, i.e. `Token.tag_`). NER F: Named entities (F-score). Vec: Model contains word vectors. Size: Model file size (zipped archive).

Documentation and examples
Although it looks pretty much the same, we've rebuilt the entire documentation using Gatsby and MDX. It's now an even faster progressive web app and allows us to write all content entirely in Markdown, without having to compromise on easy-to-use custom UI components. We're hoping that the Markdown source will make it even easier to contribute to the documentation. For more details, check out the styleguide and source.
While converting the pages to Markdown, we've also fixed a bunch of typos, improved the existing pages and added some new content:
- Usage Guide: Rule-based Matching. How to use the `Matcher`, `PhraseMatcher` and the new `EntityRuler`, and write powerful components to combine statistical models and rules.
- Usage Guide: Saving and Loading. Everything you need to know about serialization, and how to save and load pipeline components, package your spaCy models as Python modules and use entry points.
- Usage Guide: Merging and Splitting. How to retokenize a `Doc` using the new `retokenize` context manager, merge spans into single tokens and split single tokens into multiple.
- Universe: Videos and Podcasts
- API: `EntityRuler`
- API: `SentenceSegmenter`
- API: Pipeline functions
Contributors
Thanks to @DuyguA, @giannisdaras, @mgogoulos, @louridas, @skrcode, @gavrieltal, @svlandeg, @jarib, @alvaroabascar, @kbulygin, @moreymat, @mirfan899, @ozcankasal, @willprice, @alvations, @amperinet, @retnuh, @Loghijiaha, @DeNeutoy, @gavrieltal, @boena, @BramVanroy, @pganssle, @foufaster, @adrianeboyd, @maknotavailable, @pierremonico, @lauraBaakman, @juliamakogon, @Gizzio, @Abhijit-2592, @akki2825, @grivaz, @roshni-b, @mpuig, @mikelibg, @danielkingai2 and @adrienball for the pull requests and contributions.