This repository was archived by the owner on Nov 8, 2022. It is now read-only.
docs/_sources/model_zoo.rst.txt (+11 -11)
@@ -27,27 +27,27 @@ NLP Architect Model Zoo
       - Links
     * - :doc:`Sparse GNMT <sparse_gnmt>`
       - 90% sparse GNMT model and a 2x2 block-sparse GNMT model translating German to English, trained on the Europarl-v7 [#]_ , Common Crawl and News Commentary 11 datasets
-        | `2x2 block sparse model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/sparse_gnmt/gnmt_blocksparse2x2.zip>`_
+        | `2x2 block sparse model <https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_blocksparse2x2.zip>`_
     * - :doc:`Intent Extraction <intent>`
       - A :py:class:`MultiTaskIntentModel <nlp_architect.models.intent_extraction.MultiTaskIntentModel>` intent extraction and slot-tagging model, trained on the SNIPS NLU dataset
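
Since this change is a pure host swap (the object paths under ``models/`` are untouched), the migrated links can be sanity-checked with plain HEAD requests. A minimal Python sketch, using only URLs that appear in the hunks of this diff:

    from urllib.request import Request, urlopen

    URLS = [
        "https://d2zs9tzlek599f.cloudfront.net/models/sparse_gnmt/gnmt_blocksparse2x2.zip",
        "https://d2zs9tzlek599f.cloudfront.net/models/chunker/model.h5",
    ]

    for url in URLS:
        # HEAD is enough to confirm the object is served from the new host;
        # urlopen raises HTTPError on 4xx/5xx responses.
        with urlopen(Request(url, method="HEAD"), timeout=30) as resp:
            print(resp.status, url)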
docs/_sources/term_set_expansion.rst.txt (+5 -5)
@@ -84,19 +84,19 @@ size, min_count, window and hs hyperparameters. Please refer to the np2vec module
 --corpus_format txt


-A `pretrained model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201_pretrained_set_expansion.txt.tar.gz>`__
+A `pretrained model <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201_pretrained_set_expansion.txt.tar.gz>`__
 on English Wikipedia dump (``enwiki-20171201-pages-articles-multistream.xml.bz2``) is available under
 Apache 2.0 license. It has been trained with the hyperparameter values
-recommended above. Full English Wikipedia `raw corpus <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201.txt.gz>`_ and
-`marked corpus <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201_spacy_marked.txt.tar.gz>`_
+recommended above. Full English Wikipedia `raw corpus <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201.txt.gz>`_ and
+`marked corpus <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201_spacy_marked.txt.tar.gz>`_
-A `pretrained model with grouping <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201_grouping_pretrained_set_expansion.tar.gz>`__
+A `pretrained model with grouping <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201_grouping_pretrained_set_expansion.tar.gz>`__
 on the same English Wikipedia dump is also
 available under
 Apache 2.0 license. It has been trained with the hyperparameter values
-recommended above. `Marked corpus <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/term_set/enwiki-20171201_grouping_marked.txt.tar.gz>`_
+recommended above. `Marked corpus <https://d2zs9tzlek599f.cloudfront.net/models/term_set/enwiki-20171201_grouping_marked.txt.tar.gz>`_
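
For anyone consuming these artifacts, the pretrained set-expansion model is a gzipped tarball. A minimal download-and-unpack sketch, assuming the URL from the hunk above; the local filename and destination directory are arbitrary choices:

    import tarfile
    import urllib.request

    URL = ("https://d2zs9tzlek599f.cloudfront.net/models/term_set/"
           "enwiki-20171201_pretrained_set_expansion.txt.tar.gz")

    # Download into the working directory, then unpack.
    archive, _ = urllib.request.urlretrieve(URL, "pretrained_set_expansion.txt.tar.gz")
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=".")  # destination is an arbitrary choice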
docs/_sources/trend_analysis.rst.txt (+1 -1)
@@ -37,7 +37,7 @@ In this stage, the algorithm will also train a W2V model on the joint corpora to
 In the second stage, the topic lists are compared and analyzed.
 Finally, the UI reads the analysis data and generates automatic reports for extracted topics, “Hot” and “Cold” trends, and topic clustering in 2D space.

-The noun phrase extraction module uses a pre-trained `model <https://s3-us-west-2.amazonaws.com/nlp-architect-data/models/chunker/model.h5>`__ which is available under the Apache 2.0 license.
+The noun phrase extraction module uses a pre-trained `model <https://d2zs9tzlek599f.cloudfront.net/models/chunker/model.h5>`__ which is available under the Apache 2.0 license.
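
The whole diff is mechanical: every link under the nlp-architect-data S3 bucket moves to the d2zs9tzlek599f CloudFront distribution with the path unchanged. A sketch of that substitution over the three files touched here (the actual patch may well have been generated differently):

    from pathlib import Path

    OLD = "https://s3-us-west-2.amazonaws.com/nlp-architect-data"
    NEW = "https://d2zs9tzlek599f.cloudfront.net"

    # File list comes from this diff.
    for name in [
        "docs/_sources/model_zoo.rst.txt",
        "docs/_sources/term_set_expansion.rst.txt",
        "docs/_sources/trend_analysis.rst.txt",
    ]:
        p = Path(name)
        p.write_text(p.read_text(encoding="utf-8").replace(OLD, NEW), encoding="utf-8")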