# Listeners

- [1. Overview](#1-overview)
- [2. Initialization](#2-initialization)
  - [2A. Linking listeners to the embedding component](#2a-linking-listeners-to-the-embedding-component)
  - [2B. Shape inference](#2b-shape-inference)
- [3. Internal communication](#3-internal-communication)
  - [3A. During prediction](#3a-during-prediction)
  - [3B. During training](#3b-during-training)
    - [Training with multiple listeners](#training-with-multiple-listeners)
  - [3C. Frozen components](#3c-frozen-components)
    - [The Tok2Vec or Transformer is frozen](#the-tok2vec-or-transformer-is-frozen)
    - [The upstream component is frozen](#the-upstream-component-is-frozen)
- [4. Replacing listener with standalone](#4-replacing-listener-with-standalone)

## 1. Overview

[…]

…of this `find_listener()` method will specifically identify sublayers of a model…

If it's a Transformer-based pipeline, a
[`transformer` component](https://github.com/explosion/spacy-transformers/blob/master/spacy_transformers/pipeline_component.py)
has a similar implementation but its `find_listener()` function will specifically look for `TransformerListener`
sublayers of downstream components.
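
As an illustration, a `find_listener()`-style search can walk every sublayer of a downstream component's model and collect the layers whose name matches the listener type. This is a self-contained toy sketch with stand-in classes, not spaCy's or thinc's actual implementation; the `Model` class below only mimics thinc's `Model.walk()` traversal:

```python
class Model:
    """Toy stand-in for a thinc Model: a name plus nested sublayers."""

    def __init__(self, name, layers=()):
        self.name = name
        self.layers = list(layers)

    def walk(self):
        # Depth-first traversal over this layer and all its sublayers,
        # loosely mirroring thinc's Model.walk().
        yield self
        for layer in self.layers:
            yield from layer.walk()


def find_listeners(component_model, listener_name="transformer-listener"):
    # Collect every sublayer whose name marks it as a listener.
    return [m for m in component_model.walk() if m.name == listener_name]


# A downstream "tagger" whose model embeds a listener sublayer:
tagger_model = Model("tagger", [Model("transformer-listener"), Model("softmax")])
print([m.name for m in find_listeners(tagger_model)])  # → ['transformer-listener']
```

The real implementations additionally check that the listener's upstream name matches the embedding component it should link to.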

### 2B. Shape inference

[…]

…as a tagger or a parser. This used to be impossible before 3.1, but has become supported by including the
embedding component in the [`annotating_components`](https://spacy.io/usage/training#annotating-components)
list of the config. This works like any other "annotating component" because it relies on the `Doc` attributes.
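
As a concrete illustration, such a setup might look like this in the training config. This is a hedged sketch: the pipe name `"tok2vec"` and the surrounding config structure are assumptions, not taken from this document:

```ini
[training]
# Freeze the embedding component so its weights are not updated ...
frozen_components = ["tok2vec"]
# ... but still run it during training, so downstream components
# can read its predictions from the Doc attributes.
annotating_components = ["tok2vec"]
```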

However, if the `Tok2Vec` or `Transformer` is frozen, and not present in `annotating_components`, and a related
listener isn't frozen, then a `W086` warning is shown and further training of the pipeline will likely end with `E954`.

#### The upstream component is frozen

[…]

```
...
new_model = tok2vec_model.attrs["replace_listener"](new_model)
```

The new config and model are then properly stored on the `nlp` object.
Note that this functionality (running the replacement for a transformer listener) was broken prior to
`spacy-transformers` 1.0.5.

In spaCy 3.7, `Language.replace_listeners` was updated to pass the following additional arguments to the `replace_listener` callback:
the listener to be replaced and the `tok2vec`/`transformer` pipe from which the new model was copied. To maintain backwards compatibility,
the method only passes these extra arguments to callbacks that support them:

```python
def replace_listener_pre_37(copied_tok2vec_model):
    ...

def replace_listener_post_37(copied_tok2vec_model, replaced_listener, tok2vec_pipe):
    ...
```
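
Such signature-based dispatch can be sketched with the standard library's `inspect.signature`, which counts a callback's parameters before deciding which arguments to pass. This is a minimal self-contained illustration, not spaCy's actual implementation, and the callback names and return values here are invented for the example:

```python
import inspect


def call_replace_listener(callback, copied_model, listener, pipe):
    # Pass the extra arguments only to callbacks whose signature accepts
    # them, mirroring the backwards-compatible behaviour described above.
    n_params = len(inspect.signature(callback).parameters)
    if n_params >= 3:
        return callback(copied_model, listener, pipe)
    return callback(copied_model)


def pre_37_callback(copied_tok2vec_model):
    return ("old-style", copied_tok2vec_model)


def post_37_callback(copied_tok2vec_model, replaced_listener, tok2vec_pipe):
    return ("new-style", copied_tok2vec_model, replaced_listener, tok2vec_pipe)


print(call_replace_listener(pre_37_callback, "model", "listener", "pipe"))
# → ('old-style', 'model')
print(call_replace_listener(post_37_callback, "model", "listener", "pipe"))
# → ('new-style', 'model', 'listener', 'pipe')
```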