
Commit 0e57b30 (parent 2f1e23c)

[Ready to merge] Pruned Transducer Stateless2 for WenetSpeech (char-based) (k2-fsa#349)

* add char-based pruned-rnnt2 for wenetspeech
* style check
* style check
* change for export.py
* do some changes
* do some changes
* a small change for .flake8
* solve the conflicts

29 files changed: +4134 −0 lines

README.md (+33 lines)
```diff
@@ -20,6 +20,8 @@ We provide 6 recipes at present:
 - [TIMIT][timit]
 - [TED-LIUM3][tedlium3]
 - [GigaSpeech][gigaspeech]
+- [Aidatatang_200zh][aidatatang_200zh]
+- [WenetSpeech][wenetspeech]
 
 ### yesno
 
@@ -217,6 +219,33 @@ and [Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned R
 | fast beam search     | 10.50 | 10.69 |
 | modified beam search | 10.40 | 10.51 |
 
+### Aidatatang_200zh
+
+We provide one model for this recipe: [Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned RNN-T loss][Aidatatang_200zh_pruned_transducer_stateless2].
+
+#### Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned RNN-T loss
+
+|                      | Dev  | Test |
+|----------------------|------|------|
+| greedy search        | 5.53 | 6.59 |
+| fast beam search     | 5.30 | 6.34 |
+| modified beam search | 5.27 | 6.33 |
+
+We provide a Colab notebook to run a pre-trained Pruned Transducer Stateless model: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1wNSnSj3T5oOctbh5IGCa393gKOoQw2GH?usp=sharing)
+
+### WenetSpeech
+
+We provide one model for this recipe: [Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned RNN-T loss][WenetSpeech_pruned_transducer_stateless2].
+
+#### Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned RNN-T loss (trained with L subset)
+
+|                      | Dev  | Test-Net | Test-Meeting |
+|----------------------|------|----------|--------------|
+| greedy search        | 7.80 | 8.75     | 13.49        |
+| fast beam search     | 7.94 | 8.74     | 13.80        |
+| modified beam search | 7.76 | 8.71     | 13.41        |
+
+We provide a Colab notebook to run a pre-trained Pruned Transducer Stateless model: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1EV4e1CHa1GZgEF-bZgizqI9RyFFehIiN?usp=sharing)
 
 ## Deployment with C++
 
@@ -243,10 +272,14 @@ Please see: [![Open In Colab](https://colab.research.google.com/assets/colab-bad
 [TED-LIUM3_pruned_transducer_stateless]: egs/tedlium3/ASR/pruned_transducer_stateless
 [GigaSpeech_conformer_ctc]: egs/gigaspeech/ASR/conformer_ctc
 [GigaSpeech_pruned_transducer_stateless2]: egs/gigaspeech/ASR/pruned_transducer_stateless2
+[Aidatatang_200zh_pruned_transducer_stateless2]: egs/aidatatang_200zh/ASR/pruned_transducer_stateless2
+[WenetSpeech_pruned_transducer_stateless2]: egs/wenetspeech/ASR/pruned_transducer_stateless2
 [yesno]: egs/yesno/ASR
 [librispeech]: egs/librispeech/ASR
 [aishell]: egs/aishell/ASR
 [timit]: egs/timit/ASR
 [tedlium3]: egs/tedlium3/ASR
 [gigaspeech]: egs/gigaspeech/ASR
+[aidatatang_200zh]: egs/aidatatang_200zh/ASR
+[wenetspeech]: egs/wenetspeech/ASR
 [k2]: https://github.com/k2-fsa/k2
```

egs/wenetspeech/ASR/README.md (+19 lines, new file)
# Introduction

This recipe includes different ASR models trained with WenetSpeech.

[./RESULTS.md](./RESULTS.md) contains the latest results.

# Transducers

This directory contains several folders whose names include `transducer`.
The following table lists the differences among them.

|                                | Encoder             | Decoder            | Comment                    |
|--------------------------------|---------------------|--------------------|----------------------------|
| `pruned_transducer_stateless2` | Conformer(modified) | Embedding + Conv1d | Using k2 pruned RNN-T loss |

The decoder in `pruned_transducer_stateless2` is modified from the paper
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419/).
We place an additional Conv1d layer right after the input embedding layer.
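The Embedding + Conv1d decoder described above can be sketched in PyTorch. This is a minimal illustration only, not the recipe's actual `decoder.py`: the class name, dimensions, and context size are all invented for the example.

```python
import torch
import torch.nn as nn


class StatelessDecoder(nn.Module):
    """Toy stateless prediction network: an embedding followed by a
    causal Conv1d over the last few tokens, with no recurrent state.
    Hyper-parameters here are illustrative, not those of the recipe."""

    def __init__(self, vocab_size=100, embed_dim=16, context_size=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # A kernel of size `context_size` limits the decoder's memory to
        # the previous `context_size` tokens; left padding keeps it causal.
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=context_size)
        self.context_size = context_size

    def forward(self, y):
        # y: (batch, num_tokens) of token ids
        emb = self.embedding(y).permute(0, 2, 1)                  # (B, E, U)
        emb = nn.functional.pad(emb, (self.context_size - 1, 0))  # left pad
        out = self.conv(emb)                                      # (B, E, U)
        return out.permute(0, 2, 1)                               # (B, U, E)


decoder = StatelessDecoder()
tokens = torch.randint(0, 100, (4, 7))
print(decoder(tokens).shape)  # torch.Size([4, 7, 16])
```

Because the decoder sees only a fixed, short token history, it behaves like an n-gram language model, which is what makes pruning the RNN-T loss tractable.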

egs/wenetspeech/ASR/RESULTS.md (+93 lines, new file)
## Results

### WenetSpeech char-based training results (Pruned Transducer 2)

#### 2022-05-19

Using the code from this PR: https://github.com/k2-fsa/icefall/pull/349.

When training with the L subset, the WERs are

|                                    | dev  | test-net | test-meeting | comment                                  |
|------------------------------------|------|----------|--------------|------------------------------------------|
| greedy search                      | 7.80 | 8.75     | 13.49        | --epoch 10, --avg 2, --max-duration 100  |
| modified beam search (beam size 4) | 7.76 | 8.71     | 13.41        | --epoch 10, --avg 2, --max-duration 100  |
| fast beam search (set as default)  | 7.94 | 8.74     | 13.80        | --epoch 10, --avg 2, --max-duration 1500 |

The training command for reproducing is given below:

```
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"

./pruned_transducer_stateless2/train.py \
  --lang-dir data/lang_char \
  --exp-dir pruned_transducer_stateless2/exp \
  --world-size 8 \
  --num-epochs 15 \
  --start-epoch 0 \
  --max-duration 180 \
  --valid-interval 3000 \
  --model-warm-step 3000 \
  --save-every-n 8000 \
  --training-subset L
```

The tensorboard training log can be found at
https://tensorboard.dev/experiment/wM4ZUNtASRavJx79EOYYcg/#scalars
The decoding command is:

```
epoch=10
avg=2

## greedy search
./pruned_transducer_stateless2/decode.py \
  --epoch $epoch \
  --avg $avg \
  --exp-dir ./pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 100 \
  --decoding-method greedy_search

## modified beam search
./pruned_transducer_stateless2/decode.py \
  --epoch $epoch \
  --avg $avg \
  --exp-dir ./pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 100 \
  --decoding-method modified_beam_search \
  --beam-size 4

## fast beam search
./pruned_transducer_stateless2/decode.py \
  --epoch $epoch \
  --avg $avg \
  --exp-dir ./pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 1500 \
  --decoding-method fast_beam_search \
  --beam 4 \
  --max-contexts 4 \
  --max-states 8
```
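Of the decoding methods above, the simplest is greedy search, which can be illustrated with a toy frame-synchronous loop. Everything in this sketch (`toy_joiner`, the 4-symbol vocabulary) is an invented stand-in for the recipe's actual encoder and joiner modules:

```python
BLANK = 0  # index 0 is the blank symbol


def greedy_search(encoder_frames, joiner, max_sym_per_frame=3):
    """Frame-synchronous greedy decoding for a transducer: at each
    encoder frame, keep emitting the arg-max token until the joiner
    prefers blank (or a per-frame cap is hit), then move on."""
    hyp = []
    for frame in encoder_frames:
        for _ in range(max_sym_per_frame):
            scores = joiner(frame, hyp)
            best = max(range(len(scores)), key=scores.__getitem__)
            if best == BLANK:
                break  # advance to the next frame
            hyp.append(best)
    return hyp


def toy_joiner(frame_label, history):
    # Invented stand-in for the joint network: it favors the frame's
    # label until that label has just been emitted, then favors blank.
    scores = [0.0] * 4  # toy vocabulary: blank, 1, 2, 3
    target = BLANK if history and history[-1] == frame_label else frame_label
    scores[target] = 1.0
    return scores


print(greedy_search([1, 2, 2, 3], toy_joiner))  # [1, 2, 3]
```

Beam-based methods keep multiple such hypotheses alive per frame instead of only the arg-max, which is why they score slightly better in the tables above.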
When training with the M subset, the WERs are

|                                    | dev   | test-net | test-meeting | comment                                   |
|------------------------------------|-------|----------|--------------|-------------------------------------------|
| greedy search                      | 10.40 | 11.31    | 19.64        | --epoch 29, --avg 11, --max-duration 100  |
| modified beam search (beam size 4) | 9.85  | 11.04    | 18.20        | --epoch 29, --avg 11, --max-duration 100  |
| fast beam search (set as default)  | 10.18 | 11.10    | 19.32        | --epoch 29, --avg 11, --max-duration 1500 |

When training with the S subset, the WERs are

|                                    | dev   | test-net | test-meeting | comment                                   |
|------------------------------------|-------|----------|--------------|-------------------------------------------|
| greedy search                      | 19.92 | 25.20    | 35.35        | --epoch 29, --avg 24, --max-duration 100  |
| modified beam search (beam size 4) | 18.62 | 23.88    | 33.80        | --epoch 29, --avg 24, --max-duration 100  |
| fast beam search (set as default)  | 19.31 | 24.41    | 34.87        | --epoch 29, --avg 24, --max-duration 1500 |

A pre-trained model and decoding logs can be found at <https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2>
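The WER figures in these tables come from an edit-distance alignment between reference and hypothesis. A minimal sketch of that computation (function names are ours, not icefall's; for the char-based scoring used in this recipe the tokens are Chinese characters rather than words):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences,
    computed with the standard two-row dynamic program."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            cur[j] = min(
                prev[j] + 1,             # deletion
                cur[j - 1] + 1,          # insertion
                prev[j - 1] + (r != h),  # substitution (or match)
            )
        prev = cur
    return prev[-1]


def wer(ref_tokens, hyp_tokens):
    """Error rate in percent: (S + D + I) / N."""
    return 100.0 * edit_distance(ref_tokens, hyp_tokens) / len(ref_tokens)


# For char-based Chinese scoring, split the strings into characters:
print(wer(list("深度学习"), list("深度学些")))  # 25.0
```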
egs/wenetspeech/ASR/local/compute_fbank_musan.py (+1 line, symlink)

../../../librispeech/ASR/local/compute_fbank_musan.py
egs/wenetspeech/ASR/local/compute_fbank_wenetspeech_dev_test.py (+93 lines, new file)

```python
#!/usr/bin/env python3
# Copyright 2021 Johns Hopkins University (Piotr Żelasko)
# Copyright 2021 Xiaomi Corp. (Fangjun Kuang)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
from pathlib import Path

import torch
from lhotse import (
    CutSet,
    KaldifeatFbank,
    KaldifeatFbankConfig,
    LilcomHdf5Writer,
)

# Torch's multithreaded behavior needs to be disabled, or
# it wastes a lot of CPU and slows things down.
# Do this outside of main() in case it needs to take effect
# even when we are not invoking the main (e.g. when spawning subprocesses).
torch.set_num_threads(1)
torch.set_num_interop_threads(1)


def compute_fbank_wenetspeech_dev_test():
    in_out_dir = Path("data/fbank")
    # number of workers in dataloader
    num_workers = 42

    # number of seconds in a batch
    batch_duration = 600

    subsets = ("S", "M", "DEV", "TEST_NET", "TEST_MEETING")

    device = torch.device("cpu")
    if torch.cuda.is_available():
        device = torch.device("cuda", 0)
    extractor = KaldifeatFbank(KaldifeatFbankConfig(device=device))

    logging.info(f"device: {device}")

    for partition in subsets:
        cuts_path = in_out_dir / f"cuts_{partition}.jsonl.gz"
        if cuts_path.is_file():
            logging.info(f"{cuts_path} exists - skipping")
            continue

        raw_cuts_path = in_out_dir / f"cuts_{partition}_raw.jsonl.gz"

        logging.info(f"Loading {raw_cuts_path}")
        cut_set = CutSet.from_file(raw_cuts_path)

        logging.info("Computing features")

        cut_set = cut_set.compute_and_store_features_batch(
            extractor=extractor,
            storage_path=f"{in_out_dir}/feats_{partition}",
            num_workers=num_workers,
            batch_duration=batch_duration,
            storage_type=LilcomHdf5Writer,
        )
        cut_set = cut_set.trim_to_supervisions(
            keep_overlapping=False, min_duration=None
        )

        logging.info(f"Saving to {cuts_path}")
        cut_set.to_file(cuts_path)


def main():
    formatter = (
        "%(asctime)s %(levelname)s [%(filename)s:%(lineno)d] %(message)s"
    )
    logging.basicConfig(format=formatter, level=logging.INFO)

    compute_fbank_wenetspeech_dev_test()


if __name__ == "__main__":
    main()
```
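One detail worth noting in the script above is the `cuts_path.is_file()` guard, which makes the job resumable: partitions whose manifest already exists are skipped on a re-run. A self-contained sketch of the same pattern, with the feature-extraction step replaced by a dummy `compute` callable (names here are ours, for illustration only):

```python
import tempfile
from pathlib import Path


def process_partition(out_dir, partition, compute):
    """Skip the partition when its cuts manifest already exists,
    mirroring the is_file() check in the script above."""
    cuts_path = Path(out_dir) / f"cuts_{partition}.jsonl.gz"
    if cuts_path.is_file():
        return "skipped"
    # Stand-in for the expensive feature-extraction step.
    cuts_path.write_text(compute(partition))
    return "computed"


with tempfile.TemporaryDirectory() as d:
    print(process_partition(d, "DEV", lambda p: f"features for {p}"))  # computed
    print(process_partition(d, "DEV", lambda p: f"features for {p}"))  # skipped
```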
