[WIP] Lattice-faster-decoder-combine #3061
Open
LvHang wants to merge 29 commits into kaldi-asr:master from LvHang:async-decoder
Commits (29):
5503ba1  Combine ProcessEmitting() and ProcessNonemitting()
4b30697  add test binary
2538a32  add test2
d011342  Update design and comments
a758ba4  update comments and the functions about PNE()
85da998  change recover_map to token_orig_cost and document
18e8758  add a simple test script
b223bb7  small fix
dca20d4  fix
06d38cb  change queue for speeding up
603b705  add hashlist version for test
2a08463  iterator singly-list
768fd96  small fix.
f9bab34  Heap method head file
a4a2ddc  bucketqueue
226a698  bucketqueue without GetCutoff
8220656  small fix and class member queue
d7a3a9d  some fix and remove SetBegin
a70f34b  minor fix
b636119  small fix
b6abf43  first_nonempty_bucket_index_ and first_nonempty_bucket_
25907d8  remove RecoverLastFrame()
26b378a  do ProcessNonemitting if final-probs are requested
c66f1bb  fix
79b0071  small fix
c359fe2  fix according to the comments
896c5c8  resize the BucketQueue when a weird long one was caused
6e2d27a  small fix
7dd2ca2  1.2 tolerance
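Several of the commits above (bucketqueue, bucketqueue without GetCutoff, first_nonempty_bucket_index_ and first_nonempty_bucket_, resize the BucketQueue when a weird long one was caused) refer to a bucket-style priority queue used to pop tokens in roughly best-first order without a heap. The decoder source itself (lattice-faster-decoder-combine.{h,cc}) is not among the files shown below, so what follows is only a minimal sketch of the general bucket-queue idea under assumed names (Token, BucketQueue, bucket_width); it is not the PR's actual implementation.

// Hypothetical sketch of a bucket queue for approximate best-first token
// expansion. Names and details are illustrative only, not the PR's code.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

struct Token {
  float tot_cost;   // accumulated cost used for ordering
  int32_t state;    // FST state this token is in
};

class BucketQueue {
 public:
  // 'bucket_width' controls how coarsely costs are quantized; a smaller
  // width gives an ordering closer to a true priority queue.
  explicit BucketQueue(float bucket_width = 1.0f)
      : bucket_width_(bucket_width), first_nonempty_bucket_index_(0) {}

  void Push(Token *tok) {
    int32_t idx = BucketIndex(tok->tot_cost);
    if (idx >= static_cast<int32_t>(buckets_.size()))
      buckets_.resize(idx + 1);            // grow lazily for large costs
    buckets_[idx].push_back(tok);
    if (idx < first_nonempty_bucket_index_)
      first_nonempty_bucket_index_ = idx;  // track the cheapest bucket
  }

  // Returns the next token in (approximately) increasing cost order,
  // or nullptr when the queue is empty.
  Token *Pop() {
    for (; first_nonempty_bucket_index_ < static_cast<int32_t>(buckets_.size());
         ++first_nonempty_bucket_index_) {
      std::vector<Token*> &bucket = buckets_[first_nonempty_bucket_index_];
      if (!bucket.empty()) {
        Token *tok = bucket.back();
        bucket.pop_back();
        return tok;
      }
    }
    return nullptr;
  }

 private:
  int32_t BucketIndex(float cost) const {
    // Quantize the cost; clamp at zero so very good costs share bucket 0.
    return std::max<int32_t>(
        0, static_cast<int32_t>(std::floor(cost / bucket_width_)));
  }

  float bucket_width_;
  std::vector<std::vector<Token*> > buckets_;
  int32_t first_nonempty_bucket_index_;
};

int main() {
  BucketQueue queue(0.5f);
  Token a{3.2f, 1}, b{0.7f, 2}, c{1.9f, 3};
  queue.Push(&a);
  queue.Push(&b);
  queue.Push(&c);
  // Pops in roughly best-first order: states 2, 3, 1.
  for (Token *t = queue.Pop(); t != nullptr; t = queue.Pop())
    std::cout << "state " << t->state << " cost " << t->tot_cost << "\n";
  return 0;
}

Quantizing costs into buckets trades exact ordering inside a bucket for O(1) pushes and near-O(1) pops, which appears to be the kind of speedup the "change queue for speeding up" commit is after; within-bucket order does not matter much when the queue only has to be approximately best-first.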
New file: decode_combine_test.sh (+128 lines; its usage message refers to it as steps/decode_combine_test.sh)
#!/bin/bash

# Copyright 2012 Johns Hopkins University (Author: Daniel Povey)
# Apache 2.0

# Begin configuration.
nj=4
cmd=run.pl
maxactive=7000
beam=15.0
lattice_beam=8.0
expand_beam=30.0
acwt=1.0
skip_scoring=false
combine_version=false

stage=0
online_ivector_dir=
post_decode_acwt=10.0
extra_left_context=0
extra_right_context=0
extra_left_context_initial=0
extra_right_context_final=0
chunk_width=140,100,160
use_gpu=no
# End configuration.

echo "$0 $@"  # Print the command line for logging

[ -f ./path.sh ] && . ./path.sh; # source the path.
. parse_options.sh || exit 1;

if [ $# != 3 ]; then
  echo "Usage: steps/decode_combine_test.sh [options] <graph-dir> <data-dir> <decode-dir>"
  echo "... where <decode-dir> is assumed to be a sub-directory of the directory"
  echo " where the model is."
  echo "e.g.: steps/decode_combine_test.sh exp/mono/graph_tgpr data/test_dev93 exp/mono/decode_dev93_tgpr"
  echo ""
  echo "This script works on CMN + (delta+delta-delta | LDA+MLLT) features; it works out"
  echo "what type of features you used (assuming it's one of these two)"
  echo ""
  echo "main options (for others, see top of script file)"
  echo "  --config <config-file>                           # config containing options"
  echo "  --nj <nj>                                        # number of parallel jobs"
  echo "  --cmd (utils/run.pl|utils/queue.pl <queue opts>) # how to run jobs."
  exit 1;
fi

graphdir=$1
data=$2
dir=$3

srcdir=`dirname $dir`; # The model directory is one level up from decoding directory.
sdata=$data/split$nj;
splice_opts=`cat $srcdir/splice_opts 2>/dev/null`
cmvn_opts=`cat $srcdir/cmvn_opts 2>/dev/null`
delta_opts=`cat $srcdir/delta_opts 2>/dev/null`

mkdir -p $dir/log
[[ -d $sdata && $data/feats.scp -ot $sdata ]] || split_data.sh $data $nj || exit 1;
echo $nj > $dir/num_jobs

for f in $sdata/1/feats.scp $sdata/1/cmvn.scp $srcdir/final.mdl $graphdir/HCLG.fst; do
  [ ! -f $f ] && echo "decode_combine_test.sh: no such file $f" && exit 1;
done

if [ -f $srcdir/final.mat ]; then feat_type=lda; else feat_type=delta; fi
echo "decode_combine_test.sh: feature type is $feat_type"

feats="ark,s,cs:apply-cmvn $cmvn_opts --utt2spk=ark:$sdata/JOB/utt2spk scp:$sdata/JOB/cmvn.scp scp:$sdata/JOB/feats.scp ark:- |"

posteriors="ark,scp:$sdata/JOB/posterior.ark,$sdata/JOB/posterior.scp"
posteriors_scp="scp:$sdata/JOB/posterior.scp"

if [ ! -z "$online_ivector_dir" ]; then
  ivector_period=$(cat $online_ivector_dir/ivector_period) || exit 1;
  ivector_opts="--online-ivectors=scp:$online_ivector_dir/ivector_online.scp --online-ivector-period=$ivector_period"
fi

if [ "$post_decode_acwt" == 1.0 ]; then
  lat_wspecifier="ark:|gzip -c >$dir/lat.JOB.gz"
else
  lat_wspecifier="ark:|lattice-scale --acoustic-scale=$post_decode_acwt ark:- ark:- | gzip -c >$dir/lat.JOB.gz"
fi

frame_subsampling_opt=
if [ -f $srcdir/frame_subsampling_factor ]; then
  # e.g. for 'chain' systems
  frame_subsampling_opt="--frame-subsampling-factor=$(cat $srcdir/frame_subsampling_factor)"
fi

frames_per_chunk=$(echo $chunk_width | cut -d, -f1)
# Stage 1: generate log-likelihoods with nnet3-compute and dump them to posterior.ark,scp.
if [ $stage -le 1 ]; then
  $cmd JOB=1:$nj $dir/log/nnet_compute.JOB.log \
    nnet3-compute $ivector_opts $frame_subsampling_opt \
      --acoustic-scale=$acwt \
      --extra-left-context=$extra_left_context \
      --extra-right-context=$extra_right_context \
      --extra-left-context-initial=$extra_left_context_initial \
      --extra-right-context-final=$extra_right_context_final \
      --frames-per-chunk=$frames_per_chunk \
      --use-gpu=$use_gpu --use-priors=true \
      $srcdir/final.mdl "$feats" "$posteriors"
fi

# Stage 2: decode the dumped log-likelihoods with latgen-faster-mapped
# (or latgen-faster-mapped-combine if --combine-version true).
if [ $stage -le 2 ]; then
  suffix=
  if $combine_version ; then
    suffix="-combine"
  fi
  $cmd JOB=1:$nj $dir/log/decode.JOB.log \
    latgen-faster-mapped$suffix --max-active=$maxactive --beam=$beam --lattice-beam=$lattice_beam \
      --acoustic-scale=$acwt --allow-partial=true --word-symbol-table=$graphdir/words.txt \
      $srcdir/final.mdl $graphdir/HCLG.fst "$posteriors_scp" "$lat_wspecifier" || exit 1;
fi

if ! $skip_scoring ; then
  [ ! -x local/score.sh ] && \
    echo "Not scoring because local/score.sh does not exist or is not executable." && exit 1;
  local/score.sh --cmd "$cmd" $data $graphdir $dir ||
    { echo "$0: Scoring failed. (ignore by passing --skip-scoring true)"; exit 1; }
fi

exit 0;
New file: a latgen-faster-mapped variant that uses LatticeFasterDecoderCombine (+179 lines)
// bin/latgen-faster-mapped.cc

// Copyright 2009-2012  Microsoft Corporation, Karel Vesely
//                2013  Johns Hopkins University (author: Daniel Povey)
//                2014  Guoguo Chen

// See ../../COPYING for clarification regarding multiple authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//  http://www.apache.org/licenses/LICENSE-2.0
//
// THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
// WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
// MERCHANTABLITY OR NON-INFRINGEMENT.
// See the Apache 2 License for the specific language governing permissions and
// limitations under the License.


#include "base/kaldi-common.h"
#include "util/common-utils.h"
#include "tree/context-dep.h"
#include "hmm/transition-model.h"
#include "fstext/fstext-lib.h"
#include "decoder/decoder-wrappers.h"
#include "decoder/decodable-matrix.h"
#include "base/timer.h"


int main(int argc, char *argv[]) {
  try {
    using namespace kaldi;
    typedef kaldi::int32 int32;
    using fst::SymbolTable;
    using fst::Fst;
    using fst::StdArc;

    const char *usage =
        "Generate lattices, reading log-likelihoods as matrices\n"
        " (model is needed only for the integer mappings in its transition-model)\n"
        "Usage: latgen-faster-mapped [options] trans-model-in (fst-in|fsts-rspecifier) loglikes-rspecifier"
        " lattice-wspecifier [ words-wspecifier [alignments-wspecifier] ]\n";
    ParseOptions po(usage);
    Timer timer;
    bool allow_partial = false;
    BaseFloat acoustic_scale = 0.1;
    LatticeFasterDecoderCombineConfig config;

    std::string word_syms_filename;
    config.Register(&po);
    po.Register("acoustic-scale", &acoustic_scale, "Scaling factor for acoustic likelihoods");

    po.Register("word-symbol-table", &word_syms_filename, "Symbol table for words [for debug output]");
    po.Register("allow-partial", &allow_partial, "If true, produce output even if end state was not reached.");

    po.Read(argc, argv);

    if (po.NumArgs() < 4 || po.NumArgs() > 6) {
      po.PrintUsage();
      exit(1);
    }

    std::string model_in_filename = po.GetArg(1),
        fst_in_str = po.GetArg(2),
        feature_rspecifier = po.GetArg(3),
        lattice_wspecifier = po.GetArg(4),
        words_wspecifier = po.GetOptArg(5),
        alignment_wspecifier = po.GetOptArg(6);

    TransitionModel trans_model;
    ReadKaldiObject(model_in_filename, &trans_model);

    bool determinize = config.determinize_lattice;
    CompactLatticeWriter compact_lattice_writer;
    LatticeWriter lattice_writer;
    if (! (determinize ? compact_lattice_writer.Open(lattice_wspecifier)
           : lattice_writer.Open(lattice_wspecifier)))
      KALDI_ERR << "Could not open table for writing lattices: "
                << lattice_wspecifier;

    Int32VectorWriter words_writer(words_wspecifier);

    Int32VectorWriter alignment_writer(alignment_wspecifier);

    fst::SymbolTable *word_syms = NULL;
    if (word_syms_filename != "")
      if (!(word_syms = fst::SymbolTable::ReadText(word_syms_filename)))
        KALDI_ERR << "Could not read symbol table from file "
                  << word_syms_filename;

    double tot_like = 0.0;
    kaldi::int64 frame_count = 0;
    int num_success = 0, num_fail = 0;

    if (ClassifyRspecifier(fst_in_str, NULL, NULL) == kNoRspecifier) {
      SequentialBaseFloatMatrixReader loglike_reader(feature_rspecifier);
      // Input FST is just one FST, not a table of FSTs.
      Fst<StdArc> *decode_fst = fst::ReadFstKaldiGeneric(fst_in_str);
      timer.Reset();

      {
        LatticeFasterDecoderCombine decoder(*decode_fst, config);

        for (; !loglike_reader.Done(); loglike_reader.Next()) {
          std::string utt = loglike_reader.Key();
          Matrix<BaseFloat> loglikes (loglike_reader.Value());
          loglike_reader.FreeCurrent();
          if (loglikes.NumRows() == 0) {
            KALDI_WARN << "Zero-length utterance: " << utt;
            num_fail++;
            continue;
          }

          DecodableMatrixScaledMapped decodable(trans_model, loglikes, acoustic_scale);

          double like;
          if (DecodeUtteranceLatticeFasterCombine(
                  decoder, decodable, trans_model, word_syms, utt,
                  acoustic_scale, determinize, allow_partial, &alignment_writer,
                  &words_writer, &compact_lattice_writer, &lattice_writer,
                  &like)) {
            tot_like += like;
            frame_count += loglikes.NumRows();
            num_success++;
          } else num_fail++;
        }
      }
      delete decode_fst; // delete this only after decoder goes out of scope.
    } else { // We have different FSTs for different utterances.
      SequentialTableReader<fst::VectorFstHolder> fst_reader(fst_in_str);
      RandomAccessBaseFloatMatrixReader loglike_reader(feature_rspecifier);
      for (; !fst_reader.Done(); fst_reader.Next()) {
        std::string utt = fst_reader.Key();
        if (!loglike_reader.HasKey(utt)) {
          KALDI_WARN << "Not decoding utterance " << utt
                     << " because no loglikes available.";
          num_fail++;
          continue;
        }
        const Matrix<BaseFloat> &loglikes = loglike_reader.Value(utt);
        if (loglikes.NumRows() == 0) {
          KALDI_WARN << "Zero-length utterance: " << utt;
          num_fail++;
          continue;
        }
        LatticeFasterDecoderCombine decoder(fst_reader.Value(), config);
        DecodableMatrixScaledMapped decodable(trans_model, loglikes, acoustic_scale);
        double like;
        if (DecodeUtteranceLatticeFasterCombine(
                decoder, decodable, trans_model, word_syms, utt, acoustic_scale,
                determinize, allow_partial, &alignment_writer, &words_writer,
                &compact_lattice_writer, &lattice_writer, &like)) {
          tot_like += like;
          frame_count += loglikes.NumRows();
          num_success++;
        } else num_fail++;
      }
    }

    double elapsed = timer.Elapsed();
    KALDI_LOG << "Time taken " << elapsed
              << "s: real-time factor assuming 100 frames/sec is "
              << (elapsed * 100.0 / frame_count);
    KALDI_LOG << "Done " << num_success << " utterances, failed for "
              << num_fail;
    KALDI_LOG << "Overall log-likelihood per frame is " << (tot_like / frame_count)
              << " over " << frame_count << " frames.";

    delete word_syms;
    if (num_success != 0) return 0;
    else return 1;
  } catch(const std::exception &e) {
    std::cerr << e.what();
    return -1;
  }
}
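The binary delegates per-utterance work to DecodeUtteranceLatticeFasterCombine(), whose definition is not part of the files shown here. For orientation, the sketch below follows the shape of the existing DecodeUtteranceLatticeFaster() wrapper in Kaldi's decoder/decoder-wrappers.cc; the function name DecodeUtteranceSketch and its parameter list are illustrative only, and the combine variant may differ in details (the real wrapper also writes words/alignments, rescales acoustic costs by 1/acoustic_scale before writing, and handles partial outputs and logging, all omitted here).

// Hedged sketch of a DecodeUtteranceLatticeFaster-style wrapper; modeled on
// the existing wrapper, not on the code added in this PR.
#include "base/kaldi-common.h"
#include "util/common-utils.h"
#include "hmm/transition-model.h"
#include "itf/decodable-itf.h"
#include "lat/kaldi-lattice.h"
#include "lat/determinize-lattice-pruned.h"
#include "fstext/fstext-lib.h"

template <typename Decoder>
bool DecodeUtteranceSketch(Decoder &decoder,
                           kaldi::DecodableInterface &decodable,
                           const kaldi::TransitionModel &trans_model,
                           const std::string &utt,
                           bool determinize,
                           kaldi::BaseFloat lattice_beam,
                           kaldi::CompactLatticeWriter *clat_writer,
                           kaldi::LatticeWriter *lat_writer,
                           double *like_out) {
  using namespace kaldi;
  if (!decoder.Decode(&decodable)) {       // run the decoder over all frames
    KALDI_WARN << "Failed to decode utterance " << utt;
    return false;
  }
  if (!decoder.ReachedFinal())
    KALDI_WARN << "Outputting partial output for utterance " << utt;

  Lattice lat;
  decoder.GetRawLattice(&lat);             // raw state-level lattice
  fst::Connect(&lat);

  // The best path gives the total (negated) cost, used for likelihood logging.
  Lattice best_path;
  decoder.GetBestPath(&best_path);
  std::vector<int32> alignment, words;
  LatticeWeight weight;
  fst::GetLinearSymbolSequence(best_path, &alignment, &words, &weight);
  *like_out = -(weight.Value1() + weight.Value2());

  if (determinize) {
    // Phone-aware determinization plus pruning, as in the existing wrapper.
    CompactLattice clat;
    fst::DeterminizeLatticePhonePrunedWrapper(
        trans_model, &lat, lattice_beam, &clat,
        fst::DeterminizeLatticePhonePrunedOptions());
    clat_writer->Write(utt, clat);
  } else {
    lat_writer->Write(utt, lat);
  }
  return true;
}

Per the commit messages above ("Combine ProcessEmitting() and ProcessNonemitting()", "do ProcessNonemitting if final-probs are requested"), the combine decoder presumably handles emitting and nonemitting arcs in a single pass per frame, so a separate nonemitting pass is only needed when final probabilities are requested at the end of decoding.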
Review comment: This is OK for testing, but eventually this should just be a change to lattice-faster-decoder.cc. It's just a small optimization.