Resolve tokenization issues causing BitFunnel parser crashes #6
The corpus as processed by the current version of Workbench contains
characters (mostly punctuation) that cause the BitFunnel parser to
crash. This commit makes Workbench handle these cases correctly.
There are two issues at the root of this problem. First, the Lucene
analyzer (which we use to generate the BitFunnel chunk files) attempts
to preserve URLs, and so colons are not removed from the middle of a
term such as Wikipedia:dump. This causes our parser to crash. Since
Lucene does remove the colon when the term does not appear to be a URI,
we now simply remove colons from all terms.
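The colon-stripping step can be sketched roughly as follows; the class
and method names here are illustrative, not Workbench's actual API:

```java
// Hypothetical sketch of the colon-removal step; names are illustrative.
public final class TermSanitizer {
    // Remove all colons from a term so URL-like tokens such as
    // "Wikipedia:dump" cannot reach the BitFunnel parser intact.
    public static String stripColons(String term) {
        return term.replace(":", "");
    }
}
```

For example, stripColons("Wikipedia:dump") yields "Wikipediadump", and
terms without colons pass through unchanged.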
Second, we were not using the Lucene tokenizer to process article titles.
This left a wide variety of punctuation in the corpus, which crashed the
parser. In the new version of the corpus, the title is tokenized to
avoid such problems.
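A simplified stand-in for the title tokenization described above (the
real code runs titles through the same Lucene analyzer as article
bodies; this sketch just lowercases and splits on punctuation):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, self-contained stand-in for Lucene analysis of titles.
public final class TitleTokenizer {
    // Lowercase the title and split on runs of non-alphanumeric
    // characters, dropping the punctuation that crashed the parser.
    public static List<String> tokenize(String title) {
        List<String> tokens = new ArrayList<>();
        for (String part : title.toLowerCase().split("[^\\p{Alnum}]+")) {
            if (!part.isEmpty()) {
                tokens.add(part);
            }
        }
        return tokens;
    }
}
```

For example, a title like "Foo: Bar (baz)" becomes the tokens
"foo", "bar", "baz", with the colon and parentheses removed.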