The key to any speedup and/or reduction in memory consumption is likely to involve replacing the Trie.
The Trie only exists because we need to be able to do longest-prefix retrieval on a dictionary.
We could use a regular Python dictionary if incoming strings were tokenised by collation key.
What I'm basically thinking is:
- build a regular python dictionary for the collation table (some keys of which will consist of multiple characters)
- build a regex using the keys in the collation table
- use that regex to tokenise incoming strings (this has the effect of a Trie)
- look the tokens up (some of which will be multiple characters) in the python dictionary
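The steps above could be sketched roughly like this. The collation table here is a toy stand-in (the real table maps tokens to Unicode collation elements); the key trick is sorting the keys longest-first so the regex alternation prefers the longest match, reproducing the Trie's longest-prefix behaviour:

```python
import re

# Hypothetical toy collation table; one multi-character key plus singles.
collation_table = {
    "ch": (10,),  # multi-character key that must sort as a single unit
    "a": (1,),
    "b": (2,),
    "c": (3,),
    "h": (8,),
}

# Longest keys first, so the alternation tries "ch" before "c".
keys = sorted(collation_table, key=len, reverse=True)
tokenizer = re.compile("|".join(re.escape(k) for k in keys))

def sort_key(s):
    """Tokenise s with the regex, then look each token up in the dict."""
    return tuple(
        element
        for match in tokenizer.finditer(s)
        for element in collation_table[match.group()]
    )

# "ch" is consumed as one token, not as "c" then "h".
print(sort_key("chab"))  # → (10, 1, 2)
```

Note the alternation in Python's `re` is first-match, not longest-match, which is why the keys must be pre-sorted by length; characters not in the table are silently skipped by `finditer` here, whereas a real implementation would need a fallback for them.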
It will be worth having some sort of timed test to see if this approach actually makes a (positive) difference but I suspect it might.
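A minimal timing harness for that comparison might look like the following (the table and input string are hypothetical; a real test would run the same inputs through the existing Trie-based path as a baseline):

```python
import re
import timeit

# Hypothetical toy collation table, as in the sketch above.
table = {"ch": (10,), "a": (1,), "b": (2,), "c": (3,), "h": (8,)}
pattern = re.compile(
    "|".join(re.escape(k) for k in sorted(table, key=len, reverse=True))
)

def regex_sort_key(s):
    # Tokenise with the regex, then do plain dict lookups.
    return tuple(e for m in pattern.finditer(s) for e in table[m.group()])

text = "chabcha" * 1000  # synthetic input string
elapsed = timeit.timeit(lambda: regex_sort_key(text), number=100)
print(f"regex tokeniser: {elapsed:.4f}s for 100 runs")
```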