rethinking the Trie #14

@jtauber

Description

The key to any speedup and/or reduction in memory consumption is likely to involve replacing the Trie.

The Trie only exists because we need to be able to do longest-prefix retrieval on a dictionary.

We could use a regular Python dictionary if incoming strings were tokenised by collation key.

What I'm basically thinking is:

  1. build a regular Python dictionary for the collation table (some keys of which will consist of multiple characters)
  2. build a regex using the keys in the collation table
  3. use that regex to tokenise incoming strings (this has the effect of the Trie)
  4. look up the tokens (some of which will be multiple characters) in the Python dictionary (rough sketch after this list)
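
A minimal sketch of steps 1–4, using a made-up toy `collation_table` (the real table would be loaded from the collation data; the weights here are invented for illustration). The important detail is sorting the regex alternatives longest-first, so the left-to-right alternation reproduces the Trie's longest-prefix matching:

```python
import re

# Toy collation table for illustration only; the real one would be built
# from the collation data file. Note the multi-character key "ch".
collation_table = {
    "a": [(0x15EF, 0x0020, 0x0002)],
    "b": [(0x1605, 0x0020, 0x0002)],
    "c": [(0x1616, 0x0020, 0x0002)],
    "ch": [(0x162E, 0x0020, 0x0002)],
}

# Build a regex from the table keys, longest first, so the alternation
# prefers the longest prefix (the effect the Trie currently provides).
tokenizer = re.compile(
    "|".join(re.escape(k) for k in sorted(collation_table, key=len, reverse=True))
)

def collation_elements(s):
    """Tokenise s by collation key, then look the tokens up in the dict."""
    elements = []
    for match in tokenizer.finditer(s):
        elements.extend(collation_table[match.group()])
    return elements

print(collation_elements("chab"))  # "ch" is consumed as a single token
```

One thing this sketch glosses over: characters with no entry in the table are silently skipped by `finditer`, so whatever the current code does for unmatched characters would still need to be handled.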

It will be worth having some sort of timed test to see whether this approach actually makes a (positive) difference, but I suspect it might.
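
A rough harness for that timed test might look like the following (the sample string and repeat count are arbitrary; the real comparison would run the existing Trie-based path and this regex/dict path over the same, representative input):

```python
import timeit

# Arbitrary sample input; a realistic test would use strings representative
# of what the collator actually sees.
sample = "chab" * 1000

elapsed = timeit.timeit(lambda: collation_elements(sample), number=100)
print(f"regex + dict tokenisation: {elapsed:.3f}s for 100 runs")
```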
