# Tutorials

## Building an index and populating it

```python
import tantivy

# Declaring our schema.
schema_builder = tantivy.SchemaBuilder()
schema_builder.add_text_field("title", stored=True)
schema_builder.add_text_field("body", stored=True)
schema_builder.add_integer_field("doc_id", stored=True)
schema = schema_builder.build()

# Creating our index (in memory)
index = tantivy.Index(schema)
```

To have a persistent index, use the `path`
parameter to store the index on disk, e.g.:

```python
import os

index = tantivy.Index(schema, path=os.getcwd() + '/index')
```
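
If the target directory might not exist yet, it is safest to create it before opening the index; to my understanding tantivy does not create it for you. A minimal sketch under that assumption:

```python
import os

index_path = os.path.join(os.getcwd(), 'index')
# Assumption: tantivy opens an existing directory but does not create one,
# so make sure it exists before instantiating the index.
os.makedirs(index_path, exist_ok=True)
index = tantivy.Index(schema, path=index_path)
```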

By default, tantivy offers the following tokenizers,
which can be used in tantivy-py:

- `default`

  `default` is the tokenizer that will be used if you do not
  assign a specific tokenizer to your text field.
  It chops your text on punctuation and whitespace,
  removes tokens that are longer than 40 characters, and lowercases your text.

- `raw`

  Does not actually tokenize your text. It keeps it entirely unprocessed.
  It can be useful to index UUIDs or URLs, for instance.

- `en_stem`

  In addition to what `default` does, the `en_stem` tokenizer also
  applies stemming to your tokens. Stemming consists of trimming words to
  remove their inflection. This tokenizer is slower than the default one,
  but is recommended to improve recall.

To use the above tokenizers, simply provide them as a parameter to `add_text_field`, e.g.:

```python
schema_builder.add_text_field("body", stored=True, tokenizer_name='en_stem')
```
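
Likewise, a field that must match verbatim is a good fit for the `raw` tokenizer; the `uuid` field name below is purely illustrative:

```python
# Hypothetical field: each UUID is indexed as a single unprocessed token.
schema_builder.add_text_field("uuid", stored=True, tokenizer_name='raw')
```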

## Adding one document

```python
writer = index.writer()
writer.add_document(tantivy.Document(
    doc_id=1,
    title=["The Old Man and the Sea"],
    body=["""He was an old man who fished alone in a skiff in the Gulf Stream and he had gone eighty-four days now without taking a fish."""],
))
# ... and committing
writer.commit()
```
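
Several documents can be queued on the same writer before a single commit; nothing becomes searchable until `commit()` runs. A sketch with illustrative sample data:

```python
books = [
    (2, "Of Mice and Men", "A few miles south of Soledad, the Salinas River drops in close to the hillside bank."),
    (3, "Frankenstein", "You will rejoice to hear that no disaster has accompanied the commencement of an enterprise."),
]
for doc_id, title, body in books:
    writer.add_document(tantivy.Document(doc_id=doc_id, title=[title], body=[body]))
# A single commit makes both documents searchable at once.
writer.commit()
```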

## Building and Executing Queries

First, you need to get a searcher for the index:

```python
# Reload the index to ensure it points to the last commit.
index.reload()
searcher = index.searcher()
```

Then you need a valid query object, which you get by parsing your query against the index.

```python
query = index.parse_query("fish days", ["title", "body"])
(best_score, best_doc_address) = searcher.search(query, 3).hits[0]
best_doc = searcher.doc(best_doc_address)
assert best_doc["title"] == ["The Old Man and the Sea"]
print(best_doc)
```
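
To inspect more than the top result, iterate over the same `hits` list; each entry is the `(score, doc_address)` pair destructured above:

```python
# Walk all returned matches, best score first.
for score, doc_address in searcher.search(query, 3).hits:
    doc = searcher.doc(doc_address)
    print(f"{score:.3f} {doc['title']}")
```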