This repository was archived by the owner on May 16, 2025. It is now read-only.

Inconsistencies in recrawling/reindexing framework #82

Open
@jayaddison

Description

Problem

I discovered a large-ish number of duplicate recipes in the search index today, and have been tracing back through the (re)crawling and (re)indexing logic to figure out how this could have happened.

I think that there are perhaps three logical inconsistencies in the code that cause problems:

  • Unlike the crawl_recipe worker method, the index_recipe worker method does not check for duplicate recipes before (re)indexing content -- and as a result, manual reindexing of recipes may insert duplicates if the operator is not careful when selecting a set of recipes.
  • There was what looked like a bug where crawl_recipe may not have been matching recipes to their earliest-known crawled equivalent. This has been addressed by commit d1e0901 (and follow-up bugfix b2e0ef3, which suggests some code refactoring is needed here).
  • There seems to be tight coupling between the URL requested for recrawling/reindexing by the crawler (reciperadar/recipes.py) and the RecipeURL constructed by the backend recipe worker(s). This doesn't seem great; we should be able to accept the oldest-known URL, the latest-known URL, or anything in between and expect reliable behaviour.

Analysis

Deduplication

The Celery task workflow during recrawling is crawl_recipe -> index_recipe, whereas reindexing runs index_recipe alone. The latter is often more convenient and performant, since it avoids calls to the crawler service and any external HTTP requests those may trigger.
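For illustration, the two workflows might look roughly like this as Celery signatures (the module path and function arguments are assumptions; only the task names crawl_recipe and index_recipe come from the codebase):

```python
from celery import chain

# Hypothetical task imports; the real module layout may differ.
from reciperadar.workers.recipes import crawl_recipe, index_recipe


def recrawl(recipe_id):
    # Recrawling: fetch fresh content via the crawler service, then pass
    # the result on to indexing.
    chain(crawl_recipe.s(recipe_id), index_recipe.s()).delay()


def reindex(recipe_id):
    # Reindexing: index the stored content directly. Faster, but it
    # currently bypasses the duplicate check performed by crawl_recipe.
    index_recipe.delay(recipe_id)
```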

However, it's difficult to strictly enforce the set of recipes that operators may select for reindexing, and we should make the backend robust against mistakes there (because they will happen). This implies that we should move the de-duplication logic (which detects duplicates and remaps the src and dst on the Recipe so that duplicates merge) into the index_recipe worker method.
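A minimal sketch of what that move could look like, assuming a SQLAlchemy-style model lookup and treating find_duplicate, merge_duplicates, and index as hypothetical stand-ins for the existing de-duplication and indexing helpers:

```python
from celery import Celery

app = Celery("reciperadar")


@app.task
def index_recipe(recipe_id):
    recipe = Recipe.query.get(recipe_id)  # assumed SQLAlchemy-style lookup

    # Hypothetical duplicate check, relocated from crawl_recipe: if an
    # earlier-crawled equivalent exists, remap src/dst so the records
    # merge instead of a second copy being indexed.
    duplicate = find_duplicate(recipe)
    if duplicate is not None:
        recipe = merge_duplicates(src=recipe, dst=duplicate)

    index(recipe)  # hypothetical write to the search index
```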

Refactoring

It may make sense to relocate the find_earliest_crawl and find_latest_crawl utility methods from RecipeURL to Recipe, and to always drive those lookups from the resolved URL of the recipe (a sketch follows the goal list below).

The goals here are multiple, and roughly in this priority order:

  • To ensure that earliest- and latest-crawl URL lookups for a recipe are handled correctly, robustly, and with the latest-known information at indexing time, so that recipe duplication does not occur.
  • To minimize the number of database queries required to perform the lookups (currently they require querying/constructing a RecipeURL object, for example, and the lookups themselves query the CrawlURL model).
  • To organize the code in a comprehensible way.
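As a sketch of the relocation, assuming SQLAlchemy-style models and assuming CrawlURL has resolved_url and crawled_at columns (both assumptions; only Recipe, CrawlURL, and the two method names come from the issue):

```python
class Recipe(db.Model):
    resolved_url = db.Column(db.String)  # assumed column

    def find_earliest_crawl(self):
        # Query CrawlURL directly by the recipe's resolved URL, avoiding
        # the intermediate RecipeURL query/construction.
        return (
            CrawlURL.query
            .filter_by(resolved_url=self.resolved_url)
            .order_by(CrawlURL.crawled_at.asc())
            .first()
        )

    def find_latest_crawl(self):
        return (
            CrawlURL.query
            .filter_by(resolved_url=self.resolved_url)
            .order_by(CrawlURL.crawled_at.desc())
            .first()
        )
```

Driving both lookups from a single resolved-URL field on Recipe would mean one query per lookup, against CrawlURL only, regardless of which historical URL an operator supplied.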
