Unstructured-inference lazily downloads models, which is likely the better choice for most use cases; however, there are scenarios where the consumer would like to prefetch models.
Currently, this can be achieved for the default layout parser models (e.g. the ones typically used for PDFs) with:
```python
from unstructured_inference.models.detectron2 import MODEL_TYPES

MODEL_TYPES[None]['model_path']
MODEL_TYPES[None]['config_path']
```
but it would be nice if there were a simple function call (with parameters to allow warming different models) the user could make to ensure any needed artifacts are downloaded.
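A minimal sketch of what such a helper might look like, building on the `MODEL_TYPES` access pattern above (the function name `prefetch_models` and its `model_types` parameter are hypothetical, not part of the current API):

```python
from unstructured_inference.models.detectron2 import MODEL_TYPES

def prefetch_models(model_types=(None,)):
    """Hypothetical helper: prefetch artifacts for the given MODEL_TYPES keys.

    ``None`` selects the default layout parser model; other keys would
    warm other models.
    """
    for model_type in model_types:
        entry = MODEL_TYPES[model_type]
        # Per the snippet above, touching these entries resolves the lazily
        # downloaded paths, so the weights and config land in the local cache.
        _ = entry['model_path']
        _ = entry['config_path']
```

A consumer could then call `prefetch_models()` at build or deploy time so that inference at runtime needs no network access.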