Missing mlserver-mllib runtime in base MLServer Docker image
Summary
The base MLServer Docker image (docker.io/seldonio/mlserver@sha256:07890828601515d48c0fb73842aaf197cbcf245a5c855c789e890282b15ce390) does not include the mlserver-mllib runtime package, causing deployment failures when trying to serve Spark MLlib models.
Problem Description
When deploying MLServer with Spark MLlib models using the official base image, the following error occurs:
ModuleNotFoundError: No module named 'mlserver_mllib'
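The gap can be confirmed directly against the published image. The check below is a minimal sketch that assumes only the image digest quoted above; pip show reports "Package(s) not found" when a package is absent.

```bash
# Query the base image for the MLlib runtime package.
# If the runtime were bundled, this would print its version and install location;
# here it is expected to report "Package(s) not found: mlserver-mllib".
docker run --rm --entrypoint pip \
  docker.io/seldonio/mlserver@sha256:07890828601515d48c0fb73842aaf197cbcf245a5c855c789e890282b15ce390 \
  show mlserver-mllib
```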
Steps to Reproduce
- Use the base MLServer image: docker.io/seldonio/mlserver@sha256:07890828601515d48c0fb73842aaf197cbcf245a5c855c789e890282b15ce390
- Create a model configuration with:

```json
{
  "name": "mllib-model",
  "implementation": "mlserver_mllib.MLlibModel",
  "parameters": {
    "uri": ".",
    "version": "v0.1.0"
  }
}
```

- Deploy the model.
- Observe the ModuleNotFoundError.
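The failure can also be reproduced locally without KServe. The command below is a sketch that assumes the model configuration above is saved as model-settings.json in ./mllib-model, and it overrides the image entrypoint to call mlserver start directly.

```bash
# Mount the folder containing model-settings.json and start MLServer against it.
# Startup fails with "ModuleNotFoundError: No module named 'mlserver_mllib'".
docker run --rm \
  -v "$PWD/mllib-model:/models" \
  --entrypoint mlserver \
  docker.io/seldonio/mlserver@sha256:07890828601515d48c0fb73842aaf197cbcf245a5c855c789e890282b15ce390 \
  start /models
```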
Expected Result
The base MLServer image should include common runtimes like mlserver-mllib so that users can deploy models without needing custom images.
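Until the runtime ships in the base image, the usual workaround is a thin custom image. The Dockerfile below is a sketch rather than an official recommendation; it assumes the mlserver-mllib package on PyPI is compatible with the MLServer version baked into the base image, so pinning both versions is advisable.

```dockerfile
# Extend the base MLServer image with the Spark MLlib runtime.
FROM docker.io/seldonio/mlserver@sha256:07890828601515d48c0fb73842aaf197cbcf245a5c855c789e890282b15ce390

# Install the missing runtime; pin the version to match the bundled mlserver release.
RUN pip install --no-cache-dir mlserver-mllib
```

Build and push this image, then point the deployment at it instead of the stock image.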
Actual Result
Environment tarball not found at '/mnt/models/environment.tar.gz'
Environment not found at './envs/environment'
2025-06-20 09:50:56,895 [mlserver] WARNING - Model name 'spark-mllib-model' is different than model's folder name 'models'.
2025-06-20 09:50:56,939 [mlserver.parallel] DEBUG - Starting response processing loop...
2025-06-20 09:50:56,940 [mlserver.rest] INFO - HTTP server running on http://0.0.0.0:8080
INFO: Started server process [1]
INFO: Waiting for application startup.
2025-06-20 09:50:56,956 [mlserver.metrics] INFO - Metrics server running on http://0.0.0.0:8082
2025-06-20 09:50:56,956 [mlserver.metrics] INFO - Prometheus scraping endpoint can be accessed on http://0.0.0.0:8082/metrics
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
2025-06-20 09:50:56,958 [mlserver.grpc] INFO - gRPC server running on http://0.0.0.0:9000
INFO: Application startup complete.
2025-06-20 09:50:56,958 [mlserver] ERROR - Some of the models failed to load during startup!
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/mlserver/server.py", line 125, in start
await asyncio.gather(
File "/opt/conda/lib/python3.10/site-packages/mlserver/registry.py", line 299, in load
return await self._models[model_settings.name].load(model_settings)
File "/opt/conda/lib/python3.10/site-packages/mlserver/registry.py", line 144, in load
new_model = self._model_initialiser(model_settings)
File "/opt/conda/lib/python3.10/site-packages/mlserver/parallel/registry.py", line 196, in model_initialiser
return model_initialiser(model_settings)
File "/opt/conda/lib/python3.10/site-packages/mlserver/registry.py", line 52, in model_initialiser
model_class = model_settings.implementation
File "/opt/conda/lib/python3.10/site-packages/mlserver/settings.py", line 388, in implementation
_reload_module(self.implementation_)
File "/opt/conda/lib/python3.10/site-packages/mlserver/settings.py", line 62, in _reload_module
module = importlib.import_module(module_path)
File "/opt/conda/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'mlserver_mllib'
2025-06-20 09:50:56,958 [mlserver.parallel] INFO - Waiting for shutdown of default inference pool...
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO: Uvicorn running on http://0.0.0.0:8082 (Press CTRL+C to quit)
2025-06-20 09:50:58,666 [mlserver.parallel] INFO - Shutdown of default inference pool complete
2025-06-20 09:50:58,666 [mlserver.grpc] INFO - Waiting for gRPC server shutdown
2025-06-20 09:50:58,667 [mlserver.grpc] INFO - gRPC server shutdown complete
INFO: Shutting down
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [1]
INFO: Application shutdown complete.
INFO: Finished server process [1]
2025-06-20 09:50:58,867 [mlserver.parallel] INFO - Waiting for shutdown of default inference pool...
2025-06-20 09:50:58,867 [mlserver.parallel] INFO - Shutdown of default inference pool complete
2025-06-20 09:50:58,867 [mlserver.parallel] INFO - Waiting for shutdown of default inference pool...
2025-06-20 09:50:58,867 [mlserver.parallel] INFO - Shutdown of default inference pool complete
Environment
- MLServer Image: docker.io/seldonio/mlserver@sha256:07890828601515d48c0fb73842aaf197cbcf245a5c855c789e890282b15ce390
- Deployment Platform: OpenShift/KServe
- Model Type: Spark MLlib
- Error: ModuleNotFoundError: No module named 'mlserver_mllib'
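For completeness, on KServe the custom image can be wired in through a ServingRuntime. The manifest below is a rough sketch: the runtime name, registry path, and model format label (sparkmllib) are assumptions for illustration, and in practice the remaining fields (args, env, ports) would be copied from the stock kserve-mlserver ClusterServingRuntime.

```yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: mlserver-mllib          # hypothetical runtime name
spec:
  supportedModelFormats:
    - name: sparkmllib          # assumed model format label
      version: "1"
      autoSelect: true
  containers:
    - name: kserve-container
      # Custom image built from the Dockerfile sketch above (hypothetical registry path).
      image: registry.example.com/mlserver-mllib:latest
```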