Add E2E Prometheus metrics to applications #845
Conversation
Unlike apps, CI tests create multiple of them.
Rebased pre-commit changes to earlier commits, and pushed the above-described solution for the CI issue with enabling the "inprogress" metrics. I'm currently testing whether I could get a somewhat similar metric (reliably!) also from … If that works, enabling the "inprogress" metrics for … EDIT: On further grepping, tests seem to be testing on the unwanted …
Creating multiple MicroService()s creates multiple HTTPService()s, which create multiple Prometheus FastAPI instrumentator instances. While the latter handled that fine for ChatQnA and the normal HTTP metrics, that was not the case for its "inprogress" metrics in CI. Therefore the MicroService constructor name argument is now mandatory, so that it can be used to make the "inprogress" metrics for HTTPService instances unique. PS. The instrumentator requires an HTTPService-instance-specific Starlette instance, so it cannot be made a singleton.
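The CI failure described above can be illustrated with plain prometheus_client, which the services already pull in: registering two gauges with the same name in one registry raises an error, whereas prefixing each gauge with a (now mandatory) per-service name keeps them distinct. This is a minimal sketch; the svc_a/svc_b names are hypothetical, not from the PR.

```python
from prometheus_client import CollectorRegistry, Gauge

registry = CollectorRegistry()

# Per-service metric names (prefixed with the service's mandatory name
# argument) can coexist in the same registry...
a = Gauge("svc_a_requests_inprogress", "in-flight requests", registry=registry)
b = Gauge("svc_b_requests_inprogress", "in-flight requests", registry=registry)

# ...whereas registering the same metric name twice raises ValueError,
# which is the kind of collision multiple HTTPService instances hit when
# they shared a single default "inprogress" gauge name.
try:
    Gauge("svc_a_requests_inprogress", "duplicate", registry=registry)
    collision = False
except ValueError:
    collision = True
```

For the actual instrumentation, prometheus-fastapi-instrumentator exposes the "inprogress" gauge name as a constructor parameter, so the unique service name can be folded into it per app instance.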
LGTM
@Spycsh, @lvliang-intel Any suggestions on where the new metrics should be documented; in the GenAIExamples or the GenAIInfra repo? Or is it enough to add Prometheus ServiceMonitors to the Helm charts for (the rest of) the OPEA applications, and some Grafana dashboards for them?
Hi @eero-t, GenAIEval (https://github.com/opea-project/GenAIEval/tree/main/evals/benchmark/grafana) does track some Prometheus metrics, and provides naive measurements of first-token latency and average token latency, which are taken on the client side instead of through Prometheus. You are welcome to add some documentation there in the future.
The Eval repo is for evaluating and benchmarking, whereas the metrics provided by the service "frontend" are (also) for operational monitoring, i.e. normal, everyday usage of the service. I think the most appropriate place would be the Infra repo, as it already includes monitoring support, both with Helm charts [1] and with separate manifest files plus a couple of Grafana dashboards [2], but that's rather Kubernetes-specific. [1] https://github.com/opea-project/GenAIInfra/blob/main/helm-charts/monitoring.md
Sure, thanks for pointing that out.
Description
This PR makes the following changes:
Issues
opea-project/GenAIExamples#391
Type of change
Dependencies
No new ones.
(prometheus_fastapi_instrumentator, imported for HttpService, already imported the prometheus_client module to apps.)

Tests
Verified manually that the produced metrics match the ones from a benchmark run stressing the ChatQnA application.
Potential future changes (other PRs)
- Disable *_created metrics (prometheus_client.disable_created_metrics())?
- Change the ServiceOrchestrator object, and all applications and tests creating them, to provide a unique name for the orchestrator instance, and use that as a metric prefix, instead of all orchestrator instances sharing the same set of megaservice_ prefixed singleton metrics...