Observing an LLM Deployed on Red Hat OpenShift AI with OpenTelemetry and Dynatrace

[Collaboration diagram]

This Helm chart streamlines the deployment of LLMs (served with engines such as vLLM) together with an integrated OpenTelemetry Collector sidecar. It provides real-time observability by scraping Prometheus-formatted metrics from each model and exporting them to platforms such as Dynatrace.
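
As a rough sketch, the sidecar's collector configuration amounts to a Prometheus receiver feeding an OTLP/HTTP exporter. The values below are assumptions (vLLM's default metrics port, the environment variable names), not the chart's actual template:

# Minimal collector config sketch. Assumes the model server exposes
# Prometheus metrics on localhost:8000/metrics (vLLM's default) and that
# DT_ENDPOINT / DT_API_TOKEN are injected from the otel-secrets Secret.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: vllm
          scrape_interval: 15s
          static_configs:
            - targets: ["localhost:8000"]

exporters:
  otlphttp:
    endpoint: ${env:DT_ENDPOINT}   # e.g. https://<env-id>.live.dynatrace.com/api/v2/otlp
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [otlphttp]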

Key Features

  1. Deploy LLMs using KServe's InferenceService
  2. Automatically inject an OpenTelemetry Collector as a sidecar for each model (see the pod sketch after this list)
  3. Scrape and process Prometheus metrics from the model
  4. Export metrics using OTLP over HTTP to Dynatrace or any OTLP-compatible backend
  5. Secure integration using Kubernetes secrets for API tokens and endpoints
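
Conceptually, each model pod ends up with two containers. The sketch below shows the shape only; the container names, images, and ports are assumptions, not the chart's real values:

# Illustrative pod shape only; names, images, and ports are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: llama-3-2-3b-instruct-predictor   # hypothetical name
spec:
  containers:
    - name: kserve-container          # the vLLM model server
      image: vllm/vllm-openai:latest  # placeholder image
      ports:
        - containerPort: 8000         # serves /metrics in Prometheus format
    - name: otel-collector            # injected sidecar
      image: otel/opentelemetry-collector-contrib:latest  # placeholder image
      # scrapes localhost:8000/metrics and forwards metrics via OTLP over HTTP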

Prerequisites

Configure the otel-secrets Kubernetes Secret with your Dynatrace endpoint and API token before installing the chart. Once that is configured, deploy the Helm charts.
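
A minimal sketch of that Secret, assuming the chart reads keys named DT_ENDPOINT and DT_API_TOKEN (check the chart's templates for the exact key names it expects):

apiVersion: v1
kind: Secret
metadata:
  name: otel-secrets
  namespace: dynatrace          # match the NAMESPACE used at install time
type: Opaque
stringData:
  DT_ENDPOINT: https://<your-environment-id>.live.dynatrace.com/api/v2/otlp  # Dynatrace OTLP ingest base URL
  DT_API_TOKEN: <api-token-with-metrics-ingest-scope>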

Deployment

cd Dynatrace/deploy/helm
make install NAMESPACE=dynatrace LLM=llama-3-2-3b-instruct LLM_TOLERATION="nvidia.com/gpu" 

This deploys:

  1. The llama-3-2-3b-instruct model
  2. An OpenTelemetry Collector sidecar container as part of the LLM deployment

The sidecar exports the vLLM metrics to Dynatrace.
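
If you want only the model's own series forwarded, the collector's filter processor can drop everything else. A sketch follows, with the caveats that filter syntax varies across collector versions and that the vllm metric prefix is an assumption to verify against your model's /metrics output:

# Optional: forward only vllm-prefixed metrics (verify syntax for your collector version).
processors:
  filter:
    metrics:
      include:
        match_type: regexp
        metric_names:
          - "vllm.*"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [filter]      # add the filter into the metrics pipeline
      exporters: [otlphttp]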

Dashboard

[Dashboard screenshots: D1 and D2]
