

EasyTravel GPT Travel Advisor

A demo application, written in Python, for giving travel advice. Observability signals are provided by OpenTelemetry.

It uses Ollama and Pinecone to generate advice for a given destination.
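
Conceptually, the RAG flow looks like the sketch below. This is a simplified illustration rather than the application's actual code: it assumes the ollama and pinecone Python clients, the travel-advisor index created later in this guide, and records whose metadata carries a text field.

import os
import ollama
from pinecone import Pinecone

destination = "Lisbon"

# Embed the question with the same model used to build the index;
# orca-mini:3b returns 3200-dimensional vectors.
embedding = ollama.embeddings(model="orca-mini:3b", prompt=destination)["embedding"]

# Retrieve the most relevant knowledge snippets from Pinecone.
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("travel-advisor")
results = index.query(vector=embedding, top_k=3, include_metadata=True)
context = "\n".join((m.metadata or {}).get("text", "") for m in results.matches)

# Ask the LLM for advice, augmented with the retrieved context.
answer = ollama.generate(
    model="orca-mini:3b",
    prompt=f"Using this context:\n{context}\n\nGive travel advice for {destination}.",
)["response"]
print(answer)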

Note: This product is not officially supported by Dynatrace!

Try it yourself

See a live demo

Configure Pinecone

Head over to https://app.pinecone.io/ and log in to your account.

  1. Create a new index called travel-advisor with a dimension of 3200 and the cosine metric.

    The index will store our knowledge source, which the RAG pipeline uses to augment the LLM's travel recommendation. The dimension is 3200 because this demo uses the embedding model orca-mini:3b, which returns vectors of 3200 elements.

    Pinecone Index Creation

  2. Once the index has been created and is running, create an API key to connect to it.

    Follow the Pinecone documentation on authentication to get the API key for your Pinecone index and store it as a Kubernetes secret with the following command:
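
    A typical form of this command is shown below; the secret name and key are illustrative assumptions, so adjust them to whatever the deployment manifests in this repository expect.

    # Illustrative only: the secret and key names are assumptions, not taken from the repo
    kubectl create secret generic pinecone --from-literal=pinecone-api-key=<YOUR_PINECONE_KEY>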

Try it out yourself

Open "RAG" version in GitHub Codespaces

Developer Information Below

Run Locally

Start Ollama locally by running ollama serve. For this example, we'll use a simple model, orca-mini:3b; you can pull it by running ollama run orca-mini:3b. Afterwards, you can start the application locally by running the following commands.

export PINECONE_API_KEY=<YOUR_PINECONE_KEY> 
export OTEL_ENDPOINT=https://<YOUR_DT_TENANT>.live.dynatrace.com/api/v2/otlp
export API_TOKEN=<YOUR_DT_TOKEN>
python app.py
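
For reference, a minimal sketch of how these variables could be wired into the OpenTelemetry Python SDK is shown below. The application's real setup may differ; the Authorization header uses Dynatrace's standard Api-Token scheme.

import os
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export spans to the Dynatrace OTLP trace endpoint, authenticating with the API token.
exporter = OTLPSpanExporter(
    endpoint=f"{os.environ['OTEL_ENDPOINT']}/v1/traces",
    headers={"Authorization": f"Api-Token {os.environ['API_TOKEN']}"},
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)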

Deploy on a Local K8S Cluster

You will need Docker or Podman installed.

Create a cluster if you do not already have one:

kind create cluster --config .devcontainer/kind-cluster.yml --wait 300s
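
You can confirm the cluster is ready before continuing (standard kubectl commands, not specific to this repository):

kubectl cluster-info
kubectl get nodes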

Customise and set the following environment variables:

export PINECONE_API_KEY=<YOUR_PINECONE_KEY> 
export DT_ENDPOINT=https://<YOUR_DT_TENANT>.live.dynatrace.com
export DT_TOKEN=<YOUR_DT_TOKEN>

Run the deployment script:

.devcontainer/deployment.sh
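
Once the script finishes, you can watch the workloads come up; the command below is a generic check, not something the script itself runs:

kubectl get pods --all-namespaces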