A demo application, written in Python, that gives travel advice. Observability signals are emitted via OpenTelemetry.
It uses Ollama and Pinecone to generate advice for a given destination.
Note: This product is not officially supported by Dynatrace!
- Explore our sample dashboards on the Dynatrace Playground.
- Implement AI observability in your environments with our detailed Dynatrace Documentation.
Head over to https://app.pinecone.io/ and log in to your account.
- Create a new index called `travel-advisor` with a dimension of 3200 and a cosine metric. The index will store our knowledge source, which the RAG pipeline will use to augment the LLM's travel-recommendation output. The dimension is 3200 because this demo uses the embedding model `orca-mini:3b`, which returns vectors of 3200 elements.
- After creating and running the index, create an API key to connect. Follow the Pinecone documentation on authentication to obtain the API key for your Pinecone index, and store it as a Kubernetes secret.
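One way to store the key as a Kubernetes secret is sketched below. The secret name `pinecone` and key `pinecone-api-key` are assumptions for illustration — check `.devcontainer/deployment.sh` for the names the deployment actually expects.

```shell
# Hypothetical secret name and key — verify against .devcontainer/deployment.sh
kubectl create secret generic pinecone \
  --from-literal=pinecone-api-key="$PINECONE_API_KEY"
```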
Start Ollama locally by running `ollama serve`.
For this example, we'll use a simple model, `orca-mini:3b`. You can pull it by running `ollama run orca-mini:3b`.
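Once Ollama is serving, you can sanity-check that the model returns 3200-element vectors (the dimension the Pinecone index expects). This is a minimal sketch, assuming Ollama's default port 11434 and its `/api/embeddings` endpoint; it is not part of the demo app itself.

```python
import json
import urllib.request

# Default local Ollama endpoint (assumption: standard port, no auth)
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def embedding_request(model: str, prompt: str) -> bytes:
    """Build the JSON payload the Ollama embeddings endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt}).encode()

def embed(prompt: str, model: str = "orca-mini:3b") -> list:
    """Return the embedding vector for `prompt` from a locally running Ollama."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=embedding_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

# With Ollama running:  len(embed("A weekend in Lisbon"))  -> 3200 for orca-mini:3b
```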
Afterwards, you can start the application locally by running the following commands:
export PINECONE_API_KEY=<YOUR_PINECONE_KEY>
export OTEL_ENDPOINT=https://<YOUR_DT_TENANT>.live.dynatrace.com/api/v2/otlp
export API_TOKEN=<YOUR_DT_TOKEN>
python app.py
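The app relies on the three environment variables above. How `app.py` actually reads them is not shown here; the sketch below only illustrates a fail-fast check for the same variable names, so a missing key surfaces at startup rather than on the first request.

```python
import os

# The three variables the local run expects (names taken from the export commands above)
REQUIRED_VARS = ["PINECONE_API_KEY", "OTEL_ENDPOINT", "API_TOKEN"]

def load_config(env=None) -> dict:
    """Collect required settings from the environment; raise early if any is missing."""
    env = os.environ if env is None else env
    missing = [name for name in REQUIRED_VARS if name not in env]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: env[name] for name in REQUIRED_VARS}
```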
You will need Docker or Podman installed.
Create a cluster if you do not already have one:
kind create cluster --config .devcontainer/kind-cluster.yml --wait 300s
Customise and set the following environment variables:
export PINECONE_API_KEY=<YOUR_PINECONE_KEY>
export DT_ENDPOINT=https://<YOUR_DT_TENANT>.live.dynatrace.com
export DT_TOKEN=<YOUR_DT_TOKEN>
Run the deployment script:
.devcontainer/deployment.sh