This repository contains the source code for the paper "Benchmarking Function Hook Latency in Cloud-Native Environments", which we published at the 14th Symposium on Software Performance (SSP) in 2023. The presentation slides of that paper can be found here.
> **Note:** This project is not officially supported by Dynatrace.
The repository is structured as follows:
- `/benchmark` contains the Locust load generator, the system under test (SUT), and the Kubernetes manifests for deploying them
- `/hook` contains the source code of the function hook (and a pre-built binary) that we inject into the SUT
- `/results` contains the data from our experiment, and a Jupyter notebook to analyze and visualize it

If you are only interested in the raw data from our experiments, look into the `/results/data` directory.
Besides following empirical standards for software benchmarking (Ralph et al., 2021) and methodological principles for performance evaluation in cloud computing (Papadopoulos et al., 2019), we recommend that researchers and engineers who benchmark function hook latency in cloud-native environments also consider the following:
- Place the load generator and the system under test in separate containers, but within the same pod.
- If that is not possible, at least ensure that both pods are deployed on the same physical node.
- Weigh the benefits of introducing a service mesh against its additional network overhead.
- Generally avoid benchmarking in multi-tenancy clusters.
- Place the monitoring tool as close as possible to the layer where the hook is injected.
- Describe if the benchmark measures the specific hooking overhead in isolation (micro benchmark) or represents a real-world application with a hook injected into it (macro benchmark).
- Describe how the hooked function is typically used by applications.
- Ensure that your servers do not hit any resource limits during the experiment (see the sketch after this list).
- Use a high number of repetitions to regain statistical power over the high variance that cloud-native environments introduce.
- Conduct experiments in differently configured environments.
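For the resource-limit check in Kubernetes, a simple option is to watch pod and node consumption while the benchmark runs; a minimal sketch, assuming the metrics-server is installed in the cluster:

```bash
# Watch for CPU or memory pressure while the benchmark is running.
kubectl top pods -n benchmark
kubectl top nodes
```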
If you have suggestions on how to improve these recommendations, please let us know by opening an issue or a pull request.
If this is useful for your work, you can cite our paper as follows:
```bibtex
@inproceedings{Kahlhofer2023:BenchmarkingFunctionHookLatency,
  title      = {Benchmarking Function Hook Latency in Cloud-Native Environments},
  author     = {Kahlhofer, Mario and Kern, Patrick and Henning, S{\"o}ren and Rass, Stefan},
  booktitle  = {Softwaretechnik-Trends Band 43, Heft 4},
  eventtitle = {14th Symposium on Software Performance},
  publisher  = {Gesellschaft f{\"u}r Informatik e.V.},
  location   = {Karlsruhe, Germany},
  series     = {SSP '23},
  pages      = {11--13},
  year       = {2023},
  month      = nov,
  issn       = {0720-8928},
  url        = {https://dl.gi.de/handle/20.500.12116/43246}
}
```
In the following, we demonstrate how to reproduce the experiments of our paper. As a prerequisite, you will need to install the following tools:
- Docker for running containers locally
- Kind for running Kubernetes clusters locally
- kubectl for interacting with Kubernetes clusters
- (optional, for AWS) An AWS account for running experiments in EKS
- (optional, for AWS) The AWS CLI for interacting with AWS
## Building the hook

This repository already includes a pre-built version of the function hook in `/hook/out/readhook.so`, making this step optional.
For building, we use a rather old `gcc:7.5.0` image so that we build the hook against an older version of the C standard library (GLIBC 2.28). This way, we have greater backwards compatibility with applications that use older versions of the C standard library.
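To illustrate the mechanism: `LD_PRELOAD` makes the dynamic linker load our shared object before glibc, so the `read` symbol exported by `readhook.so` shadows the libc one. A minimal sketch of trying this locally with the pre-built binary (or your own build from below), assuming a Linux host and that the hook wraps `read`, as its name suggests:

```bash
# Inject the hook into an arbitrary dynamically linked program;
# every read() it performs now passes through readhook.so first.
LD_PRELOAD="$PWD/hook/out/readhook.so" cat /etc/hostname
```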
In Bash, to build the hook in a container and copy it to the host system, run:

```bash
cd hook
docker build -t readhook .
id=$(docker create readhook)
docker cp "$id:/out/readhook.so" ./out/readhook.so
docker rm -v "$id"
```
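To verify that the freshly built hook really only depends on older glibc symbols, you can inspect its dynamic symbol table; a quick check, assuming `objdump` from binutils is available:

```bash
# List the glibc symbol versions the hook references; nothing newer
# than GLIBC_2.28 should show up.
objdump -T out/readhook.so | grep -o 'GLIBC_[0-9.]*' | sort -Vu
```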
In PowerShell, to build the hook in a container and copy it to the host system, run:

```powershell
cd hook
docker build -t readhook .
$Id = docker create readhook
docker cp "$($Id):/out/readhook.so" ./out/readhook.so
docker rm -v $Id
```
## Experiment 1: Local benchmark with Docker Compose

We prepared a `docker-compose.yaml` file that sets up the following:

- Container for the SUT, on port `8080`
- Container for the SUT, with `LD_PRELOAD=/opt/hook/readhook.so` set, mounted from `/hook/out` in this repository, on port `8081`
- Container for the Locust load generator, with `/benchmark/benchmark_results` from this repository mounted into it, on port `8089`
First, let Compose build and start the containers:

```bash
cd benchmark
docker compose up -d
```
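Before benchmarking, it is worth checking that both SUT variants respond. A quick sanity check, assuming the SUT answers plain HTTP GET requests on `/` (adjust the path to an endpoint your SUT actually exposes):

```bash
# Both commands should print an HTTP status code (e.g., 200).
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080/
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8081/
```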
Then, browse to http://localhost:8089 and start two benchmarks:

- One with the host `http://host.docker.internal:8080` (no trailing slash) to test the SUT without the hook
- One with the host `http://host.docker.internal:8081` (no trailing slash) to test the SUT with the hook
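If you prefer scripted runs over the web UI, Locust can also run headless from the command line. A sketch with hypothetical user count, spawn rate, and duration; the service name `test-bench` is an assumption for illustration and must match the load generator's name in the compose file, and we assume the container's default locustfile is picked up:

```bash
# Run a 5-minute headless benchmark against the SUT without the hook,
# writing CSV results below /tmp/benchmark_results.
docker compose exec test-bench locust --headless \
  --users 50 --spawn-rate 10 --run-time 5m \
  --host http://host.docker.internal:8080 \
  --csv /tmp/benchmark_results/no-hook
```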
Results will be placed into `/tmp/benchmark_results` in the Locust container, which is mounted locally to `./benchmark/benchmark_results`.
To clean up again, run:

```bash
docker compose down
```
## Experiment 2: Kind cluster with SUT and load generator in the same pod

First, create a Kind cluster with our `kind-cluster-config.yaml` that also mounts some host paths:

```bash
cd benchmark
kind create cluster --name benchmark --config ./k8s-manifests/kind-cluster-config.yaml
kubectl create namespace benchmark
```
Then, build and push the images to the Kind cluster with the help of Compose (this may take some time):

```bash
docker compose build
kind load docker-image -n benchmark system-under-test system-under-test
kind load docker-image -n benchmark test-bench test-bench
```
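To double-check that the images actually landed in the cluster, you can list the images known to the Kind node's container runtime; a quick check, relying on Kind's default node naming (`<cluster>-control-plane`):

```bash
# The Kind node runs containerd; crictl lists the images it knows about.
docker exec benchmark-control-plane crictl images | grep -E 'system-under-test|test-bench'
```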
Next, apply a `Deployment` and a `Service` resource in that cluster, wait for the rollout, and port-forward the Locust UI:

```bash
kubectl apply -n benchmark -f ./k8s-manifests/kind-single-pod.yaml
kubectl rollout status deployment tb-single-pod -n benchmark --timeout=60s
kubectl port-forward -n benchmark service/locust 8089:8089
```
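Since this experiment places the SUT and the load generator in the same pod, it can be reassuring to confirm that all containers of that pod came up before you start measuring:

```bash
# The READY column should report all containers of the tb-single-pod
# pod as running, and NODE shows where the pod was scheduled.
kubectl get pods -n benchmark -o wide
```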
Then, browse to http://localhost:8089 and start two benchmarks:

- One with the host `http://localhost:8080` (no trailing slash) to test the SUT without the hook
- One with the host `http://localhost:8081` (no trailing slash) to test the SUT with the hook
Results will be placed into `/tmp/benchmark_results` in the Locust container, which is mounted locally to `./benchmark/benchmark_results`.
To clean up again, run:

```bash
kind delete cluster --name benchmark
```
## Prerequisites for experiments in AWS EKS

For experiments 3 and 4, we presume that your AWS EKS cluster is already set up and that you have `kubectl` configured to talk to it. Typically, this can be done with the AWS CLI, like so:

```bash
aws eks update-kubeconfig --name ${CLUSTER_NAME} --region ${REGION}
```
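The snippets below use the `AWS_ACCOUNT_ID` and `REGION` environment variables. If you do not have the account ID at hand, you can derive it from your current credentials; a small convenience, assuming the AWS CLI is already configured:

```bash
# Resolve the account ID of the currently authenticated identity.
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export REGION=eu-central-1  # hypothetical value; use your cluster's region
```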
We need to push our container images into AWS ECR. First, log in to the container registry:

```bash
aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com
```
Next, create repositories for the images:

```bash
aws ecr create-repository --repository-name system-under-test
aws ecr create-repository --repository-name test-bench
aws ecr create-repository --repository-name readhook
```
Then, build the images locally (also the scratch `readhook` container), tag them, and push them to ECR:

```bash
cd benchmark
docker compose --profile with-readhook build
docker tag system-under-test ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/system-under-test
docker tag test-bench ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/test-bench
docker tag readhook ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/readhook
docker push ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/system-under-test
docker push ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/test-bench
docker push ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/readhook
```
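You can confirm that the pushes went through by listing the images in one of the repositories:

```bash
# Should list the image that was just pushed.
aws ecr describe-images --repository-name system-under-test --region ${REGION}
```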
Note that we have a special `readhook` container image that contains just the `/out/readhook.so` file, so that we can mount it without volume claims.
## Experiment 3: AWS EKS with SUT and load generator in the same pod

Before continuing, make sure that you read the prerequisites for experiments in AWS EKS from before.
Let's start by creating a namespace for our benchmark:
```bash
kubectl create namespace benchmark
```
Then, we take the `aws-single-pod.template.yaml` manifest, substitute the `REPOSITORY_URL` variable, and deploy it. If you use Windows, change this value manually in the file. With Bash, you can use the following command:
```bash
cd benchmark
export AWS_ACCOUNT_ID=YOUR_AWS_ACCOUNT_ID
export REGION=YOUR_AWS_REGION
cat ./k8s-manifests/aws-single-pod.template.yaml \
  | sed -e 's@${REPOSITORY_URL}@'"${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"'@g' \
  | kubectl apply -n benchmark -f -
```
Next, wait for the deployment and then port-forward the Locust UI locally:
```bash
kubectl rollout status deployment tb-single-pod -n benchmark --timeout=60s
kubectl port-forward -n benchmark service/locust 8089:8089
```
Then, browse to http://localhost:8089 and start two benchmarks:

- One with the host `http://localhost:8080` (no trailing slash) to test the SUT without the hook
- One with the host `http://localhost:8081` (no trailing slash) to test the SUT with the hook
Results will be placed into `/tmp/benchmark_results` in the Locust container. We need to copy them manually to our local machine. In Bash, run the following:
```bash
podname=$(kubectl get pods -n benchmark --selector=app.kubernetes.io/name=tb-single-pod --no-headers -o custom-columns=":metadata.name")
kubectl cp -c test-bench "benchmark/${podname}:/tmp/benchmark_results" ./benchmark_results
```
In PowerShell, run the following:
```powershell
$PodName = kubectl get pods -n benchmark --selector=app.kubernetes.io/name=tb-single-pod --no-headers -o custom-columns=":metadata.name"
kubectl cp -c test-bench "benchmark/$($PodName):/tmp/benchmark_results" ./benchmark_results
```
To clean up again, run:
```bash
kubectl delete all --all -n benchmark
```
## Experiment 4: AWS EKS with SUT and load generator in separate pods, each pod on a different node
Before continuing, make sure that you read the prerequisites for experiments in AWS EKS from before.
Let's start by creating a namespace for our benchmark:
```bash
kubectl create namespace benchmark
```
We need to note down the hostnames of at least two different nodes in our cluster:

```bash
kubectl get nodes --no-headers -o custom-columns=":metadata.name"
export NODE_HOSTNAME_FOR_LOCUST=MANUALLY_COPY_THE_FIRST_HOSTNAME_FROM_ABOVE
export NODE_HOSTNAME_FOR_SUT=MANUALLY_COPY_THE_SECOND_HOSTNAME_FROM_ABOVE
```
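If you would rather not copy the hostnames by hand, the same can be scripted; a small convenience that simply picks the first two nodes returned:

```bash
# Grab the first two node hostnames from the cluster.
nodes=$(kubectl get nodes --no-headers -o custom-columns=":metadata.name")
export NODE_HOSTNAME_FOR_LOCUST=$(echo "$nodes" | sed -n '1p')
export NODE_HOSTNAME_FOR_SUT=$(echo "$nodes" | sed -n '2p')
```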
Then, we take the `aws-different-nodes.template.yaml` manifest, substitute the `REPOSITORY_URL`, `NODE_HOSTNAME_FOR_LOCUST`, and `NODE_HOSTNAME_FOR_SUT` variables, and deploy it. If you use Windows, change these values manually in the file. With Bash, you can use the following command:
```bash
cd benchmark
export AWS_ACCOUNT_ID=YOUR_AWS_ACCOUNT_ID
export REGION=YOUR_AWS_REGION
cat ./k8s-manifests/aws-different-nodes.template.yaml \
  | sed -e 's@${REPOSITORY_URL}@'"${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"'@g' \
  | sed -e 's@${NODE_HOSTNAME_FOR_LOCUST}@'"${NODE_HOSTNAME_FOR_LOCUST}"'@g' \
  | sed -e 's@${NODE_HOSTNAME_FOR_SUT}@'"${NODE_HOSTNAME_FOR_SUT}"'@g' \
  | kubectl apply -n benchmark -f -
```
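Because this experiment hinges on the two pods being scheduled onto different nodes, verify the placement once the pods are up:

```bash
# The NODE column should show different nodes for the Locust pod
# and the SUT pod.
kubectl get pods -n benchmark -o wide
```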
Next, wait for the deployment and then port-forward the Locust UI locally:
```bash
kubectl rollout status deployment tb-locust-node -n benchmark --timeout=60s
kubectl port-forward -n benchmark service/locust 8089:8089
```
Then, browse to http://localhost:8089 and start two benchmarks:

- One with the host `http://sut.benchmark.svc.cluster.local:8080` (no trailing slash) to test the SUT without the hook
- One with the host `http://sut.benchmark.svc.cluster.local:8081` (no trailing slash) to test the SUT with the hook
Results will be placed into `/tmp/benchmark_results` in the Locust container. We need to copy them manually to our local machine. In Bash, run the following:
```bash
podname=$(kubectl get pods -n benchmark --selector=app.kubernetes.io/name=tb-locust-node --no-headers -o custom-columns=":metadata.name")
kubectl cp -c test-bench "benchmark/${podname}:/tmp/benchmark_results" ./benchmark_results
```
In PowerShell, run the following:
```powershell
$PodName = kubectl get pods -n benchmark --selector=app.kubernetes.io/name=tb-locust-node --no-headers -o custom-columns=":metadata.name"
kubectl cp -c test-bench "benchmark/$($PodName):/tmp/benchmark_results" ./benchmark_results
```
To clean up again, run:
```bash
kubectl delete all --all -n benchmark
```