Description
What happened:
Although running Open Match without a game frontend is not recommended, we first wanted to test Open Match for our project that way.
Our clients talk to the Open Match frontend directly with a simple flow:
-> CreateTicket
-> GetTicket until there is an assignment (once every X seconds)
-> DeleteTicket once we have an assignment
We were able to stabilize the open file descriptors (used for connections to Redis) and goroutines by using the production values here, but we cannot understand why the frontend's memory usage keeps increasing until the pod is killed by Kubernetes. All the frontend appears to do is talk to Redis and return ticket information, and every metric other than the frontend's memory usage looks normal.
The graphs below illustrate the issue better than words.
Memory usage of processes (as you can see, the frontend keeps increasing until the pod is killed):

Open FDs and Go Routines:

Client and server request rates (as you can see, they are pretty much the same):

What you expected to happen:
We expect the frontend's memory usage to stabilize at some point, since none of the other metrics change and the query rate stays the same.
How to reproduce it (as minimally and precisely as possible):
Send requests to openmatch frontend from many clients with the flow below:
-> CreateTicket
-> GetTicket until there is an assignment (once every X seconds)
-> DeleteTicket once we have an assignment
Output of kubectl version:
Client Version: v1.28.4
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3-eks-adc7111
Cloud Provider/Platform (AKS, GKE, Minikube etc.):
EKS
Open Match Release Version:
1.8.1
Install Method(yaml/helm):
helm