Support for postgres databases with table sharding #403


Open

dklenke opened this issue Apr 10, 2025 · 0 comments · May be fixed by #404

dklenke (Contributor) commented Apr 10, 2025

When running Postgres in a production environment, the documentation recommends using table sharding: https://tyk.io/docs/api-management/tyk-pump/#sharding

I could not get this to work in my Kubernetes Tyk environment, so I opened a ticket with support, and they told me the following variables need to be set:

TYK_DB_STORAGE_MAIN_TABLESHARDING=true
TYK_DB_ENABLEAGGREGATELOOKUPS=true

... plus some extra variables for tyk-pump, but they also added the following:
"Please make sure that in the Dashboard, TYK_DB_STORAGE_ANALYTICS_TYPE and TYK_DB_STORAGE_LOGS_TYPE are unset. This is because the configuration above will only work if you haven't set these two config options."

This presents a problem in the Helm chart, as TYK_DB_STORAGE_ANALYTICS_TYPE is hard-coded to postgres when the Helm value global.storageType is postgres, which of course it is in my case. I manually patched the Deployment resource to remove the variable, and this does in fact fix the original issue.
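For anyone hitting the same thing, the manual patch amounts to something like the following strategic merge patch (the container name is a placeholder, and I'm assuming strategic-merge deletion of env entries behaves as documented), applied with kubectl patch deployment <dashboard-deployment> --patch-file remove-analytics-type.yaml:

```yaml
# remove-analytics-type.yaml -- deletes the hard-coded env entry from the
# Dashboard Deployment; adjust the container name to whatever the chart renders.
spec:
  template:
    spec:
      containers:
        - name: dashboard
          env:
            - name: TYK_DB_STORAGE_ANALYTICS_TYPE
              $patch: delete
```

Of course Helm will re-add the variable on the next upgrade, which is why a chart-level option would be nicer.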

I understand that requiring TYK_DB_STORAGE_ANALYTICS_TYPE to be unset would probably be considered a workaround, and that some future tyk-dashboard version may no longer require it, but until then I would love to be able to optionally unset this variable.
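One possible shape for this in the chart, purely as a sketch (the dashboard.unsetAnalyticsStorageType flag is a name I've made up, not something that exists today), would be to wrap the env entry in a conditional:

```yaml
# Hypothetical: only render the variable when it hasn't been explicitly opted out.
{{- if and (eq .Values.global.storageType "postgres") (not .Values.dashboard.unsetAnalyticsStorageType) }}
- name: TYK_DB_STORAGE_ANALYTICS_TYPE
  value: postgres
{{- end }}
```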
