Description
I am not using, and will not use, the tobs CLI, because I can't put any of that in my Infrastructure-as-Code repo to manage and version the config. I'm using Pulumi to (try to) deploy this chart:
https://www.pulumi.com/registry/packages/kubernetes/api-docs/helm/v3/chart/
My code looks like this at the moment:
```typescript
import { helm } from "@pulumi/kubernetes";

new helm.v3.Chart("prometheus", {
    chart: "tobs",
    repo: "timescale",
    namespace: "prometheus",
    version: "0.8.0",
    values: {
        namespaceOverride: "prometheus",
        "timescaledb-single": {
            enabled: false,
        },
        promscale: {
            connectionSecretName: "prometheus-stage",
            connection: {
                uri: "", // just in case it isn't clear to the template
                host: "redacted.cloud.timescale.com",
                port: redacted,
                database: "redacted",
                user: "redacted",
                password: "redacted",
            },
        },
        "kube-prometheus-stack": {
            enabled: true,
            grafana: {
                enabled: true,
                timescale: {
                    database: {
                        enabled: true,
                        host: "redacted.tsdb.cloud.timescale.com",
                        user: "redacted",
                        pass: "redacted",
                        port: redacted,
                        dbName: "redacted",
                    },
                    adminUser: "redacted",
                    adminPassSecret: "prometheus-stage",
                },
            },
        },
    },
});
```
On to the ordeals. I'll try to keep the salt to a minimum, but I've burned a non-trivial amount of time trying to reverse-engineer how this chart works so I can provide an excellent monitoring and metric-visualization experience for my stakeholders, and I am not at all happy with the experience I've had with tobs.
- Apparently, when you configure `promscale.connectionSecretName`, the Helm chart pulls all connection strings from the configured Secret and completely ignores everything in the `promscale.connection` block. Neither this behavior nor the expected key/value structure of the Secret is clear from the documentation.

Ok. Decipher how everything should be named and shove all the connection strings into the Secret for now, then, because trying to use `promscale.connection.uri` also triggers errors for a completely new path of undocumented values that the chart seems to expect (starting with `tobs.fullname`).
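For whoever hits this next, here is the kind of Secret I ended up sketching. The key names are my assumption, modeled on Promscale's standard `PROMSCALE_DB_*` environment variables — they are not a documented contract of the chart, which is exactly the problem:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical sketch: key names follow Promscale's PROMSCALE_DB_* env vars.
// The chart's actual expected keys had to be reverse-engineered from the
// templates; treat these as assumptions, not a documented interface.
const promscaleSecret = new k8s.core.v1.Secret("prometheus-stage", {
    metadata: {
        name: "prometheus-stage",
        namespace: "prometheus",
    },
    // stringData lets you write plain strings; Kubernetes base64-encodes them.
    stringData: {
        PROMSCALE_DB_HOST: "redacted.cloud.timescale.com",
        PROMSCALE_DB_PORT: "redacted",
        PROMSCALE_DB_NAME: "redacted",
        PROMSCALE_DB_USER: "redacted",
        PROMSCALE_DB_PASSWORD: "redacted",
    },
});
```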
- Next, `kube-prometheus-stack.grafana.timescale.adminPassSecret` doesn't populate `PGPASSWORD` on the Pod that the `grafana-db` Job launches; the Pod description shows `PGPASSWORD: <null>`. The Secret is properly configured, there's no RBAC in the way, and everything else depending on the Secret populates and executes fine up to this point, but this Job never gets the password and fails accordingly in the logs (`no password supplied`).
Ok. Override the password directly at runtime and we'll figure out security later.
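Since I'm on Pulumi anyway, one stopgap is Pulumi's Chart `transformations`, which let you patch rendered manifests before they're applied — here, injecting `PGPASSWORD` into the Job myself instead of trusting the chart. A sketch only: the Job name suffix, container layout, and the `PGPASSWORD` key in my Secret are all assumptions you'd verify against what `helm template` actually emits:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Workaround sketch: patch the rendered grafana-db Job to wire PGPASSWORD
// from the Secret directly. Assumes a single container in the Job and that
// the Secret holds the password under the key "PGPASSWORD" — adjust to taste.
new k8s.helm.v3.Chart("prometheus", {
    chart: "tobs",
    // ...repo, namespace, version, values as in the snippet above...
    transformations: [
        (obj: any) => {
            if (obj.kind === "Job" && obj.metadata?.name?.endsWith("grafana-db")) {
                const container = obj.spec.template.spec.containers[0];
                // Drop the chart's broken (null) PGPASSWORD entry, if any.
                container.env = (container.env ?? []).filter(
                    (e: any) => e.name !== "PGPASSWORD"
                );
                container.env.push({
                    name: "PGPASSWORD",
                    valueFrom: {
                        secretKeyRef: { name: "prometheus-stage", key: "PGPASSWORD" },
                    },
                });
            }
        },
    ],
});
```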
- Next, the `prometheus-promscale` Pods don't properly inherit the port overrides from either `promscale.connection.port` or `promscale.connectionSecretName`. So I can't point Promscale at my hosted TimescaleDB instance, which isn't running on 5432, and the Timescale hosted service doesn't let me specify the port at provision time to force 5432 on the server side.
guess_ill_die.jpeg
The documentation is severely lacking. Populate a Secret with key/value pairs, it says, but it never specifies how to structure the data. There are malformed template paths all over the place that don't match how the template logic actually works. I have to dig around in the template files to figure out how to properly form objects and template paths to pass value overrides.
Ok. I'll just not use this chart then and instead string together individual deployments of Prometheus, the Promscale Connector, and Grafana.
Wish me luck.