Is there any way to use zitadel without giving database admin access? #142
Comments
There is also no way to open an issue that isn't a doc issue, but helm charts will continue to need improvements beyond docs as Kubernetes grows. I recommend having additional issue templates for reporting bugs other than direct security vulnerabilities.
Hi @jessebot
You have not added permissions for users to reopen issues, so I cannot do that. You should allow users more time than 1 minute to respond before closing an issue. If your metrics are calculated based on time to close an issue, your product manager should adjust them.
how do you do that via the helm chart though? It is not explained here
Here is my current values.yaml:

replicaCount: 1
# Overrides the image tag to the latest version
# as kept up to date by renovateBot
image:
tag: "v2.35.0"
zitadel:
# See all defaults here:
# https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
configmapConfig:
DefaultInstance:
LoginPolicy:
# disable registration AKA signups
AllowRegister: false
Database:
Postgres:
Host: zitadel-postgres-rw.zitadel.svc
Port: 5432
Database: zitadel
User:
Username: zitadel
SSL:
Mode: verify-full
Admin:
SSL:
Mode: verify-full
ExternalDomain: myzitadel.example.com
TLS:
# off until https://github.com/zitadel/zitadel-charts/pull/141
# or a similar easy fix would be merged
Enabled: false
# specifies if ZITADEL is exposed externally through TLS this
# must be set to true even if TLS is not enabled on ZITADEL itself
# but TLS traffic is terminated on a reverse proxy
# !!! Changing this after initial setup breaks your system !!!
ExternalSecure: true
ExternalPort: 443
Machine:
Identification:
Hostname:
Enabled: true
Webhook:
Enabled: false
# setup ZITADEL with a service account
FirstInstance:
Org:
Machine:
Machine:
# Creates a service account with the name zitadel-admin-sa,
# which results in a secret 'zitadel-admin-sa' with a key 'zitadel-admin-sa.json'
Username: zitadel-admin-sa
Name: Admin
MachineKey:
Type: 1
# Reference the name of the secret that contains the masterkey.
# The key should be named "masterkey".
masterkeySecretName: "zitadel-core-key"
# The Secret containing the CA certificate at key ca.crt needed for establishing secure database connections
dbSslCaCrtSecret: "zitadel-postgres-server-cert"
# The db admins secret containing the client certificate and key at tls.crt and tls.key needed for establishing secure database connections
dbSslAdminCrtSecret: "zitadel-postgres-server-cert"
# The db users secret containing the client certificate and key at tls.crt and tls.key needed for establishing secure database connections
dbSslUserCrtSecret: "zitadel-postgres-zitadel-cert"
ingress:
enabled: true
className: "nginx"
annotations:
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: letsencrypt-prod
hosts:
- host: myzitadel.example.com
paths:
- path: /
pathType: Prefix
tls:
- secretName: zitadel-tls
hosts:
- myzitadel.example.com
metrics:
enabled: false
serviceMonitor:
enabled: false
readinessProbe:
enabled: true
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6
livenessProbe:
enabled: true
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6
Ok, unfortunately, I couldn't find an option to allow everybody to reopen issues 🙁 I will not close them immediately anymore.
Yes, if you don't intend to run
We are not maintaining and publishing such a list. I recommend you let ZITADEL initialize the DB and then remove the admin credentials. Alternatively, you can initialize a local database and record the SQL statements.
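In chart-value terms, "removing the admin credentials" after the first successful install could look roughly like the sketch below. This is only an illustration based on the values that appear later in this thread (the Database.Postgres.Admin block, zitadel.dbSslAdminCrtSecret, and initJob.enabled), not an official procedure.

# Sketch only: once ZITADEL has initialized the database, the admin/superuser
# pieces can be dropped from the chart values.
zitadel:
  configmapConfig:
    Database:
      Postgres:
        User:
          Username: zitadel
          SSL:
            Mode: verify-full
        # Admin block removed after the first install
  dbSslUserCrtSecret: "zitadel-postgres-zitadel-cert"
  # dbSslAdminCrtSecret removed after the first install
initJob:
  # the init job is what needs the admin credentials; disable it once setup has succeeded
  enabled: false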
thanks for looking!
I will try that.
wait, then how can I set up the zitadel database and user ahead of time if you don't tell me what permissions it needs? I don't want to let zitadel have admin access to my production cluster. This is a bit confusing. I just need to know what grants you need and what schemas you may need access to. If that is closed-source info, this project is not fully open source.
The scripts that ZITADEL uses are here and I see there is also a description about what they do.
Excellent, thanks for your sleuthing! I will try to set this up and report back later today.
I think those commands are incomplete for postgresql specifically. If I run all of the commands in this directory as the postgresql user in my initDBScript for zitadel, I still get errors in the setup job:

time="2023-11-14T10:38:52Z" level=info msg="setup started" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:63"
time="2023-11-14T10:38:52Z" level=warning msg="postgres is currently in beta" caller="/home/runner/work/zitadel/zitadel/internal/database/postgres/config.go:65"
time="2023-11-14T10:38:52Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:39" name=01_tables
time="2023-11-14T10:38:52Z" level=info msg="query failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/repository/sql/query.go:98" error="ERROR: column \"creation_date\" does not exist (SQLSTATE 42703)"
time="2023-11-14T10:38:52Z" level=fatal msg="unable to migrate step 1" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:117" error="ID=SQL-KyeAx Message=unable to filter events Parent=(ERROR: column \"creation_date\" does not exist (SQLSTATE 42703))" zitadel-argocd-application-set.yaml for reference of all values passed in---
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: zitadel-web-app-set
namespace: argocd
annotations:
pref.argocd.argoproj.io/default-view: "network"
pref.argocd.argoproj.io/default-pod-sort: "topLevelResource"
spec:
goTemplate: true
# generator allows us to source specific values from an external k8s secret
generators:
- plugin:
configMapRef:
name: secret-var-plugin-generator
input:
parameters:
secret_vars:
- zitadel_hostname
- global_cluster_issuer
template:
metadata:
name: zitadel-web-app
annotations:
argocd.argoproj.io/sync-wave: "4"
spec:
project: zitadel
destination:
server: https://kubernetes.default.svc
namespace: zitadel
syncPolicy:
syncOptions:
- ApplyOutOfSyncOnly=true
automated:
prune: true
selfHeal: true
source:
repoURL: https://zitadel.github.io/zitadel-charts
chart: zitadel
targetRevision: 7.1.0
helm:
releaseName: zitadel
# https://github.com/zitadel/zitadel-charts/blob/main/charts/zitadel/values.yaml
values: |
replicaCount: 1
# Overrides the image tag to the latest version
# as kept up to date by renovateBot
image:
tag: "v2.35.0"
zitadel:
# See all defaults here:
# https://github.com/zitadel/zitadel/blob/main/cmd/defaults.yaml
configmapConfig:
DefaultInstance:
LoginPolicy:
# disable registration AKA signups
AllowRegister: false
Database:
Postgres:
Host: zitadel-postgres-rw.zitadel.svc
Port: 5432
Database: zitadel
User:
Username: zitadel
SSL:
Mode: verify-full
#Admin:
# Username: postgres
# SSL:
# Mode: verify-full
ExternalDomain: {{ .zitadel_hostname }}
TLS:
# off until https://github.com/zitadel/zitadel-charts/pull/141
# or a similar easy fix would be merged
Enabled: false
# specifies if ZITADEL is exposed externally through TLS this
# must be set to true even if TLS is not enabled on ZITADEL itself
# but TLS traffic is terminated on a reverse proxy
# !!! Changing this after initial setup breaks your system !!!
ExternalSecure: true
ExternalPort: 443
Machine:
Identification:
Hostname:
Enabled: true
Webhook:
Enabled: false
# setup ZITADEL with a service account
FirstInstance:
Org:
Machine:
Machine:
# Creates a service account with the name zitadel-admin-sa,
# which results in a secret 'zitadel-admin-sa' with a key 'zitadel-admin-sa.json'
Username: zitadel-admin-sa
Name: Admin
MachineKey:
Type: 1
# Reference the name of the secret that contains the masterkey.
# The key should be named "masterkey".
masterkeySecretName: "zitadel-core-key"
# The Secret containing the CA certificate at key ca.crt needed for establishing secure database connections
dbSslCaCrtSecret: "zitadel-postgres-server-ca-key-pair"
# The db admins secret containing the client certificate and key at tls.crt and tls.key needed for establishing secure database connections
# dbSslAdminCrtSecret: "zitadel-postgres-postgres-cert"
# The db users secret containing the client certificate and key at tls.crt and tls.key needed for establishing secure database connections
dbSslUserCrtSecret: "zitadel-postgres-zitadel-cert"
initJob:
# Once ZITADEL is installed, the initJob can be disabled.
enabled: false
ingress:
enabled: true
className: "nginx"
annotations:
kubernetes.io/tls-acme: "true"
cert-manager.io/cluster-issuer: {{ .global_cluster_issuer }}
hosts:
- host: {{ .zitadel_hostname }}
paths:
- path: /
pathType: Prefix
tls:
- secretName: zitadel-tls
hosts:
- {{ .zitadel_hostname }}
metrics:
enabled: false
serviceMonitor:
enabled: false
readinessProbe:
enabled: true
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6
livenessProbe:
enabled: true
initialDelaySeconds: 20
periodSeconds: 15
failureThreshold: 6

This is after also making sure to pass in additional commands related to schema ownership. Here's the full set of commands I run, using the CloudNativePG operator to create the postgresql cluster.

postgresql-cluster-crd-argocd-applicationset.yaml:

---
# webapp is deployed 2nd because we need secrets and persistent volumes up 1st
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: zitadel-postgres-app-set
namespace: argocd
spec:
goTemplate: true
# generator allows us to source specific values from an external k8s secret
generators:
- plugin:
configMapRef:
name: secret-var-plugin-generator
input:
parameters:
secret_vars:
- zitadel_s3_endpoint
- zitadel_s3_bucket
template:
metadata:
name: zitadel-postgres-cluster
namespace: zitadel
annotations:
argocd.argoproj.io/sync-wave: "3"
spec:
project: zitadel
destination:
server: "https://kubernetes.default.svc"
namespace: zitadel
syncPolicy:
syncOptions:
- ApplyOutOfSyncOnly=true
automated:
prune: true
selfHeal: true
source:
repoURL: https://small-hack.github.io/cloudnative-pg-cluster-chart
chart: cnpg-cluster
targetRevision: 0.3.9
helm:
releaseName: zitadel-postgres-cluster
values: |
name: zitadel-postgres
instances: 1
bootstrap:
initdb:
database: zitadel
owner: zitadel
postInitApplicationSQLRefs:
secretRefs:
- name: zitadel-postgres-init-script
key: init.sql
enableSuperuserAccess: true
backup:
# barman is a utility for backing up postgres to s3
barmanObjectStore:
destinationPath: "s3://{{ .zitadel_s3_bucket }}"
endpointURL: "https://{{ .zitadel_s3_endpoint }}"
s3Credentials:
accessKeyId:
name: zitadel-db-credentials
key : "ACCESS_KEY"
secretAccessKey:
name: zitadel-db-credentials
key : "SECRET_KEY"
retentionPolicy: "30d"
certificates:
server:
enabled: true
generate: true
client:
enabled: true
generate: true
user:
enabled: true
username:
- zitadel
- postgres
scheduledBackup:
name: zitadel-pg-backup
spec:
schedule: "0 0 0 * * *"
backupOwnerReference: self
cluster:
name: pg-backup
monitoring:
enablePodMonitor: false
postgresql:
pg_hba:
- hostnossl all all 0.0.0.0/0 reject
- hostssl all all 0.0.0.0/0 cert clientcert=verify-full

It automatically creates a user named zitadel and a database named zitadel that the zitadel user owns and, by default, has all permissions on. It uses this secret for the init SQL statements, which it runs as the postgres superuser (had to change the init.sql secret):

apiVersion: v1
kind: Secret
metadata:
name: zitadel-postgres-init-script
type: Opaque
stringData:
init.sql: |
BEGIN;
CREATE SCHEMA IF NOT EXISTS eventstore;
CREATE SCHEMA IF NOT EXISTS projections;
CREATE SCHEMA IF NOT EXISTS system;
CREATE TABLE IF NOT EXISTS system.encryption_keys (id TEXT NOT NULL, key TEXT NOT NULL, PRIMARY KEY (id));
CREATE TABLE IF NOT EXISTS eventstore.events (
instance_id TEXT NOT NULL
, aggregate_type TEXT NOT NULL
, aggregate_id TEXT NOT NULL
, event_type TEXT NOT NULL
, "sequence" BIGINT NOT NULL
, revision SMALLINT NOT NULL
, created_at TIMESTAMPTZ NOT NULL
, payload JSONB
, creator TEXT NOT NULL
, "owner" TEXT NOT NULL
, "position" DECIMAL NOT NULL
, in_tx_order INTEGER NOT NULL
, PRIMARY KEY (instance_id, aggregate_type, aggregate_id, "sequence"));
CREATE INDEX IF NOT EXISTS es_active_instances ON eventstore.events (created_at DESC, instance_id);
CREATE INDEX IF NOT EXISTS es_wm ON eventstore.events (aggregate_id, instance_id, aggregate_type, event_type);
CREATE INDEX IF NOT EXISTS es_projection ON eventstore.events (
instance_id
, aggregate_type
, event_type
, "position");
CREATE SEQUENCE IF NOT EXISTS eventstore.system_seq;
CREATE TABLE IF NOT EXISTS eventstore.unique_constraints (
instance_id TEXT
, unique_type TEXT
, unique_field TEXT
, PRIMARY KEY (instance_id, unique_type, unique_field));
GRANT ALL ON SCHEMA system TO zitadel;
GRANT ALL ON ALL TABLES IN SCHEMA system TO zitadel;
GRANT ALL ON SCHEMA eventstore TO zitadel;
GRANT ALL ON ALL TABLES IN SCHEMA eventstore TO zitadel;
GRANT ALL ON SCHEMA projections TO zitadel;
GRANT ALL ON ALL TABLES IN SCHEMA projections TO zitadel;
COMMIT;

I can also run all of those SQL commands directly as the postgres user against the postgres cluster, and they all work, but they still don't result in a functional zitadel install via the helm chart. I think that's because the init job and setup job are not properly encapsulated.

Unfortunately, I'm out of time for this project, so I will have to give zitadel admin access anyway, which may be an issue during an audit. This zitadel helm chart lives in Argo CD as an ApplicationSet, so we'd have to write additional logic outside of the IaC to enable the init job declaratively, then create a new git commit to disable the init job, remove the additional superuser cert secrets (which had to be generated) from both the zitadel ApplicationSet and the Cluster CRD, and finally disable the admin user's external access to the cluster. This makes things a little harder and more manual. I will try to come back, look into this more later, and post any solutions I come to. Maybe it makes sense to just have cert-manager rotate the admin cert afterwards; that way it can still be a Kubernetes Job managed directly in the same declarative repo.
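(The "additional commands related to schema ownership" mentioned above are not quoted in the thread. One guess at what they could look like, written in the same Secret style as the init.sql above, follows; the secret name and the exact ALTER statements are illustrative only, not taken from the thread.)

apiVersion: v1
kind: Secret
metadata:
  # hypothetical name, for illustration
  name: zitadel-postgres-ownership-script
type: Opaque
stringData:
  ownership.sql: |
    -- illustrative only: hand ownership of the pre-created objects to the zitadel role
    ALTER SCHEMA eventstore OWNER TO zitadel;
    ALTER SCHEMA projections OWNER TO zitadel;
    ALTER SCHEMA system OWNER TO zitadel;
    ALTER TABLE eventstore.events OWNER TO zitadel;
    ALTER TABLE eventstore.unique_constraints OWNER TO zitadel;
    ALTER TABLE system.encryption_keys OWNER TO zitadel;
    ALTER SEQUENCE eventstore.system_seq OWNER TO zitadel;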
@jessebot I would be interested in a solution for this as well. Right now, my workaround would be to have a separate Postgres server for Zitadel ...
@lukasredev you can set the same credentials in the user and admin database configuration, then use the following values to configure the initJob:

initJob:
  command: zitadel

(see zitadel-charts/charts/zitadel/values.yaml, lines 165 to 184 at 262325f)
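Put into chart values, that suggestion could look roughly like the sketch below. The nesting follows the values quoted earlier in this thread, and command: zitadel is taken directly from the comment above; treat the rest as an assumption to verify against the chart's values.yaml.

zitadel:
  configmapConfig:
    Database:
      Postgres:
        User:
          Username: zitadel
          SSL:
            Mode: verify-full
        Admin:
          # same non-superuser credentials as User, per the suggestion above
          Username: zitadel
          SSL:
            Mode: verify-full
initJob:
  enabled: true
  # limits the init job to the "zitadel" init command, as suggested above
  command: zitadel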
Preflight Checklist
Describe the docs you are missing or that are wrong
Thanks for continuing to work on zitadel!
I use a lot of other apps that use postgresql, and zitadel is the only one that requires database admin access, which seems like a security issue. I generally don't give admin access on clusters to applications, as it opens another hole in my infra. What is it that you need the admin access for? If it's setting permissions on specific tables, this should be something we can set up ourselves ahead of time.
Additional Context
No response