Bug description
With capsule-proxy, we can't list namespaces in K9s, while with kubectl there is no problem.
Error in K9s: `Watcher failed for v1/namespace : [list] access denied on resource "":"v1/namespace"`.
This only concerns list/watch/get on namespaces: we can still access a specific namespace directly with `k9s -n <namespace>` or, inside K9s, with `:namespace <namespace>` (though without autocomplete).
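For reference, the same list/watch K9s needs can be replayed with plain kubectl (a sketch; `proxy-kubeconfig` is a placeholder for whatever kubeconfig points kubectl at capsule-proxy):

```shell
# list and watch namespaces through capsule-proxy, the verbs the K9s watcher needs
kubectl --kubeconfig proxy-kubeconfig get namespaces
kubectl --kubeconfig proxy-kubeconfig get namespaces --watch

# -v=8 dumps the raw HTTP exchanges, handy for comparing kubectl with K9s
kubectl --kubeconfig proxy-kubeconfig get namespaces -v=8
```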
How to reproduce
Steps to reproduce the behavior:
- Capsule Chart configuration `values.yaml`:

```yaml
crds:
  install: true

nodeSelector:
  node-role.kubernetes.io/control-plane: ""

manager:
  kind: DaemonSet
  options:
    capsuleConfiguration: default
    capsuleUserGroups:
%{ for tenantname, values in tenants }
      # Tenant: ${tenantname}
%{ for group, val in values.tenant_groups }
      - "oidc:${val}"
%{ endfor ~}%{ endfor ~}

# webhooks:
#   hooks:
#     nodes:
#       failurePolicy: Ignore

proxy:
  enabled: true
  kind: DaemonSet
  webhooks:
    enabled: true
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  certManager:
    generateCertificates: true
  options:
    generateCertificates: false
    oidcUsernameClaim: "preferred_username"
    extraArgs:
      - "--feature-gates=ProxyClusterScoped=true"
      - "--feature-gates=ProxyAllNamespaced=true"
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1Gi"
      cpu: "500m"
  service:
    type: ClusterIP
    port: 9001
  ingress:
    enabled: true
    className: "nginx-private"
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      nginx.ingress.kubernetes.io/server-snippet: |
        proxy_ssl_verify off;
      # nginx.ingress.kubernetes.io/ssl-redirect: "true"
    hosts:
      - host: "capsule-proxy.${cluster.ingress_private_domain}"
        paths: ["/"]
    tls:
      - secretName: capsule-proxy-tls
        hosts:
          - capsule-proxy.${cluster.ingress_private_domain}
  auth:
    tokenAuth:
      enabled: true
```
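As a sanity check (not from the chart docs; the namespace and pod label below are assumptions, adjust them to your install), the feature gates can be confirmed on the running proxy pods:

```shell
# assumes the release lives in capsule-system with the usual
# app.kubernetes.io/name=capsule-proxy pod label
kubectl -n capsule-system get pods -l app.kubernetes.io/name=capsule-proxy \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[0].args}{"\n"}{end}'
```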
- Provide all managed Kubernetes resources
I've created two tenants, and users have access to both tenants through an OIDC connection.
```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: ${tenant_name}
spec:
  owners:
    - kind: Group
      name: oidc:${tenant_group_owners}
      clusterRoles:
        - admin
        - capsule-namespace-deleter
        - terraform:tenant-${tenant_name}-default
    - kind: Group
      name: oidc:${tenant_group_users}
      clusterRoles:
        - admin
        - terraform:tenant-${tenant_name}-default # with or without this, same error
    - kind: Group
      name: oidc:${tenant_group_viewers}
      clusterRoles:
        - view
  preventDeletion: true
  containerRegistries:
    allowed:
      - docker.io
    allowedRegex: '.mycompany.com'
  namespaceOptions:
    quota: ${tenant_namespace_count}
  resourceQuotas:
    scope: Tenant
    items:
      - hard:
          limits.cpu: "${tenant_limits_cpu}"
          limits.memory: "${tenant_limits_memory}"
          requests.cpu: "${tenant_requests_cpu}"
          requests.memory: "${tenant_requests_memory}"
      - hard:
          requests.storage: "${tenant_requests_storage}"
          persistentvolumeclaims: ${tenant_pvc_count}
      - hard:
          pods: ${tenant_pods_count}
  storageClasses:
    allowed:
%{ for class in tenant_storageclasses }
      - "${class}"
%{ endfor ~}
```
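One thing worth checking here (a sketch, not part of the original report; `proxy-kubeconfig` is again a placeholder): since all owners are groups, the tenant only matches if the OIDC token actually carries the `oidc:`-prefixed groups. `kubectl auth whoami` shows how the request is authenticated:

```shell
# prints the username and groups resolved for this kubeconfig;
# the oidc:<group> entries should match the Tenant owners above
kubectl --kubeconfig proxy-kubeconfig auth whoami
```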
- The capsule-system namespace has this label: `pod-security.kubernetes.io/enforce: privileged`
Expected behavior
Apparently, K9s doesn't work the same way as kubectl.
Perhaps there's a problem with cluster-scoped resources (maybe `selfsubjectaccessreviews`, `selfsubjectrulesreviews`, or `v1/namespace`)?
I don't use K9s daily, but users are complaining about this issue.
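If the access-review path is the culprit, this should reproduce the denial without K9s: `kubectl auth can-i` issues a SelfSubjectAccessReview, which is roughly the permission probe K9s runs before starting its watcher (`proxy-kubeconfig` is the same placeholder as above):

```shell
# expected to return "yes" for tenant users if namespace listing
# is allowed through the proxy
kubectl --kubeconfig proxy-kubeconfig auth can-i list namespaces
kubectl --kubeconfig proxy-kubeconfig auth can-i watch namespaces
```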
Logs
No particular logs.
Additional context
- Capsule version: 0.10.8
- Helm Chart version: 0.10.8
- Kubernetes version: 1.33.4
- Kubectl version: 1.34.1
- K9s version: 0.50.12