
Introduce Namespace Selectors to Control Which Namespaces Argo CD Monitors for Changes #21835

Open
nebojsa-prodana opened this issue Feb 10, 2025 · 4 comments · May be fixed by #21846
Labels
enhancement New feature or request proposal:in-progress Proposal is being discussed. Code implementation should wait until approved.

Comments

@nebojsa-prodana

nebojsa-prodana commented Feb 10, 2025

Summary

Extend Argo CD’s cluster secrets to allow namespace selection using a label selector.

This would enable dynamic filtering of namespaces instead of maintaining a static list.

Motivation

We are in the process of moving our workloads to be deployed through Argo CD and would like to monitor only the namespaces that have adopted it.

In environments with multiple Argo CD instances, such as:

  • Workload Argo CD: Deploys user workloads and should only monitor namespaces containing user applications.
  • Platform Argo CD: Manages platform components (e.g., Istio, Argo Rollouts, Prometheus).

Currently, Workload Argo CD observes all events cluster-wide by default, including those unrelated to user applications.

We are already making use of resource.exclusions to filter out CRDs that do not constitute user workloads, but we'd also like to limit the namespaces where monitoring occurs.
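For context, resource exclusions live in the argocd-cm ConfigMap; a trimmed illustration of the approach (the group/kind entries below are examples, not our exact list):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Stop watching resource kinds that are not user workloads
  # (illustrative entries only).
  resource.exclusions: |
    - apiGroups:
        - "cilium.io"
      kinds:
        - CiliumIdentity
      clusters:
        - "*"
```

This works per resource kind, but it cannot express "only these namespaces", which is what this proposal adds.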

For instance, we would want to avoid observing deployment or pod events from a large workload (300+ pods) that has yet to be onboarded to Argo CD.

Furthermore, different clusters have different namespaces provisioned. Maintaining the list of provisioned namespaces for each cluster secret could be quite a chore.

Proposal

Introduce a namespaceSelector field in the cluster secret, which would function as an alternative to the existing namespaces list (mutually exclusive).

apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster.example.com
  server: https://mycluster.example.com
  config: ...

  # clusterResources can be set to false when namespaceSelector is present;
  # the selector is still applied.
  clusterResources: "false"

  # These will be ignored if namespaceSelector is present
  namespaces: "namespace1,namespace2"

  namespaceSelector: |
    {
      "matchLabels": {
        "purpose": "user-workloads"
      },
      "matchExpressions": [
        {
          "key": "environment",
          "operator": "In",
          "values": ["production", "staging"]
        }
      ]
    }
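To make the intended filtering semantics concrete, here is a self-contained Go sketch of how such a selector could be parsed and evaluated against namespace labels. The types and the `matches` helper are hand-rolled for illustration (only the `In` operator is shown); an actual implementation would use the standard `metav1.LabelSelector` from k8s.io/apimachinery instead.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hand-rolled stand-ins for metav1.LabelSelector, so the sketch is self-contained.
type LabelSelectorRequirement struct {
	Key      string   `json:"key"`
	Operator string   `json:"operator"`
	Values   []string `json:"values"`
}

type LabelSelector struct {
	MatchLabels      map[string]string          `json:"matchLabels"`
	MatchExpressions []LabelSelectorRequirement `json:"matchExpressions"`
}

// matches reports whether a namespace's labels satisfy the selector.
// Only the "In" operator is sketched here; NotIn/Exists/DoesNotExist are omitted.
func (s LabelSelector) matches(nsLabels map[string]string) bool {
	for k, v := range s.MatchLabels {
		if nsLabels[k] != v {
			return false
		}
	}
	for _, req := range s.MatchExpressions {
		if req.Operator != "In" {
			continue
		}
		found := false
		for _, v := range req.Values {
			if nsLabels[req.Key] == v {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}

func main() {
	// The namespaceSelector value from the cluster secret above.
	raw := `{
	  "matchLabels": {"purpose": "user-workloads"},
	  "matchExpressions": [
	    {"key": "environment", "operator": "In", "values": ["production", "staging"]}
	  ]
	}`
	var sel LabelSelector
	if err := json.Unmarshal([]byte(raw), &sel); err != nil {
		panic(err)
	}

	fmt.Println(sel.matches(map[string]string{
		"purpose": "user-workloads", "environment": "production",
	})) // true: both matchLabels and matchExpressions are satisfied
	fmt.Println(sel.matches(map[string]string{
		"purpose": "stress-testing", "environment": "production",
	})) // false: matchLabels does not match
}
```

A namespace would need to satisfy both `matchLabels` and every `matchExpressions` requirement, consistent with standard Kubernetes label-selector semantics.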

This would allow us to:

  • Preserve existing behaviour while offering a more flexible selection mechanism.
  • Decouple namespace provisioning from Argo CD configuration.
  • Reduce the need for external processes or manual updates when new namespaces are provisioned.

This enhancement would streamline namespace management in large-scale deployments and improve Argo CD’s reconciliation efficiency.

@nebojsa-prodana nebojsa-prodana added the enhancement New feature or request label Feb 10, 2025
@nebojsa-prodana nebojsa-prodana linked a pull request Feb 11, 2025 that will close this issue
@leoluz leoluz added the proposal:in-progress Proposal is being discussed. Code implementation should wait until approved. label Feb 13, 2025
@zachaller
Contributor

We may also want to consider supporting matchLabels and matchExpressions from the standard Kubernetes metav1.LabelSelector type.

@nebojsa-prodana
Author

We may also want to consider supporting matchLabels and matchExpressions from the standard Kubernetes metav1.LabelSelector type.

Yep, I am using that already in the PoC PR - https://github.com/argoproj/argo-cd/pull/21846/files#diff-1637a10eecce380802dc8c0a10ac6870c8afeabb4fa679845a21341fcd80faf9R427

@leoluz
Collaborator

leoluz commented Feb 13, 2025

Hi @nebojsa-prodana, we discussed your proposal in today's contributors meeting.

I'd like to understand a bit better what your main goal is. What is your main concern with Argo CD keeping track of all cluster resources? Performance, security, something else?

@nebojsa-prodana
Copy link
Author

Hi @leoluz

What is your main concern with Argo CD keeping track of all cluster resources? Performance, security, something else?

Right now, the primary concern is performance; the improved security from Argo CD not having access to cluster-level resources is a bonus. Our security teams are aware that Argo CD is overprivileged, though, and there is work planned to harden our Argo CD instances.

Regarding performance:

This Argo CD instance really does not need to monitor cluster-scoped resources. It is dedicated to deploying user workloads to specific namespaces. We worked around this by using resource.inclusions and resource.exclusions to limit what is being monitored. Ignoring cluster-level resources alone helped significantly:

[screenshots: event-rate metrics before and after excluding cluster-scoped resources]

However, as mentioned in the motivation section, in our setup we cannot easily specify which namespaces should be monitored in the cluster secrets, since users are able to create their own namespaces:

[screenshots: event-rate metrics showing events from namespaces without Argo CD Applications]

This is with ignoreDifferences already configured quite aggressively, and as you can see we are successfully ignoring the majority of events, but we still shouldn't be observing e.g. 190 HPA events/s. We see that many because Argo CD is monitoring namespaces where no Argo CD Application CR is deployed and that are used for stress/load testing or other purposes.

We also had to exclude pods from being tracked by Argo CD, as some services run with hundreds of pods and tracking them caused Argo CD to slow to a crawl. We are planning to upgrade to v2.14 soon, which will enable us to ignore dependent resources. The above screenshots were taken on Argo CD v2.12.4+27d1e64.
