
[target-allocator] Target Allocator Pod gives errors about missing access to scrapeconfigs and probes resources #1754

Open
@tarunfeb27

Description


Using the opentelemetry-target-allocator chart.
I am seeing the following errors in the target allocator (TA) pod, even though I am only using ServiceMonitor and PodMonitor resources:

{"level":"info","ts":"2025-07-09T06:35:24Z","msg":"pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251: failed to list *v1alpha1.ScrapeConfig: scrapeconfigs.monitoring.coreos.com is forbidden: User \"system:serviceaccount:prometheus:target-allocator-sa\" cannot list resource \"scrapeconfigs\" in API group \"monitoring.coreos.com\" at the cluster scope: Azure does not have opinion for this user."}
{"level":"error","ts":"2025-07-09T06:35:24Z","msg":"Unhandled Error","logger":"UnhandledError","error":"pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251: Failed to watch *v1alpha1.ScrapeConfig: failed to list *v1alpha1.ScrapeConfig: scrapeconfigs.monitoring.coreos.com is forbidden: User \"system:serviceaccount:prometheus:target-allocator-sa\" cannot list resource \"scrapeconfigs\" in API group \"monitoring.coreos.com\" at the cluster scope: Azure does not have opinion for this user.","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:166\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:316\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:314\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:72"}
{"level":"info","ts":"2025-07-09T06:35:30Z","msg":"pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251: failed to list *v1.Probe: probes.monitoring.coreos.com is forbidden: User \"system:serviceaccount:prometheus:target-allocator-sa\" cannot list resource \"probes\" in API group \"monitoring.coreos.com\" at the cluster scope: Azure does not have opinion for this user."}
{"level":"error","ts":"2025-07-09T06:35:30Z","msg":"Unhandled Error","logger":"UnhandledError","error":"pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:251: Failed to watch *v1.Probe: failed to list *v1.Probe: probes.monitoring.coreos.com is forbidden: User \"system:serviceaccount:prometheus:target-allocator-sa\" cannot list resource \"probes\" in API group \"monitoring.coreos.com\" at the cluster scope: Azure does not have opinion for this user.","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:166\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:316\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:226\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/backoff.go:227\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:314\nk8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:55\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/home/runner/go/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:72"}

It looks like the chart's default clusterRole.yaml does not include rules for the scrapeconfigs and probes resources.

It is possible to override the ServiceAccount name, but not the ClusterRole, the way it can be done for the opentelemetry-collector chart.
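
In case it helps others hitting this, a possible workaround (a sketch, not something the chart provides) is to apply a supplementary ClusterRole and ClusterRoleBinding alongside the chart. The resource names below are made up; the ServiceAccount name and namespace are taken from the log lines above:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: target-allocator-scrapeconfigs-probes   # hypothetical name
rules:
  - apiGroups: ["monitoring.coreos.com"]
    resources: ["scrapeconfigs", "probes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: target-allocator-scrapeconfigs-probes   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: target-allocator-scrapeconfigs-probes
subjects:
  - kind: ServiceAccount
    name: target-allocator-sa   # as seen in the logs above
    namespace: prometheus       # as seen in the logs above

This should silence the "forbidden" reflector errors until the chart's clusterRole.yaml covers these resources or allows overriding the ClusterRole.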
