
Bug: slack_api_url_file not respected by mimir alertmanager so secrets are required in the values file #12153

@jessebot

Description

What is the bug?

Hi Grafana team!

Grafana Mimir's pod in Kubernetes logs the following warning:

ts=2025-07-21T13:24:33.9987957Z caller=multitenant.go:725 level=warn component=MultiTenantAlertmanager msg="error applying config" err="unable to load fallback configuration for anonymous: no global Slack API URL set" user=anonymous

This happens even though I have the global.slack_api_url_file parameter set in my fallback config. Slack alerts are not working because of this, even though the individual receivers also have their Slack API URL files set.

Please let me know if you need any further info. If this isn't intended to be supported, what should I do instead to keep secret data out of the values file?

Thank you for any help you can provide 🙏

How to reproduce it?

With the latest Helm chart, I set this fallback config in my values file:

alertmanager:
  enabled: true

  # secrets with slackURLs
  extraVolumes:
    - name: alertmanager-api-urls
      secret:
        secretName: alertmanager-api-urls

  extraVolumeMounts:
    - name: alertmanager-api-urls
      mountPath: /etc/secrets/alertmanager
      readOnly: true

  fallbackConfig: |
    global:
      slack_api_url_file: '/etc/secrets/alertmanager/slack_url'

# let me know if you need my whole fallbackConfig, but I've included the most
# crucial part here; a rough sketch of a fuller version follows below
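
For context, the rest of my fallbackConfig (the Alertmanager configuration itself, outside the Helm nesting) looks roughly like the sketch below; the receiver name and channel are placeholders, not my exact config:

global:
  slack_api_url_file: '/etc/secrets/alertmanager/slack_url'
route:
  receiver: 'slack-notifications'
receivers:
  # per-receiver file reference pointing at the same mounted Secret
  - name: 'slack-notifications'
    slack_configs:
      - channel: '#alerts'
        api_url_file: '/etc/secrets/alertmanager/slack_url'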

The Secret holding the URL looks like this (actual value redacted):

apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-api-urls
  namespace: monitoring
type: Opaque
data:
  slack_url: redacted
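
In case it helps, here is the same Secret in stringData form; the webhook value shown is a placeholder, since the real entry is just the Slack incoming-webhook URL:

apiVersion: v1
kind: Secret
metadata:
  name: alertmanager-api-urls
  namespace: monitoring
type: Opaque
stringData:
  # placeholder value; the real one is the Slack incoming-webhook URL
  slack_url: https://hooks.slack.com/services/REDACTED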

What did you think would happen?

I expected Mimir to respect the global.slack_api_url_file parameter, since the Prometheus Alertmanager (which I have running in the same cluster) respects it and works fine. This matters because the Slack API URL is sensitive, so I want to reference it from a mounted file rather than put it in the values file. I based this expectation on the docs, which say:

Each tenant has an Alertmanager configuration that defines notification receivers and alerting routes. The Mimir Alertmanager uses the same configuration file that the Prometheus Alertmanager uses.

What was your environment?

Kubernetes: v1.30.13-eks-5d4a308
Helm chart version: 5.8.0-weekly.347

Let me know if you need anything else!

Any additional context to share?

No response
