pagerduty_alert_grouping_setting Unexpectedly Removed or Reverted During Terraform Runs #1017

@nmistry24

Description

Terraform Version
Terraform v1.11.4

PagerDuty Provider Version
v3.25.0


Affected Resource(s)

  • pagerduty_alert_grouping_setting

Terraform Configuration Files

resource "pagerduty_alert_grouping_setting" "main_alert" {
  name = "Alert Grouping - ${pagerduty_service.main.name}"
  type = "content_based"
  config {
    fields      = var.alert_grouping_fields
    aggregate   = var.alert_grouping_aggregate # Accepts either 'any' or 'all'
    time_window = 86400                        # 24 hours
  }
  services = [pagerduty_service.main.id]
}

Expected Behavior

  • Terraform should preserve the specified pagerduty_alert_grouping_setting with:
    • type = "content_based"
    • time_window = 86400 (24 hours)
  • No unexpected diffs in terraform plan unless the configuration was explicitly changed.

Actual Behavior

  • Periodically (often after upgrading the PagerDuty provider), the terraform plan shows:
    • pagerduty_alert_grouping_setting being removed from some services.
    • time_window reverting to 300 seconds for others.
  • The Terraform configuration itself has not changed; the diffs originate from state/refresh, not from edits to the code.
  • Manual re-apply restores the correct configuration.

Steps to Reproduce

  1. Configure any number of services with pagerduty_alert_grouping_setting as shown above.
  2. Apply the configuration.
  3. Upgrade or reinitialize the PagerDuty provider (e.g., after some time or version bump).
  4. Run terraform plan.
  5. Observe unexpected diffs in pagerduty_alert_grouping_setting.

Likely Cause

  • Inconsistencies between Terraform state and the PagerDuty API response regarding alert grouping settings.
  • Possibly due to:
    • Internal default values applied by PagerDuty but not surfaced correctly via the API.
    • Changes in provider behavior during upgrades (e.g., handling of time_window defaults).

Temporary Workaround

  • Re-run terraform apply whenever the issue appears; this restores the intended settings until the next drift.
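As a sketch of a possible stopgap (assuming the drift originates from the provider's refresh rather than a genuine remote change), Terraform's `lifecycle` block can suppress diffs on the affected attributes. Note the trade-off: `ignore_changes` also hides legitimate drift in those attributes, so it should be removed once the underlying bug is fixed.

```hcl
resource "pagerduty_alert_grouping_setting" "main_alert" {
  name = "Alert Grouping - ${pagerduty_service.main.name}"
  type = "content_based"
  config {
    fields      = var.alert_grouping_fields
    aggregate   = var.alert_grouping_aggregate # Accepts either 'any' or 'all'
    time_window = 86400                        # 24 hours
  }
  services = [pagerduty_service.main.id]

  # Stopgap sketch: ignore spurious refresh-driven diffs on config
  # (e.g. time_window reverting to 300). This also masks real drift,
  # so drop it once the provider behavior is corrected.
  lifecycle {
    ignore_changes = [config]
  }
}
```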

Impact of the Issue

  • Loss of expected alert grouping behavior, resulting in:
    • Alert noise.
    • Reduced signal clarity for on-call engineers.
  • Repeated manual remediation required to restore desired configurations.

Suggested Resolution

  • Clarify in provider documentation how pagerduty_alert_grouping_setting is managed and surfaced via the API.
  • Fix any inconsistencies in how defaults or deletions are handled during refresh.
  • Consider more predictable behavior or flags to maintain time_window persistence.
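Since the symptom often surfaces after a provider upgrade, one interim option (an assumption, not a confirmed fix) is to pin the provider to a version that is known to behave correctly in your environment until the inconsistency is resolved:

```hcl
terraform {
  required_providers {
    pagerduty = {
      source = "PagerDuty/pagerduty"
      # Pin to a version you have verified as stable for alert grouping
      # settings; the exact version is environment-specific.
      version = "= 3.25.0"
    }
  }
}
```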
