Bug: topologySpreadConstraints uses incorrect instance label for DM Discovery pods #6149

Open
@shunki-fujita

Description

Bug Report

What version of Kubernetes are you using?
1.30

What version of TiDB Operator are you using?
1.6.1

What did you do?
For a DMCluster resource, the dm-discovery deployment is created with a topologySpreadConstraints label selector that does not match the actual labels on the pod.

The name of the discovery pod in a DMCluster is suffixed with -dm at the following location:

```go
case *v1alpha1.DMCluster:
	// NOTE: for DmCluster, add a `-dm` prefix for discovery to avoid name conflicts.
	name = fmt.Sprintf("%s-dm", cluster.GetName())
	instanceName := fmt.Sprintf("%s-dm", cluster.GetInstanceName())
	ownerRef = controller.GetDMOwnerRef(cluster) // TODO: refactor to unify methods
	discoveryLabel = label.NewDM().Instance(instanceName).Discovery()
```

However, the topologySpreadConstraints builder uses DMCluster.name directly as the instance label value, without the `-dm` suffix:

```go
instanceLabelVal := a.name
if v, ok := tsc.MatchLabels[label.InstanceLabelKey]; ok {
	instanceLabelVal = v
}
l[label.InstanceLabelKey] = instanceLabelVal
```
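The mismatch between the two snippets above can be shown with a minimal, self-contained sketch. The helper name `discoveryInstanceName` and the cluster name `basic` are illustrative, not part of the operator's API; the naming rule it encodes is the `-dm` suffix from the first snippet.

```go
package main

import "fmt"

// instanceLabelKey is the well-known Kubernetes instance label used by the operator.
const instanceLabelKey = "app.kubernetes.io/instance"

// discoveryInstanceName mirrors the naming rule in the first snippet:
// a DMCluster's discovery resources get a "-dm" suffix to avoid name
// conflicts with a TidbCluster of the same name. (Hypothetical helper.)
func discoveryInstanceName(clusterName string, isDM bool) string {
	if isDM {
		return fmt.Sprintf("%s-dm", clusterName)
	}
	return clusterName
}

func main() {
	clusterName := "basic" // hypothetical DMCluster name

	// Instance label actually applied to the dm-discovery pod.
	podLabel := discoveryInstanceName(clusterName, true) // "basic-dm"

	// Instance label currently placed in the topologySpreadConstraints
	// selector: DMCluster.name without the suffix -- the reported bug.
	selectorLabel := clusterName // "basic"

	fmt.Println(podLabel == selectorLabel) // prints false: selector never matches the pod

	// Reusing the same suffixed instance name would make them agree.
	fixedSelector := discoveryInstanceName(clusterName, true)
	fmt.Println(podLabel == fixedSelector) // prints true
}
```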

What did you expect to see?
The topologySpreadConstraints for the dm-discovery pod should match the pod's actual labels, including the app.kubernetes.io/instance label with the -dm suffix.

What did you see instead?
The topologySpreadConstraints uses the DMCluster.name as the instance label without the -dm suffix, which causes a mismatch with the actual label on the dm-discovery pod.
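In rendered manifests the mismatch looks roughly like the fragment below. The cluster name `basic`, the `topologyKey`, and the exact component label value are illustrative; the point is that the selector's instance value lacks the `-dm` suffix the pod carries, so the constraint can never match its own pod.

```yaml
# Labels on the dm-discovery pod (instance name carries the -dm suffix)
labels:
  app.kubernetes.io/instance: basic-dm
  app.kubernetes.io/component: discovery
---
# topologySpreadConstraints generated for the same pod
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app.kubernetes.io/instance: basic   # missing the -dm suffix
        app.kubernetes.io/component: discovery
```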
