
Conversation

@eleo007 (Contributor) commented on Oct 3, 2025

K8SPSMDB-1483

CHANGE DESCRIPTION

Problem:
Short explanation of the problem.

Cause:
Short explanation of the root cause of the issue if applicable.

Solution:
Short explanation of the solution we are providing with this PR.

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?
  • Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
  • Does the Jira ticket link to the proper milestone (Fix Version field)?

Tests

  • Is an E2E test/test case added for the new feature/change?
  • Are unit tests added where appropriate?
  • Are OpenShift compare files changed for E2E tests (compare/*-oc.yml)?

Config/Logging/Testability

  • Are all needed new/changed options added to default YAML files?
  • Are all needed new/changed options added to the Helm Chart?
  • Did we add proper logging messages for operator actions?
  • Did we ensure compatibility with the previous version or cluster upgrade process?
  • Does the change support the oldest and newest supported MongoDB versions?
  • Does the change support the oldest and newest supported Kubernetes versions?

@pull-request-size bot added the size/XXL (1000+ lines) label on Oct 3, 2025
@github-actions bot added the tests, dependencies (Pull requests that update a dependency file), and ci labels on Oct 3, 2025
Comment on lines +10 to +11

```sh
echo 'Creating secrets and start client'
kubectl_bin apply -f "$test_dir/conf/secrets.yml" -f "$conf_dir/client.yml"
```

[shfmt] reported by reviewdog 🐶

Suggested change: whitespace-only (the difference is not visible in this rendering).

Comment on lines +15 to +27

```sh
local timeout="$1"
shift

local elapsed=0
until "$@"; do
	if (( elapsed >= timeout )); then
		echo "Timeout after ${timeout}s: command '$*' did not succeed"
		exit 1
	fi
	sleep 1 && elapsed=$((elapsed + interval))
done

return 0
```

[shfmt] reported by reviewdog 🐶

Suggested change (besides whitespace, only the padding inside the arithmetic expression changes):

```diff
-	if (( elapsed >= timeout )); then
+	if ((elapsed >= timeout)); then
```

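Beyond the formatting nit, the quoted helper has a latent bug worth flagging: the loop increments `elapsed` by `interval`, but `interval` is never defined in this function, so in bash arithmetic it evaluates to 0 and the timeout is never reached. A minimal corrected sketch, assuming the surrounding function is a generic retry wrapper (the name `retry` is illustrative; the diff only shows the function body):

```sh
# Retry "$@" once per second until it succeeds or `timeout` seconds elapse.
# `retry` is an assumed name; the PR diff does not show the function header.
retry() {
	local timeout="$1"
	shift

	local interval=1 elapsed=0
	until "$@"; do
		if ((elapsed >= timeout)); then
			echo "Timeout after ${timeout}s: command '$*' did not succeed"
			exit 1
		fi
		sleep "$interval" && ((elapsed += interval))
	done

	return 0
}
```

Called as, for example, `retry 120 kubectl_bin get pod some-name-rs0-0`, it polls once per second and fails the test after two minutes.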
Comment on lines +31 to +33
run_mongo 'db.getUser("myApp")' \
"userAdmin:userAdmin123456@$cluster-rs0.$namespace" \
| grep -q '"user" : "myApp"'
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

[shfmt] reported by reviewdog 🐶

Suggested change
run_mongo 'db.getUser("myApp")' \
"userAdmin:userAdmin123456@$cluster-rs0.$namespace" \
| grep -q '"user" : "myApp"'
run_mongo 'db.getUser("myApp")' \
"userAdmin:userAdmin123456@$cluster-rs0.$namespace" \
| grep -q '"user" : "myApp"'

```sh
}

delete_data() {
	local data="$1"
```

[shfmt] reported by reviewdog 🐶

Suggested change for `local data="$1"`: whitespace-only (the difference is not visible in this rendering).

Comment on lines +49 to +51
run_mongo \
"use myApp\n db.test.deleteOne({ x: \"$data\" })" \
"myApp:myPass@$cluster-rs0.$namespace"
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

[shfmt] reported by reviewdog 🐶

Suggested change
run_mongo \
"use myApp\n db.test.deleteOne({ x: \"$data\" })" \
"myApp:myPass@$cluster-rs0.$namespace"
run_mongo \
"use myApp\n db.test.deleteOne({ x: \"$data\" })" \
"myApp:myPass@$cluster-rs0.$namespace"

```sh
}

verify_sts_not_ready() {
	local sts_name="${1:-some-name-rs0}"
```

[shfmt] reported by reviewdog 🐶

Suggested change for `local sts_name="${1:-some-name-rs0}"`: whitespace-only (the difference is not visible in this rendering).

Comment on lines +97 to +100

```sh
if is_sts_ready "$sts_name"; then
	echo "StatefulSet $sts_name is ready during the backup, failing..."
	exit 1
fi
```

[shfmt] reported by reviewdog 🐶

Suggested change: whitespace-only (the difference is not visible in this rendering).

Comment on lines +104 to +121

```sh
local sts_name="${1:-some-name-rs0}"
local timeout="${2:-60}"
local pod_name="${sts_name}-1"
local interval=2
local elapsed=0

echo "Updating cluster with invalid image..."
update_with_invalid_db_image

echo -n "Wait for statefulset $sts_name to become not ready..."
until ! is_sts_ready "$sts_name"; do
	if (( elapsed >= timeout )); then
		echo "Timeout reached: statefulSet $sts_name still has ready replicas"
		exit 1
	fi
	sleep $interval && (( elapsed += interval ))
	echo -n .
done
```

[shfmt] reported by reviewdog 🐶

Suggested change (besides whitespace, only the padding inside the arithmetic expressions changes):

```diff
-	if (( elapsed >= timeout )); then
+	if ((elapsed >= timeout)); then
-	sleep $interval && (( elapsed += interval ))
+	sleep $interval && ((elapsed += interval))
```

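Both snippets above gate on `is_sts_ready`, which the diff does not show. A plausible sketch, assuming it compares the StatefulSet's ready replica count against the desired count (`is_sts_ready` comes from the quoted code; `kubectl_bin` is the test framework's kubectl wrapper):

```sh
# Assumed shape of is_sts_ready: succeed only when every desired replica
# of the StatefulSet reports Ready. Not shown in this PR's diff.
is_sts_ready() {
	local sts_name="$1"
	local desired ready

	desired=$(kubectl_bin get sts "$sts_name" -o jsonpath='{.spec.replicas}')
	ready=$(kubectl_bin get sts "$sts_name" -o jsonpath='{.status.readyReplicas}')

	# .status.readyReplicas is omitted entirely while it is zero.
	[[ -n ${ready} && ${ready} -eq ${desired} ]]
}
```

With that shape, `until ! is_sts_ready "$sts_name"` above polls until the StatefulSet is no longer fully ready after the invalid image is applied.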
```sh
	pbm_binary=/opt/percona/pbm
fi

kubectl_bin exec ${cluster}-rs0-0 -c ${container} -- ${pbm_binary} profile show ${profile} > ${tmp_dir}/pbm_profile_${profile}.yml
```

[shfmt] reported by reviewdog 🐶

Suggested change:

```diff
-kubectl_bin exec ${cluster}-rs0-0 -c ${container} -- ${pbm_binary} profile show ${profile} > ${tmp_dir}/pbm_profile_${profile}.yml
+kubectl_bin exec ${cluster}-rs0-0 -c ${container} -- ${pbm_binary} profile show ${profile} >${tmp_dir}/pbm_profile_${profile}.yml
```

```sh
	pbm_binary=/opt/percona/pbm
fi

kubectl_bin exec ${cluster}-rs0-0 -c ${container} -- ${pbm_binary} config > ${tmp_dir}/pbm_config.yml
```

[shfmt] reported by reviewdog 🐶

Suggested change:

```diff
-kubectl_bin exec ${cluster}-rs0-0 -c ${container} -- ${pbm_binary} config > ${tmp_dir}/pbm_config.yml
+kubectl_bin exec ${cluster}-rs0-0 -c ${container} -- ${pbm_binary} config >${tmp_dir}/pbm_config.yml
```

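All of the suggestions above come from the same formatter, so rather than accepting them one by one they can be applied in a single local pass. A minimal invocation (`-l`, `-d`, and `-w` are standard shfmt options; the `e2e-tests/` path is illustrative):

```sh
# List the scripts whose formatting differs, preview the diff, then fix in place.
shfmt -l e2e-tests/
shfmt -d e2e-tests/
shfmt -w e2e-tests/
```

shfmt walks directories recursively and picks up shell files by extension or shebang, so running it before pushing keeps reviewdog quiet on future commits.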
@eleo007 closed this on Oct 3, 2025
@JNKPercona (Collaborator)

| Test name | Status |
| --- | --- |
| arbiter | failure |
| balancer | failure |
| cross-site-sharded | failure |
| custom-replset-name | failure |
| custom-tls | failure |
| custom-users-roles | failure |
| custom-users-roles-sharded | failure |
| data-at-rest-encryption | failure |
| data-sharded | failure |
| demand-backup | failure |
| demand-backup-eks-credentials-irsa | skipped |
| demand-backup-fs | skipped |
| demand-backup-if-unhealthy | skipped |
| demand-backup-incremental | skipped |
| demand-backup-incremental-sharded | skipped |
| demand-backup-physical-parallel | skipped |
| demand-backup-physical-aws | skipped |
| demand-backup-physical-azure | skipped |
| demand-backup-physical-gcp-s3 | skipped |
| demand-backup-physical-gcp-native | skipped |
| demand-backup-physical-minio | skipped |
| demand-backup-physical-sharded-parallel | skipped |
| demand-backup-physical-sharded-aws | skipped |
| demand-backup-physical-sharded-azure | skipped |
| demand-backup-physical-sharded-gcp-native | skipped |
| demand-backup-physical-sharded-minio | skipped |
| demand-backup-sharded | skipped |
| expose-sharded | skipped |
| finalizer | skipped |
| ignore-labels-annotations | skipped |
| init-deploy | skipped |
| ldap | skipped |
| ldap-tls | skipped |
| limits | skipped |
| liveness | skipped |
| mongod-major-upgrade | skipped |
| mongod-major-upgrade-sharded | skipped |
| monitoring-2-0 | skipped |
| monitoring-pmm3 | skipped |
| multi-cluster-service | skipped |
| multi-storage | skipped |
| non-voting-and-hidden | skipped |
| one-pod | skipped |
| operator-self-healing-chaos | skipped |
| pitr | skipped |
| pitr-physical | skipped |
| pitr-sharded | skipped |
| pitr-to-new-cluster | skipped |
| pitr-physical-backup-source | skipped |
| preinit-updates | skipped |
| pvc-resize | skipped |
| recover-no-primary | skipped |
| replset-overrides | skipped |
| rs-shard-migration | skipped |
| scaling | skipped |
| scheduled-backup | skipped |
| security-context | skipped |
| self-healing-chaos | skipped |
| service-per-pod | skipped |
| serviceless-external-nodes | skipped |
| smart-update | skipped |
| split-horizon | skipped |
| stable-resource-version | skipped |
| storage | skipped |
| tls-issue-cert-manager | skipped |
| upgrade | skipped |
| upgrade-consistency | skipped |
| upgrade-consistency-sharded-tls | skipped |
| upgrade-sharded | skipped |
| users | skipped |
| version-service | skipped |
We run 10 out of 71

commit: bb7550a
image: perconalab/percona-server-mongodb-operator:PR-2077-bb7550a9


Labels

ci, dependencies (Pull requests that update a dependency file), size/XXL (1000+ lines), tests

2 participants