
Conversation

@jvpasinatto (Contributor) commented Nov 4, 2025

CLOUD-901

CHANGE DESCRIPTION

  • Update helm 3.18.3 -> 3.19.0
  • Replace the deprecated egrep with grep -E
  • Remove an unnecessary \ in a grep command
  • Update the aws credentials and junit methods in Jenkins
  • Update docker login to use the more secure --password-stdin (this change and the egrep replacement are sketched below)
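
For illustration, a minimal sketch of the two shell-level changes; the log pattern and the DOCKER_USER/DOCKER_PASSWORD variable names are placeholders, not taken from this PR:

# egrep is a deprecated alias; grep -E is the standard spelling of the same extended-regex mode.
egrep -v 'I NETWORK|W NETWORK' mongo.log    # before
grep -E -v 'I NETWORK|W NETWORK' mongo.log  # after

# Passing the password as a command-line argument exposes it to `ps` output and
# shell history; --password-stdin reads it from standard input instead.
echo "${DOCKER_PASSWORD}" | docker login --username "${DOCKER_USER}" --password-stdin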

CHECKLIST

Jira

  • Is the Jira ticket created and referenced properly?
  • Does the Jira ticket have the proper statuses for documentation (Needs Doc) and QA (Needs QA)?
  • Does the Jira ticket link to the proper milestone (Fix Version field)?

Tests

  • Is an E2E test/test case added for the new feature/change?
  • Are unit tests added where appropriate?
  • Are OpenShift compare files changed for E2E tests (compare/*-oc.yml)?

Config/Logging/Testability

  • Are all needed new/changed options added to default YAML files?
  • Are all needed new/changed options added to the Helm Chart?
  • Did we add proper logging messages for operator actions?
  • Did we ensure compatibility with the previous version or cluster upgrade process?
  • Does the change support the oldest and newest supported MongoDB versions?
  • Does the change support the oldest and newest supported Kubernetes versions?

pull-request-size bot added the size/M (30-99 lines) label on Nov 4, 2025
github-actions bot added the tests label on Nov 4, 2025
echo "checking member priorities"
run_mongo \
"rs.conf().members.map(m => m.priority)" \
"databaseAdmin:databaseAdmin123456@${cluster}-rs0.${namespace}" \
[shfmt] reported by reviewdog 🐶

Suggested change (whitespace-only; the indentation fix does not survive this plain-text view):
"databaseAdmin:databaseAdmin123456@${cluster}-rs0.${namespace}" \
- | egrep -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' \
+ | grep -E -v 'I NETWORK|W NETWORK|F NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:' \
> ${tmp_dir}/priorities.json
[shfmt] reported by reviewdog 🐶

Suggested change:
- > ${tmp_dir}/priorities.json
+ >${tmp_dir}/priorities.json
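
For reference, shfmt's house style puts no space between a redirection operator and its target. A quick way to see this locally, assuming shfmt is installed and reading from stdin:

$ echo 'cat foo > bar' | shfmt
cat foo >bar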

[shfmt] reported by reviewdog 🐶

local endpoint="$1"
local rsName="$2"
local target_count=$3
local nodes_count=0
until [[ ${nodes_count} == ${target_count} ]]; do
nodes_count=$(run_mongos 'rs.conf().members.length' "clusterAdmin:clusterAdmin123456@$endpoint" "mongodb" ":27017" \
| grep -E -v 'I NETWORK|W NETWORK|Error saving history file|Percona Server for MongoDB|connecting to:|Unable to reach primary for set|Implicit session:|versions do not match|Error saving history file:|bye' \
| $sed -re 's/ObjectId\("[0-9a-f]+"\)//; s/-[0-9]+.svc/-xxx.svc/')
echo -n "waiting for all members to be configured in ${rsName}"
let retry+=1
if [ $retry -ge 15 ]; then
echo "Max retry count ${retry} reached. something went wrong with mongo cluster. Config for endpoint ${endpoint} has ${nodes_count} but expected ${target_count}."
exit 1
fi
echo .
sleep 10
done
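
For context, the block above is a function-body excerpt (note the local declarations), so it is not runnable on its own. A minimal sketch of a wrapper and call site, in which the name wait_for_members, the retry=0 initialization, and the argument values are illustrative assumptions rather than part of the PR:

wait_for_members() {
	local endpoint="$1"
	local rsName="$2"
	local target_count=$3
	local nodes_count=0
	local retry=0
	# ... polling loop exactly as quoted above ...
}

# Wait until rs0 reports 3 configured members via the mongos endpoint.
wait_for_members "${cluster}-mongos.${namespace}" "rs0" 3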

@JNKPercona (Collaborator) commented:

Test Name Result Time
arbiter passed 00:00:00
balancer passed 00:00:00
cross-site-sharded passed 00:00:00
custom-replset-name passed 00:00:00
custom-tls passed 00:00:00
custom-users-roles passed 00:00:00
custom-users-roles-sharded passed 00:00:00
data-at-rest-encryption passed 00:00:00
data-sharded passed 00:00:00
demand-backup passed 00:00:00
demand-backup-eks-credentials-irsa passed 00:00:00
demand-backup-fs passed 00:00:00
demand-backup-if-unhealthy passed 00:00:00
demand-backup-incremental passed 00:00:00
demand-backup-incremental-sharded passed 00:00:00
demand-backup-physical-parallel passed 00:00:00
demand-backup-physical-aws passed 00:00:00
demand-backup-physical-azure passed 00:00:00
demand-backup-physical-gcp-s3 passed 00:00:00
demand-backup-physical-gcp-native failure 00:57:54
demand-backup-physical-minio passed 00:00:00
demand-backup-physical-sharded-parallel passed 00:00:00
demand-backup-physical-sharded-aws passed 00:00:00
demand-backup-physical-sharded-azure passed 00:00:00
demand-backup-physical-sharded-gcp-native failure 01:02:12
demand-backup-physical-sharded-minio passed 00:00:00
demand-backup-sharded passed 00:00:00
expose-sharded passed 00:00:00
finalizer passed 00:00:00
ignore-labels-annotations passed 00:00:00
init-deploy passed 00:00:00
ldap passed 00:00:00
ldap-tls passed 00:00:00
limits passed 00:00:00
liveness passed 00:00:00
mongod-major-upgrade passed 00:00:00
mongod-major-upgrade-sharded passed 00:00:00
monitoring-2-0 passed 00:00:00
monitoring-pmm3 passed 00:00:00
multi-cluster-service passed 00:00:00
multi-storage passed 00:00:00
non-voting-and-hidden passed 00:00:00
one-pod passed 00:00:00
operator-self-healing-chaos passed 00:00:00
pitr passed 00:00:00
pitr-physical passed 00:00:00
pitr-sharded passed 00:00:00
pitr-to-new-cluster passed 00:00:00
pitr-physical-backup-source passed 00:00:00
preinit-updates passed 00:00:00
pvc-resize passed 00:00:00
recover-no-primary passed 00:00:00
replset-overrides passed 00:00:00
rs-shard-migration passed 00:00:00
scaling passed 00:00:00
scheduled-backup passed 00:00:00
security-context passed 00:00:00
self-healing-chaos passed 00:00:00
service-per-pod passed 00:00:00
serviceless-external-nodes passed 00:00:00
smart-update passed 00:00:00
split-horizon passed 00:00:00
stable-resource-version passed 00:00:00
storage passed 00:00:00
tls-issue-cert-manager passed 00:00:00
upgrade passed 00:00:00
upgrade-consistency passed 00:00:00
upgrade-consistency-sharded-tls passed 00:00:00
upgrade-sharded passed 00:00:00
upgrade-partial-backup passed 00:00:00
users passed 00:00:00
version-service passed 00:00:00
Summary Value
Tests Run 72/72
Job Duration 02:26:59
Total Test Time 02:00:06

commit: 591a1b5
image: perconalab/percona-server-mongodb-operator:PR-2104-591a1b50
