spec.version: 7.0.15 (I have also tried 6.0.5 with the same behaviour)
spec.members: 1 (also tested with 2 - the behaviour is the same regardless of the member count)
statefulSet.spec.volumeClaimTemplates: added definitions for data-volume and logs-volume to give them meaningful sizes; the behaviour was the same without the PVC templates (a sketch of the resulting CR is below).
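For reference, a minimal sketch of the CR this produces, assuming everything else is taken verbatim from the upstream sample CR linked further down; the user name, password secret name and storage sizes below are placeholders rather than my exact values, while the resource name and namespace match the pods shown in the output:

# Sketch of the applied MongoDBCommunity CR (placeholders marked in comments)
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb
  namespace: mongodb
spec:
  type: ReplicaSet
  members: 1                          # also tested with 2
  version: "7.0.15"                   # also tested with 6.0.5
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: my-user                   # placeholder, as in the upstream sample
      db: admin
      passwordSecretRef:
        name: my-user-password        # placeholder secret name
      roles:
        - name: clusterAdmin
          db: admin
      scramCredentialsSecretName: my-scram
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 10Gi         # illustrative size
        - metadata:
            name: logs-volume
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 2Gi          # illustrative size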
What did you expect?
kubectl -n mongodb get mongodbcommunity
NAME PHASE VERSION
mongodb Running 7.0.15
kubectl -n mongodb get pod
NAME READY STATUS RESTARTS AGE
mongodb-0 2/2 Running 0 9m44s
mongodb-kubernetes-operator-7c967f54d4-vrhk4 1/1 Running 0 2d19h
What happened instead?
kubectl -n mongodb get mongodbcommunity
NAME PHASE VERSION
mongodb Pending
kubectl -n mongodb get pod
NAME READY STATUS RESTARTS AGE
mongodb-0 1/2 Running 0 9m44s
mongodb-kubernetes-operator-7c967f54d4-vrhk4 1/1 Running 0 2d19h
kubectl -n mongodb describe pod mongodb-0
(some output omitted)
Name: mongodb-0
Namespace: mongodb
Priority: 0
Service Account: mongodb-database
Status: Running
Containers:
mongod:
Image: docker.io/mongodb/mongodb-community-server:7.0.15-ubi8
Image ID: docker.io/mongodb/mongodb-community-server@sha256:bd2e8e00a36d89eeb67eb7886630eaeb68c445c8474fc8ed95286ee82456d44f
State: Running
Ready: True
Mounts:
/data from data-volume (rw)
/healthstatus from healthstatus (rw)
/hooks from hooks (rw)
/tmp from tmp (rw)
/var/lib/mongodb-mms-automation/authentication from mongodb-keyfile (rw)
/var/log/mongodb-mms-automation from logs-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p89bx (ro)
mongodb-agent:
Image: quay.io/mongodb/mongodb-agent-ubi:108.0.2.8729-1
Image ID: quay.io/mongodb/mongodb-agent-ubi@sha256:dda6762d4b53da3230c8acc925aeaaa45fc2b3e4c38e180a83053ced1528306d
State: Running
Ready: False
Mounts:
/data from data-volume (rw)
/opt/scripts from agent-scripts (rw)
/tmp from tmp (rw)
/var/lib/automation/config from automation-config (ro)
/var/lib/mongodb-mms-automation/authentication from mongodb-keyfile (rw)
/var/log/mongodb-mms-automation from logs-volume (rw)
/var/log/mongodb-mms-automation/healthstatus from healthstatus (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p89bx (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
data-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-volume-mongodb-0
ReadOnly: false
logs-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: logs-volume-mongodb-0
ReadOnly: false
agent-scripts:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
automation-config:
Type: Secret (a volume populated by a Secret)
SecretName: mongodb-config
Optional: false
healthstatus:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
hooks:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
mongodb-keyfile:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-p89bx:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned mongodb/mongodb-0 to k3s-master-1-pi4
Normal SuccessfulAttachVolume 10m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-f5a8a0e6-61c7-439d-a6d7-6cfd693e012c"
Normal SuccessfulAttachVolume 10m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-73984433-b0c0-4f5b-b2ec-e568e2352e11"
Normal Pulling 10m kubelet Pulling image "quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.9"
Normal Pulled 10m kubelet Successfully pulled image "quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.9" in 834ms (834ms including waiting). Image size: 55380047 bytes.
Normal Created 10m kubelet Created container mongod-posthook
Normal Started 10m kubelet Started container mongod-posthook
Normal Pulling 10m kubelet Pulling image "quay.io/mongodb/mongodb-kubernetes-readinessprobe:1.0.22"
Normal Pulled 10m kubelet Successfully pulled image "quay.io/mongodb/mongodb-kubernetes-readinessprobe:1.0.22" in 613ms (613ms including waiting). Image size: 56850989 bytes.
Normal Created 10m kubelet Created container mongodb-agent-readinessprobe
Normal Started 10m kubelet Started container mongodb-agent-readinessprobe
Normal Pulling 10m kubelet Pulling image "docker.io/mongodb/mongodb-community-server:7.0.15-ubi8"
Normal Pulled 8m2s kubelet Successfully pulled image "docker.io/mongodb/mongodb-community-server:7.0.15-ubi8" in 2m10.069s (2m10.069s including waiting). Image size: 382255288 bytes.
Normal Created 8m2s kubelet Created container mongod
Normal Started 8m2s kubelet Started container mongod
Normal Pulling 8m2s kubelet Pulling image "quay.io/mongodb/mongodb-agent-ubi:108.0.2.8729-1"
Normal Pulled 8m1s kubelet Successfully pulled image "quay.io/mongodb/mongodb-agent-ubi:108.0.2.8729-1" in 776ms (776ms including waiting). Image size: 259631097 bytes.
Normal Created 8m1s kubelet Created container mongodb-agent
Normal Started 8m1s kubelet Started container mongodb-agent
Warning Unhealthy 7m49s kubelet Readiness probe failed: {"level":"info","ts":"2025-03-23T17:31:17.529Z","msg":"logging configuration: &{Filename:/var/log/mongodb-mms-automation/readiness.log MaxSize:5 MaxAge:0 MaxBackups:5 LocalTime:false Compress:false size:0 file:<nil> mu:{state:0 sema:0} millCh:<nil> startMill:{done:{_:{} v:0} m:{state:0 sema:0}}}"}
{"level":"info","ts":"2025-03-23T17:31:17.632Z","msg":"Mongod is not ready"}
{"level":"info","ts":"2025-03-23T17:31:17.632Z","msg":"Reached the end of the check. Returning not ready."}
2025-03-23 17:31:17.52957058 +0000 UTC m=+0.576564662 write error: can't open new logfile: open /var/log/mongodb-mms-automation/readiness.log: permission denied
2025-03-23 17:31:17.632262316 +0000 UTC m=+0.679256139 write error: can't open new logfile: open /var/log/mongodb-mms-automation/readiness.log: permission denied
2025-03-23 17:31:17.63252335 +0000 UTC m=+0.679517173 write error: can't open new logfile: open /var/log/mongodb-mms-automation/readiness.log: permission denied
Warning Unhealthy 7m49s kubelet Readiness probe failed: {"level":"info","ts":"2025-03-23T17:31:17.726Z","msg":"logging configuration: &{Filename:/var/log/mongodb-mms-automation/readiness.log MaxSize:5 MaxAge:0 MaxBackups:5 LocalTime:false Compress:false size:0 file:<nil> mu:{state:0 sema:0} millCh:<nil> startMill:{done:{_:{} v:0} m:{state:0 sema:0}}}"}
{"level":"info","ts":"2025-03-23T17:31:17.782Z","msg":"Mongod is not ready"}
{"level":"info","ts":"2025-03-23T17:31:17.783Z","msg":"Reached the end of the check. Returning not ready."}
Operator Information
Operator Version 0.12.0
MongoDB Image used 7.0.15, 6.0.5 (same behaviour on both)
Kubernetes Cluster Information
Distribution: k3s on RPi (arm64, two nodes, RPi4 master and RPi3 worker)
Defaulted container "mongod" out of: mongod, mongodb-agent, mongod-posthook (init), mongodb-agent-readinessprobe (init)
2025-03-23T17:31:05.666Z INFO versionhook/main.go:33 Running version change post-start hook
2025-03-23T17:31:05.670Z INFO versionhook/main.go:40 Waiting for agent health status...
2025-03-23T17:31:06.671Z INFO versionhook/main.go:46 Agent health status file not found, mongod will start
{
"statuses":{
"mongodb-0":{
"IsInGoalState":false,
"LastMongoUpTime":0,
"ExpectedToBeUp":true,
"ReplicationStatus":-1
}
},
"mmsStatus":{
"mongodb-0":{
"name":"mongodb-0",
"lastGoalVersionAchieved":-1,
"plans":[
{
"automationConfigVersion":1,
"started":"2025-03-23T17:31:06.349704425Z",
"completed":null,
"moves":[
{
"move":"Start",
"moveDoc":"Start the process",
"steps":[
{
"step":"StartFresh",
"stepDoc":"Start a mongo instance (start fresh)",
"isWaitStep":false,
"started":"2025-03-23T17:31:06.349778998Z",
"completed":null,
"result":"error"
}
]
},{
"move":"WaitAllRsMembersUp",
"moveDoc":"Wait until all members of this process' repl set are up",
"steps":[
{
"step":"WaitAllRsMembersUp",
"stepDoc":"Wait until all members of this process' repl set are up",
"isWaitStep":true,
"started":null,
"completed":null,
"result":""
}
]
},{
"move":"RsInit",
"moveDoc":"Initialize a replica set including the current MongoDB process",
"steps":[
{
"step":"RsInit",
"stepDoc":"Initialize a replica set",
"isWaitStep":false,
"started":null,
"completed":null,
"result":""
}
]
},{
"move":"WaitFeatureCompatibilityVersionCorrect",
"moveDoc":"Wait for featureCompatibilityVersion to be right",
"steps":[
{
"step":"WaitFeatureCompatibilityVersionCorrect",
"stepDoc":"Wait for featureCompatibilityVersion to be right",
"isWaitStep":true,
"started":null,
"completed":null,
"result":""
}
]
}
]
}
],
"errorCode":0,
"errorString":"\u003cmongodb-0\u003e [18:04:55.151] Plan execution failed on step StartFresh as part of move Start : \u003cmongodb-0\u003e [18:04:55.151] Failed to apply action. Result = \u003cnil\u003e : \u003cmongodb-0\u003e [18:04:55.151] Error starting mongod : \u003cmongodb-0\u003e [18:04:55.151] Error getting start process cmd for executable=mongod, stip=[args=
{
"net":{
"bindIp":"0.0.0.0",
"port":27017
},
"replication":{
"replSetName":"mongodb"
},
"security":{
"authorization":"enabled",
"keyFile":"/var/lib/mongodb-mms-automation/authentication/keyfile"
},
"setParameter":{
"authenticationMechanisms":"SCRAM-SHA-256"
},
"storage":{
"dbPath":"/data",
"wiredTiger":{
"engineConfig":{
"journalCompressor":"zlib"
}
}
}
}[],
confPath=/data/automation-mongod.conf,version=7.0.15-(),isKmipRotateMasterKey=false,useOldConfFile=false] : \u003cmongodb-0\u003e [18:04:55.150] Failed to create conf file : \u003cmongodb-0\u003e [18:04:55.150] Failed to create file /data/automation-mongod.conf : \u003cmongodb-0\u003e [18:04:55.150] Error creating /data/automation-mongod.conf : open /data/automation-mongod.conf: permission denied","waitDetails":{
"RunSetParameter":"process not up",
"UpdateFeatureCompatibilityVersion":"process isn't up",
"WaitAllRsMembersUp":"[]",
"WaitCannotBecomePrimary":"Wait until the process is reconfigured with priority=0 by a different process",
"WaitClusterReadyForFCVUpdate":"process isn't up",
"WaitDefaultRWConcernCorrect":"waiting for the primary to update defaultRWConcern",
"WaitForResyncPrimaryManualInterventionStep":"A resync was requested on a primary. This requires manual intervention",
"WaitHealthyMajority":"[]",
"WaitMultipleHealthyNonArbiters":"[]",
"WaitNecessaryRsMembersUpForReconfig":"[]",
"WaitPrimary":"This process is expected to be the primary member. Check that the replica set state allows a primary to be elected",
"WaitProcessUp":"The process is running, but not yet responding to agent calls",
"WaitResetPlacementHistory":"config servers haven't seen the marker"
}
}
}
}
What did you do to encounter the bug?
Steps to reproduce the behavior:
The only differences from https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/config/samples/mongodb.com_v1_mongodbcommunity_cr.yaml are the three changes listed at the top of this report (spec.version, spec.members and the volumeClaimTemplates).
Additional context
Possibly same problem as: #1384 #1143 #949
The volumes are RWO, correctly provisioned and bound.
kubectl -n mongodb get mdbc -oyaml
kubectl -n mongodb get sts -oyaml
kubectl -n mongodb get pods -oyaml
kubectl -n mongodb logs mongodb-0
kubectl -n mongodb exec -it mongodb-0 -c mongodb-agent -- cat /var/lib/automation/config/cluster-config.json
kubectl -n mongodb exec -it mongodb-0 -c mongodb-agent -- cat /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
kubectl -n mongodb exec -it mongodb-0 -c mongodb-agent -- cat /var/log/mongodb-mms-automation/automation-agent-verbose.log
kubectl -n mongodb exec -it mongodb-0 -c mongodb-agent -- cat /var/log/mongodb-mms-automation/automation-agent.log
kubectl -n mongodb exec -it mongodb-0 -c mongodb-agent -- ls -al /var/log/mongodb-mms-automation/
kubectl -n mongodb exec -it mongodb-0 -c mongodb-agent -- ls -al /var/log/
kubectl -n mongodb exec -it mongodb-0 -c mongodb-agent -- ls -al /var/log/mongodb-mms-automation/healthstatus
kubectl -n mongodb exec -it mongodb-0 -c mongod -- ls -al /data
I'd expect /data and /var/log/mongodb-mms-automation to be owned by uid=2000, gid=2000, or at least writable by the group, in both containers. Right now I see permission denied errors from both the mongod and the mongodb-agent containers:
mongod: Failed to create file /data/automation-mongod.conf
mongodb-agent: open /var/log/mongodb-mms-automation/readiness.log: permission denied
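As a sketch of how I imagine forcing the expected ownership (not verified on this cluster), the CR's StatefulSet override could set a pod-level security context, assuming the operator merges spec.statefulSet.spec.template into the generated StatefulSet and the kubelet then chowns the mounted volumes to the fsGroup; the value 2000 is simply the uid/gid I expect above:

# Hypothetical workaround sketch - not verified, fsGroup 2000 is an assumption
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: mongodb
  namespace: mongodb
spec:
  # ... same spec as above ...
  statefulSet:
    spec:
      template:
        spec:
          securityContext:
            runAsUser: 2000
            runAsGroup: 2000
            fsGroup: 2000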