Automatically detect and delete dangling nextflow pods #1

@bentsherman

Description

When running nextflow on kubernetes, nextflow should be able to clean up its worker pods when it terminates. However, there seem to be some edge cases where it doesn't clean everything up, and the worker pods persist after the submitter pod has terminated.

I've written some bash code to detect these pods and delete them via kubectl:

# List all nextflow worker pods (names start with 'nf-')
PODS=$(kubectl get pods --no-headers | grep 'nf-' | awk '{ print $1 }')

for POD in ${PODS}; do
        # The runName label points back to the run, whose submitter pod has the same name
        RUN_NAME=$(kubectl get pod "${POD}" --output 'jsonpath={.metadata.labels.runName}')

        # If the submitter pod no longer exists, the worker pod is dangling -- delete it
        if ! kubectl get pod --no-headers "${RUN_NAME}" > /dev/null 2>&1; then
                kubectl delete pod "${POD}"
        fi
done

I think we could add this to kube-clean.sh, but I'm afraid that this code could be pretty destructive if kubectl itself fails: a transient API or network error would make every worker pod look orphaned and get deleted. So I'd like to find a more robust way to determine whether the submitter pod still exists.
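
One possible direction (just a sketch, not something I've tested thoroughly): only treat a worker pod as dangling when kubectl explicitly reports the submitter pod as NotFound, and skip it on any other kubectl error. Matching on the 'NotFound' string is an assumption about kubectl's error message format.

# Only delete a worker pod when kubectl explicitly says its submitter pod is NotFound.
# Any other kubectl failure (network, auth, etc.) leaves the pod alone.
for POD in $(kubectl get pods --no-headers | grep 'nf-' | awk '{ print $1 }'); do
        RUN_NAME=$(kubectl get pod "${POD}" --output 'jsonpath={.metadata.labels.runName}') || continue
        [[ -n "${RUN_NAME}" ]] || continue

        # Capture only stderr; stdout is discarded
        ERR=$(kubectl get pod "${RUN_NAME}" 2>&1 > /dev/null)
        STATUS=$?

        # Delete only on a definite "pod does not exist" error
        if [[ ${STATUS} -ne 0 && "${ERR}" == *"NotFound"* ]]; then
                kubectl delete pod "${POD}"
        fi
done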

Also, we should think about whether this code could be automated, maybe with a CronJob?
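
A rough sketch of what that could look like with kubectl create cronjob, assuming the cleanup script is available in the image. The job name, image, schedule, and script path below are placeholders, and the job's service account would need RBAC permission to get, list, and delete pods:

# Placeholder name, image, schedule, and script path -- adjust as needed.
kubectl create cronjob nf-pod-cleanup \
        --image=bitnami/kubectl:latest \
        --schedule='*/30 * * * *' \
        -- /bin/sh -c '/path/to/kube-clean.sh'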
