
Enhance docker system prune performance via concurrent pruning #6048


Open · wants to merge 1 commit into master

Conversation


@iklobato commented on May 5, 2025

Enhance docker system prune performance via concurrent pruning

Refactored the runPrune function to execute pruning operations concurrently:

  • Added sync.WaitGroup to orchestrate goroutines
  • Used sync/atomic for thread-safe space reclaimed counter
  • Added mutex for safe slice operations (outputs and errors)
  • Maintained the same CLI behavior with improved performance
  • Order of outputs may differ from sequential execution

This change improves the performance of the prune command, especially on systems with many Docker resources, by executing independent pruning operations in parallel.

What I did

Modified the docker system prune command to execute all pruning operations (containers, networks, volumes, images, and build cache) concurrently instead of sequentially.

How I did it

  • Introduced sync primitives (WaitGroup, Mutex, atomic) for safe concurrent execution (see the sketch after this list)
  • Launched each pruning function in its own goroutine
  • Safely aggregated results (space reclaimed, outputs, errors)
  • Preserved the existing CLI behavior and output format
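
Since the full diff isn't reproduced in this thread, here is a minimal, self-contained sketch of the pattern the list above describes, not the actual patch: the docker/cli-specific parameters (command.Cli, opts.FilterOpt, the per-resource prune helpers) are reduced to a simplified signature, and names such as pruneFunc, runPruneConcurrently, and fake are invented for illustration.

```go
// Sketch of the concurrent prune aggregation described above (simplified,
// not the real cli/command/system/prune.go code).
package main

import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"
)

type pruneFunc func(ctx context.Context) (spaceReclaimed uint64, output string, err error)

// runPruneConcurrently launches every prune function in its own goroutine,
// sums reclaimed space with atomic adds, and collects outputs and errors
// under a mutex, mirroring the WaitGroup/atomic/Mutex bullets above.
func runPruneConcurrently(ctx context.Context, pruneFuncs []pruneFunc) (uint64, []string, []error) {
	var (
		wg             sync.WaitGroup
		mu             sync.Mutex // guards outputs and errs
		spaceReclaimed uint64     // updated with sync/atomic
		outputs        []string
		errs           []error
	)

	for _, fn := range pruneFuncs {
		wg.Add(1)
		go func(fn pruneFunc) {
			defer wg.Done()

			spc, output, err := fn(ctx)
			atomic.AddUint64(&spaceReclaimed, spc)

			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				errs = append(errs, err)
			}
			if output != "" {
				outputs = append(outputs, output) // order may differ from sequential execution
			}
		}(fn)
	}

	wg.Wait()
	return spaceReclaimed, outputs, errs
}

func main() {
	// Stand-ins for the container/network/volume/image/build-cache pruners.
	fake := func(space uint64, name string) pruneFunc {
		return func(ctx context.Context) (uint64, string, error) {
			return space, "pruned " + name, nil
		}
	}

	total, outputs, errs := runPruneConcurrently(context.Background(),
		[]pruneFunc{fake(1<<20, "containers"), fake(2<<20, "networks"), fake(4<<20, "images")})

	for _, out := range outputs {
		fmt.Println(out)
	}
	fmt.Printf("Total reclaimed space: %d bytes (%d errors)\n", total, len(errs))
}
```

Passing fn as a goroutine argument avoids the loop-variable capture pitfall on Go versions before 1.22, and the atomic counter plus mutex-guarded slices correspond to the aggregation described in the bullets.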

How to verify it

Run docker system prune on a system with many Docker resources (for example, under time) and compare completion time against the previous sequential implementation.

Human readable description for the release notes

Improved performance of docker system prune by executing pruning operations concurrently

🐿️


🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <[email protected]>
Signed-off-by: iklobato <[email protected]>
	if output != "" {
		_, _ = fmt.Fprintln(dockerCli.Out(), output)
	}
	go func(pruneFn func(ctx context.Context, dockerCli command.Cli, all bool, filter opts.FilterOpt) (uint64, string, error)) {
Member


Thanks for contributing; I'm not sure if this will work though. The reason these actions are done sequentially (and in a specific order) is that there may be references that must be cleaned up before content becomes available for pruning.

For example, a container may be using an image or network, so pruning containers before pruning networks allows the network to be removed, whereas running those steps in parallel means the network can't be removed if its container has not yet been removed by the time the network prune runs.

Author


Hey thanks for replying!

What I'm thinking is that if we use the -a flag, then the order of the dependent resources shouldn't really matter since the goal is to remove everything. I'm not sure if the Docker engine internally checks for dependencies and prevents removal in certain cases, but my idea is that when -a is specified, we could attempt to remove all resources in parallel to be faster.

Last time I had to wait like 20 min lol

Member


Yeah, it's a tricky one; even with the -a option, it may still take into account whether an image is in use (although there is some "odd" logic for historic reasons; moby/moby#36295):

docker pull -q alpine
docker pull -q alpine:3
docker pull -q alpine:3.21

docker network create one
docker network create two
docker network create three

docker run -dit --network one --name one alpine
docker run -dit --network two --name two alpine
docker run -dit --network three --name three alpine

docker system prune -af
Deleted Images:
untagged: alpine:3
untagged: alpine:3.21

After stopping the containers (so that they can be pruned), it will also untag the remaining image(s) and remove networks (and/or volumes):

docker stop one two three

docker system prune -af
Deleted Containers:
6046e728a519837ae69d0be78348eb85ffee8d9822715436d1601afed206ca4b
17c8e92ccc98f7fe45472ae3716a2119bb5db07d856ad0e70155257d604b294e
44c2ad728cac661aff33e822d80fa4cc59f38df6f2750e0206b95baab9e16eeb

Deleted Networks:
one
two
three

Deleted Images:
untagged: alpine:latest
untagged: alpine@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c
deleted: sha256:8d591b0b7dea080ea3be9e12ae563eebf9869168ffced1cb25b2470a3d9fe15e
deleted: sha256:a16e98724c05975ee8c40d8fe389c3481373d34ab20a1cf52ea2accc43f71f4c

Total reclaimed space: 8.175MB
