Enhance docker system prune performance via concurrent pruning #6048

Open
wants to merge 1 commit into master
56 changes: 45 additions & 11 deletions cli/command/system/prune.go
@@ -5,6 +5,8 @@ import (
    "context"
    "fmt"
    "sort"
    "sync"
    "sync/atomic"
    "text/template"

    "github.com/docker/cli/cli"
@@ -72,8 +74,13 @@ const confirmationTemplate = `WARNING! This will remove:
{{end}}
Are you sure you want to continue?`

type pruneResult struct {
    spaceReclaimed uint64
    output         string
    err            error
}

func runPrune(ctx context.Context, dockerCli command.Cli, options pruneOptions) error {
    // TODO version this once "until" filter is supported for volumes
    if options.pruneVolumes && options.filter.Value().Contains("until") {
        return errors.New(`ERROR: The "until" filter is not supported with "--volumes"`)
    }
@@ -99,19 +106,46 @@ func runPrune(ctx context.Context, dockerCli command.Cli, options pruneOptions)
    }

    var spaceReclaimed uint64

    var mu sync.Mutex
    var outputs []string
    var errs []error

    var wg sync.WaitGroup
    wg.Add(len(pruneFuncs))

    for _, pruneFn := range pruneFuncs {
        spc, output, err := pruneFn(ctx, dockerCli, options.all, options.filter)
        if err != nil {
            return err
        }
        spaceReclaimed += spc
        if output != "" {
            _, _ = fmt.Fprintln(dockerCli.Out(), output)
        }
        go func(pruneFn func(ctx context.Context, dockerCli command.Cli, all bool, filter opts.FilterOpt) (uint64, string, error)) {
Member
Thanks for contributing; I'm not sure if this will work though. The reason these actions are done sequentially (and in a specific order) is that there may be references that must be cleaned up before content becomes available for pruning.

For example, a container may be using an image or network, so pruning containers before pruning networks allows the network to be removed, whereas running those steps in parallel means the network can't be removed if the container hasn't yet been removed by the time the network prune executes.
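
For illustration, a rough sketch of what an ordering-aware variant could look like: prune containers on their own first so that the images and networks they reference are released, then fan out the remaining prune steps concurrently. The pruneFunc type and pruneInOrder helper below are hypothetical and simplified, not part of this diff.

// Hypothetical sketch, not part of this diff: containers first, then the
// remaining prune steps concurrently.
package prune

import (
    "context"
    "fmt"
    "sync"
    "sync/atomic"
)

// pruneFunc mirrors the shape of the prune helpers in prune.go: it returns
// the space reclaimed, a human-readable report, and an error.
type pruneFunc func(ctx context.Context) (uint64, string, error)

func pruneInOrder(ctx context.Context, containers pruneFunc, rest []pruneFunc) (uint64, error) {
    var reclaimed uint64

    // Step 1: prune containers sequentially so that the images and networks
    // they reference become unreferenced before anything else runs.
    spc, out, err := containers(ctx)
    if err != nil {
        return 0, err
    }
    reclaimed += spc
    if out != "" {
        fmt.Println(out)
    }

    // Step 2: assume the remaining resource types no longer block each other
    // and run them concurrently, collecting the first error.
    var (
        wg       sync.WaitGroup
        mu       sync.Mutex
        firstErr error
    )
    for _, fn := range rest {
        wg.Add(1)
        go func(fn pruneFunc) {
            defer wg.Done()
            spc, out, err := fn(ctx)
            atomic.AddUint64(&reclaimed, spc)
            mu.Lock()
            defer mu.Unlock()
            if err != nil && firstErr == nil {
                firstErr = err
            }
            if out != "" {
                fmt.Println(out)
            }
        }(fn)
    }
    wg.Wait()

    return reclaimed, firstErr
}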

Author
Hey thanks for replying!

What I'm thinking is that if we use the -a flag, then the order of the dependent resources shouldn't really matter since the goal is to remove everything. I'm not sure if the Docker engine internally checks for dependencies and prevents removal in certain cases, but my idea is that when -a is specified, we could attempt to remove all resources in parallel to be faster.

Last time I had to wait like 20 min lol

Member
Yeah, it's a tricky one; even with the -a option, it may still take into account whether an image is in use (although there is some "odd" logic for historic reasons; see moby/moby#36295)

docker pull -q alpine
docker pull -q alpine:3
docker pull -q alpine:3.21

docker network create one
docker network create two
docker network create three

docker run -dit --network one --name one alpine
docker run -dit --network two --name two alpine
docker run -dit --network three --name three alpine

docker system prune -af
Deleted Images:
untagged: alpine:3
untagged: alpine:3.21

After stopping the containers (so that they can be pruned), it will also untag the remaining image(s) and remove networks (and/or volumes):

docker stop one two three

docker system prune -af
Deleted Containers:
6046e728a519837ae69d0be78348eb85ffee8d9822715436d1601afed206ca4b
17c8e92ccc98f7fe45472ae3716a2119bb5db07d856ad0e70155257d604b294e
44c2ad728cac661aff33e822d80fa4cc59f38df6f2750e0206b95baab9e16eeb

Deleted Networks:
one
two
three

Deleted Images:
untagged: alpine:latest
untagged: alpine@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c
deleted: sha256:8d591b0b7dea080ea3be9e12ae563eebf9869168ffced1cb25b2470a3d9fe15e
deleted: sha256:a16e98724c05975ee8c40d8fe389c3481373d34ab20a1cf52ea2accc43f71f4c

Total reclaimed space: 8.175MB

            defer wg.Done()

            spc, output, err := pruneFn(ctx, dockerCli, options.all, options.filter)

            atomic.AddUint64(&spaceReclaimed, spc)

            mu.Lock()
            defer mu.Unlock()

            if err != nil {
                errs = append(errs, err)
            }
            if output != "" {
                outputs = append(outputs, output)
            }
        }(pruneFn)
    }


    wg.Wait()

    if len(errs) > 0 {
        return errs[0]
    }

    for _, output := range outputs {
        _, _ = fmt.Fprintln(dockerCli.Out(), output)
    }

    _, _ = fmt.Fprintln(dockerCli.Out(), "Total reclaimed space:", units.HumanSize(float64(spaceReclaimed)))

    return nil
}
