
Looks like devspacehelper sync uses quite a lot of memory during initialize #1757

Open

Description

@hvt

What happened?

This is not exactly a bug, since devspace handles it well. I was just amazed by the amount of memory the devspacehelper sync commands used at some point and was wondering whether this could be lowered somehow.

When running a project using devspace dev, the devspacehelper sync processes in the container are killed due to a cgroup memory violation. That looks like this:

[0:sync] Error: Sync Error on /home/hvt/dev/on2it/orchestrator-v1/container/job: Sync - connection lost to pod hvt/orchestrator-v1-68cd7d65c-f595r:  command terminated with exit code 137
[0:sync] Sync stopped
[0:sync] Restarting sync...
[0:sync] Waiting for pods...
...some seconds later...
[0:sync] Starting sync...
[0:sync] Inject devspacehelper into pod hvt/orchestrator-v1-5fb4b559d8-zdtch
[0:sync] Start syncing
[0:sync] Sync started on /home/hvt/dev/on2it/orchestrator-v1/container/job <-> /usr/local/on2it/job (Pod: hvt/orchestrator-v1-5fb4b559d8-zdtch)
[0:sync] Waiting for initial sync to complete
[0:sync] Helper - Use inotify as watching method in container
[0:sync] Downstream - Initial sync completed
[0:sync] Upstream - Initial sync completed
...

As you can see, devspace handles this well and retries injecting the devspacehelper sync processes after the OOM exit code, and after a few tries the sync works. However, I cannot put my finger on why it works after a few tries; perhaps memory usage is lower by then because there is less left to sync?
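
One way to confirm that the helper is indeed killed by the cgroup OOM killer (exit code 137 = 128 + SIGKILL) would be to check the kernel log on the node running the pod; this is only a sketch and assumes you have shell access to that node:

$ dmesg -T | grep -i 'memory cgroup out of memory'
# If the cgroup limit was hit, the matching lines should name the killed
# process, e.g. mention devspacehelper.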

The sync configuration of this particular project was low volume: around 10 files of only a few KB each, with nothing to sync initially.

When the sync had been running smoothly for a while, the devspacehelper sync processes looked normal in size, using around 15 MB of memory each. However, looking at peak memory usage, I was amazed to see:

$ ps
PID   USER     TIME  COMMAND
    1 www-data  0:00 {docker-cmd.sh} /bin/sh bin/docker-cmd.sh
   23 www-data  0:02 /tmp/devspacehelper sync downstream --exclude .gitignore --exclude /doc/ --exclud...
   24 www-data  0:00 /tmp/devspacehelper sync upstream --override-permissions --exclude .gitignore --e...
....
$ cat /proc/23/status
Name:	devspacehelper
...
VmPeak:	  716160 kB
VmRSS:	   15100 kB
...
$ cat /proc/24/status
Name:	devspacehelper
Umask:	0022
...
VmPeak:	  713216 kB
VmRSS:	   15120 kB
...

So both processes used around ~715 MB at some point in time, which seemed a little excessive to me. It might be that I'm looking at this wrong, so I'm just checking what you think :].
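
A quick way to gather these figures for all helper processes at once (a sketch, assuming pgrep is available in the container image; VmPeak is the peak virtual size and VmHWM the peak resident set size, per proc(5)):

$ for pid in $(pgrep devspacehelper); do echo "PID $pid"; grep -E 'VmPeak|VmHWM|VmRSS' /proc/$pid/status; done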

What did you expect to happen instead?

The devspacehelper sync processes would not use that much peak memory.

How can we reproduce the bug? (as minimally and precisely as possible)

Limit the memory of the container you're developing in Kubernetes to, say, 50Mi, and configure a sync as well.
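
A minimal sketch of that setup, assuming the workload is the Deployment behind the pods in the log above (namespace hvt, deployment orchestrator-v1) and that it was deployed with kubectl:

$ kubectl -n hvt set resources deployment orchestrator-v1 --limits=memory=50Mi
# Then run `devspace dev` with any sync path configured; the injected
# devspacehelper processes should hit the cgroup limit during the initial sync.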

Local Environment:

  • DevSpace Version: devspace version 5.16.2
  • Operating System: linux
  • Deployment method: kubectl apply

Kubernetes Cluster:

  • Cloud Provider: other
  • Kubernetes Version:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.15", GitCommit:"58178e7f7aab455bc8de88d3bdd314b64141e7ee", GitTreeState:"clean", BuildDate:"2021-09-15T19:18:00Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

/kind bug


    Labels

    area/sync (Issues related to the real-time code synchronization)
