High memory consumption on nodes and possible memory leak #239

@k-mitevski

Description

We have an issue with a couple of nodes running on K8s that appear to have a memory leak. Two nodes with a memory limit of 16 GB and one with a limit of 120 GB eventually hit the limit and get OOMKilled. Increasing the limit only delays the OOM kill and buys more time before the restart happens.

Could this be looked into? We would like to know whether this is a misconfiguration on our side that we could fix by adjusting a flag, or whether the leak is internal to Erigon.

For example (screenshots of the pods' memory usage):

[Image]

[Image]
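
For reference, the limits mentioned above are ordinary container resource limits on the Erigon pods. A simplified sketch of the relevant part of the spec (illustrative values matching the numbers above, not copied verbatim from our manifests):

      resources:
        requests:
          memory: 16Gi
        limits:
          memory: 16Gi   # 120Gi on the third node; exceeding this triggers the OOMKill

The kubelet kills the container as soon as its working set exceeds this limit, which is what we observe.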

System information

Erigon version: erigon-2.60.10-0.8.1

OS & Version: Kubernetes v1.30.6

Erigon Command (with flags/config):

      command:
        - erigon
        - '--datadir=/home/node/data'
        - '--db.pagesize=4096'
        - '--ethash.dagdir=/home/node/data/dag'
        - '--db.size.limit=8TB'
        - '--chain=base-mainnet'
        - '--networkid=8453'
        - '--authrpc.jwtsecret=/home/node/jwt-secret.txt'
        - '--authrpc.vhosts=*'
        - '--http'
        - '--http.addr=0.0.0.0'
        - '--http.corsdomain=null'
        - '--http.vhosts=*'
        - '--http.api=eth,erigon,net,web3,debug,trace'
        - '--ws'
        - '--rollup.sequencerhttp=https://mainnet-sequencer.base.org'
        - '--rollup.historicalrpc='
        - '--rollup.historicalrpctimeout=5s'
        - '--maxpeers=0'
        - '--nodiscover'
        - '--identity=xxx'
        - '--rollup.disabletxpoolgossip=true'
        - '--metrics'
        - '--metrics.addr=0.0.0.0'
        - '--healthcheck'
        - '--private.api.addr=0.0.0.0:9090'
        - '--private.api.ratelimit=31872'
        - '--http.timeouts.read=300s'
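
If it helps with debugging, we can enable Erigon's pprof endpoint on these pods and capture heap profiles shortly before the OOM kill. A sketch of the extra flags we would add (flag names assumed from Erigon's geth-derived pprof options, and the port is an arbitrary choice to keep it separate from the metrics endpoint; please correct us if they differ):

        # hypothetical additions for heap profiling; flag names assumed, not verified
        - '--pprof'
        - '--pprof.addr=0.0.0.0'
        - '--pprof.port=6061'

A heap profile could then be pulled with something like `go tool pprof http://<pod-ip>:6061/debug/pprof/heap` to see whether the growth is on the Go heap or comes from memory the Go runtime does not account for.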
