forked from protolambda/erigon
We have an issue with a couple of nodes running on Kubernetes that look like they have memory leaks. Two nodes with a memory limit of 16 GB, and one with 120 GB, eventually hit the limit and get OOMKilled. Increasing the limit only delays the OOM kill and buys more time before the restart.
Can this be looked into? We would like to know whether this is a misconfiguration on our side that a flag change would fix, or an internal issue.
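Before deciding between the two, it may help to check whether the growth is in the Go heap at all, or in memory-mapped database pages counted against the cgroup limit. A minimal diagnostic sketch, assuming the node is restarted with erigon's `--pprof` flag (profile server on `127.0.0.1:6060` by default; the URL and port here are that default, not something from this report):

```shell
# Hypothetical endpoint: assumes erigon was started with --pprof
# (the profiling server listens on 127.0.0.1:6060 by default).
PPROF_URL="http://127.0.0.1:6060/debug/pprof/heap"

if curl -sf --max-time 5 "$PPROF_URL" -o heap.pb.gz; then
  # Summarize the top in-use allocations (requires a Go toolchain).
  go tool pprof -top -inuse_space heap.pb.gz
else
  echo "pprof endpoint not reachable; start erigon with --pprof first"
fi
```

If the heap profile stays flat while the pod's RSS grows, the growth is likely MDBX's memory-mapped page cache rather than a Go-level leak.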
System information
Erigon version: erigon-2.60.10-0.8.1
OS & Version: Kubernetes v1.30.6
Erigon Command (with flags/config):
```yaml
command:
  - erigon
  - '--datadir=/home/node/data'
  - '--db.pagesize=4096'
  - '--ethash.dagdir=/home/node/data/dag'
  - '--db.size.limit=8TB'
  - '--chain=base-mainnet'
  - '--networkid=8453'
  - '--authrpc.jwtsecret=/home/node/jwt-secret.txt'
  - '--authrpc.vhosts=*'
  - '--http'
  - '--http.addr=0.0.0.0'
  - '--http.corsdomain=null'
  - '--http.vhosts=*'
  - '--http.api=eth,erigon,net,web3,debug,trace'
  - '--ws'
  - '--rollup.sequencerhttp=https://mainnet-sequencer.base.org'
  - '--rollup.historicalrpc='
  - '--rollup.historicalrpctimeout=5s'
  - '--maxpeers=0'
  - '--nodiscover'
  - '--identity=xxx'
  - '--rollup.disabletxpoolgossip=true'
  - '--metrics'
  - '--metrics.addr=0.0.0.0'
  - '--healthcheck'
  - '--private.api.addr=0.0.0.0:9090'
  - '--private.api.ratelimit=31872'
  - '--http.timeouts.read=300s'
```
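One mitigation worth trying while the cause is investigated: erigon is a Go program, so `GOMEMLIMIT` (Go 1.19+) can cap the runtime's heap below the container limit, making the GC release memory before the kernel OOM-kills the pod. A hypothetical container-spec sketch (the names, limits, and headroom value below are illustrative assumptions, not taken from this report):

```yaml
# Hypothetical pod container spec: keep the Go heap under the cgroup limit.
containers:
  - name: erigon
    resources:
      limits:
        memory: 16Gi
    env:
      # Illustrative value: leave headroom under the 16Gi limit for
      # MDBX page cache, goroutine stacks, and non-Go allocations.
      - name: GOMEMLIMIT
        value: "12GiB"
```

Note that `GOMEMLIMIT` is a soft limit on the Go runtime only; memory-mapped MDBX pages are managed by the kernel and are not affected by it.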