
[BUG] proc-sys-fs-binfmt_misc.automount journal spam from agent process #41735

@strelok1

Description


Agent Environment
Agent 7.68.3 - Commit: 874cfce - Serialization version: v5.0.155 - Go version: go1.24.5

The agent runs in an EKS cluster, deployed with Helm. This happens on both AL2 and AL2023 hosts, and has been happening on previous agent versions as well.

Describe what happened:
The host journal is filled with hundreds of thousands of log entries like these:

Oct 07 09:18:06 ***87.ap-southeast-2.compute.internal systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 07 09:18:06 ***87.ap-southeast-2.compute.internal systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 7286 (agent)
Oct 07 09:18:06 ***87.ap-southeast-2.compute.internal systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 07 09:18:06 ***87.ap-southeast-2.compute.internal systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 7286 (agent)
Oct 07 09:18:06 ***87.ap-southeast-2.compute.internal systemd[1]: proc-sys-fs-binfmt_misc.automount: Automount point already active?
Oct 07 09:18:06 ***87.ap-southeast-2.compute.internal systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 7286 (agent)
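
A guess at the mechanism (an assumption on my part, not verified): the agent touches /proc/sys/fs/binfmt_misc while gathering process/filesystem data, from a mount namespace where binfmt_misc is not actually mounted, so every access re-triggers the host's systemd automount unit and journals one entry pair. A minimal sketch that should reproduce the same journal pattern from any process:

package main

import (
    "os"
    "time"
)

func main() {
    for {
        // Each Stat of the automount point sends an automount request to
        // systemd, which journals "Got automount request for
        // /proc/sys/fs/binfmt_misc, triggered by <pid> (<comm>)".
        os.Stat("/proc/sys/fs/binfmt_misc")
        time.Sleep(time.Second)
    }
}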

Describe what you expected:
I expect these journal entries not to be generated by the hundreds of thousands. In one cluster we ingest the host /var/log/journal through the journald logs integration, and this spam is costing us millions of ingested log entries per day.

The entries appear on hosts where we are not ingesting host logs as well.
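
A possible agent-side mitigation, sketched under the same assumption (binfmtMiscMounted is a hypothetical helper, not actual agent code): consult /proc/mounts first, and only touch /proc/sys/fs/binfmt_misc when it is already mounted in the agent's own namespace. Reading /proc/mounts does not trigger the automount.

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

// binfmtMiscMounted reports whether binfmt_misc is mounted at its usual
// path in the current mount namespace, by scanning /proc/mounts
// (fields: device mountpoint fstype options dump pass).
func binfmtMiscMounted() bool {
    f, err := os.Open("/proc/mounts")
    if err != nil {
        return false
    }
    defer f.Close()
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) >= 3 && fields[1] == "/proc/sys/fs/binfmt_misc" && fields[2] == "binfmt_misc" {
            return true
        }
    }
    return false
}

func main() {
    fmt.Println("binfmt_misc mounted:", binfmtMiscMounted())
}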

Steps to reproduce the issue:
Deploy Datadog with a fairly minimal configuration for APM and host process collection:

datadog:
  apiKey: ****
  apm:
    portEnabled: true
  clusterName: ***
  collectEvents: true
  containerExclude: image:.*
  containerExcludeLogs: image:.*
  containerInclude: image:***/*** image:***/***
  containerIncludeLogs: image:***/*** image:***/***
  dogstatsd:
    useHostPort: true
  kubelet:
    host:
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    tlsVerify: false
  logLevel: WARN
  logs:
    enabled: true
  processAgent:
    enabled: true
    processCollection: true
  serviceMonitoring:
    enabled: true
targetSystem: linux

Additional environment details (Operating System, Cloud provider, etc):
AWS, EKS, AL2/AL2023
