
Connection Issues with VPC CNI Network Policy Enforcement #414

@janavenkat

Description

What happened:

We recently enabled network policy enforcement in the AWS VPC CNI. After enabling it, we started experiencing intermittent connectivity issues when EKS pods connect to RabbitMQ hosted behind a VPC peering connection. The issue occurs randomly, even though we do not have any deny network policies in place, only allow_all ingress and allow_all egress policies (see the sketch below).
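For reference, the policies in place are of the allow-all kind sketched below. The manifest is illustrative only: the name and namespace are placeholders, not the exact manifests in our cluster.

    # Sketch of an allow-all ingress/egress policy (placeholder names)
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-all            # placeholder name
      namespace: example-ns      # placeholder namespace
    spec:
      podSelector: {}            # selects every pod in the namespace
      policyTypes:
        - Ingress
        - Egress
      ingress:
        - {}                     # a single empty rule allows all ingress
      egress:
        - {}                     # a single empty rule allows all egress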

Disabling network policy enforcement resolves the issue, and re-enabling it brings the problem back. The network-policy-agent logs point to a possible reversal of the connection direction: the reply from RabbitMQ (source port 5671 back to the pod's ephemeral port) is denied as if it were a new inbound connection instead of established return traffic, which suggests a potential conntrack issue.

We captured the DENY logs below by enabling enablePolicyEventLogs (see the sketch after this paragraph for how it was set).
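For context, policy event logging was switched on through the node agent configuration. The snippet below is a minimal sketch, assuming the nodeAgent.enablePolicyEventLogs setting exposed by the VPC CNI managed add-on / helm chart configuration values; the exact quoting/type may differ between the two.

    # VPC CNI add-on / helm chart configuration values (sketch)
    nodeAgent:
      enablePolicyEventLogs: "true"   # emit ACCEPT/DENY policy decision logs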

We are also using Otterize to automate our network policies.

Attached logs:

    [
      {
        "@timestamp": "2025-02-04 10:36:49.034",
        "@message": "Node: REDACTED.compute.internal; SIP: <INTERNAL_IP>; SPORT: 53912; DIP: <EXTERNAL_IP>; DPORT: 5671; PROTOCOL: TCP; PolicyVerdict: ACCEPT"
      },
      {
        "@timestamp": "2025-02-04 10:36:48.986",
        "@message": "Node: REDACTED.compute.internal; SIP: <EXTERNAL_IP>; SPORT: 5671; DIP: <INTERNAL_IP>; DPORT: 53912; PROTOCOL: TCP; PolicyVerdict: DENY"
      }
    ]

What you expected to happen:

We do not expect any denied connections, since all ingress and egress traffic is allowed.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.29
  • CNI Version: 1.19.0
  • OS (e.g: cat /etc/os-release): Amazon Linux 2
  • Kernel (e.g. uname -a): 5.10.233-233.887.amzn2.x86_64
