Bug Report
portmap fails silently while setting up SNAT.
Might be related to #9883
Description
I am trying to set up a "pod-gateway" that looks something like this:
```
 ------------                         -------------
| Client pod | <------bridge------>  | Gateway pod | <---macvlan---> LAN
 ------------       (L2 only,         -------------    (SNAT here)
                  LAN IP gateway)
```
To achieve this, I am using Multus and doing CNI chaining. I've managed to set up the routing tables; however, the portmap binary seems to fail silently (I don't see errors in Multus logs, k8s events or dmesg) when setting up SNAT.
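For reference, the routes I expect host-local/bridge to install inside the client pod look roughly like this (a sketch, not captured output; `net1` is Multus's default name for the first additional attachment, and the addresses come from the manifests in the reproduction steps below):

```console
$ ip route    # inside the client pod (sketch of expected routes)
1.1.1.1 via 192.168.24.254 dev net1
8.8.8.8 via 192.168.24.254 dev net1
192.168.24.0/24 dev net1 proto kernel scope link src 192.168.24.1
```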
I am unsure why portmap fails. I tried both the nftables and iptables backends, but both lead to the same result: nothing happens.
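One way to surface an error that the runtime might be swallowing is to run the portmap binary by hand on the node with the chained config it would receive, so any failure comes back as a JSON error on stdout instead of disappearing. This is only a rough sketch assuming the standard CNI stdin/environment contract; the binary path, netns name, container ID, interface name and prevResult below are placeholders, not values from my cluster:

```console
# ip netns add debug-ns    # scratch netns so CNI_NETNS points at something real
CNI_COMMAND=ADD \
CNI_CONTAINERID=debug-portmap \
CNI_NETNS=/var/run/netns/debug-ns \
CNI_IFNAME=net2 \
CNI_PATH=/opt/cni/bin \
/opt/cni/bin/portmap <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "macvlan",
  "type": "portmap",
  "backend": "nftables",
  "snat": true,
  "masqAll": true,
  "runtimeConfig": { "portMappings": [] },
  "prevResult": {
    "ips": [ { "version": "4", "address": "10.1.10.3/24" } ]
  }
}
EOF
```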
How to reproduce
- Install Multus
- Apply the following to the cluster (assuming a single-node cluster, thus using host-local IPAM):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: multus-cni-config
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan
  namespace: multus-cni-config
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "macvlan",
      "plugins": [
        {
          "type": "macvlan",
          "master": "eth1",
          "mode": "bridge",
          "ipam": {
            "type": "host-local",
            "subnet": "10.1.10.0/24",
            "rangeStart": "10.1.10.3",
            "rangeEnd": "10.1.10.9",
            "gateway": "10.1.10.1",
            "routes": [
              {"dst": "1.1.1.1/32"},
              {"dst": "8.8.8.8/32"}
            ]
          }
        },
        {
          "type": "tuning",
          "sysctl": {
            "net.ipv4.ip_forward": "1"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true},
          "backend": "nftables",
          "snat": true,
          "masqAll": true
        }
      ]
    }'
---
# This will be assigned to the gateway pod
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: gateway
  namespace: multus-cni-config
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "gateway",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "gw-bridge0",
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.24.0/24",
            "rangeStart": "192.168.24.254",
            "rangeEnd": "192.168.24.254"
          }
        }
      ]
    }'
---
# This will be assigned to client pods.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: client
  namespace: multus-cni-config
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "client",
      "type": "bridge",
      "bridge": "gw-bridge0",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.24.0/24",
        "rangeStart": "192.168.24.1",
        "rangeEnd": "192.168.24.127",
        "gateway": "192.168.24.254",
        "routes": [
          {"dst": "1.1.1.1/32"},
          {"dst": "8.8.8.8/32"}
        ]
      }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: gateway-pod
  namespace: multus-cni-config
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [
        { "name": "gateway" },
        { "name": "macvlan", "mac": "02:aa:bb:cc:dd:ee" }
      ]
spec:
  containers:
    - name: netshoot-container
      image: nicolaka/netshoot
      securityContext:
        capabilities:
          add: ["NET_ADMIN"] # Required to set iptables entry manually
      command:
        - "sh"
        - "-c"
        - |
          ip addr
          echo
          echo
          ip r
          echo '----- Waiting indefinitely -----'
          sleep infinity
---
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  namespace: multus-cni-config
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [
        { "name": "client" }
      ]
spec:
  containers:
    - name: netshoot-container
      image: nicolaka/netshoot
      command:
        - "sh"
        - "-c"
        - |
          ip addr
          echo
          echo
          ip r
          echo '----- Waiting indefinitely -----'
          sleep infinity
```
- Check that the NFT rules in the gateway pod are empty and that pinging 1.1.1.1 from the client fails:
```console
$ kubectl -n multus-cni-config exec -it gateway-pod -- nft list ruleset
$ kubectl -n multus-cni-config exec -it client-pod -- ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
^C
--- 1.1.1.1 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4107ms
```
- Add the SNAT rule manually and retry pinging from the client:
```console
$ kubectl -n multus-cni-config exec -it gateway-pod -- iptables -t nat -A POSTROUTING -o net2 -j MASQUERADE
$ kubectl -n multus-cni-config exec -it gateway-pod -- nft list ruleset
# Warning: table ip nat is managed by iptables-nft, do not touch!
table ip nat {
        chain POSTROUTING {
                type nat hook postrouting priority srcnat; policy accept;
                oifname "net2" counter packets 0 bytes 0 xt target "MASQUERADE"
        }
}
$ kubectl -n multus-cni-config exec -it client-pod -- ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=69.0 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=58 time=68.9 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=58 time=68.8 ms
^C
--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 68.817/68.899/68.981/0.066 ms
```
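For comparison, this is roughly the nft-native equivalent of the iptables rule I added by hand, i.e. the kind of masquerading I expected to end up in the gateway pod's ruleset. The table and chain names here are only illustrative, not what portmap would actually create:

```console
$ kubectl -n multus-cni-config exec -it gateway-pod -- nft add table ip nat
$ kubectl -n multus-cni-config exec -it gateway-pod -- nft 'add chain ip nat postrouting { type nat hook postrouting priority srcnat; policy accept; }'
$ kubectl -n multus-cni-config exec -it gateway-pod -- nft add rule ip nat postrouting oifname net2 masquerade
```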
Logs
N/A
Environment
- Talos version: 1.12.1
- Kubernetes version: 1.35.0
- Platform: bare-metal x64