Description
Problem
Currently, Liqo enables seamless network connectivity between clusters, providing a unified network layer. However, this unrestricted connectivity may not always be desirable, as certain security policies should be enforced to limit traffic between clusters. This feature aims to extend Liqo's filtering capabilities, allowing users to define and apply network filtering rules that control inter-cluster communications.
Why Network Security Policies Are Not Enough
Kubernetes NetworkPolicies control traffic at the IP and port level (OSI layer 3 and 4) but have key limitations for inter-cluster security:
- CNI Dependency – Enforcement depends on the CNI plugin, and not all CNIs support inter-cluster filtering, making enforcement inconsistent.
- Single-Cluster Scope – They are designed for single clusters. Inter-cluster policies rely on IP-based rules, which may be ignored or inconsistently enforced by different CNIs.
- Node-Level Traffic – Pods can always communicate with their hosting node, bypassing NetworkPolicies.
Additionally, Liqo encapsulates pod-to-node traffic, making it invisible to the CNI.
Using nftables at the system level ensures precise traffic filtering, enforcing security before packets reach the node’s network stack.
Describe the solution you would like
Objectives
The goal is to extend the CRD (`networking.liqo.io/v1beta1/FirewallConfiguration`) to define filtering network policies, allowing rules based on IPs, IP ranges, CIDRs, ports, and protocols.
As we know, the `firewallconfiguration_controller` reconciles the FirewallConfiguration CRD, which describes (among other things) the `RulesSet` to be applied (`natRule`, `routeRule`, `filterRule`).
More specifically, in the context of `filterRule`, three different `filterAction` values should be implemented: `drop`, `accept`, and `reject` (in addition to the existing `ctmark`).
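For illustration, a `filterRules` entry using one of the proposed actions might look like the following sketch (field layout follows the example later in this issue; names and addresses are placeholders):

```yaml
filterRules:
  - name: reject_cluster1_pod      # placeholder rule name
    action: reject                 # proposed action: matching packets are refused and the sender is notified
    counter: true
    match:
      - ip:
          value: 10.10.1.52        # placeholder source IP
          position: src
        op: eq
```

With `drop`, the same rule would silently discard matching packets, while `accept` would explicitly allow them.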
FirewallConfiguration Changed Fields Explanation
- `chains`: list of firewall chains.
  - `type`: chain type (`filter`).
  - `rules`: defines the filtering rules.
    - `filterRules`: list of rules.
      - `name`: rule name (e.g. `drop_traffic`).
      - `action`: action to apply (`drop`, `accept`, `reject`).
      - `match`: defines the match conditions.
        - `ip.value`: IP to match. It can be a single IP (`10.10.1.52`), a range (`10.10.1.52-10.10.1.53`), or a CIDR (`10.10.0.0/24`).
      - `counter`: (optional, default `true`) tracks the number of times the rule is triggered.
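As a sketch of the different `ip.value` forms (rule names and addresses are placeholders), matches could for example be expressed as:

```yaml
filterRules:
  - name: drop_from_range          # placeholder: matches a source address range
    action: drop
    match:
      - ip:
          value: 10.10.1.52-10.10.1.53   # range form
          position: src
        op: eq
  - name: drop_to_subnet           # placeholder: matches a destination CIDR
    action: drop
    match:
      - ip:
          value: 10.10.0.0/24            # CIDR form
          position: dst
        op: eq
```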
How does it work?
To provide the current general context, we can briefly summarize the flow as shown in the following image:
The controller reconciling the previously applied CR will generate and apply the corresponding nft rules.
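For instance, assuming a direct translation, a `drop` rule matching a source and a destination address would produce an nft rule roughly of the form `ip saddr <src> ip daddr <dst> counter drop` inside the chain defined by the CR.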
Example
Topology
For the sake of simplicity, the component that will run the `firewallconfiguration_controller` is the Liqo Gateway Pod on Cluster2.
The above image provides an overview of Liqo's default behavior.
The topology shows 2 clusters: Cluster1 and Cluster2 peered with Liqo.
Green arrows show permitted connections between pods: since all connections are allowed, pods belonging to different clusters can contact each other without any restriction.
Ex. Pod1 `10.10.1.67` (Cluster1) can directly ping Pod2 `10.20.1.2` (Cluster2).
We want to block the traffic between Pod1 (Cluster1) and Pod2 (Cluster2) by applying a FirewallConfiguration that specifies the related IP addresses.
Applied FirewallConfiguration
```yaml
table:
  name: table_name
  family: IPV4
  chains:
    - hook: forward
      name: filter_chain_name
      policy: accept
      priority: 0
      type: filter
      rules:
        filterRules:
          - name: drop_traffic
            counter: true
            action: drop
            match:
              - ip:
                  value: 10.10.1.67
                  position: src
                op: eq
              - ip:
                  value: 10.20.1.2
                  position: dst
                op: eq
```
Note:
The reverse traffic could be blocked by adding another rule with the same IP address values but the opposite `position`, as sketched below.
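A minimal sketch of such a reverse-direction rule, reusing the addresses from the example above (the rule name is a placeholder):

```yaml
- name: drop_traffic_reverse
  counter: true
  action: drop
  match:
    - ip:
        value: 10.20.1.2           # Pod2 is now the source
        position: src
      op: eq
    - ip:
        value: 10.10.1.67          # Pod1 is now the destination
        position: dst
      op: eq
```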
As a result, the traffic coming from Pod1 in Cluster1 and directed to Pod2 in Cluster2 is blocked.
The following image illustrates the effect of the enforced security policies:
Ex. Pod `10.10.1.67` (Cluster1) can no longer ping `10.20.1.2` (Cluster2), as the security policy prevents such communication.
Describe the user value of this feature
Which scenarios can NSE unlock?
The previously explained scenario was one of the simplest possible, but there are many more possibilities.
The idea is to provide a way to define the most flexible and extensible set of filtering rules, allowing users to customize inter-cluster traffic policies based on their specific security and connectivity requirements.
This means that there can be much more complex scenarios involving multiple clusters (and multiple offloaded namespaces).
Describe your proposed solution
No response
Do you volunteer to implement this feature?
- I want to implement this feature
Code of Conduct
- I agree to follow this project's Code of Conduct