Unable to access service/ingress to an offloaded pod: 504 Gateway Time-out #2909
Labels: fix (Fixes a bug in the codebase.)

Comments
it seems the label …

Hi @remmen-io, can you give us more insight into how you configured your ingress?

Hi @cheina97, here is the full deployment:
Hi @remmen-io, I think we fixed your issue in PR #2924 (merged). We found a bug in the IP remapping algorithm. Thanks for helping us spot it.
What happened:
I've deployed a pod, svc, and ingress on a cluster with the network fabric and offloading enabled.
The svc and ingress are excluded from resource reflection.
The pod starts successfully, I can see its logs, and I can access the service or the pod directly with `kubectl port-forward`.
Accessing the ingress, I get a 504.
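For context, here is a minimal sketch of the setup described above. All names, the image, and the host are placeholders, and the `liqo.io/skip-reflection` annotation is only an assumption about how the svc and ingress were excluded from reflection; the actual manifests are not included in this thread.

```yaml
# Hypothetical reproduction of the reported setup; names, image, and host are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo
spec:
  containers:
    - name: demo
      image: nginx:1.27
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo
  annotations:
    # Assumption: the svc/ingress were excluded from Liqo resource reflection;
    # the exact mechanism used by the reporter is not shown in the thread.
    liqo.io/skip-reflection: "true"
spec:
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    liqo.io/skip-reflection: "true"
spec:
  rules:
    - host: demo.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
```

With the pod offloaded to the provider cluster, the reported behaviour is:

```sh
# Works in the reported scenario: direct access via port-forward
kubectl port-forward pod/demo 8080:80
curl -s http://127.0.0.1:8080/

# Fails in the reported scenario: the ingress answers 504 Gateway Time-out
curl -v http://demo.example.com/
```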
What you expected to happen:
To be able to access the service through the ingress.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
We are using Cilium as the CNI with native routing. There is no NetworkPolicy preventing any traffic.
We have noticed that the pod gets the IP 10.71.72.225, which is not in the 10.71.0.0/18 range.
As a result, traffic from a pod to this IP gets routed over the default gateway, which we think is wrong.
On the node where the debug pod was running (with curl against the service/pod IP):

But even after manually adding a route, I still got no response, so we might be wrong.
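A sketch of this kind of check, run on the node hosting the debug pod; 10.71.72.225 is the remapped pod IP from the report, while the interface and port are placeholders:

```sh
# Which route does the node use for the remapped pod IP?
# (In the report it resolved to the default gateway, which looked wrong.)
ip route get 10.71.72.225

# Temporary host route of the kind that was tried manually; set the variable
# to whatever interface is expected to carry cross-cluster pod traffic.
POD_TRAFFIC_IFACE=cilium_host   # placeholder interface name
ip route add 10.71.72.225/32 dev "$POD_TRAFFIC_IFACE"

# Re-test from the node; the port is a placeholder for the service/pod port.
curl -v --max-time 5 http://10.71.72.225:80/
```

As noted later in the thread, the underlying cause was an IP remapping bug (fixed in #2924), which is consistent with the manual route not helping.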
Additional information:
Provider: e1-k8s-mfmm-lab-t
Consumer: e1-k8s-mfmm-lab-b
Liqo Status
Deployment
Environment:
Kubernetes version (use `kubectl version`): v1.30.4