
LB POD Crashloop #20

Open
infinitydon opened this issue May 19, 2019 · 7 comments


@infinitydon

Hello,

I tried to deploy this on k8s 1.12.6 by changing an existing service's type from NodePort to LoadBalancer. The operator deployment logs show that it received the event:

{"level":"info","ts":1558264670.0868776,"logger":"service-lb-controller","msg":"Creating a new DS","Request.Namespace":"tqa","Request.Name":"tqa-loadtest-grafana","DS.Namespace":"tqa","DS.Name":"service-lb-tqa-loadtest-grafana"}
{"level":"info","ts":1558264670.2041252,"logger":"service-lb-controller","msg":"Reconciling Service","Request.Namespace":"tqa","Request.Name":"tqa-loadtest-grafana"}
{"level":"info","ts":1558264670.3062572,"logger":"service-lb-controller","msg":"Existing service addresses match, no need to update"}
{"level":"info","ts":1558264670.3077016,"logger":"service-lb-controller","msg":"Reconciling Service","Request.Namespace":"tqa","Request.Name":"tqa-loadtest-grafana"}
{"level":"info","ts":1558264670.3094041,"logger":"service-lb-controller","msg":"Existing service addresses match, no need to update"}

But the DS load balancer pod stayed in CrashLoopBackOff state, with no logs to show what may be wrong:

service-lb-tqa-loadtest-grafana-jcf9d         0/1     CrashLoopBackOff   7          14m
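
For reference, the last exit status of a crashlooping container can usually be inspected even when there are no logs (standard kubectl; the pod name and namespace are taken from the listing above):

# Show events and the last termination state (exit code, reason) of the pod
kubectl describe pod service-lb-tqa-loadtest-grafana-jcf9d -n tqa

# Or read just the last exit code of the first container
kubectl get pod service-lb-tqa-loadtest-grafana-jcf9d -n tqa \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'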

Also, the operator did not show any error logs.

Additionally, when I changed the service type back to NodePort, the DaemonSet was not deleted.

Any idea what may be wrong? It would be helpful if the LB pod provided some logs indicating what issue it is facing.

@jnummelin
Contributor

The only way for the LB pods to crash without any log entries is on this line: https://github.com/kontena/akrobateo/blob/master/lb-image/entrypoint.sh#L8

And yes, if that is the case it really should log something. :)
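
For context, given the fix referenced later in this thread (#23), that line appears to be an IP-forwarding guard. A minimal sketch of that kind of check (paraphrased, not the exact akrobateo source), with the missing log line added before exiting:

#!/bin/sh
# Bail out early if IP forwarding is disabled on the host;
# the LB pod cannot forward traffic to the service without it.
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != "1" ]; then
  echo "net.ipv4.ip_forward is disabled on the node; exiting" >&2
  exit 1
fi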

The change between LoadBalancer and NodePort is currently unhandled, as you noticed. We need to handle that too at some point. If possible, could you open a separate issue for that so we can track it properly?

@infinitydon
Author

Thanks for the reply. I will check this again.

I have opened a separate issue for the DaemonSet not being deleted after the service type is changed back to NodePort.

@jnummelin
Contributor

#23 introduced some logging for the case where IP forwarding is not enabled. Could you test with the latest image?
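
Whether IP forwarding is enabled can be checked directly on each node (plain sysctl, nothing akrobateo-specific):

# Should print "net.ipv4.ip_forward = 1"
sysctl net.ipv4.ip_forward

# Enable it for the running kernel
sudo sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ip-forward.conf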

@infinitydon
Author

Hi,

Yes, I can see some logs now, but strangely I am seeing two containers inside the service-lb pods (echo & echo2):

service-lb-echoserver-7bltj   2/2     Running   0          13m
service-lb-echoserver-gpjdj   2/2     Running   0          13m
service-lb-echoserver-kw9zk   2/2     Running   0          13m
service-lb-echoserver-rm6hb   2/2     Running   0          13m
service-lb-echoserver-tdqrj   2/2     Running   0          13m

Is this normal?

@jnummelin
Contributor

Do you have multiple ports defined in the service? The LB pods have one container per port.

@infinitydon
Author

No, I am using the test-service template from the git repo (echoserver); I did not modify the file at all.

@jnummelin
Contributor

The test-service does have two ports defined:

apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  type: LoadBalancer
  ports:
  - name: echo
    port: 8080
    targetPort: 8080
    protocol: TCP
  - name: echo2
    port: 9090
    targetPort: 8080
    protocol: TCP
  selector:
    app: echoserver

So Akrobateo creates the LB pods with one container per port, two in this case.
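
This can be verified by listing the container names in one of the LB pods (standard kubectl; pod name taken from the listing above):

# Should print the per-port containers: echo echo2
kubectl get pod service-lb-echoserver-7bltj -o jsonpath='{.spec.containers[*].name}'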
