If you really want to go crazy, add a classic AWS LB in front of the master nodes.
Then only 1 worker node out of 3 will have a working nginx pod.
STEP 1: Create a classic LB with listeners on ports 6443, 8132 and 9443 forwarding to the k0s master nodes.
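For reference, a classic ELB with these three TCP listeners can be sketched with the AWS CLI. This is a minimal sketch, not the exact commands used here: the LB name, subnet, security group and instance IDs below are placeholders.

```shell
# Hypothetical IDs -- substitute your own subnet, security group
# and the three controller instances.
aws elb create-load-balancer \
  --load-balancer-name k0s-cluster-elb \
  --listeners \
    "Protocol=TCP,LoadBalancerPort=6443,InstanceProtocol=TCP,InstancePort=6443" \
    "Protocol=TCP,LoadBalancerPort=8132,InstanceProtocol=TCP,InstancePort=8132" \
    "Protocol=TCP,LoadBalancerPort=9443,InstanceProtocol=TCP,InstancePort=9443" \
  --subnets subnet-0123456789abcdef0 \
  --security-groups sg-0123456789abcdef0

# Attach the three k0s controller nodes as backends.
aws elb register-instances-with-load-balancer \
  --load-balancer-name k0s-cluster-elb \
  --instances i-aaaa1111 i-bbbb2222 i-cccc3333
```

All three listeners are plain TCP passthrough, since 6443 (Kubernetes API), 8132 (konnectivity) and 9443 (k0s controller join API) all terminate TLS on the controllers themselves.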
STEP 2: Add the following to the k0sctl YAML file:
```yaml
spec:
  api:
    address: 172.31.0.2
    externalAddress: k0s-cluster-elb-202014585.eu-central-1.elb.amazonaws.com
    k0sApiPort: 9443
    port: 6443
    sans:
      - 172.31.0.2
```
STEP 3: Run k0sctl
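(The exact invocation isn't shown in the report; judging by the Terraform `local-exec` output below, it is the standard apply against the config above, something like:)

```shell
# Apply the cluster config; k0sctl connects to all hosts over SSH
# and installs/joins controllers and workers.
k0sctl apply --config k0sctl.yaml
```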
```
null_resource.launchpad (local-exec): (k0s ASCII-art banner)
null_resource.launchpad (local-exec): k0sctl v0.15.4 Copyright 2023, k0sctl authors.
null_resource.launchpad (local-exec): Anonymized telemetry of usage will be sent to the authors.
null_resource.launchpad (local-exec): By continuing to use k0sctl you agree to these terms:
null_resource.launchpad (local-exec): https://k0sproject.io/licenses/eula
null_resource.launchpad (local-exec): level=warning msg="An old cache directory still exists at /home/ec2-user/.k0sctl/cache, k0sctl now uses /home/ec2-user/.cache/k0sctl"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Connect to hosts"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: connected"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: connected"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: connected"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: connected"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: connected"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: connected"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Detect host operating systems"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: is running Ubuntu 18.04.5 LTS"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: is running Ubuntu 18.04.5 LTS"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: is running Ubuntu 18.04.5 LTS"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: is running Ubuntu 18.04.5 LTS"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: is running Ubuntu 18.04.5 LTS"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: is running Ubuntu 18.04.5 LTS"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Acquire exclusive host lock"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Prepare hosts"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Gather host facts"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: using ip-172-31-27-231 as hostname"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: using ip-172-31-21-51 as hostname"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: using ip-172-31-16-199 as hostname"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: using ip-172-31-17-232 as hostname"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: using ip-172-31-16-107 as hostname"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: using ip-172-31-19-107 as hostname"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: discovered ens5 as private interface"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: discovered ens5 as private interface"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: discovered ens5 as private interface"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: discovered ens5 as private interface"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: discovered ens5 as private interface"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: discovered ens5 as private interface"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Validate hosts"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Gather k0s facts"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Validate facts"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Download k0s on hosts"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: downloading k0s v1.28.4+k0s.0"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: downloading k0s v1.28.4+k0s.0"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: downloading k0s v1.28.4+k0s.0"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: downloading k0s v1.28.4+k0s.0"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: downloading k0s v1.28.4+k0s.0"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: downloading k0s v1.28.4+k0s.0"
null_resource.launchpad: Still creating... [5m10s elapsed]
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Install k0s binaries on hosts"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Configure k0s"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: validating configuration"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: validating configuration"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: validating configuration"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: configuration was changed"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: configuration was changed"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: configuration was changed"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Initialize the k0s cluster"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: installing k0s controller"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: waiting for the k0s service to start"
null_resource.launchpad: Still creating... [5m20s elapsed]
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: waiting for kubernetes api to respond"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Install controllers"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: generating token"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: writing join token"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: installing k0s controller"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: starting service"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: waiting for the k0s service to start"
null_resource.launchpad: Still creating... [5m30s elapsed]
null_resource.launchpad: Still creating... [5m40s elapsed]
null_resource.launchpad: Still creating... [5m50s elapsed]
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.21.51:22: waiting for kubernetes api to respond"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: generating token"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: writing join token"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: installing k0s controller"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: starting service"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: waiting for the k0s service to start"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.17.232:22: waiting for kubernetes api to respond"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Install workers"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: validating api connection to https://k0s-cluster-elb-202014585.eu-central-1.elb.amazonaws.com:6443"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: validating api connection to https://k0s-cluster-elb-202014585.eu-central-1.elb.amazonaws.com:6443"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: validating api connection to https://k0s-cluster-elb-202014585.eu-central-1.elb.amazonaws.com:6443"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.27.231:22: generating token"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: writing join token"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: writing join token"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: writing join token"
null_resource.launchpad: Still creating... [6m0s elapsed]
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: installing k0s worker"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: installing k0s worker"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: installing k0s worker"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: starting service"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.199:22: waiting for node to become ready"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: starting service"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: starting service"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.19.107:22: waiting for node to become ready"
null_resource.launchpad (local-exec): level=info msg="[ssh] 172.31.16.107:22: waiting for node to become ready"
null_resource.launchpad: Still creating... [6m10s elapsed]
null_resource.launchpad: Still creating... [6m20s elapsed]
null_resource.launchpad: Still creating... [6m30s elapsed]
null_resource.launchpad: Still creating... [6m40s elapsed]
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Release exclusive host lock"
null_resource.launchpad (local-exec): level=info msg="==> Running phase: Disconnect from hosts"
null_resource.launchpad (local-exec): level=info msg="==> Finished in 1m44s"
null_resource.launchpad (local-exec): level=info msg="k0s cluster version v1.28.4+k0s.0 is now installed"
null_resource.launchpad (local-exec): level=info msg="Tip: To access the cluster you can now fetch the admin kubeconfig using:"
null_resource.launchpad (local-exec): level=info msg="     k0sctl kubeconfig"
```
Everything looks fine here.
STEP 4: Connect to cluster
STEP 5: Everything seems to be working, but it is not.
```
kubectl create deployment mydep --image=nginx --replicas=3
```
```
[ec2-user@ip-172-31-21-103 6nodes_k0s_met_LB]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
default       mydep-66c55fb688-krh7s            1/1     Running   0          6m27s
default       mydep-66c55fb688-m2t5b            1/1     Running   0          6m27s
default       mydep-66c55fb688-vt7rr            1/1     Running   0          6m27s
kube-system   coredns-85df575cdb-6d8qz          1/1     Running   0          3h12m
kube-system   coredns-85df575cdb-wcc6n          1/1     Running   0          3h12m
kube-system   konnectivity-agent-79tkm          1/1     Running   0          3h12m
kube-system   konnectivity-agent-7r7z9          1/1     Running   0          3h12m
kube-system   konnectivity-agent-gqj8w          1/1     Running   0          3h12m
kube-system   kube-proxy-5r272                  1/1     Running   0          3h12m
kube-system   kube-proxy-xb48j                  1/1     Running   0          3h12m
kube-system   kube-proxy-z7p8c                  1/1     Running   0          3h12m
kube-system   kube-router-5jnxv                 1/1     Running   0          3h12m
kube-system   kube-router-cg6nb                 1/1     Running   0          3h12m
kube-system   kube-router-w799t                 1/1     Running   0          3h12m
kube-system   metrics-server-7556957bb7-8b6t8   1/1     Running   0          3h12m
```
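To map each replica to its worker node (and so correlate pods with the node IPs in the failing log fetches below), the wide output format is useful:

```shell
# -o wide adds IP and NODE columns to the default pod listing,
# showing which worker each mydep replica was scheduled to.
kubectl get pods -o wide
```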
```
[ec2-user@ip-172-31-21-103 6nodes_k0s_met_LB]$ kubectl logs mydep-66c55fb688-krh7s
Error from server: Get "https://172.31.16.107:10250/containerLogs/default/mydep-66c55fb688-krh7s/nginx": dial tcp 172.31.16.107:10250: i/o timeout
```
```
[ec2-user@ip-172-31-21-103 6nodes_k0s_met_LB]$ kubectl logs mydep-66c55fb688-m2t5b
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/01/18 14:46:11 [notice] 1#1: using the "epoll" event method
2024/01/18 14:46:11 [notice] 1#1: nginx/1.25.3
2024/01/18 14:46:11 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2024/01/18 14:46:11 [notice] 1#1: OS: Linux 5.4.0-1038-aws
2024/01/18 14:46:11 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 999999:999999
2024/01/18 14:46:11 [notice] 1#1: start worker processes
2024/01/18 14:46:11 [notice] 1#1: start worker process 29
2024/01/18 14:46:11 [notice] 1#1: start worker process 30
2024/01/18 14:46:11 [notice] 1#1: start worker process 31
2024/01/18 14:46:11 [notice] 1#1: start worker process 32
2024/01/18 14:46:11 [notice] 1#1: start worker process 33
2024/01/18 14:46:11 [notice] 1#1: start worker process 34
2024/01/18 14:46:11 [notice] 1#1: start worker process 35
2024/01/18 14:46:11 [notice] 1#1: start worker process 36
```
```
[ec2-user@ip-172-31-21-103 6nodes_k0s_met_LB]$ kubectl logs mydep-66c55fb688-vt7rr
Error from server: Get "https://172.31.16.199:10250/containerLogs/default/mydep-66c55fb688-vt7rr/nginx": dial tcp 172.31.16.199:10250: i/o timeout
```
For 2 out of 3 pods, the API server cannot fetch their container logs (nor exec into them), but the pod on one worker node works.
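Since `kubectl logs`/`kubectl exec` travel API server → konnectivity tunnel (port 8132 here, behind the ELB) → kubelet on port 10250, the konnectivity agents on the failing workers are the natural first thing to check. A hedged starting point (standard kubectl commands; the agent pod names are the ones from the listing above):

```shell
# Inspect each konnectivity-agent's log for tunnel/connection errors
# toward the konnectivity server on port 8132 behind the ELB.
kubectl -n kube-system logs konnectivity-agent-79tkm
kubectl -n kube-system logs konnectivity-agent-7r7z9
kubectl -n kube-system logs konnectivity-agent-gqj8w
```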