
Unexpected errors in log for node addresses instance metadata #131

Open
kriswuollett opened this issue Sep 12, 2024 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@kriswuollett

I'm assuming there shouldn't be errors logged for something like getting an unexpected amount of instance metadata?

Logs:

I0912 21:11:50.519285  119283 controller.go:70] processing cluster nyc3-shared
I0912 21:11:50.519460  119283 controller.go:73] cluster nyc3-shared already exist
I0912 21:11:51.095307  119283 instances.go:47] Check instance metadata for nyc3-shared-control-plane
I0912 21:11:51.095548  119283 instances.go:47] Check instance metadata for nyc3-shared-worker
I0912 21:11:51.095943  119283 instances.go:47] Check instance metadata for nyc3-shared-worker2
I0912 21:11:51.096065  119283 instances.go:47] Check instance metadata for nyc3-shared-worker3
E0912 21:11:51.214039  119283 node_controller.go:281] Error getting instance metadata for node addresses: container addresses should have 2 values, got 5 values
E0912 21:11:51.303477  119283 node_controller.go:281] Error getting instance metadata for node addresses: container addresses should have 2 values, got 4 values
E0912 21:11:51.320311  119283 node_controller.go:281] Error getting instance metadata for node addresses: container addresses should have 2 values, got 5 values
E0912 21:11:51.341781  119283 node_controller.go:281] Error getting instance metadata for node addresses: container addresses should have 2 values, got 5 values

Environment:

$ uname -a
Linux REDACTED 6.1.0-25-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.106-3 (2024-08-26) x86_64 GNU/Linux
$ kind --version
kind version 0.24.0
$ nerdctl version
Client:
 Version:	v1.7.7
 OS/Arch:	linux/amd64
 Git commit:	5882c720f4e7f358fb26b759e514b3ae9dd8ea83
 buildctl:
  Version:	v0.15.2
  GitCommit:	9e14164a1099d3e41b58fc879cbdd6f2b2edb04e

Server:
 containerd:
  Version:	v1.7.22
  GitCommit:	7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
 runc:
  Version:	1.1.14
  GitCommit:	v1.1.14-0-g2c9f5602
@aojea
Contributor

aojea commented Sep 12, 2024

Someone reported something similar the other day with podman. I think the errors come from:

func IPs(name string) (ipv4 string, ipv6 string, err error) {
	// retrieve the IP address of the node using docker inspect
	cmd := kindexec.Command(containerRuntime, "inspect",
		"-f", "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}",
		name, // ... against the "node" container
	)
	lines, err := kindexec.OutputLines(cmd)
	if err != nil {
		return "", "", fmt.Errorf("failed to get container details: %w", err)
	}
	if len(lines) != 1 {
		return "", "", fmt.Errorf("file should only be one line, got %d lines: %w", len(lines), err)
	}
	ips := strings.Split(lines[0], ",")
	if len(ips) != 2 {
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values", len(ips))
	}
	return ips[0], ips[1], nil
}

If you can run the docker inspect command manually to understand why it does not detect two IPs, that will help with debugging.
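
For illustration, here is a minimal standalone sketch (not the project's code) of how that split behaves when a container is attached to more than one network: the template emits "IP,IPv6" per network with nothing between range iterations, so a second network's IPv4 runs straight onto the first network's IPv6 and the field count grows past two. The sample string below is hypothetical.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical template output for a container on two networks:
	// network 1 -> "10.4.1.2,fc00:f853:ccd:e793::2", network 2 -> "10.244.0.1,"
	// concatenated with no separator between range iterations.
	out := "10.4.1.2,fc00:f853:ccd:e793::210.244.0.1,"

	ips := strings.Split(out, ",")
	if len(ips) != 2 {
		// Mirrors the message in the node_controller log above.
		fmt.Printf("container addresses should have 2 values, got %d values\n", len(ips))
		return
	}
	fmt.Println("IPv4:", ips[0], "IPv6:", ips[1])
}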

@kriswuollett
Author

Control Plane:

# nerdctl inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' 0c1ccb1a9660
10.4.1.2,fc00:f853:ccd:e793::210.244.0.1,10.244.0.1,10.244.0.1,

Workers example:

# nerdctl inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' 78cb379f8a11
10.4.1.3,fc00:f853:ccd:e793::310.244.1.1,10.244.1.1,10.244.1.1,
# nerdctl inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' 8cc8716a4db6
10.4.1.4,fc00:f853:ccd:e793::410.244.3.1,10.244.3.1,10.244.3.1,
# nerdctl inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' e3e427961af0
10.4.1.5,fc00:f853:ccd:e793::510.244.2.1,10.244.2.1,
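
For reference, these lines line up with the log errors: the template has no separator between range iterations, so the first network's IPv6 runs straight into the next network's IPv4 (e.g. fc00:f853:ccd:e793::2 followed by 10.244.0.1), and the trailing comma adds an empty field. A quick standalone check (not project code) against the control-plane line:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Control-plane output exactly as reported above.
	out := "10.4.1.2,fc00:f853:ccd:e793::210.244.0.1,10.244.0.1,10.244.0.1,"
	fields := strings.Split(out, ",")
	fmt.Println(len(fields), fields)
	// Prints: 5 [10.4.1.2 fc00:f853:ccd:e793::210.244.0.1 10.244.0.1 10.244.0.1 ]
	// matching the "got 5 values" error in the log.
}

The worker e3e427961af0 output splits into 4 fields the same way, matching the "got 4 values" error.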

kind.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: nyc3-shared
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker

Started with:

kind create cluster --config /usr/local/etc/kind.yaml --wait 5m

@kriswuollett
Author

I noticed that one of the workers had a different number of network settings than the others. As far as I could tell, it was not a copy-and-paste error on my part. Here are all of the containers, to match up with the inspect commands above:

# nerdctl ps
CONTAINER ID    IMAGE                                                                                             COMMAND                   CREATED         STATUS    PORTS                        NAMES
0c1ccb1a9660    docker.io/kindest/node@sha256:53df588e04085fd41ae12de0c3fe4c72f7013bba32a20e7325357a1ac94ba865    "/usr/local/bin/entr…"    13 hours ago    Up        127.0.0.1:34487->6443/tcp    nyc3-shared-control-plane
78cb379f8a11    docker.io/kindest/node@sha256:53df588e04085fd41ae12de0c3fe4c72f7013bba32a20e7325357a1ac94ba865    "/usr/local/bin/entr…"    13 hours ago    Up                                     nyc3-shared-worker
8cc8716a4db6    docker.io/kindest/node@sha256:53df588e04085fd41ae12de0c3fe4c72f7013bba32a20e7325357a1ac94ba865    "/usr/local/bin/entr…"    13 hours ago    Up                                     nyc3-shared-worker2
e3e427961af0    docker.io/kindest/node@sha256:53df588e04085fd41ae12de0c3fe4c72f7013bba32a20e7325357a1ac94ba865    "/usr/local/bin/entr…"    13 hours ago    Up                                     nyc3-shared-worker3

@aojea
Contributor

aojea commented Sep 13, 2024

Can you paste the entire inspect output without the format statement? I'm curious about what has changed to report so many IPs in a container.
cc @BenTheElder: the logic to parse the IPs is the same as in kind.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 12, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 11, 2025