
Support both Proxy and VIP mode load balancing #77

Open
danwinship opened this issue May 22, 2024 · 6 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@danwinship
Contributor

A big reason for having cloud-provider-kind is to be able to test the kube-proxy end of load balancing, but there is more code that needs to be tested in the IPMode: VIP case than in the IPMode: Proxy case that cpkind currently uses. So we should support VIP-mode load balancing as well.

(Presumably we'd use an annotation to select which type we wanted? Not sure how this would work in the e2e suite exactly... probably at first we'd have to have [Feature:CloudProviderKind] or something.)
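For context, the difference surfaces in the Service status the cloud provider publishes. Below is a minimal Go sketch (not cpkind's actual controller code) using the IPMode field that k8s.io/api/core/v1 gained in Kubernetes 1.29:

```go
package main

import v1 "k8s.io/api/core/v1"

// lbStatus builds the load-balancer status a provider would publish once
// an LB is provisioned. Illustrative only; cpkind's real code differs.
func lbStatus(ip string, mode v1.LoadBalancerIPMode) v1.LoadBalancerStatus {
	return v1.LoadBalancerStatus{
		Ingress: []v1.LoadBalancerIngress{{
			IP: ip,
			// LoadBalancerIPModeVIP: kube-proxy also programs rules for the
			// LB IP, so in-cluster traffic to it is short-circuited (the
			// extra code path this issue wants exercised).
			// LoadBalancerIPModeProxy: kube-proxy leaves the IP alone and
			// traffic always traverses the external load balancer.
			IPMode: &mode,
		}},
	}
}

func main() {
	_ = lbStatus("10.89.0.9", v1.LoadBalancerIPModeVIP)
}
```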

@aojea added the kind/feature label May 23, 2024
@BenTheElder
Member

> (Presumably we'd use an annotation to select which type we wanted? Not sure how this would work in the e2e suite exactly... probably at first we'd have to have [Feature:CloudProviderKind] or something.)

It sounds like the tests should be keying on [Feature: IPMode VIP] (which wouldn't be kind-specific?) OR the tests are generic to both and we should just run them twice, once with cloud-provider-kind --ipmode=vip and once with cloud-provider-kind --ipmode=proxy?

(TIL https://kubernetes.io/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/)

@aojea
Contributor

aojea commented May 23, 2024

yeah, tests should try to reflect functionality and features, not implementations

@danwinship
Contributor Author

> It sounds like the tests should be keying on [Feature: IPMode VIP]

kubernetes/enhancements#4632 talks about trying to figure out how to make LB e2e testing provider-agnostic. I wanted to avoid having per-subfeature [Feature]s because we'd end up needing a separate subfeature for basically every single LB test 🙁. (The proposed plan in the KEP is to have the e2e tests retroactively detect whether the LB supported the feature, and skip themselves if not.)
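As a rough sketch of that detect-and-skip idea (the supportsIPMode helper is illustrative, not from the KEP or the e2e framework):

```go
package e2esketch

import v1 "k8s.io/api/core/v1"

// supportsIPMode reports whether the provisioned load balancer published
// the requested IPMode; a test could skip itself when it returns false.
func supportsIPMode(svc *v1.Service, want v1.LoadBalancerIPMode) bool {
	ingress := svc.Status.LoadBalancer.Ingress
	if len(ingress) == 0 || ingress[0].IP == "" {
		return false // not provisioned yet, or hostname-only ingress
	}
	// A nil IPMode on an IP ingress means the legacy default, i.e. VIP
	// semantics.
	mode := v1.LoadBalancerIPModeVIP
	if ingress[0].IPMode != nil {
		mode = *ingress[0].IPMode
	}
	return mode == want
}
```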

> we should just run them twice, once with cloud-provider-kind --ipmode=vip and once with cloud-provider-kind --ipmode=proxy?

Yeah, that's probably the right approach.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Sep 1, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added lifecycle/rotten and removed lifecycle/stale labels Oct 1, 2024
@BenTheElder
Member

/lifecycle frozen

I don't think we want to stop tracking this, but it may be a bit before it's resolved.

@k8s-ci-robot added lifecycle/frozen and removed lifecycle/rotten labels Oct 2, 2024