[pull] master from kubernetes:master#1924
Open
pull[bot] wants to merge 7579 commits into next-stack:master from kubernetes:master
Conversation
Announce AL2 deprecation in 1.35
gce: Simplify resource lister
Add unit test for GuessCloudForPath
Move localmutexes.go to GCE-specific cloudup
build k/k when running presubmit jobs in kubernetes repo
make some etcd variables configurable
Background: The ec2-master-scale-performance tests were failing due to DELETE events not meeting SLOs. Solution: After investigation, adding a DeleteCollectionWorkers flag would help resolve this by speeding up namespace cleanup. Signed-off-by: ronaldngounou <rngounou@amazon.com>
…orkers-flag Add DeleteCollectionWorkers field to KubeAPIServerConfig
add exponential backoffs to calling kops-controller
All upgrade jobs use the upgrade-ab scenario which is more flexible.
We force the use of nftables on rhel10, but kube-proxy was defaulting to iptables and failing to start on rhel10 because of a missing kernel module (nft_ct). The module is available in the GCE images, but not the AWS images.
Upgrade scenario cleanup
fix: set proxyMode to nftables on rhel10
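A minimal sketch of what this fix would look like in a kOps cluster spec, assuming the relevant field is `spec.kubeProxy.proxyMode` (the exact field name and supported values here are assumptions, not confirmed by this PR):

```yaml
# Hypothetical kOps cluster-spec fragment: force kube-proxy into nftables
# mode for RHEL 10 instance groups, since the iptables default fails there
# due to the missing nft_ct kernel module on some images.
spec:
  kubeProxy:
    proxyMode: nftables
```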
…deletion Background: The ec2-master-scale-performance tests were failing due to DELETE events not meeting SLOs. Solution: Increase --delete-collections-workers flag to 100 (arbitrary value) to accelerate namespace clean up. Follow up of PR #17928 Signed-off-by: ronaldngounou <rngounou@amazon.com>
Migrate karpenter and LBC scenario scripts to use --test=exec
Set MACAddressPolicy=none for AWS VPC CNI on AL2023
…rker-flag Increase delete-collections-workers flag value to speed up namespace deletion
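The flag bump described above could surface in the cluster spec roughly as follows; the camelCase field name is an assumption based on the KubeAPIServerConfig addition, and it maps to kube-apiserver's `--delete-collection-workers` flag:

```yaml
# Hypothetical kOps cluster-spec fragment: raise the number of workers the
# API server uses for DeleteCollection calls, speeding up namespace cleanup.
# 100 is the arbitrary value chosen in the PR above.
spec:
  kubeAPIServer:
    deleteCollectionWorkers: 100
```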
tests: dump /proc/modules from e2e tests
This feels a bit hacky, but this is apparently par for the course?
This avoids leftover namespaces consuming resources in subsequent tests.
This should mean they exit earlier, even if they ignore signals.
These require pulling a large image, so it might take more than 60s.
This should show us what resources are blocking namespace deletion.
…stration_pod_autoscaling [aiconformance]: test for schedulingOrchestration pod autoscaling
Enable E2E external CSI testing on GCP
…akes in CI Unclear if this is a real issue or just a flake in CI, but increasing the timeout to 10m should help avoid flakes. We can/should reduce it in another PR and try to root-cause.
Signed-off-by: Moshe Vayner <moshe@vayner.me>
…stration_clusterautoscaling [aiconformance] add test for schedulingOrchestration clusterAutoscaling
Replace cwd with go:embed for storage.testdriver manifests
chore(channels): bump alpha channel k8s and ubuntu AMI versions
Attempting to solve errors during `kubectl apply -f`:

```
Error from server (NotFound): error when creating "STDIN": namespaces "test-pods" not found
Error from server (NotFound): error when creating "STDIN": namespaces "test-pods" not found
```

It looks like kOps uses the prowjob's namespace to create the resources.

Signed-off-by: Arnaud Meukam <ameukam@gmail.com>
…e-fix [aiconformance]: set namespace explicitly for testdata
Signed-off-by: Ciprian Hacman <ciprian@hakman.dev>
chore: Bump Google Cloud deps in kubetest
Some errors occur where kubectl uses the prowjob's namespace:

```bash
+ kubectl wait --for=condition=complete job/test-gpu-pod --timeout=5m
Error from server (NotFound): namespaces "test-pods" not found
+ true
+ kubectl logs job/test-gpu-pod
error: error from server (NotFound): namespaces "test-pods" not found in namespace "test-pods"
+ echo 'Failed to get logs'
Failed to get logs
```

I suspect a mismatch in the kubeconfig generated by kOps when the cluster is created.

Signed-off-by: Arnaud Meukam <ameukam@gmail.com>
[aiconformance] Explicitly set the namespace for kubectl commands
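One way to express "explicitly set the namespace" as a helper: prepend `--namespace` to a kubectl argument list unless one is already present. This is an illustrative sketch, not the actual change, which presumably just adds `-n`/`--namespace` to the kubectl invocations in the test scripts.

```go
package main

import (
	"fmt"
	"strings"
)

// withNamespace returns kubectl args with an explicit namespace prepended,
// unless the caller already passed one. Illustrative helper only.
func withNamespace(ns string, args []string) []string {
	for _, a := range args {
		if a == "-n" || a == "--namespace" || strings.HasPrefix(a, "--namespace=") {
			return args
		}
	}
	return append([]string{"--namespace", ns}, args...)
}

func main() {
	fmt.Println(withNamespace("test-pods",
		[]string{"wait", "--for=condition=complete", "job/test-gpu-pod"}))
}
```

Pinning the namespace this way avoids inheriting whatever namespace the prowjob's kubeconfig happens to default to.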
During --down, getZones() picks random zones when no --create-args are passed, so the S3 client may be configured for the wrong region. DeleteBucket then hits the wrong regional endpoint and gets a 301 PermanentRedirect.

Resolve the bucket's actual region via GetBucketLocation before calling DeleteBucket, and pass a per-call options override to target the correct endpoint.

References:
- https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#ErrorCodeList
- https://docs.aws.amazon.com/AmazonS3/latest/userguide/Redirects.html

Written with the help of Opus 4.6

Signed-off-by: Arnaud Meukam <ameukam@gmail.com>
GetBucketLocation may return the legacy EU alias for eu-west-1.

Co-authored-by: Ciprian Hacman <ciprian@hakman.dev>
Fix S3 DeleteBucket 301 redirect during teardown
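The tricky part of the fix above is interpreting GetBucketLocation's LocationConstraint: S3 returns an empty value for us-east-1 and, per the follow-up commit, the legacy alias "EU" for eu-west-1. A sketch of that normalization (function name is illustrative; the real code then passes the resolved region as a per-call options override to DeleteBucket):

```go
package main

import "fmt"

// normalizeBucketRegion maps a GetBucketLocation LocationConstraint to a
// usable region name. S3 reports an empty constraint for us-east-1 and the
// legacy "EU" alias for eu-west-1; everything else is already a region name.
func normalizeBucketRegion(lc string) string {
	switch lc {
	case "":
		return "us-east-1"
	case "EU":
		return "eu-west-1"
	default:
		return lc
	}
}

func main() {
	fmt.Println(normalizeBucketRegion(""))   // us-east-1
	fmt.Println(normalizeBucketRegion("EU")) // eu-west-1
}
```

Without this mapping, configuring the S3 client with the raw constraint would still target a nonexistent or wrong endpoint and reproduce the 301 PermanentRedirect.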
Bump CAS default version and previous versions Signed-off-by: Arnaud Meukam <ameukam@gmail.com>
Signed-off-by: Arnaud Meukam <ameukam@gmail.com>
Update Cluster Autoscaler to 1.35.0
See Commits and Changes for more details.
Created by
pull[bot]
Can you help keep this open source service alive? 💖 Please sponsor : )