Releases: Azure/AKS
Release 2025-09-21
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250921.
Announcements
- AKS Kubernetes version `1.31` standard support will be deprecated by November 1, 2025. Upgrade your clusters to the 1.32 community version or enable Long Term Support on 1.31 to remain on the same version. Refer to version support policy and upgrading a cluster for more information.
- The `InPlaceOrRecreate` feature gate for Vertical Pod Autoscaling 1.4.2 will be enabled with AKS Kubernetes version 1.34, allowing customers to use the `InPlaceOrRecreate` update mode in their VPA objects.
- The Vertical Pod Autoscaling components `vpa-recommender` and `vpa-updater` will be highly available with AKS Kubernetes version 1.34, with 2 replicas by default.
- Revision asm-1-24 of the Istio add-on has been deprecated. Please migrate to a supported revision following the Istio add-on upgrade guide.
- AKS Kubernetes version `1.34` is now available in preview. Refer to the 1.34 Release Notes and upgrading a cluster for more information.
- Starting on 30 November 2025, AKS will no longer support or provide security updates for Azure Linux 2.0. Migrate to a supported Azure Linux version by upgrading your node pools to a supported Kubernetes version or migrating to osSku AzureLinux3. For more information, see [Retirement] Azure Linux 2.0 node pools on AKS.
- Security patch information for Ubuntu 24.04 is available in AKS-Release-Tracker.
- Azure Kubernetes Service no longer supports the `--skip-gpu-driver-install` node pool tag to skip automatic driver installation. This tag can no longer be used at AKS node pool creation time to install custom GPU drivers or to use the GPU Operator. Instead, use the generally available `gpu-driver` API field to update your existing node pools or to create new GPU-enabled node pools that skip automatic GPU driver installation.
- AKS Automatic is generally available. Find the recording of the virtual launch event on YouTube.
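The retired tag's replacement can be sketched with the Azure CLI. This is a sketch, not the authoritative syntax: resource group, cluster, and pool names are hypothetical, and you should verify the exact flag name and supported values for your CLI version in the AKS GPU documentation.

```shell
# Create a GPU node pool that skips automatic driver installation using the
# GA gpu-driver field ("none" skips the install so you can bring your own
# drivers or use the NVIDIA GPU Operator).
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name gpunp \
  --node-count 1 \
  --node-vm-size Standard_NC6s_v3 \
  --gpu-driver none
```

The same field can be set when updating an existing GPU node pool rather than recreating it.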
- Availability Sets on AKS are being retired on September 30, 2025. Any attempt to create new Availability Sets will be blocked as of that date. Existing Availability Sets will remain functional after retirement but will be considered out of support. To migrate from Availability Sets, see the Availability Sets migration documentation.
- The Basic Load Balancer is being retired on AKS on September 30, 2025. Any attempt to create a new Basic tier load balancer will be blocked. Existing Basic load balancers will remain functional after retirement but will be considered out of support. See the Basic Load Balancer migration documentation for details on migrating to the Standard load balancer.
Release notes
Features
- API Server Vnet Integration is now available in East US region.
- AKS Node Problem Detector (NPD) conducts GPU health monitoring to enable automatic detection and reporting of issues impacting select GPU-enabled VM sizes, and is now generally available.
- Kubelet Serving Certificate Rotation (KSCR) is now enabled by default in Sovereign cloud regions. Existing node pools in these regions will have KSCR enabled by default when they perform their first upgrade to any kubernetes version 1.27 or greater. Kubelet serving certificate rotation allows AKS to utilize kubelet server TLS bootstrapping for both bootstrapping and rotating serving certificates signed by the Cluster CA. See documentation for detailed instructions.
- Node auto provisioning (NAP) now supports private clusters, and disk encryption sets. See NAP documentation for more information.
Bug Fixes
- Fixed an issue where KAITO workspace creation would fail on AKS Automatic because gpu-provisioner creates an agentPool. Non-node auto provisioning pools, such as agentPool, are now allowed to be added to AKS Automatic clusters.
- Fixed a bug where ETag was not returned in ManagedClusters or AgentPools responses in API versions 2024-09-01 or newer, even though the API specification said it would be.
Behavioral Changes
- Deployment Safeguards will stop enforcing readiness and liveness probes on the placeholder pods that Application Routing creates to mount synchronized secrets from Azure Key Vault.
- AKS Automatic system pool needs to have at least 3 availability zones, ephemeral OS disk, and Azure Linux OS.
- Starting with the 20250902-preview API, the `enableCustomCATrust` field is removed. This field is not required when using the GA feature and is only used by a deprecated version of the feature. When using Custom Certificate Authority, you no longer need to specify `enableCustomCATrust`; you can simply add certificates to your cluster by passing your text file to the `--custom-ca-trust-certificates` parameter. See documentation for detailed instructions.
- Starting September 2025, new AKS clusters that use the AKS-managed virtual network option will place cluster subnets into private subnets by default (`defaultOutboundAccess = false`), in alignment with egress best practices. This setting does not impact AKS-managed cluster traffic, which uses explicitly configured outbound paths. It may affect unsupported scenarios, such as deploying other resources (e.g., VMs) into the same subnet. Clusters using BYO VNets are unaffected by this change. In supported configurations, no action is required.
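The Custom Certificate Authority change above reduces to a single CLI parameter. A minimal sketch, with hypothetical cluster names; the expected format of the certificates file is described in the AKS Custom CA Trust documentation:

```shell
# ca-certs.txt contains the base64-encoded CA certificates the cluster
# should trust; no enableCustomCATrust field is needed with the GA feature.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --custom-ca-trust-certificates ca-certs.txt
```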
- For Pod Sandboxing, `kata-mshv-vm-isolation` will be replaced with `kata-vm-isolation`, and the `--workload-runtime` value used when creating a cluster will change from `KataMshvVmIsolation` to `KataVmIsolation`. Make sure you use the correct name when creating Pod Sandboxing clusters.
- Cluster Autoscaler will delete nodes that encounter provisioning errors or failures immediately, instead of waiting for the full max-node-provision-time defined in the cluster autoscaler profile. This change significantly reduces scale-up delays caused by failed node provisioning attempts.
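For the Pod Sandboxing rename above, a hedged sketch of creating an isolated node pool with the new enum (resource names are hypothetical, and the VM size must support nested virtualization; check the Pod Sandboxing docs for current prerequisites):

```shell
# Use the new KataVmIsolation value; KataMshvVmIsolation is being replaced.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name kata \
  --workload-runtime KataVmIsolation \
  --os-sku AzureLinux \
  --node-vm-size Standard_D4s_v3
```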
- In ingress-nginx managed via the application routing add-on, the metric `ingress_upstream_latency_seconds` has been removed following its deprecation upstream.
Component Updates
- Windows node images
- Server 2019 Gen1 – 17763.7792.250910
- Server 2022 Gen1/Gen2 – 20348.4171.250910
- Server 23H2 Gen1/Gen2 – 25398.1849.250910
- Server 2025 Gen1/Gen2 – 26100.6584.250910
- AKS Azure Linux v2 image has been updated to 202509.11.0.
- AKS Azure Linux v3 image has been updated to 202509.18.0.
- AKS Ubuntu 22.04 node image has been updated to 202509.11.0.
- AKS Ubuntu 24.04 node image has been updated to 202509.11.0.
- Azure File CSI driver has been upgraded to `v1.32.7` on AKS 1.32 and `v1.33.5` on AKS 1.33.
- Azure Policy addon has been upgraded to `v1.13.1` ...
Release 2025-08-29
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250829.
Announcements
- AKS Automatic is now generally available. AKS Automatic is based on three key pillars: production-ready by default, integrated best practices and safeguards, and code to Kubernetes in minutes. Sign up to watch the AKS Automatic Virtual Launch on September 16th from 8:00 AM - 12:00 PM (UTC-07:00).
- New Automatic cluster creation is only allowed in API Server Vnet Integration GA supported regions. Migrating from SKU: "Base" to SKU: "Automatic" is only allowed in API Server Vnet Integration GA supported regions. Operations on existing Automatic clusters will not be blocked even if the cluster is not in API Server Vnet Integration GA supported regions.
- AKS patch versions `1.33.3`, `1.32.7`, and `1.30.11` are now available. Refer to version support policy and upgrading a cluster for more information.
- Istio-based service mesh add-on is now compatible with AKS Long Term Support (LTS) for Istio revisions asm-1-25+ and AKS versions 1.28+. Please note that not every Istio revision will be compatible with every AKS LTS version. It is recommended to review the Istio add-on support policy for an overview of this feature's support.
- API Server Vnet Integration is now available in the following additional regions: centralus, austriaeast, chilecentral, denmarkeast, israelnorthwest, malaysiawest, southcentralus2, southeastus3, southeastus5, southwestus, and usgovtexas. For the latest list of supported regions, see the API Server VNet Integration documentation.
- 1.30 Kubernetes version is now officially End of Life. Please upgrade to 1.31 version. If you require 1.30 version, then switch to AKS Long Term Support (LTS).
- Security Patch tab under AKS-Release-Tracker now provides information for Azure Linux v3. This provides real time info on the security patch contents and timestamp of actual release.
Release notes
Features
- Azure CNI Overlay is now GA and compatible with Application Gateway for Containers and Application Gateway Ingress Controller. See AGC networking for details on Overlay compatibility.
- Advanced Container Networking Services: Layer 7 Policies reached General Availability.
- Disabling SSH on Windows node pools is now available.
- Ubuntu 24.04 CVM is now enabled by default for K8s version 1.34-1.38.
- OpenID Connect (OIDC) issuer is now enabled by default on new cluster creation for Kubernetes version 1.34 and above.
- Node Auto-provisioning enabled clusters can use planned maintenance for scheduling node image upgrades that adhere to `aksManagedNodeOSUpgradeSchedule`.
- When upgrading from kubenet to Azure CNI Overlay, customers can now specify a different pod CIDR using the `--pod-cidr` parameter. See Upgrade Azure CNI for more information.
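The kubenet-to-Overlay upgrade with a replacement pod CIDR can be sketched as follows (names are hypothetical; see the Upgrade Azure CNI documentation for prerequisites and caveats):

```shell
# Migrate the cluster to Azure CNI Overlay, supplying a new pod CIDR.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```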
- The migration CLI command to migrate from Availability Sets on AKS is now Generally Available. The feature is accessible in the Azure CLI v2.76.0 (August 2025). For more information on the migration tool, visit our Availability Sets migration documentation.
- The migration CLI command to migrate from the Basic Load Balancer on AKS is now Generally Available. The feature is accessible in the Azure CLI v2.76.0 (August 2025). For more information on the migration tool, visit our Basic Load Balancer migration documentation.
- Azure Linux 3.0 now supports the NVIDIA NC A100 GPU on AKS.
- AKS now supports a new OS SKU enum, `AzureLinux3`. This enum is now GA and supported in Kubernetes versions 1.28 to 1.36, using Azure CLI version 18.0.0b36 or later for preview and version 2.78.0 or later for GA. OS SKU `AzureLinux3` is recommended if you need to migrate to Azure Linux 3.0 without upgrading your Kubernetes version.
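A minimal sketch of the in-place migration this enum enables (resource names are hypothetical; confirm `--os-sku` support for node pool updates in your CLI version):

```shell
# Move an existing node pool to Azure Linux 3.0 without changing the
# Kubernetes version by updating its OS SKU.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --os-sku AzureLinux3
```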
Bug Fixes
- Fixed a bug where ETag was not returned in ManagedClusters or AgentPools responses in API versions 2024-09-01 or newer, even though the API specification said it would be.
- Fixed cluster autoscaler bug 7694 in kubernetes version 1.31+, where the "DeletionCandidateOfClusterAutoscaler" taint would persist on some of the remaining nodes after scale-down. This incorrect tainting prevented new pods from being scheduled on those nodes.
Behavioral Changes
- All AKS Automatic clusters, and AKS Standard clusters that enabled Deployment Safeguards via the safeguardsProfile, will now have a new `Microsoft.ContainerService/deploymentSafeguards` sub-resource created under `managedClusters`. See Use Deployment Safeguards for more information.
- Adding non-Node auto provisioning pools to AKS Automatic clusters is now disallowed. There is no effect on existing Automatic clusters that have non-Node auto provisioning pools.
- A new runtimeClassName, `kata-vm-isolation`, has been added for Pod Sandboxing in preparation for deprecating the old `kata-mshv-vm-isolation` name. Users can continue using the original name for the time being.
- Starting with Kubernetes version 1.34, all AKS Automatic clusters will include a new AKS-managed component named Cluster Health Monitor within the kube-system namespace. This component collects metrics related to the cluster's control plane and AKS-managed components, helping ensure these services operate as expected and improving overall observability.
Component Updates
- Windows node images
- Server 2019 Gen1 – 17763.7678.250823
- Server 2022 Gen1/Gen2 – 20348.4052.250823
- Server 23H2 Gen1/Gen2 – 25398.1791.250823
- Server 2025 Gen1/Gen2 – 26100.4946.250823
- AKS Azure Linux v2 image has been updated to 202508.20.0 (image list).
- AKS Azure Linux v3 image has been updated to 202508.20.0 (image list).
- AKS Ubuntu 22.04 node image has been updated to 202508.20.0 (image list).
- AKS Ubuntu 24.04 node image has been updated to 202508.20.0 (image list).
- Azure File CSI driver has been upgraded to `v1.33.4` on AKS 1.33, which includes performance improvements and bug fixes.
- Azure Disk CSI driver has been upgraded to `v1.33.4` on AKS 1.33, which includes performance improvements and bug fixes.
- NPM (Network Policy Manager) has been upgraded to `v1.6.33` to resolve multiple CVEs: CVE-2025-5702, CVE-2025-32988, ...
Release 2025-08-08
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250808.
Announcements
- Starting in September 2025, AKS will start rolling out a change to enable a managed clusters quota for all current and new AKS customers. This rollout is expected to take place between 1-30 September 2025. AKS quota is the maximum number of managed clusters (AKS clusters) that an Azure subscription can create per region. Once the managed clusters quota is released, customers will need both managed clusters quota and node quota (VM SKUs) to create an AKS cluster. Existing AKS customer subscriptions will be given a default limit at or above their current usage, depending on the available regional capacity. Existing subscriptions using AKS for the first time and new subscriptions will be given a default limit. Customers can view quota limits and usage and request additional quota in the Azure portal Quotas blade or by using the Quotas REST API. Before the rollout is complete, quota limits and usage may be visible in the Azure portal on the Quotas blade, and customers will be able to request quota; however, limits won’t be enforced in every region until 1 October 2025. More information on the default limits for new subscriptions is available in documentation here.
- AKS Kubernetes patch versions `1.33.2`, `1.32.6`, `1.31.10`, `1.30.13`, and `1.30.14` include a critical security fix for CVE-2025-4563, where nodes can bypass dynamic resource allocation authorization checks. This vulnerability affects the NodeRestriction admission controller when the DynamicResourceAllocation feature gate is enabled. Upgrade your clusters to these patched versions or above. Refer to version support policy and upgrading a cluster for more information.
- Kubernetes CIS benchmark results and recommendations have been updated to CIS Kubernetes V1.27 Benchmark v1.11.1. The results are applicable to AKS 1.29.x through AKS 1.32.x.
- AKS long term support now fully supports KEDA.
- Kubelet serving certificate rotation is now enabled in all public cloud regions. For more information on kubelet serving certificate rotation and disablement, refer to the documentation. Sovereign cloud rollout will begin on 18 August 2025. For rollout updates and questions, see AKS Github Issues.
Release notes
Features
- Istio-based service mesh add-on now:
  - Supports the annotation `service.beta.kubernetes.io/azure-disable-load-balancer-floating-ip` for Istio ingress gateways, allowing for Azure Load Balancer Floating IP configuration.
  - Permits use of the `defaultConfig.proxyHeaders` field in `MeshConfig` as an allowed but unsupported customization. For guidance, see the MeshConfig documentation and the Istio support policy.
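Assuming the add-on's default external gateway Service name and namespace (`aks-istio-ingressgateway-external` in `aks-istio-ingress`; verify the actual names in your cluster), the floating IP annotation could be applied as:

```shell
# Disable Azure Load Balancer floating IP on the Istio ingress gateway service.
kubectl annotate service aks-istio-ingressgateway-external \
  --namespace aks-istio-ingress \
  service.beta.kubernetes.io/azure-disable-load-balancer-floating-ip=true
```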
- Azure Monitor users can now disable the Retina agent from running on specific nodes. This agent collects node network metrics and disabling it on a node will remove the Retina agent and stop all node network metric generation. Review the documentation for more information.
- Availability zones are now available as part of the Machine Show/List API.
Preview Features
- You can create new Confidential Virtual Machine node pools using Ubuntu 24.04 (preview) or Azure Linux 3.0 (preview). The default OS SKU for `Ubuntu` will remain Ubuntu 20.04 until Kubernetes version 1.35. You can upgrade existing Ubuntu node pools to Ubuntu 24.04 (preview). Note that you cannot update existing node pools to use a Confidential VM size.
- Managed Namespaces is now available in preview with Azure RBAC enabled clusters. To get started, review the documentation.
- AKS Component Insights is now available in Preview. Component insights shows breaking changes and component version changes for upcoming minor version upgrades.
- AKS MCP Server is now in public preview.
- Agentic CLI for AKS is now in private preview. This experience focuses on enabling users to diagnose and resolve cluster issues using natural language. You can sign up at [aka.ms/aks/cli-agent/signup](https://aka.ms/aks/cli-agent/signup) for early access.
Bug Fixes
- Fixes an issue in the Istio-based service mesh add-on that was preventing simple TLS origination using system certificates. Addresses CVE-2025-46821 in `1.25.3`.
- Bring-your-own-CNI clusters don't utilize route tables. To optimize resource usage in such clusters, existing route tables will be deleted and no new ones will be created.
Behavior Changes
- To allow addons that require Microsoft Entra ID authentication to use workload identity while IMDS restriction is enabled, enabling the OIDC issuer is now required as well.
- For the Istio-based service mesh add-on for AKS, partial updates to serviceMeshProfile in the AKS managedClusters API now support empty revision lists. If no revisions are specified, the system uses the existing revision values instead of returning an error.
Component Updates
- Windows node images
- Server 2019 Gen1 – 17763.7558.250714
- Server 2022 Gen1/Gen2 – 20348.3932.250714
- Server 23H2 Gen1/Gen2 – 25398.1732.250714
- AKS Azure Linux v2 image has been updated to 202507.21.0 (image list).
- AKS Azure Linux v3 image has been updated to 202507.21.0 (image list).
- AKS Ubuntu 22.04 node image has been updated to 202507.21.0 (image list).
- AKS Ubuntu 24.04 node image has been updated to 202507.21.0 (image list).
- Container Insights has been upgraded to `3.1.28`, which includes performance improvements and bug fixes.
- Azure Disk CSI driver has been upgraded to `v1.32.9` and `v1.33.3` on AKS 1.32 and 1.33 respectively.
- Retina Basic agent images have been updated to `v1.0.0-rc1`, addressing security vulnerability GHSA-fv92-fjc5-jj9h.
- Node Auto Provisioning (NAP) has been updated to Karpenter release `1.6.1` with improvements and bug fixes.
- Azure Monitor managed service for Prometheus addon is updated to the latest release, 07-24-2025.
- Istio-based service mesh add-on has been updated with patch releases `1.25.3` and `1.26.2` for Istio-based service mesh revisions asm-1-25 and asm-1-26. To adopt patch updates, restart workloads to trigger sidecar re-injection of the new istio-proxy version.
- Cloud Controller Manager image versions updated to `v1.33.2`, `v1.32.7`, ...
Release 2025-07-20
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250720.
Announcements
- Kubernetes 1.27 LTS version and 1.30 community version are going out of support by July 30th. Please upgrade to a supported version; refer to the AKS release calendar for more information.
- AKS Kubernetes version 1.33 is now compatible with Long-Term Support (LTS), in line with the policy that all supported Kubernetes versions are eligible for LTS on AKS.
- The asm-1-23 revision for the Istio add-on has been deprecated. Kindly upgrade your service mesh to a supported version following the AKS Istio upgrade guide.
- Virtual Machines (VMs) node pools are now enabled by default when creating a new node pool. Previously Virtual Machine Scale Sets (VMSS) were the default node pool type when creating a node pool in AKS. To learn more about VMs, an AKS-optimized node pool type, visit our documentation.
- WASI Node Pool has been retired. If you'd like to run WebAssembly (WASM) workloads, you can deploy SpinKube to Azure Kubernetes Service (AKS) from Azure Marketplace.
Release notes
Features
- Application routing add-on now supports configuration of SSL passthrough, custom logging format, and load balancer IP ranges. Review the configuration of NGINX ingress controller documentation for more information.
- SecurityPatch Node OS upgrade channel is now supported for all network isolated clusters.
- API server VNet integration is now Generally Available (GA) in additional regions: East Asia, Southeast Asia, Switzerland North, Brazil South, Central India, Germany West Central, and more GA regions. For the complete list of supported regions and any capacity limitations, see the API Server VNet Integration documentation.
- Kubelet Service Certificate Rotation will begin rollout to all remaining public regions, starting on 23 July 2025. Rollout is expected to be completed in 10 days. Note: This is an estimate and is subject to change. See GitHub issue for regional updates. Existing node pools will have kubelet serving certificate rotation enabled by default when they perform their first upgrade to any kubernetes version 1.27 or greater. New node pools on kubernetes version 1.27 or greater will have kubelet serving certificate rotation enabled by default. For more information on kubelet serving certificate rotation and disablement, see https://aka.ms/aks/kubelet-serving-certificate-rotation.
- Kubernetes Event-Driven Autoscaling (KEDA) is now supported in LTS.
- Static Block allocation mode for Azure CNI Networking is now Generally Available.
- Node auto provisioning is now Generally Available (GA) in all public cloud. To learn more, visit our node auto provisioning documentation.
- On 30 September 2025, Availability Sets (VMAS) will be retired on AKS. To migrate from Availability Sets and the Basic Load Balancer in one step, visit our Availability Sets documentation. See our Availability Sets to Virtual Machines migration Github Issue for updates.
- On 30 September 2025, the Basic Load Balancer will be retired. To migrate from the Basic Load Balancer in one step, visit our Basic Load Balancer retirement on AKS documentation. See our Basic Load Balancer migration on AKS Github Issue for updates.
Preview Features
- Azure Virtual Network Verifier is now available in Azure Portal (Node pools blade) for troubleshooting outbound connectivity issues in your AKS cluster.
- Encryption in transit is now available for the Azure File CSI driver, starting from AKS version 1.33.
- Node auto provisioning metrics are now available through Azure Monitor managed service for Prometheus. To learn more, visit our node auto provisioning documentation.
- Disable HTTP Proxy is now available in Preview.
Bug Fixes
- Fixed issue where AKS evicted pods that had already been manually relocated, causing upgrade failures. This fix adds a node consistency check to ensure the pod is still on the original node before retrying eviction.
Behavior Changes
- The delete-machines API will only delete machines from the system nodepool if the system addon PDBs are respected.
- AKS will now reject invalid OsSku enums during cluster creation, node pool creation, and node pool update. Previously AKS would default to `Ubuntu`. An unspecified OsSku with OsType `Linux` will still default to `Ubuntu`. For more information on supported OsSku options, see the documentation for the Azure CLI and the AKS API.
- Application routing component Pods are now annotated with `kubernetes.azure.com/set-kube-service-host-fqdn`, which injects the API server's domain name into the pod instead of the cluster IP to enable communication with the API server. This is useful when cluster egress goes through a layer 7 firewall.
- Advanced Container Networking Services (ACNS) pods now run with priorityClassName: system-node-critical, preventing eviction under node resource pressure and improving cluster security posture.
- Added node anti-affinity for FIPS-enabled nodes for retina-agent when pod-level metrics are enabled.
Component Updates
- Windows node images
- Server 2019 Gen1 – 17763.7558.250714
- Server 2022 Gen1/Gen2 – 20348.3932.250714
- Server 23H2 Gen1/Gen2 – 25398.1732.250714
- AKS Azure Linux v2 image has been updated to 202507.15.0.
- AKS Azure Linux v3 image has been updated to 202507.15.0.
- AKS Ubuntu 22.04 node image has been updated to 202507.15.0.
- AKS Ubuntu 24.04 node image has been updated to 202507.15.0.
- Application Insights addon image is updated to 1.0.0-beta.7 to expose container port 4000 for scraping Prometheus metrics.
- Application routing operator is updated to v0.2.7 for all supported Kubernetes versions.
- Azure Network Policy Manager (NPM) image version is updated to v1.6.29 to resolve iptables-legacy command issues and bump Ubuntu to 24.04 with CVE fixes.
- Azure Disk CSI driver versions are upgraded to v1.31.11, v1.32.8, v1.33.2 on AKS versions 1.31, 1.32, 1.33 respectively.
- Cloud Controller Manager has been upgraded to v1.33.1, v1.32.6, v1.31.7 and v1.30.13...
Release 2025-06-17
Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250617
.
Announcements
- Kubernetes 1.27 LTS version and 1.30 community version are going out of support by July 30th. Please upgrade to a supported version; refer to the AKS release calendar for more information.
- Customers using Azure Linux 2.0 should migrate to Azure Linux 3.0 before November 2025. For details on how to migrate from Azure Linux 2.0 to Azure Linux 3.0, see this doc. AKS is currently working on a feature to allow for migrations between Azure Linux 2.0 and Azure Linux 3.0 through a node pool update command. For updates on feature progress and availability, see Github issue.
- Starting in June 2025, AKS clusters with version >= 1.28 and using Azure Linux 2.0 can be opted into Long Term Support. See blog for more information.
- Starting in July 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
- Ubuntu 18.04 is no longer supported on AKS. AKS will no longer create new node images or provide security updates for Ubuntu 18.04 nodes. Existing node images will be deleted by 17 July 2025. Scaling operations will fail. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
- Teleport (preview) on AKS will be retired on 15 July 2025, please migrate to Artifact Streaming (preview) on AKS or update your node pools to set --aks-custom-headers EnableACRTeleport=false. Azure Container Registry has removed the Teleport API meaning that any nodes with Teleport enabled are pulling images from Azure Container Registry as any other AKS node. After 15 July 2025, any node pools with Teleport (preview) enabled may experience breakage and node provisioning failures. For more information, see aka.ms/aks/teleport-retirement.
- Azure Kubernetes Service will no longer support the `--skip-gpu-driver-install` node pool tag to skip automatic driver installation. Starting on August 14 2025, you will no longer be able to use this node pool tag at AKS node pool creation time to install custom GPU drivers or use the GPU Operator. Instead, use the generally available `gpu-driver` API field to update your existing node pools or create new GPU-enabled node pools to skip automatic GPU driver installation.
Release notes
Preview Features
- Azure Monitor Application Insights for Azure Kubernetes Service (AKS) workloads is now available in preview.
- Ubuntu 24.04 is now available in public preview on Kubernetes 1.32+, with containerd 2.0 enabled by default. You can create new Ubuntu 24.04 node pools or update existing Linux node pools to Ubuntu 24.04. Use the "Ubuntu2404" OS SKU enum after registering the preview flag "Ubuntu2404Preview". If you encounter any issues, you can roll back to Ubuntu 22.04 using the new "Ubuntu2204" OS SKU enum, or roll back using the "Ubuntu" OS SKU enum. For more information, see upgrading your OS version.
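A sketch of the opt-in flow described above (cluster and pool names are hypothetical; the aks-preview extension is assumed to be required for preview OS SKUs):

```shell
# Register the preview feature, then move an existing pool to Ubuntu 24.04.
az feature register --namespace Microsoft.ContainerService --name Ubuntu2404Preview
az extension add --name aks-preview
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --os-sku Ubuntu2404
```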
- Cost optimized add-on scaling is now available in preview. This feature allows you to autoscale supported addons or customize the resource's default CPU/ memory requests and limits to improve cost savings.
Features
- AKS version 1.33 is now generally available. Please check the AKS Release tracker for when your region will receive the GA update.
- AKS patch versions 1.32.5, 1.31.9 are now available. Refer to version support policy and upgrading a cluster for more information.
- API Server VNet Integration is now available; see the documentation for the most up-to-date list of regions where this feature has been rolled out.
- Kubelet Service Certificate Rotation has now been rolled out to East US and UK South. Existing node pools will have kubelet serving certificate rotation enabled by default when they perform their first upgrade to any kubernetes version 1.27 or greater. New node pools on kubernetes version 1.27 or greater will have kubelet serving certificate rotation enabled by default. For more information on kubelet serving certificate rotation and disablement, see certificate rotation in Azure Kubernetes Service.
- The MaxblockedNodes property is rolling out to all regions. It lets cluster operators cap the number of nodes that can be blocked by PDB-blocked eviction failures while the upgrade continues forward. Read more.
Bug Fixes
- Fixed a race condition with streams sharing data between Cilium agent and ACNS security agent.
- Fixed Azure Policy addon Gatekeeper regression causing crash loop on clusters with Kubernetes versions < 1.27.
- Resolved an issue where node pool scaling failed with customized kubelet configuration. Without this fix, node pools using CustomKubeletConfigs could not be scaled, and encountered an error message stating that the CustomKubeletConfig or CustomLinuxOSConfig cannot be changed for the scaling operation.
- Fixed an issue where updating node pools with the exclude label did not properly update the Load Balancer backend pool.
- Resolved a problem where upgrading a kubenet or node subnet cluster with AGIC enabled to Azure CNI Overlay could cause connectivity issues to services exposed via the Ingress App Gateway public IP.
- Fixed a bug where clusters with Node Auto Provisioning enabled could intermittently get an error about "multiple identities configured" and be unable to authenticate with Azure.
- Fixed an issue to ensure the VMs in a specific cloud are compatible with the latest Windows 550 GRID driver.
Behavior Changes
- AKS now allows daily schedules for the auto upgrade configuration.
- Static Egress Gateway memory limits increased from 500Mi to 3000Mi reducing the risk of memory-related restarts under load.
- The GPU provisioner component of KAITO has now been moved to the AKS control plane when the KAITO add-on is used. The OSS installation will still require this component to run on the kubernetes nodes.
- Azure Monitor managed service for Prometheus updates the max shards from 12 to 24, ensuring enhanced scaling capabilities.
- `linuxutil` plugin is enabled again for Retina Basic and ACNS.
- Node Auto-Provisioning (NAP) now requires Kubernetes RBAC to be enabled, because NAP relies on secure and scoped access to Kubernetes resources to provision nodes based on pending pod resource requests. Kubernetes RBAC is enabled by default. For more information, see RBAC for Kubernetes.
- Deployment Safeguards no longer requires Azure Policy permissions. Cluster admins will have the ability to turn on and di...
Release 2025-05-19
Monitor the release status by region at AKS-Release-Tracker. This release is titled `v20250519`.
Announcements
- Customers using Azure Linux 2.0 should migrate to Azure Linux 3.0 before November 2025. For details on how to migrate from Azure Linux 2.0 to Azure Linux 3.0, see this doc. AKS is currently working on a feature to allow for migrations between Azure Linux 2.0 and Azure Linux 3.0 through a node pool update command. For updates on feature progress and availability, see Github issue.
- Starting in June 2025, AKS clusters with version >= 1.28 and using Azure Linux 2.0 can be opted into Long Term Support.
- Starting in June 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
- Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
- Teleport (preview) on AKS will be retired on 15 July 2025, please migrate to Artifact Streaming (preview) on AKS or update your node pools to set --aks-custom-headers EnableACRTeleport=false. Azure Container Registry has removed the Teleport API meaning that any nodes with Teleport enabled are pulling images from Azure Container Registry as any other AKS node. After 15 July 2025, any node pools with Teleport (preview) enabled may experience breakage and node provisioning failures. For more information, see aka.ms/aks/teleport-retirement.
- Azure Kubernetes Service will no longer support the --skip-gpu-driver-install node pool tag to skip automatic driver installation. Starting on August 14 2025, you will no longer be able to use this node pool tag at AKS node pool creation time to install custom GPU drivers or use the GPU Operator. Alternatively, you should use the generally available gpu-driver API field to update your existing node pools or create new GPU-enabled node pools to skip automatic GPU driver installation.
Release Notes
- Preview Features
- For clusters with Layer 7 policies and Advanced Container Networking Services enabled, a new metric `proxy_datapath_update_timeout_total` has been introduced (disabled by default).
- Features
- Kubernetes 1.31 and 1.32 are now designated as Long-Term Support (LTS) versions.
- Kubernetes 1.33 is available in Preview. A full matrix of supported add-ons and components is published at the AKS versions page.
- AKS now allows upgrading from Azure CNI NodeSubnet to Azure CNI NodeSubnet with Cilium dataplane, and from Azure CNI NodeSubnet with Cilium dataplane to Azure CNI Overlay with Cilium dataplane.
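A rough Azure CLI sketch of the two upgrade paths above, with placeholder resource names (verify the exact flags with `az aks update --help`; these operations are one-way):

```shell
# Sketch 1: move an Azure CNI Node Subnet cluster onto the Cilium dataplane.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-dataplane cilium

# Sketch 2: move a Node Subnet + Cilium cluster to Azure CNI Overlay + Cilium.
# The pod CIDR is a placeholder and must not overlap existing subnets.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```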
- Bug Fixes
- Fixed failures triggered by duplicate tag keys that differed only by character case.
- Behavior Changes
- Static egress gateway memory limits increased from 128Mi to 500Mi for greater stability.
- Memory limit for the Azure Monitor Container Insights container `ama-logs` increased from `750Mi` to `1Gi`.
- AKS nodes now use Azure Container Registry (ACR)-scoped Entra ID tokens for kubelet authentication when pulling images from ACR. This enhancement replaces the legacy ARM-based Entra token model, aligning with modern security practices by scoping credentials directly to the registry and improving isolation and traceability.
- Timeouts due to FQDN IP updates are exported by the Cilium agent as `cilium_proxy_datapath_update_timeout_total` on Azure CNI Powered by Cilium.
- ARM requests made with an api-version >= `2025-03-01` to obtain the status of async AKS operations can now return RP-defined status values for ongoing operations. Requests made with an api-version < `2025-03-01` will only return an `InProgress` status for ongoing operations.
- Component Updates
- Updated Node Auto Provisioning to use karpenter-provider-azure release v1.4.0.
- Updated Azure Monitor Container Insights images to 3.1.27 for both Linux and Windows.
- Updated the Windows GPU device-plugin to `v0.0.19`, mitigating CVE-2025-22871.
- Windows node images:
  - Server 2019 Gen1 – `17763.7240.250416`
  - Server 2022 Gen1/Gen2 – `20348.3561.250416`
  - Server 23H2 Gen1/Gen2 – `25398.1551.250416`
- AKS Azure Linux v2 image has been updated to 202505.14.0.
- AKS Azure Linux v3 image has been updated to 202505.14.0.
- AKS Ubuntu 22.04 node image has been updated to 202505.14.0.
- AKS Ubuntu 24.04 node image has been updated to 202505.14.0.
- Azure Disk CSI driver updated to versions v1.30.12, v1.31.9, v1.32.5 on AKS versions 1.30, 1.31, and 1.32 respectively.
- Azure Blob CSI driver updated to versions v1.25.6 and v1.26.3 on AKS versions 1.31 and 1.32 respectively.
- Azure File CSI driver updated to v1.32.2 for AKS 1.32.
- Updated cloud-node-manager to v1.32.5.
- Updated cloud-controller-manager to v1.31.6.
- Updated acr-credential-provider to v1.29.15.
- Static egress gateway images updated to v0.0.21.
- Updated Azure Policy add-on image to v1.11. Gatekeeper updated to v3.19.1.
Release 2025-04-27
Monitor the release status by region at AKS-Release-Tracker. This release is titled `v20250427`.
Announcements
- AKS supported Kubernetes version release updates are now available in AKS Release Tracker. You can check current in-support Kubernetes versions and LTS versions for a specific region and track new patch version release progress with Release Tracker.
- Customers using Azure Linux 2.0 should migrate to Azure Linux 3.0 before November 2025. For details on how to migrate from Azure Linux 2.0 to Azure Linux 3.0, see this doc. AKS is currently working on a feature to allow for migrations between Azure Linux 2.0 and Azure Linux 3.0 through a node pool update command. For updates on feature progress and availability, see Github issue.
- AKS now requires a minimum of 2 GB of memory for the VM SKU of all user node pools. To learn more, see aka.ms/aks/restrictedSKUs.
- Starting on 5 May, 2025, WebAssembly System Interface (WASI) node pools will no longer be supported. You can no longer create WASI (preview) node pools, and existing WASI node pools will be unsupported.
- Starting in June 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
- As of 31 March 2025, AKS no longer allows new cluster creation with the Basic Load Balancer. On 30 September 2025, the Basic Load Balancer will be retired. We will be posting updates on migration paths to the Standard Load Balancer. See AKS Basic LB Migration Issue for updates on when a simplified upgrade path is available. Refer to Basic Load Balancer Deprecation Update for more information.
- The asm-1-22 revision for the Istio-based service mesh add-on has been deprecated. Migrate to a supported revision following the AKS Istio upgrade guide.
- Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
- Teleport (preview) on AKS will be retired on 15 July 2025, please migrate to Artifact Streaming (preview) on AKS or update your node pools to set --aks-custom-headers EnableACRTeleport=false. Azure Container Registry has removed the Teleport API meaning that any nodes with Teleport enabled are pulling images from Azure Container Registry as any other AKS node. After 15 July 2025, any node pools with Teleport (preview) enabled may experience breakage and node provisioning failures. For more information, see aka.ms/aks/teleport-retirement.
Release Notes
- Features:
- Network isolated cluster with outbound type `none` is now Generally Available.
- AKS Security Bulletin and AKS CVE Mitigation Status are now available to track Security and CVE mitigations.
- Preview Features:
- Kubernetes 1.33 version is now available for Preview, see Release tracker for when it hits your region.
- Kubernetes 1.31 and 1.32 are now recognized as Long-Term Support (LTS) releases in AKS, joining existing LTS versions 1.28 and 1.29. You can view when these LTS releases hit your region in real time via the Release tracker. For more information, see Long Term Support (LTS).
- Bug Fixes:
- Fixed an issue in Azure CNI Powered by Cilium to improve DNS request/response performance, especially in large-scale clusters using FQDN-based policies. Without this fix, if the user set a DNS request timeout below 2 seconds, high-scale scenarios could experience request drops due to duplicate request IDs.
- Fixed an issue where load balancer tags were not updated after a cluster tag update. Load balancer tags now correctly reflect the latest state.
- Fixed an issue in Cilium v1.17 where a deadlock caused server pods to be unable to start.
- Behavior Changes:
- `aksmanagedap` is blocked as a reserved name for AKS system components; you can no longer use it when creating an agent pool. See naming conventions for more information.
- `linuxutil` plugin is temporarily disabled for Retina Basic and ACNS, as it was causing memory leaks that led to Retina pods being OOMKilled.
- Advanced Container Networking Services (ACNS) configmaps (`cilium`, `retina`, `hubble`) now auto-format cluster names to satisfy Cilium 1.17 rules: ≤ 32 characters, lowercase alphanumeric characters and dashes, no leading/trailing dashes. Functionality is unaffected. This change is due to the strict enforcement in Cilium 1.17. See this link for details.
- The `defaultConfig.gatewayTopology` field is now included in the Istio add-on `MeshConfig` AllowList as an unsupported field. For more details, see the Istio MeshConfig documentation.
- Previously, you could not disable Node Auto Provisioning once enabled; now you can if certain criteria are met. See this document for more details.
- Disabling kube-proxy no longer requires the `KubeProxyConfigurationPreview` feature flag in bring-your-own (BYO) CNI scenarios.
- Kubelet Service Certificate Rotation will begin regional rollout, starting with westcentralus and eastasia by 16 May 2025. Existing node pools in these regions will have kubelet serving certificate rotation enabled by default when they perform their first upgrade to any Kubernetes version 1.27 or greater. New node pools in these regions on Kubernetes version 1.27 or greater will have kubelet serving certificate rotation enabled by default. For more information on kubelet serving certificate rotation, see aka.ms/aks/kubelet-serving-certificate-rotation.
- Component Updates:
- Fleet networking components updated to v0.39 from v0.38 to fix CVE.
- Workload Identity updated from v1.4.0 to v1.5.0
- App-routing-operator has been upgraded to v0.2.5 on all supported AKS versions.
- Cost-analysis-agent and cost-analysis-scraper images updated from v0.0.22 to v0.0.23 to fix CVE-2025-22871.
- Cloud-controller-manager updated to v1.32.4, v1.31.5, [v1.30.11](https://cloud-provider-azure.sigs.k8...
Release 2025-04-06
Monitor the release status by region at AKS-Release-Tracker. This release is titled `v20250406`.
Announcements
- Starting in May 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
- AKS Kubernetes version 1.32 roll out has been delayed and is now expected to reach all regions on or before the end of April. Please use the az-aks-get-versions command to accurately capture if Kubernetes version 1.32 is available in your region.
- Kubernetes version 1.28, 1.29 will become additional Long Term Support (LTS) versions in AKS, alongside existing LTS versions 1.27 and 1.30.
- AKS Kubernetes version 1.29 is going out of support in all regions on or before the end of April 2025.
- You can now switch non-LTS clusters on Kubernetes versions 1.25 and above, and within three versions of the current LTS versions, to LTS by switching their tier to Premium.
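As a minimal sketch of the tier switch, with placeholder resource names (verify the flags with `az aks update --help`):

```shell
# Sketch: opt a cluster into Long Term Support by moving it to the
# Premium tier and selecting the LTS support plan.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --tier premium \
  --k8s-support-plan AKSLongTermSupport
```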
- As of 31 March 2025, AKS no longer allows new cluster creation with the Basic Load Balancer. On 30 September 2025, the Basic Load Balancer will be retired. We will be posting updates on migration paths to the Standard Load Balancer. See AKS Basic LB Migration Issue for updates on when a simplified upgrade path is available. Refer to Basic Load Balancer Deprecation Update for more information.
- The asm-1-22 revision for the Istio-based service mesh add-on has been deprecated. Migrate to a supported revision following the AKS Istio upgrade guide.
- The pod security policy feature was retired on 1st August 2023 and removed from AKS versions 1.25 and higher. PodSecurityPolicy property will be officially removed from AKS API starting from 2025-03-01.
- Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
- Starting on 17 March 2027, AKS will no longer create new node images for Ubuntu 20.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to Kubernetes version 1.34+ by the retirement date.
- HTTP Application Routing (preview) has been retired as of March 3, 2025 and AKS will start to block new cluster creation with HTTP App routing enabled. Affected clusters must migrate to the generally available Application Routing add-on prior to that date.
- Customers with nodepools using Standard_NC24rsv3 VM sizes should resize or deallocate those VMs. Microsoft will deallocate remaining Standard_NC24rsv3 VMs in the coming weeks.
- Teleport (preview) on AKS will be retired on 15 July 2025, please migrate to Artifact Streaming (preview) on AKS or update your node pools to set --aks-custom-headers EnableACRTeleport=false. Azure Container Registry has removed the Teleport API meaning that any nodes with Teleport enabled are pulling images from Azure Container Registry as any other AKS node. After 15 July 2025, any node pools with Teleport (preview) enabled may experience breakage and node provisioning failures. For more information, see aka.ms/aks/teleport-retirement.
Release Notes
- Features:
- AKS Security Bulletin and AKS CVE Mitigation Status are now available to track Security and CVE mitigations
- Azure Portal will now show you Deployment Recommendations based on available capacity of virtual machines
- Microsoft Copilot in Azure, including AKS is now generally available
- AKS cost recommendations in Azure Advisor is Generally Available
- Kubernetes 1.32 is now Generally Available
- AKS Kubernetes patch versions 1.31.7, 1.30.11, and 1.29.15 are now available to resolve CVE-2025-0426.
- You can now enable Federal Information Processing Standard (FIPS) compliance when using Arm64 VM SKUs in Azure Linux 3.0 node pools in Kubernetes version 1.31+.
- Enabled Pod Sandboxing Confidential mounts for the Azure File CSI driver on AKS 1.32.
- The Azure Portal now offers Deployment Recommendations proactively if there are capacity constraints on the selected node pool sku, zone, and region when creating a new AKS cluster.
- Custom Certificate Authority is available as GA in the 2025-01-01 GA API. It will not be available in the CLI until May 2025. To use the GA feature in the CLI before release, you can use the `az rest` command to add custom certificates during cluster creation. For more information, see aka.ms/aks/custom-certificate-authority.
- Behavior Changes:
- Added node anti-affinity for FIPS-compliant nodes to prevent scheduling of retina-agent pods, stopping CrashLoopBackOff on FIPS-enabled nodes while a fix for Retina + FIPS is being rolled out.
- Increased `tofqdns-endpoint-max-ip-per-hostname` from 50 to 1000 and `tofqdns-min-ttl` from 0 to 3600 in Azure Cilium for better handling of large DNS responses and to reduce DNS query load.
- Konnectivity agent will now scale based on cluster node count.
- Starting on 15 April 2025, you will now be able to update your clusters to add an HTTP Proxy Configuration. Any update command that adds/changes an HTTP Proxy Configuration will now trigger an automatic reimage that will ensure all node pools in the cluster will have the same configuration. For more information, see aka.ms/aks/http-proxy.
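A rough sketch of the update, with placeholder names and a hypothetical config file (the JSON schema with `httpProxy`, `httpsProxy`, `noProxy`, and `trustedCa` fields is described at aka.ms/aks/http-proxy):

```shell
# Sketch: add an HTTP proxy configuration to an existing cluster.
# aks-proxy-config.json is a placeholder file containing the proxy
# settings; this update triggers an automatic reimage so all node
# pools converge on the same configuration.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --http-proxy-config aks-proxy-config.json
```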
- Starting with Kubernetes 1.33, the default Kubernetes scheduler is configured to use a `MaxSkew` value of 1 for `topology.kubernetes.io/zone`. For more details, see Ensure pods are spread across AZs.
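The scheduler default described above corresponds roughly to a pod-level constraint like the following, which you could also set explicitly on a workload. This is an illustrative manifest fragment; the label selector is a placeholder and the `whenUnsatisfiable` value shown is one possible choice, not necessarily the cluster default.

```yaml
# Illustrative pod spec fragment: spread replicas across availability
# zones, tolerating at most 1 replica of imbalance between zones.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: my-app   # placeholder; match your workload's labels
```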
- Component Updates:
- Cost Analysis add-on updated to v0.0.22 to fix CVE-2025-22866
- Updated ip-masq-agent to 0.1.15-2 to address CVE-2024-45338.
- Application routing add-on updated to v0.2.1-patch-8 for Kubernetes below 1.30 and to v0.2.3-patch-6 for Kubernetes 1.30+. This updates ingress-nginx to v1.11.5 to fix CVE-2025-1097, CVE-2025-1098, CVE-2025-1974, [CVE-2025-24513](https://nvd.nist.gov/vul...
Release 2025-03-16
Monitor the release status by region in the AKS Release Tracker. This release is titled `v20250316`.
Announcements
- Starting in April 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
- AKS Kubernetes version 1.32 roll out has been delayed and is now expected to reach all regions on or before the end of April. Please use the az-aks-get-versions command to accurately capture if Kubernetes version 1.32 is available in your region.
- AKS will be upgrading the KEDA addon to more recent KEDA versions. The AKS team will add KEDA 2.16 on AKS clusters with K8s versions >=1.32, KEDA 2.14 for Kubernetes v1.30 and v1.31. KEDA 2.15 and KEDA 2.14 will introduce multiple breaking changes. View the troubleshooting guide to learn how to mitigate these breaking changes.
- AKS Kubernetes version 1.28 will soon be available as a Long Term Support version.
- You can now switch non-LTS clusters on Kubernetes versions 1.25 and above, and within three versions of the current LTS versions, to LTS by switching their tier to Premium.
- On 31 March 2025, AKS will no longer allow new cluster creation with the Basic Load Balancer. On 30 September 2025, the Basic Load Balancer will be retired. We will be posting updates on migration paths to the Standard Load Balancer. See AKS Basic LB Migration Issue for updates on when a simplified upgrade path is available. Refer to Basic Load Balancer Deprecation Update for more information.
- The asm-1-22 revision for the Istio-based service mesh add-on has been deprecated. Migrate to a supported revision following the AKS Istio upgrade guide.
- The pod security policy feature was retired on 1st August 2023 and removed from AKS versions 1.25 and higher. PodSecurityPolicy property will be officially removed from AKS API starting from 2025-03-01.
- Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
- Starting on 17 March 2027, AKS will no longer create new node images for Ubuntu 20.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to Kubernetes version 1.34+ by the retirement date.
- Customers on retired NCv1, NCv2, NDv1, and NVv1 VM sizes should expect to have those node pools deallocated. Please move to supported VM sizes. You can find more information and instructions to do so here.
Release Notes
- Features:
- Application routing add-on support for configuring the default NGINX ingress controller visibility is now generally available in API 2025-02-01.
- Kubernetes events for monitoring node auto-repair actions are now available for your AKS cluster. You can ingest these events and create alerts following the same process as other Kubernetes events.
- AKS Kubernetes patch versions 1.29.12, 1.29.13, 1.30.8, 1.30.9, 1.31.4, and 1.31.5 are now available.
- Application Gateway Ingress Controller now supports Azure CNI overlay clusters.
- You can now upgrade AKS clusters with the Istio-based service mesh add-on enabled regardless of compatibility with the current mesh revision, allowing you to recover to a compatible and supported state. For more information, visit the Istio upgrade documentation.
- Istio-based service mesh add-on users can now customize the `externalTrafficPolicy` field in the Istio ingress gateway `Service` spec. AKS will no longer reconcile this field, preserving user-defined values.
- AKS now supports upgrading from Node Subnet to Node Subnet + Cilium and from Node Subnet + Cilium to Azure CNI Overlay + Cilium. For more information, please see our upgrade documentation.
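Since AKS no longer reconciles `externalTrafficPolicy`, a custom value can be applied directly to the gateway Service. The Service name and namespace below are typical for the AKS Istio add-on's external ingress gateway but are assumptions; verify them in your cluster first.

```shell
# Sketch: preserve client source IPs by switching the Istio external
# ingress gateway Service to the Local external traffic policy.
# Service name/namespace are assumptions; check with:
#   kubectl get svc -n aks-istio-ingress
kubectl patch service aks-istio-ingressgateway-external \
  -n aks-istio-ingress \
  --type merge \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```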
- Message of the day is now generally available.
- You can now enable Federal Information Processing Standard (FIPS) compliance when using Arm64 VM SKUs. This is only supported for Azure Linux 3.0 node pools on Kubernetes version 1.32+.
- You can now create Windows type Virtual Machine Node Pools. Note that existing Linux type VM node pools cannot be converted to Windows VM node pools. For more information, see Create a Virtual Machine node pool.
- Private clusters are now supported in Automated Deployments.
- Preview Features:
- You can use the `EnableCiliumNodeSubnet` feature in preview to create Cilium node subnet clusters using Azure CNI Powered by Cilium.
- Control plane metrics are now available through Azure Monitor platform metrics in preview to monitor critical control plane components such as API server and etcd.
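A rough sketch of enabling the preview feature, following the standard Azure preview-feature registration pattern (resource names are placeholders; any additional create-time flags the preview requires should be taken from the linked docs):

```shell
# Sketch: register the preview feature flag, then create a cluster
# using Azure CNI Powered by Cilium in node subnet mode.
az feature register \
  --namespace Microsoft.ContainerService \
  --name EnableCiliumNodeSubnet
az provider register --namespace Microsoft.ContainerService

az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-dataplane cilium
```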
- Bug Fixes:
- Fixed an issue with the retina-agent volume to restrict access to only the `/var/run/cilium` directory. Previously, retina-agent mounted `/var/run` from the host, which could overwrite data in that directory.
- Fixed an issue where SSHAccess was being reset to the default value `enabled` on partial PUT requests for `managedCluster.AgentPoolProfile.SecurityProfile` that did not specify SSHAccess.
- Fixed an issue where Node Auto Provisioning (Karpenter) failed to properly apply the `kubernetes.azure.com/azure-cni-overlay=true` label to nodes, which resulted in failures to assign pod IPs in some cases.
- Fixed an issue where `calico-typha` could be scheduled on virtual-kubelet due to overly permissive tolerations. Tolerations are now properly restricted to prevent incorrect scheduling. Check this GitHub issue for more details.
- Fixed an issue in Hubble Relay scheduling behavior to prevent deployment on cordoned nodes, allowing the cluster autoscaler to properly scale down nodes.
- Fixed an issue where pods could get stuck in `ContainerCreating` during Cilium + Node Subnet to Cilium + Overlay upgrades by ensuring the original network configuration is retained on existing nodes.
- Fixed an issue where the priority class wasn't set on the Custom CA Trust DaemonSet. This change ensures that the DaemonSet will not be evicted first in case of node pressure.
- Fixed an issue where policy enforcements through Azure Policy addon were interrupted during cluster scaling or upgrade operations due to a missing Pod Disruption Budget (PDB) for the Gatekee...
Release 2025-02-20
Monitor the release status by region at AKS-Release-Tracker. This release is titled `v20250220`.
Announcements
- AKS Kubernetes version 1.32 is rolling out soon and is expected to reach all regions on or before the end of March. Please use the az-aks-get-versions command to accurately capture if Kubernetes version 1.32 is available in your region.
- HTTP Application Routing (preview) is going to be retired on March 3, 2025 and AKS will start to block new cluster creation with HTTP Application Routing (preview) enabled. Affected clusters must migrate to the generally available Application Routing add-on prior to that date. Refer to the migration guide for more information.
- Using the GPU VHD image (preview) to provision GPU-enabled AKS nodes was retired on January 10, 2025 and AKS will block creation of new node pools with the GPU VHD image (preview). Follow the detailed steps to create GPU-enabled node pools using the alternative supported options.
- Extended the AKS security patch release notes in the release tracker to include a package comparison with the previous (current - 1) AKS Ubuntu base image.
Release Notes
- Features:
- Application routing add-on support for configuring the default NGINX ingress controller visibility is now generally available in API 2025-02-01.
- Kubernetes events for monitoring node auto-repair actions are now available for your AKS cluster. You can ingest these events and create alerts following the same process as other Kubernetes events.
- AKS Kubernetes patch versions 1.29.12, 1.29.13, 1.30.8, 1.30.9, 1.31.4, and 1.31.5 are now available.
- The default max surge value for node pool upgrade has been set to 10% for new and existing clusters on Kubernetes versions 1.32.0 and above.
- You can now upgrade from one LTS version to another LTS version on your AKS cluster. If you are running version 1.27 LTS you can directly upgrade to version 1.30 LTS.
- Preview Features:
- You can use the `EnableCiliumNodeSubnet` feature in preview to create Cilium node subnet clusters using Azure CNI Powered by Cilium.
- Control plane metrics are now available through Azure Monitor platform metrics in preview to monitor critical control plane components such as API server, etcd, scheduler, autoscaler, and controller-manager.
- Bug Fixes:
- Resolved an issue with Istio service mesh add-on where having multiple operations with the Lua EnvoyFilter (e.g. adding the Lua filter to call an external service and specifying the cluster referenced by Lua code) was not allowed.
- Fixed a bug in Azure CNI Pod Subnet Static Block Allocation mode with Cilium which caused incorrect iptables rules, leading to pod connectivity failures to DNS and IMDS.
- Resolved an issue in Azure CNI static block IP allocation mode, where the updated Azure Table client mishandled untyped numbers, causing static block node pools to be misidentified as dynamic and leading to operation failures.
- Fixed a bug in Azure Kubernetes Fleet Manager hub cluster resource groups (FL_ prefix resource groups) by truncating the name to avoid issues with long generated managed resource group names breaking the maximum length of resource groups.
- Behavior Changes:
- Horizontal Pod Autoscaling introduced for the `ama-metrics` replica set pod in the Azure Monitor managed service for Prometheus add-on. More details about the configuration of the Horizontal Pod Autoscaler can be found here.
- Starting with Kubernetes v1.32, node subnet mode will be installed via the `azure-cns` DaemonSet, allowing for faster security updates.
- By default, in new create operations on supported Kubernetes versions, if you have selected a VM SKU that supports Ephemeral OS disks but have not specified an OS disk size, AKS will provision an Ephemeral OS disk with a size that scales with the total temp storage of the VM SKU, as long as the temp storage is at least 128 GiB. If you want to utilize the temp storage of the VM SKU, you will need to specify the OS disk size during deployment; otherwise it will be consumed by default. See more information here.
- `vmSize` is no longer a required parameter in the AKS REST API. For AgentPools created through the SDK without a specified `vmSize`, AKS will find an appropriate VM SKU for your deployment based on quota and capacity. See more information under `properties.vmSize` here.
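To keep the VM SKU's temp storage available to workloads under the Ephemeral OS disk default described above, the OS disk size can be pinned explicitly at node pool creation. This is a sketch with placeholder names and sizes; verify flag names with `az aks nodepool add --help`.

```shell
# Sketch: pin the Ephemeral OS disk to 128 GiB so the remaining temp
# storage of the VM SKU is not consumed by the OS disk by default.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ephpool \
  --node-vm-size Standard_D8ds_v5 \
  --node-osdisk-type Ephemeral \
  --node-osdisk-size 128
```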
- Horizontal Pod Autoscaling introduced for
- Component Updates:
- Updated Windows CNS from v1.6.13 to v1.6.21 and Linux CNS from v1.6.18 to v1.6.21.
- Updated Windows CNI and Linux CNI from v1.6.18 to v1.6.21.
- Updated tigera operator to v1.36.3 and calico to v3.29.0.
- Node Auto Provisioning has been upgraded to use Karpenter v0.7.2.
- Updated LTS patch version 1.27.102 to address CVE-2024-9042 (command injection affecting Windows nodes).
- Updated the Retina basic image to v0.0.25 for Linux and Windows to address CVE-2025-23047 and CVE-2024-45338.
- Updated the cost-analysis-agent image from v0.0.20 to v0.0.21. Upgrades the following dependencies in cost-analysis-agent to fix CVE-2024-45341 and CVE-2024-45336:
- AKS Azure Linux v2 image has been updated to 202502.09.0.
- AKS Ubuntu 22.04 node image has been updated to 202502.09.0.
- AKS Ubuntu 24.04 node image has been updated to 202502.09.0.
- AKS Windows Server 2019 image has been updated to 17763.6775.250117.
- AKS Windows Server 2022 image has been updated to 20348.3091.250117.
- AKS Windows Server 23H2 image has been updated to 25398.1369.250117.