
Releases: Azure/AKS

Release 2025-07-20

24 Jul 17:31
8483c4e


Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250720.

Announcements

Release notes

  • Features

    • Application routing add-on now supports configuration of SSL passthrough, custom logging format, and load balancer IP ranges. Review the configuration of NGINX ingress controller documentation for more information.
    • SecurityPatch Node OS upgrade channel is now supported for all network isolated clusters.
    • API server VNet integration is now Generally Available (GA) in additional regions: East Asia, Southeast Asia, Switzerland North, Brazil South, Central India, and Germany West Central, among others. For the complete list of supported regions and any capacity limitations, see the API Server VNet Integration documentation.
    • Kubelet Service Certificate Rotation will begin rollout to all remaining public regions, starting on 23 July 2025. Rollout is expected to be completed in 10 days. Note: This is an estimate and is subject to change. See GitHub issue for regional updates. Existing node pools will have kubelet serving certificate rotation enabled by default when they perform their first upgrade to any kubernetes version 1.27 or greater. New node pools on kubernetes version 1.27 or greater will have kubelet serving certificate rotation enabled by default. For more information on kubelet serving certificate rotation and disablement, see https://aka.ms/aks/kubelet-serving-certificate-rotation.
    • Kubernetes Event-Driven Autoscaling (KEDA) is now supported in LTS.
    • Static Block allocation mode for Azure CNI Networking is now Generally Available.
  • Preview Features

  • Bug Fixes

    • Fixed issue where AKS evicted pods that had already been manually relocated, causing upgrade failures. This fix adds a node consistency check to ensure the pod is still on the original node before retrying eviction.
  • Behavior Changes

    • The delete-machines API will only delete machines from the system nodepool if the system addon PDBs are respected.
    • AKS will now reject invalid OsSku enums during cluster creation, node pool creation, and node pool update. Previously AKS would default to Ubuntu. Unspecified OsSku with OsType Linux will still default to Ubuntu. For more information on supported OsSku options, see documentation for Azure CLI and the AKS API.
    • Application routing component Pods are now annotated with kubernetes.azure.com/set-kube-service-host-fqdn to automatically have the API server's domain name injected into the pod instead of the cluster IP, to enable communication to the API server. This is useful in cases where the cluster egress is via a layer 7 firewall.
    • Container Insights agents now have a memory limit of 750Mi (down from 4Gi).
    • Advanced Container Networking Services (ACNS) pods now run with priorityClassName: system-node-critical, preventing eviction under node resource pressure and improving cluster security posture.
    • Added node anti-affinity for FIPS-enabled nodes for retina-agent when pod-level metrics are enabled.
  • Component Updates
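Because invalid OsSku enums are now rejected rather than silently defaulted to Ubuntu, it is safest to pass a supported value explicitly at node pool creation. A minimal Azure CLI sketch (resource group, cluster, and pool names are placeholders):

```shell
# Placeholder names; substitute your own resource group and cluster.
# Supported --os-sku values for Linux include Ubuntu and AzureLinux.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name linuxpool \
  --os-type Linux \
  --os-sku AzureLinux \
  --node-count 2
```

A misspelled value (e.g. `--os-sku ubuntux`) now fails validation up front; leaving `--os-sku` unset with `--os-type Linux` still defaults to Ubuntu.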


Release 2025-06-17

20 Jun 00:47
10f65de

Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250617.

Announcements

  • Kubernetes 1.27 LTS version and 1.30 community version are going out of support by July 30th. Please upgrade to a supported version; refer to the AKS release calendar for more information.
  • Customers using Azure Linux 2.0 should migrate to Azure Linux 3.0 before November 2025. For details on how to migrate from Azure Linux 2.0 to Azure Linux 3.0, see this doc. AKS is currently working on a feature to allow for migrations between Azure Linux 2.0 and Azure Linux 3.0 through a node pool update command. For updates on feature progress and availability, see Github issue.
  • Starting in June 2025, AKS clusters with version >= 1.28 and using Azure Linux 2.0 can be opted into Long Term Support. See blog for more information.
  • Starting in July 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
  • Ubuntu 18.04 is no longer supported on AKS. AKS will no longer create new node images or provide security updates for Ubuntu 18.04 nodes. Existing node images will be deleted by 17 July 2025. Scaling operations will fail. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
  • Teleport (preview) on AKS will be retired on 15 July 2025, please migrate to Artifact Streaming (preview) on AKS or update your node pools to set --aks-custom-headers EnableACRTeleport=false. Azure Container Registry has removed the Teleport API meaning that any nodes with Teleport enabled are pulling images from Azure Container Registry as any other AKS node. After 15 July 2025, any node pools with Teleport (preview) enabled may experience breakage and node provisioning failures. For more information, see aka.ms/aks/teleport-retirement.
  • Azure Kubernetes Service will no longer support the --skip-gpu-driver-install node pool tag to skip automatic driver installation. Starting on 14 August 2025, you will no longer be able to use this node pool tag at AKS node pool creation time to install custom GPU drivers or use the GPU Operator. Instead, you should use the generally available gpu-driver API field to update your existing node pools, or create new GPU-enabled node pools that skip automatic GPU driver installation.
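The Teleport opt-out and the gpu-driver replacement for --skip-gpu-driver-install can both be expressed with Azure CLI. A hedged sketch — names and the VM size are placeholders, and the exact spelling of the --gpu-driver value may vary by CLI version:

```shell
# Opt a node pool out of the retired Teleport preview.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name gpupool \
  --aks-custom-headers EnableACRTeleport=false

# Create a GPU node pool that skips automatic driver installation,
# replacing the retired --skip-gpu-driver-install node pool tag.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name gpupool2 \
  --node-vm-size Standard_NC6s_v3 \
  --gpu-driver none
```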

Release Notes

  • Preview Features

    • Azure Monitor Application Insights for Azure Kubernetes Service (AKS) workloads is now available in preview.
    • Ubuntu 24.04 is now available in public preview on Kubernetes 1.32+. ContainerD 2.0 is enabled by default. You can create new Ubuntu 24.04 node pools or update existing Linux node pools to Ubuntu 24.04. Use the "Ubuntu2404" os sku enum after registering the preview flag "Ubuntu2404Preview". If you encounter any issues, you can roll back to Ubuntu 22.04 using the new "Ubuntu2204" os sku enum, or roll back using the "Ubuntu" os sku enum. For more information, see upgrading your OS version.
    • Cost optimized add-on scaling is now available in preview. This feature allows you to autoscale supported addons or customize the resource's default CPU/ memory requests and limits to improve cost savings.
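The Ubuntu 24.04 preview flow above can be sketched with Azure CLI. Resource names are placeholders, and the aks-preview extension is assumed to be installed:

```shell
# Register the preview flag, then refresh the provider registration.
az feature register --namespace Microsoft.ContainerService --name Ubuntu2404Preview
az provider register --namespace Microsoft.ContainerService

# Create a node pool on Ubuntu 24.04 (placeholder names).
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ubuntu24 \
  --os-sku Ubuntu2404

# Roll back to Ubuntu 22.04 if issues are encountered.
az aks nodepool update \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name ubuntu24 \
  --os-sku Ubuntu2204
```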
  • Features

    • AKS version 1.33 is now generally available. Please check the AKS Release tracker for when your region will receive the GA update.
    • AKS patch versions 1.32.5 and 1.31.9 are now available. Refer to version support policy and upgrading a cluster for more information.
    • API Server VNet Integration is now available in additional regions; see the documentation for the most up-to-date list of regions where this feature has been rolled out.
    • Kubelet Service Certificate Rotation has now been rolled out to East US and UK South. Existing node pools will have kubelet serving certificate rotation enabled by default when they perform their first upgrade to any kubernetes version 1.27 or greater. New node pools on kubernetes version 1.27 or greater will have kubelet serving certificate rotation enabled by default. For more information on kubelet serving certificate rotation and disablement, see certificate rotation in Azure Kubernetes Service.
    • The MaxBlockedNodes property is rolling out to all regions. It lets cluster operators cap the number of nodes that can remain blocked by PDB-related eviction failures while an upgrade continues.
  • Bug Fixes

    • Fixed a race condition with streams sharing data between Cilium agent and ACNS security agent.
    • Fixed Azure Policy addon Gatekeeper regression causing crash loop on clusters with Kubernetes versions < 1.27.
    • Resolved an issue where node pool scaling failed with customized kubelet configuration. Without this fix, node pools using CustomKubeletConfigs could not be scaled, and encountered an error message stating that the CustomKubeletConfig or CustomLinuxOSConfig cannot be changed for the scaling operation.
    • Fixed an issue where updating node pools with the exclude label did not properly update the Load Balancer backend pool.
    • Resolved a problem where upgrading a Kubenet or NodeSubnet cluster with AGIC enabled to Azure CNI Overlay could cause connectivity issues to services exposed via the Ingress App Gateway public IP.
    • Fixed a bug where clusters with Node Auto Provisioning enabled could intermittently get an error about "multiple identities configured" and be unable to authenticate with Azure.
    • Fixed an issue to ensure that VMs in a specific cloud are compatible with the latest Windows 550 GRID driver.
  • Behavior Changes

    • AKS now allows daily schedules for the auto upgrade configuration.
    • Static Egress Gateway memory limits increased from 500Mi to 3000Mi, reducing the risk of memory-related restarts under load.
    • The GPU provisioner component of KAITO has now been moved to the AKS control plane when the KAITO add-on is used. The OSS installation will still require this component to run on the kubernetes nodes.
    • Azure Monitor managed service for Prometheus updates the max shards from 12 to 24, ensuring enhanced scaling capabilities.
    • linuxutil plugin is enabled again for Retina Basic and ACNS.
    • Node Auto-Provisioning (NAP) now requires Kubernetes RBAC to be enabled, because NAP relies on secure and scoped access to Kubernetes resources to provision nodes based on pending pod resource requests. Kubernetes RBAC is enabled by default. For more information, see RBAC for Kubernetes.
    • Deployment Safeguards no longer requires Azure Policy permissions. Cluster admins will have the ability to turn on and di...

Release 2025-05-19

24 May 19:04
adde443

Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250519.

Announcements

  • Customers using Azure Linux 2.0 should migrate to Azure Linux 3.0 before November 2025. For details on how to migrate from Azure Linux 2.0 to Azure Linux 3.0, see this doc. AKS is currently working on a feature to allow for migrations between Azure Linux 2.0 and Azure Linux 3.0 through a node pool update command. For updates on feature progress and availability, see Github issue.
  • Starting in June 2025, AKS clusters with version >= 1.28 and using Azure Linux 2.0 can be opted into Long Term Support.
  • Starting in June 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
  • Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
  • Teleport (preview) on AKS will be retired on 15 July 2025, please migrate to Artifact Streaming (preview) on AKS or update your node pools to set --aks-custom-headers EnableACRTeleport=false. Azure Container Registry has removed the Teleport API meaning that any nodes with Teleport enabled are pulling images from Azure Container Registry as any other AKS node. After 15 July 2025, any node pools with Teleport (preview) enabled may experience breakage and node provisioning failures. For more information, see aka.ms/aks/teleport-retirement.
  • Azure Kubernetes Service will no longer support the --skip-gpu-driver-install node pool tag to skip automatic driver installation. Starting on August 14 2025, you will no longer be able to use this node pool tag at AKS node pool creation time to install custom GPU drivers or use the GPU Operator. Alternatively, you should use the generally available gpu-driver API field to update your existing node pools or create new GPU-enabled node pools to skip automatic GPU driver installation.

Release Notes

  • Preview Features

  • Features

    • Kubernetes 1.31 and 1.32 are now designated as Long-Term Support (LTS) versions.
    • Kubernetes 1.33 is available in Preview. A full matrix of supported add-ons and components is published at the AKS versions page.
    • AKS now allows upgrading from Azure CNI NodeSubnet to Azure CNI NodeSubnet with Cilium dataplane, and from Azure CNI NodeSubnet with Cilium dataplane to Azure CNI Overlay with Cilium dataplane.
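The two new upgrade paths above map to `az aks update` calls. A sketch under the assumption of placeholder names; both operations trigger node reimaging, so plan for a maintenance window:

```shell
# Azure CNI NodeSubnet -> Azure CNI NodeSubnet with Cilium dataplane.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-dataplane cilium

# Azure CNI NodeSubnet with Cilium -> Azure CNI Overlay with Cilium.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin-mode overlay \
  --network-dataplane cilium
```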
  • Bug Fixes

    • Fixed failures triggered by duplicate tag keys that differed only by character case.
  • Behavior Changes

    • Static egress gateway memory limits increased from 128Mi to 500Mi for greater stability.
    • Memory for Azure Monitor Container Insights container ama-logs increased from 750Mi to 1Gi.
    • AKS nodes now use Azure Container Registry (ACR)-scoped Entra ID tokens for kubelet authentication when pulling images from ACR. This enhancement replaces the legacy ARM-based Entra token model, aligning with modern security practices by scoping credentials directly to the registry and improving isolation and traceability.
    • Timeouts due to FQDN IP updates are exported by Cilium Agent as cilium_proxy_datapath_update_timeout_total on Azure CNI Powered by Cilium.
    • ARM requests made with an api-version >= 2025-03-01 to obtain the status of async AKS operations can now return RP-defined status values for ongoing operations. Requests made with an api-version < 2025-03-01 will only return an InProgress status for ongoing operations.
  • Component Updates

Release 2025-04-27

02 May 23:24
95fe633

Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250427.

Announcements

  • AKS supported Kubernetes version release updates are now available in AKS Release Tracker. You can check the currently supported Kubernetes versions and LTS versions for a specific region, and track the release progress of new patch versions, with Release Tracker.
  • Customers using AzureLinux 2.0 should migrate to Azure Linux 3.0 before November 2025. For details on how to migrate from Azure Linux 2.0 to Azure Linux 3.0, see this doc. AKS is currently working on a feature to allow for migrations between Azure Linux 2.0 and Azure Linux 3.0 through a node pool update command. For updates on feature progress and availability, see Github issue.
  • AKS now requires a minimum of 2 GB of memory for the VM SKU of all user node pools. To learn more, see aka.ms/aks/restrictedSKUs.
  • Starting on 5 May, 2025, WebAssembly System Interface (WASI) node pools will no longer be supported. You can no longer create WASI (preview) node pools, and existing WASI node pools will be unsupported.
  • Starting in June 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
  • As of 31 March 2025, AKS no longer allows new cluster creation with the Basic Load Balancer. On 30 September 2025, the Basic Load Balancer will be retired. We will be posting updates on migration paths to the Standard Load Balancer. See AKS Basic LB Migration Issue for updates on when a simplified upgrade path is available. Refer to Basic Load Balancer Deprecation Update for more information.
  • The asm-1-22 revision for the Istio-based service mesh add-on has been deprecated. Migrate to a supported revision following the AKS Istio upgrade guide.
  • Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
  • Teleport (preview) on AKS will be retired on 15 July 2025, please migrate to Artifact Streaming (preview) on AKS or update your node pools to set --aks-custom-headers EnableACRTeleport=false. Azure Container Registry has removed the Teleport API meaning that any nodes with Teleport enabled are pulling images from Azure Container Registry as any other AKS node. After 15 July 2025, any node pools with Teleport (preview) enabled may experience breakage and node provisioning failures. For more information, see aka.ms/aks/teleport-retirement.

Release Notes

  • Features:

  • Preview Features:

    • Kubernetes version 1.33 is now available in preview; see the Release tracker for when it reaches your region.
    • Kubernetes 1.31 and 1.32 are now recognized as Long-Term Support (LTS) releases in AKS, joining existing LTS versions 1.28 and 1.29. You can view when these LTS releases hit your region in real time via the Release tracker. For more information, see Long Term Support (LTS).
  • Bug Fixes:

    • Fixed an issue in Azure CNI Powered by Cilium to improve DNS request/response performance, especially in large-scale clusters using FQDN-based policies. Without this fix, if the user sets a DNS request timeout below 2 seconds, high-scale scenarios may experience request drops due to duplicate request IDs.
    • Fixed an issue where load balancer tags were not updated after a cluster tag update. Load balancer tags now correctly reflect the latest state.
    • Fixed an issue in Cilium v1.17 where a deadlock prevented server pods from starting.
  • Behavior Changes:

    • aksmanagedap is now blocked as a reserved name for AKS system components; it can no longer be used when creating an agent pool. See the naming convention documentation for more information.
    • linuxutil plugin is temporarily disabled for Retina Basic and ACNS, as it was causing memory leaks that led to OOMKills of Retina pods.
    • Advanced Container Networking Services (ACNS) configmaps (cilium, retina, hubble) now auto-format cluster names to satisfy Cilium 1.17 rules: at most 32 characters, lowercase alphanumeric characters and dashes, and no leading or trailing dashes. Functionality is unaffected. This change is due to strict enforcement in Cilium 1.17. See this link for details.
    • The defaultConfig.gatewayTopology field is now included in the Istio add-on MeshConfig AllowList as an unsupported field. For more details, see the Istio MeshConfig documentation.
    • Previously, Node Auto Provisioning could not be disabled once enabled; now it can be, provided certain criteria are met. See this document for more details.
    • Disabling kube-proxy no longer requires the KubeProxyConfigurationPreview feature flag in bring-your-own (BYO) CNI scenarios.
    • Kubelet Service Certificate Rotation will begin regional rollout, starting with westcentralus and eastasia by 16 May 2025. Existing node pools in these regions will have kubelet serving certificate rotation enabled by default when they perform their first upgrade to any kubernetes version 1.27 or greater. New node pools in these regions on kubernetes version 1.27 or greater will have kubelet serving certificate rotation enabled by default. For more information on kubelet serving certificate rotation, see aka.ms/aks/kubelet-serving-certificate-rotation.
  • Component Updates:


Release 2025-04-06

10 Apr 07:29
156388f

Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250406.

Announcements

  • Starting in May 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
  • AKS Kubernetes version 1.32 rollout has been delayed and is now expected to reach all regions on or before the end of April. Please use the az aks get-versions command to check whether Kubernetes version 1.32 is available in your region.
  • Kubernetes version 1.28, 1.29 will become additional Long Term Support (LTS) versions in AKS, alongside existing LTS versions 1.27 and 1.30.
  • AKS Kubernetes version 1.29 is going out of support in all regions on or before end April, 2025.
  • You can now switch non-LTS clusters on Kubernetes version 1.25 or later, within 3 versions of the current LTS versions, to LTS by switching their tier to Premium.
  • As of 31 March 2025, AKS no longer allows new cluster creation with the Basic Load Balancer. On 30 September 2025, the Basic Load Balancer will be retired. We will be posting updates on migration paths to the Standard Load Balancer. See AKS Basic LB Migration Issue for updates on when a simplified upgrade path is available. Refer to Basic Load Balancer Deprecation Update for more information.
  • The asm-1-22 revision for the Istio-based service mesh add-on has been deprecated. Migrate to a supported revision following the AKS Istio upgrade guide.
  • The pod security policy feature was retired on 1st August 2023 and removed from AKS versions 1.25 and higher. PodSecurityPolicy property will be officially removed from AKS API starting from 2025-03-01.
  • Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
  • Starting on 17 March 2027, AKS will no longer create new node images for Ubuntu 20.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to Kubernetes version 1.34+ by the retirement date.
  • HTTP Application Routing (preview) was retired on March 3, 2025, and AKS will start to block new cluster creation with HTTP Application Routing enabled. Affected clusters must migrate to the generally available Application Routing add-on.
  • Customers with nodepools using Standard_NC24rsv3 VM sizes should resize or deallocate those VMs. Microsoft will deallocate remaining Standard_NC24rsv3 VMs in the coming weeks.
  • Teleport (preview) on AKS will be retired on 15 July 2025, please migrate to Artifact Streaming (preview) on AKS or update your node pools to set --aks-custom-headers EnableACRTeleport=false. Azure Container Registry has removed the Teleport API meaning that any nodes with Teleport enabled are pulling images from Azure Container Registry as any other AKS node. After 15 July 2025, any node pools with Teleport (preview) enabled may experience breakage and node provisioning failures. For more information, see aka.ms/aks/teleport-retirement.
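Two of the announcements above map to simple CLI steps. A sketch assuming placeholder names (switching to LTS requires the Premium tier together with the AKSLongTermSupport support plan):

```shell
# Check whether Kubernetes 1.32 has reached your region yet.
az aks get-versions --location eastus --output table

# Switch an eligible non-LTS cluster to LTS via the Premium tier.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --tier premium \
  --k8s-support-plan AKSLongTermSupport
```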

Release Notes

  • Features:

  • Behavior Changes:

    • Added node anti-affinity for FIPS-compliant nodes to prevent scheduling of retina-agent pods, stopping CrashLoopBackOff on FIPS-enabled nodes while a fix for Retina + FIPS is being rolled out.
    • Increased tofqdns-endpoint-max-ip-per-hostname from 50 to 1000 and tofqdns-min-ttl from 0 to 3600 in Azure Cilium for better handling of large DNS responses and reduced DNS query load.
    • Konnectivity agent will now scale based on cluster node count.
    • Starting on 15 April 2025, you will now be able to update your clusters to add an HTTP Proxy Configuration. Any update command that adds/changes an HTTP Proxy Configuration will now trigger an automatic reimage that will ensure all node pools in the cluster will have the same configuration. For more information, see aka.ms/aks/http-proxy.
    • Starting with Kubernetes 1.33, the default Kubernetes Scheduler is configured to use a MaxSkew value of 1 for topology.kubernetes.io/zone. For more details, see Ensure pods are spread across AZs.
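The HTTP Proxy Configuration update described above can be sketched as follows. The proxy endpoints and names are hypothetical placeholders; note that applying the update triggers an automatic reimage of all node pools:

```shell
# Hypothetical proxy endpoints; adjust to your environment.
cat > httpproxyconfig.json <<'EOF'
{
  "httpProxy": "http://proxy.example.com:3128/",
  "httpsProxy": "https://proxy.example.com:3129/",
  "noProxy": ["localhost", "127.0.0.1"]
}
EOF

# Updating the proxy config reimages every node pool so all nodes
# end up with the same configuration.
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --http-proxy-config httpproxyconfig.json
```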
  • Component Updates:


Release 2025-03-16

21 Mar 21:55
5e184c2


Monitor the release status by region in the AKS Release Tracker. This release is titled v20250316.

Announcements

  • Starting in April 2025, Azure Kubernetes Service will begin rolling out a change to enable quota for all current and new AKS customers. AKS quota will represent a limit of the maximum number of managed clusters that an Azure subscription can consume per region. Existing AKS customer subscriptions will be given a quota limit at or above their current usage, depending on region availability. Once quota is enabled, customers can view their available quota and request quota increases in the Quotas page in the Azure Portal or by using the Quotas REST API. For details on how to view and request quota increases via the Portal Quotas page, visit Azure Quotas. For details on how to view and request quota increases via the Quotas REST API, visit: Azure Quota REST API Reference. New AKS customer subscriptions will be given a default limit upon new subscription creation. More information on the default limits for new subscriptions is available in documentation here.
  • AKS Kubernetes version 1.32 rollout has been delayed and is now expected to reach all regions on or before the end of April. Please use the az aks get-versions command to check whether Kubernetes version 1.32 is available in your region.
  • AKS will be upgrading the KEDA addon to more recent KEDA versions. The AKS team will add KEDA 2.16 on AKS clusters with K8s versions >=1.32, KEDA 2.14 for Kubernetes v1.30 and v1.31. KEDA 2.15 and KEDA 2.14 will introduce multiple breaking changes. View the troubleshooting guide to learn how to mitigate these breaking changes.
  • AKS Kubernetes version 1.28 will soon be available as a Long Term Support version.
  • You can now switch non-LTS clusters on Kubernetes version 1.25 or later, within 3 versions of the current LTS versions, to LTS by switching their tier to Premium.
  • On 31 March 2025, AKS will no longer allow new cluster creation with the Basic Load Balancer. On 30 September 2025, the Basic Load Balancer will be retired. We will be posting updates on migration paths to the Standard Load Balancer. See AKS Basic LB Migration Issue for updates on when a simplified upgrade path is available. Refer to Basic Load Balancer Deprecation Update for more information.
  • The asm-1-22 revision for the Istio-based service mesh add-on has been deprecated. Migrate to a supported revision following the AKS Istio upgrade guide.
  • The pod security policy feature was retired on 1st August 2023 and removed from AKS versions 1.25 and higher. PodSecurityPolicy property will be officially removed from AKS API starting from 2025-03-01.
  • Starting on 17 June 2025, AKS will no longer create new node images for Ubuntu 18.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to a supported Kubernetes version.
  • Starting on 17 March 2027, AKS will no longer create new node images for Ubuntu 20.04 or provide security updates. Existing node images will be deleted. Your node pools will be unsupported and you will no longer be able to scale. To avoid service disruptions, scaling restrictions, and remain supported, please follow our instructions to upgrade to Kubernetes version 1.34+ by the retirement date.
  • Customers on retired NCv1, NCv2, NDv1, and NVv1 VM sizes should expect to have those node pools deallocated. Please move to supported VM sizes. You can find more information and instructions to do so here.

Release Notes

  • Features:

  • Preview Features:

    • You can use the EnableCiliumNodeSubnet feature in preview to create Cilium node subnet clusters using Azure CNI Powered by Cilium.
    • Control plane metrics are now available through Azure Monitor platform metrics in preview to monitor critical control plane components such as API server and etcd.
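The EnableCiliumNodeSubnet preview above can be exercised roughly as follows. Names are placeholders and the aks-preview extension is assumed; omitting --network-plugin-mode and a pod subnet yields node subnet IP allocation:

```shell
# Register the preview feature, then refresh the provider registration.
az feature register --namespace Microsoft.ContainerService --name EnableCiliumNodeSubnet
az provider register --namespace Microsoft.ContainerService

# Create a node subnet cluster on Azure CNI Powered by Cilium
# (placeholder names; no overlay mode means node subnet allocation).
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --network-plugin azure \
  --network-dataplane cilium
```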
  • Bug Fixes:

    • Fixed an issue with the retina-agent volume to restrict access to only the /var/run/cilium directory. Previously, retina-agent mounted /var/run from the host, which could potentially overwrite data in that directory.
    • Fixed an issue where SSHAccess was reset to its default value (enabled) on partial PUT requests for managedCluster.AgentPoolProfile.SecurityProfile that did not specify SSHAccess.
    • Fixed an issue where Node Auto Provisioning (Karpenter) failed to properly apply the kubernetes.azure.com/azure-cni-overlay=true label to nodes which resulted in failure to assign pod IPs in some cases.
    • Fixed an issue where calico-typha could be scheduled on virtual-kubelet due to overly permissive tolerations. Tolerations are now properly restricted to prevent incorrect scheduling. Check this GitHub Issue for more details.
    • Fixed an issue in Hubble-Relay scheduling behavior to prevent deployment on cordoned nodes, allowing the cluster autoscaler to properly scale down nodes.
    • Fixed an issue where pods could get stuck in ContainerCreating during Cilium+NodeSubnet to Cilium+Overlay upgrades by ensuring the original network configuration is retained on existing nodes.
    • Fixed an issue where a priority class was not set on the Custom CA Trust DaemonSet. This change ensures that the DaemonSet will not be evicted first under node pressure.
    • Fixed an issue where policy enforcements through Azure Policy addon were interrupted during cluster scaling or upgrade operations due to a missing Pod Disruption Budget (PDB) for the Gatekee...

Release 2025-02-20

25 Feb 03:17
7d89adb

Release 2025-02-20

Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250220.

Announcements

  • AKS Kubernetes version 1.32 is rolling out soon and is expected to reach all regions on or before the end of March. Please use the az aks get-versions command to confirm whether Kubernetes version 1.32 is available in your region.
  • HTTP Application Routing (preview) is going to be retired on March 3, 2025 and AKS will start to block new cluster creation with HTTP Application Routing (preview) enabled. Affected clusters must migrate to the generally available Application Routing add-on prior to that date. Refer to the migration guide for more information.
  • Using the GPU VHD image (preview) to provision GPU-enabled AKS nodes was retired on January 10, 2025 and AKS will block creation of new node pools with the GPU VHD image (preview). Follow the detailed steps to create GPU-enabled node pools using the alternative supported options.
  • Extended the AKS security patch release notes in the release tracker to include a package comparison with the previous (current - 1) AKS Ubuntu base image.
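
The version check mentioned above can be scripted; a minimal sketch (the region name is a placeholder, and the grep pattern assumes 1.32.x versions appear verbatim in the table output):

```shell
# List AKS Kubernetes versions available in a region (placeholder region).
az aks get-versions --location eastus --output table

# Check whether any 1.32 patch version has reached the region yet.
az aks get-versions --location eastus --output table | grep '1\.32'
```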

Release Notes

  • Features:

  • Preview Features:

    • You can use the EnableCiliumNodeSubnet feature in preview to create Cilium node subnet clusters using Azure CNI Powered by Cilium.
    • Control plane metrics are now available through Azure Monitor platform metrics in preview to monitor critical control plane components such as API server, etcd, scheduler, autoscaler, and controller-manager.
  • Bug Fixes:

    • Resolved an issue with Istio service mesh add-on where having multiple operations with the Lua EnvoyFilter (e.g. adding the Lua filter to call an external service and specifying the cluster referenced by Lua code) was not allowed.
    • Fixed a bug in Azure CNI Pod Subnet Static Block Allocation mode with Cilium which caused incorrect iptables rules, leading to pod connectivity failures to DNS and IMDS.
    • Resolved an issue in Azure CNI static block IP allocation mode, where the updated Azure Table client mishandled untyped numbers, causing static block node pools to be misidentified as dynamic and leading to operation failures.
    • Fixed a bug in Azure Kubernetes Fleet Manager hub cluster resource groups (FL_ prefix resource groups) by truncating the name to avoid issues with long generated managed resource group names breaking the maximum length of resource groups.
  • Behavior Changes:

    • Horizontal Pod Autoscaling has been introduced for the ama-metrics replicaset pod in the Azure Monitor managed service for Prometheus add-on. More details about configuring the Horizontal Pod Autoscaler can be found here.
    • Starting with Kubernetes v1.32, node subnet mode will be installed via the azure-cns DaemonSet, allowing for faster security updates.
    • By default, on new create operations with supported Kubernetes versions, if you select a VM SKU that supports Ephemeral OS disks but do not specify an OS disk size, AKS provisions an Ephemeral OS disk whose size scales with the total temp storage of the VM SKU, as long as the temp storage is at least 128 GiB. If you want to use the VM SKU's temp storage yourself, specify the OS disk size during deployment; otherwise it is consumed by the OS disk by default. See more information here.
    • vmSize is no longer a required parameter in the AKS REST API. For AgentPools created through the SDK without a specified vmSize, AKS will find an appropriate VM SKU for your deployment based on quota and capacity. See more information under properties.vmSize here.
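
A minimal sketch of pinning the OS disk size explicitly so the VM SKU's temp storage is not consumed by the Ephemeral OS disk (resource and pool names are placeholders; the SKU is just an example of one with Ephemeral OS support):

```shell
# Cap the Ephemeral OS disk at 128 GiB instead of letting it scale with
# the SKU's temp storage; the remainder stays available as temp storage.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name temppool \
  --node-vm-size Standard_D8ds_v5 \
  --node-osdisk-type Ephemeral \
  --node-osdisk-size 128
```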
  • Component Updates:

    • Updated Windows CNS from v1.6.13 to v1.6.21 and Linux CNS from v1.6.18 to v1.6.21.
    • Updated Windows CNI and Linux CNI from v1.6.18 to v1.6.21.
    • Updated tigera operator to v1.36.3 and calico to v3.29.0.
    • Node Auto Provisioning has been upgraded to use Karpenter v0.7.2.
    • Released LTS patch version 1.27.102 to address CVE-2024-9042, a command injection vulnerability affecting Windows nodes.
    • Updated the Retina basic image to v0.0.25 for Linux and Windows to address CVE-2025-23047 and CVE-2024-45338.
    • Updated the cost-analysis-agent image from v0.0.20 to v0.0.21. Upgrades the following dependencies in cost-analysis-agent to fix CVE-2024-45341 and CVE-2024-45336:
      • github.com/Azure/azure-sdk-for-go/sdk/azcore v1.15.0 to v1.17.0
      • github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.8.0 to v1.8.1
      • github.com/prometheus/common v0.60.0 to v0.62.0
      • github.com/samber/lo v1.47.0 to v1.49.1
      • github.com/stretchr/testify v1.9.0 to v1.10.0
    • AKS Azure Linux v2 image has been updated to 202502.09.0.
    • AKS Ubuntu 22.04 node image has been updated to 202502.09.0.
    • AKS Ubuntu 24.04 node image has been updated to 202502.09.0.
    • AKS Windows Server 2019 image has been updated to 17763.6775.250117.
    • AKS Windows Server 2022 image has been updated to 20348.3091.250117.
    • AKS Windows Server 23H2 image has been updated to 25398.1369.250117.

Release 2025-01-30

14 Feb 05:33
de0de55

Release 2025-01-30

Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250130.

Announcements

  • General support for AKS Kubernetes version 1.28 was deprecated on Jan 30, 2025. Upgrade your clusters to version 1.29 or later. Refer to version support policy and upgrading a cluster for more information.
  • Azure Kubernetes Service will no longer support the WebAssembly System Interface (WASI) nodepools (preview). Starting on May 5, 2025 you will no longer be able to create new WASI nodepools. If you'd like to run WebAssembly (WASM) workloads, you can deploy SpinKube to Azure Kubernetes Service (AKS) from Azure Marketplace. For more information on this retirement, see AKS GitHub.
  • The open-source project Bridge to Kubernetes will be retired on April 30, 2025. For more information, please see the Bridge to Kubernetes repository.
  • The HTTP Application Routing add-on (preview) is going to be retired on March 3, 2025. You will no longer be able to create clusters that enable the add-on. Migrate to the generally available Application Routing add-on now.

Release Notes

  • Features:

    • AKS Kubernetes patch versions 1.29.11, 1.30.7 and 1.31.3 are now available.
    • Security patch releases in the release tracker, starting with 20250115T000000Z, now include release notes for the release.
  • Preview Features:

    • You can now monitor your stateful workloads running on AKS with Azure Container Storage using Azure Monitor managed service for Prometheus in preview. You can use Azure Monitor managed service for Prometheus to collect Azure Container Storage metrics along with other Prometheus metrics from your AKS cluster. For more information, please see Enable monitoring for Azure Container Storage (https://learn.microsoft.com/azure/storage/container-storage/enable-monitoring?source=recommendations).
    • CNI validation for node autoprovisioner now allows all CNI configurations except for Calico and kubenet. See AKS CNI Overview for more information.
    • AKS Automatic SKU now supports using a custom virtual network.
    • When using NAP, custom subnets can be specified for node use via an update to the AKSNodeClass CRD which adds the vnetSubnetID property.
  • Behavior change:

    • Proper casing will be enforced on PUT of Microsoft.ContainerService/managedClusters/agentPools for the AgentPoolMode property. See this issue for more detail.
    • Removed Prometheus port and scrape annotations from Retina Linux and Windows DaemonSets to avoid double scraping of metrics.
    • The standard load balancer can now be customized to include port_* annotations referenced in the documentation. An additional annotation has been added for: external-dns.alpha.kubernetes.io/hostname. See this document for more information.
  • Bug Fix:

    • Fixed a bug where some AgentPools with "kubeletDiskType":"OS" were not validated.
    • Fixed a bug where creating a cluster with a private DNS zone could result in an InvalidTemplateDeployment error.
    • Fixed a race and potential deadlock condition when a non-Cilium cluster is updated to ACNS Cilium.
    • Added early validation on cluster creation when attempting to use 169.254.0.0/16 (link local) for pod or service CIDR blocks to prevent later run-time failures.
    • Fixed a breaking change between AppArmor and Cilium. Starting with Kubernetes 1.30 and Ubuntu 24.04, Cilium containers can fail with the error Init:CreateContainerError because AppArmor annotations are no longer supported. This change keeps the AppArmor annotations for Kubernetes versions below 1.30 and adds the new security context field for versions 1.30 and above. Related PR in upstream Cilium charts: cilium/cilium#32199.
    • Fixed a bug that prevented upgrade from starting if the PDB expectedPods count is less than the minAvailable count.
    • Fixed an error condition when AKS attempts to remove the taint disk.csi.azure.com/agent-not-ready=NoExecute on node startup. More details: kubernetes-sigs/azuredisk-csi-driver#2309
    • Addressed an issue where node subnet IPAM reported "Invoker Add failed with error: Failed to allocate pool" in the CNI logs; see the associated agentbaker release.
    • Added validation when a cluster migrates to CNI Overlay to block migration when there is a custom ip-masq-agent config in the kube-system namespace. This prevents loss of connectivity during migration. See the AKS documentation for more information.
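
Before attempting a migration to CNI Overlay, you can check for a custom ip-masq-agent config yourself; a sketch (the ConfigMap naming is an assumption based on the upstream ip-masq-agent convention and may differ on your cluster):

```shell
# Migration to CNI Overlay is blocked when a custom ip-masq-agent
# config is present in kube-system; list any matching ConfigMaps.
kubectl get configmap -n kube-system | grep -i ip-masq
```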
  • Component updates:

    • Updated Cilium v1.14 from v1.14.18-241220 to v1.14.18-250107 (v1.14.18-1) to include a fix for Cilium dual-stack upgrades. On upgrade, the Cilium config changes bpf-filter-priority from 1 to 2 but does not clean up the old filters at the old priority, which impacts connectivity. This patch fixes that bug; see the issue in the Cilium repo for more details: cilium/cilium#36172
    • Update Azure File CSI driver version to v1.29.10 on AKS 1.28
    • Update Azure File CSI driver version to v1.30.7 on AKS 1.29 and 1.30
    • Update Azure File CSI driver version to v1.31.3 on AKS 1.31
    • Update Azure Disk CSI driver to v1.29.12 on AKS 1.28, 1.29
    • Update Azure Disk CSI driver to v1.30.7 on AKS 1.30, 1.31
    • Update Azure Blob CSI driver to v1.23.10 on AKS 1.28, 1.29
    • Update Azure Blob CSI driver to v1.24.6 on AKS 1.30, 1.31
    • Update Workload Identity image version to v1.4.0
    • CNS/CNI updated to v1.6.18 which includes Cilium nodesubnet support
    • Added Multi-Instance GPU support for standard_nc40ads_h100_v5
    • Update the OMS image to v3.1.25-1
    • Update secret store driver to v1.4.7 and akv provider to v1.6.2.
    • Updates the Retina basic image to v0.0.23 on Linux and Windows: release notes
    • Update karpenter image version to 0.6.1-aks
    • Update Cilium v1.16 from v1.16.5-250108 to v1.16.5-250110 (v1.16.5-1) to include a fix for Cilium dual stack upgrades. This will fix cilium/cilium#36172. Cilium v1.16.5 also contains fix for CVE-2024-52529.
    • The following CVEs were patched in Cilium v1.14.15
    • Updated the cost-analysis-agent image from v0.0.19 to v0.0.20. Upgrades the following dependencies in cost-analysis-agent to fix CVE-2024-45337 and CVE-2024-45338:
      • golang.org/x/crypto v0.27.0 to v0.31.0
      • golang.org/x/net v0.29.0 to v0.33.0
      • golang.org/x/sys v0.25.0 to v0.28.0
        ...

Release 2025-01-06

18 Jan 02:11
1a9732d

Release 2025-01-06

Monitor the release status by region at AKS-Release-Tracker. This release is titled v20250106.

Announcements

Release Notes

Read more

Release 2024-10-25

05 Nov 21:39
1367a20

Release 2024-10-25

Monitor the release status by region at AKS-Release-Tracker. This release is titled v20241025.

Announcements

  • AKS version 1.28 reaches End of Life on January 15, 2025.
  • AKS will be upgrading the KEDA add-on to more recent KEDA versions. The AKS team has added KEDA 2.15 for AKS clusters on Kubernetes versions >= 1.32, and KEDA 2.14 for Kubernetes v1.30 and v1.31. KEDA 2.15 and KEDA 2.14 introduce multiple breaking changes. View the troubleshooting guide to learn how to mitigate these breaking changes.
  • AKS will no longer support the GPU image (preview) to provision GPU-enabled AKS nodes. Starting on Jan 10, 2025 you will no longer be able to create new GPU-enabled node pools with the GPU image. Alternative options that are supported today and recommended by AKS include the default experience with manual NVIDIA device plugin installation or the NVIDIA GPU Operator, detailed in AKS GPU node pool documentation.
  • Starting on January 1, 2025, invalid values sent to the Azure AKS API for the properties.mode field of AKS AgentPools will be rejected. Prior to this change, unknown modes were assumed to be User. The only valid values for this field are the (case-sensitive) strings: "User", "System", or "Gateway".
  • AKS will start to block new cluster creation with the Basic Load Balancer in January 2025. The Basic Load Balancer will be retired on September 30, 2025, and affected clusters must be migrated to the Standard Load Balancer prior to that date. Refer to the BLB deprecation announcement for more information.
  • As of November 30th, 2024, new AKS clusters created with Kubernetes versions 1.28 and 1.29 will no longer enable beta Kubernetes APIs. This matches the behavior of AKS 1.27 LTS and AKS 1.30+ clusters, which no longer enable beta APIs.
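
For the properties.mode change above, a sketch of creating a node pool with an explicitly cased mode value (resource names are placeholders):

```shell
# "System" must be cased exactly as shown; values such as "system" or
# "SYSTEM" will be rejected starting January 1, 2025.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name syspool \
  --mode System
```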

Release Notes

  • Features:

    • AKS patch versions 1.28.14, 1.29.9, 1.30.5 are now available. Refer to version support policy and upgrading a cluster for more information.
    • AKS version 1.31 is now generally available. Please check the release tracker for when your region will receive the GA update. Some regions may not receive this update until later in November.
    • The first official patch version of AKS LTS 1.27, 1.27.100, is being released.
    • GitHub Copilot for Azure now supports AKS commands.
    • You can now skip one release while upgrading Azure Service Mesh as long as the destination release is a supported revision - for example, asm-1-21 can upgrade directly to asm-1-23.
    • You can now fine-tune supported models on KAITO version 0.3.1 with the AI toolchain operator add-on on your AKS cluster.
    • Advanced Container Networking Services (ACNS) is now Generally Available. To learn more, please see the ACNS Documentation.
  • Preview features:

    • We've added a new way to optimize drain behavior during upgrades. By default, a node drain failure causes the upgrade operation to fail, leaving the undrained nodes in a schedulable state; this behavior is called Schedule. Alternatively, you can select the Cordon behavior, which skips nodes that fail to drain by placing them in a quarantined state, labeling them kubernetes.azure.com/upgrade-status:Quarantined, and proceeding with upgrading the remaining nodes. This ensures that every node is either upgraded or quarantined, which lets you troubleshoot drain failures and gracefully manage the quarantined nodes.
    • You can now block pod access to the Azure Instance Metadata Service (IMDS) endpoint to enhance security.
    • Azure Linux v3 is now in preview for AKS 1.31 clusters. After registering the preview flag AzureLinuxV3Preview, newly created Azure Linux node pools will receive the v3 image. Existing Azure Linux v2 node pools will not upgrade to v3 and must be recreated to upgrade.
      • NOTE: Azure Linux v3 changes the cryptographic provider to OpenSSL + SymCrypt. The SymCrypt library will operate in FIPS mode but is still in the final stages of the validation process and thus is not considered to be FIPS-validated at this time. Do not use this preview with FIPS-enabled node pools if you must use a FIPS-validated cryptographic library.
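
Registering the Azure Linux v3 preview flag follows the usual AKS preview-feature flow; a sketch (the flag name is taken from the note above):

```shell
# Register the preview feature flag, then refresh the resource provider
# so the registration propagates; this can take several minutes.
az feature register --namespace Microsoft.ContainerService \
  --name AzureLinuxV3Preview
az provider register --namespace Microsoft.ContainerService
```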
  • Behavior change:

    • Virtual Machine node pool creation will be blocked if the cluster uses a system-assigned identity and a bring-your-own virtual network, as this combination does not function properly. To use Virtual Machine node pools, migrate the cluster to a user-assigned managed identity with the required permissions on the virtual network. Virtual Machine Scale Set pools are unaffected by this change.
    • Enabling long term support no longer changes the default cluster upgrade channel to patch.
    • AKS CoreDNS configuration will now block all queries ending in reddog.microsoft.com and some queries ending in internal.cloudapp.net from being forwarded to upstream DNS when they are the result of improper search domain completion. See the documentation for more details.
    • Azure NPM's CPU request has been lowered from 250m to 50m.
    • Azure CNI Overlay now checks that the pod CIDR does not conflict with any subnet in the virtual network, rather than checking if it conflicts with the virtual network address space as a whole.
    • Azure CNI Overlay is now the default networking configuration for AKS clusters. This means that when running az aks create --name TestCluster --resource-group TestGroup, Azure CNI Overlay will be the CNI for the cluster by default. Other networking configurations are still available when explicitly specified.
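
With Overlay as the default, the two invocations below are expected to produce the same networking configuration; a sketch with placeholder resource names:

```shell
# Implicit default after this change:
az aks create --name TestCluster --resource-group TestGroup

# Equivalent explicit form:
az aks create --name TestCluster --resource-group TestGroup \
  --network-plugin azure --network-plugin-mode overlay
```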
  • Component updates: