
Releases: openyurtio/openyurt

v1.6.0

08 Jan 05:42
beab71c

What's New

Support Kubernetes up to v1.30

“k8s.io/xxx” and all its related dependencies are upgraded to v0.30.6 to ensure that OpenYurt is compatible with Kubernetes v1.30. This compatibility has been verified by an end-to-end (E2E) test in which a Kubernetes v1.30 cluster was started with KinD and the latest OpenYurt components were deployed on it.
#2179
#2249

Enhance edge autonomy capabilities

OpenYurt already offers robust edge autonomy, ensuring that applications on edge nodes continue to operate even when the cloud-edge network is disconnected. However, several aspects of the current capabilities can still be improved. For instance, once a node is annotated for autonomy, the cloud controller never evicts its Pods, regardless of whether the disconnection is caused by cloud-edge network issues or by a node failure, yet users expect Pods to be evicted automatically when the node itself fails. Additionally, the current edge autonomy capabilities cannot be used directly in managed Kubernetes environments, because users cannot disable the NodeLifecycle controller inside the kube-controller-manager of a managed cluster. In this release, new endpoints/endpointslices webhooks ensure that Pods on autonomous nodes are not removed from Service backends, and a new autonomy annotation is introduced that supports configuring the autonomy duration (see the sketch after the linked PRs).
#2155
#2201
#2211
#2218
#2241
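
A minimal sketch of marking a node autonomous with a configurable duration; the annotation key and value format shown here are assumptions based on the description above, so check the linked PRs for the exact names:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: edge-node-1
  annotations:
    # Assumed key: keeps the node's Pods running locally (and registered in
    # Service backends) for the given duration after the node turns NotReady,
    # after which normal eviction resumes.
    node.openyurt.io/autonomy-duration: "2h"
```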

Node-level Traffic Reuse Capability

In an OpenYurt cluster, control-plane components are deployed in the cloud, and edge nodes usually interact with the cloud over the public internet, which can consume a significant amount of cloud-edge traffic. The problem is more pronounced in large-scale clusters, mainly because edge-side components perform full list/watch operations on resources. This not only consumes a large amount of cloud-edge traffic but also puts considerable pressure on the kube-apiserver due to the high volume of list operations. In this release, we have added a traffic multiplexing module to YurtHub: when multiple clients request the same resource (e.g. services, endpointslices), YurtHub serves the data from its local cache, reducing the number of requests sent to the kube-apiserver.
#2060
#2141
#2242

Other Notable changes

Fixes

Proposals

Contributors

Thank you to everyone who contributed to this release! ❤

v1.5.1

04 Jan 13:02
67618e0

What's Changed

  • fix(iot): the mount type of hostpath for localtime in napa by @LavenderQAQ in #2111
  • [Backport release-v1.5] fix: bug of yurtappset always the last tweaks make effect by @github-actions in #2243

Full Changelog: v1.5.0...v1.5.1

v1.5.0

17 Jul 06:55
07266e0

What's New

Support Kubernetes up to v1.28

“k8s.io/xxx” and all its related dependencies are upgraded to v0.28.9 to ensure that OpenYurt is compatible with Kubernetes v1.28. This compatibility has been verified by an end-to-end (E2E) test in which a Kubernetes v1.28 cluster was started with KinD and the latest OpenYurt components were deployed on it. At the same time, all key OpenYurt components, such as yurt-manager and yurthub, are deployed on the Kubernetes cluster via Helm to ensure that the Helm charts provided by the OpenYurt community run stably in production environments.
#2047
#2074

Reduce cloud-edge traffic spike during rapid node additions

The NodePool resource is essential for managing groups of nodes within OpenYurt clusters, as it records details of all member nodes in the NodePool.status.nodes field. YurtHub relies on this information to identify endpoints within the same NodePool, enabling pool-level service topology. However, when a large NodePool, potentially comprising thousands of nodes, expands rapidly, for example when hundreds of edge nodes join within a single minute, the surge in cloud-to-edge network traffic can be significant. In this release, a new resource type called NodeBucket is introduced. It provides a scalable, streamlined way to manage large NodePools, significantly reducing the impact on cloud-edge traffic during rapid node growth while keeping the cluster stable (an illustrative example follows the linked PRs).
#1864
#1874
#1930
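
As an illustration, a NodeBucket splits a large NodePool into several small chunks, similar to how EndpointSlice splits Endpoints; the field layout below follows the proposal in #1864 and may differ slightly from the released API:

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: NodeBucket
metadata:
  name: hangzhou-bfg5x
  labels:
    # assumed label linking the bucket back to its NodePool
    openyurt.io/pool-name: hangzhou
# like EndpointSlice, the member list sits at the top level rather than in spec
numNodes: 3
nodes:
- name: edge-node-1
- name: edge-node-2
- name: edge-node-3
```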

Upgrade YurtAppSet to v1beta1 version

YurtAppSet v1beta1 is introduced to facilitate the management of multi-region workloads. Users can use a YurtAppSet to distribute the same workload template (Deployment/StatefulSet) to different NodePools, selected either by a label selector (NodePoolSelector) or by a list of NodePool names (Pools), and can customize the workload configuration in different NodePools through WorkloadTweaks.
In this release, the functionality of three older CRDs (YurtAppSet v1alpha1, YurtAppDaemon and YurtAppOverrider) has been combined into YurtAppSet v1beta1. We recommend using it in place of the old ones (see the example after the linked PRs).
#1890
#1931
#1939
#1974
#1997
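
A sketch of a v1beta1 YurtAppSet that distributes one Deployment template to two NodePools and raises the replica count in one of them; the field names follow the v1beta1 API described above, but verify them against the released CRD:

```yaml
apiVersion: apps.openyurt.io/v1beta1
kind: YurtAppSet
metadata:
  name: nginx-multi-region
spec:
  # target NodePools by name (Pools); a NodePoolSelector label selector
  # can be used instead
  pools:
  - beijing
  - hangzhou
  workload:
    workloadTemplate:
      deploymentTemplate:
        metadata:
          labels:
            app: nginx
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
              - name: nginx
                image: nginx:1.25
    # per-pool customization (WorkloadTweaks): more replicas in hangzhou only
    workloadTweaks:
    - pools:
      - hangzhou
      tweaks:
        replicas: 3
```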

Improve transparent management mechanism for control traffic from edge to cloud

The existing transparent management mechanism for cloud-edge control traffic has certain limitations and cannot effectively support requests sent directly to the default/kubernetes service. In this release, a new transparent management mechanism for cloud-edge control traffic is introduced, enabling pods that use InClusterConfig or the default/kubernetes service name to access the kube-apiserver via YurtHub without needing to be aware of the details of the public network connection between the cloud and the edge.
#1975
#1996

Separate clients for yurt-manager component

Yurt-manager is an important cloud-side OpenYurt component that hosts multiple controllers and webhooks. Previously, these controllers and webhooks shared one client and one set of RBAC rules (yurt-manager-role/yurt-manager-role-binding/yurt-manager-sa), which grew ever larger as functionality was added to yurt-manager. This gave each controller access it should not have, and made it difficult to tell from the audit logs which controller a request came from. In this release, each controller/webhook is restricted to only the permissions it actually uses, with separate RBAC rules and user agents for different controllers and webhooks (illustrated after the linked PRs).
#2051
#2069
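
The resulting layout can be pictured as one narrowly scoped role per controller instead of a single catch-all yurt-manager-role; the role name and rules below are purely illustrative:

```yaml
# Illustrative per-controller ClusterRole: a hypothetical nodepool controller
# gets access to its own CRD plus read-only access to nodes, and nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: yurt-manager-nodepool-controller  # assumed naming scheme
rules:
- apiGroups: ["apps.openyurt.io"]
  resources: ["nodepools", "nodepools/status"]
  verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
```

With a distinct user agent per controller, audit log entries can then be attributed to the controller that actually issued the request.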

Enhancement to Yurthub's Autonomy capabilities

A new autonomy condition has been added to the node conditions so that yurthub can report a node's autonomy status in real time at each nodeStatusUpdateFrequency. This condition allows each node's autonomy status to be determined accurately. In addition, an error-key mechanism has been introduced to record the keys that failed to be cached along with the corresponding failure reasons. The error keys are persisted using the AOF (append-only file) method, ensuring that the autonomy state is recovered even after a reboot and preventing the system from entering a pseudo-autonomous state. These enhancements also make troubleshooting autonomy issues easier (a conceptual example follows the linked PRs).
#2015
#2033
#2096
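
Conceptually, the reported state appears as an extra entry in the node's status conditions; the condition type and reason below are placeholders, see #2015 for the actual names:

```yaml
status:
  conditions:
  - type: Autonomy                 # assumed condition type
    status: "True"
    reason: CacheReady             # assumed reason: no outstanding error keys
    message: all resources required for autonomy are cached locally
    lastHeartbeatTime: "2024-07-01T08:00:00Z"
    lastTransitionTime: "2024-07-01T06:00:00Z"
```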

Other Notable changes

Fixes

  • fix cache manager panic in yurthub by @rambohe-ch in #1950
  • fix: upgrade the version of runc to avoid security risk by @qclc in #1972
  • fix only openyurt crd conversion should be handled for upgrading cert by @rambohe-ch in #2013
  • fix the cache leak in yurtappoverrider controller by @MeenuyD in #1795
  • fix(yurt-manager): add clusterrole for nodes/status subresources by @qclc in #1884
  • fix: close dst file by @testwill in #2046

Proposals

  • Proposal: High Availability of Edge Services by @Rui-Gan in #1816
  • Proposal: yurt express: openyurt data transmission system proposal by @qsfang in #1840
  • proposal: add NodeBucket to reduce cloud-edge traffic spike during rapid node additions. by @rambohe-ch in #1864
  • Proposal: add yurtappset v1beta1 proposal by @luc99hen in #1890
  • proposal: improve transparent management mechanism for control traffic from edge to cloud by @rambohe-ch in #1975
  • Proposal: enhancement of edge autonomy by @vie-serendipity in #2015
  • Proposal: separate yurt-manager clients by @luc99hen in #2051

Contributors

Thank you to everyone who contributed to this release!


v1.4.4

24 Apr 00:57
32febf4

What's Changed

  • fix: edgex component creation cause registration errors and core-command crash by @LavenderQAQ in #2030

Full Changelog: v1.4.3...v1.4.4

v1.4.3

10 Apr 08:14
ce0c42c

What's Changed

  • [Backport release-v1.4] fix only openyurt crd conversion should be handled for upgrading cert by @github-actions in #2014

Full Changelog: v1.4.2...v1.4.3

v1.4.2

28 Mar 03:54
32a9758

What's Changed

  • [Backport release-v1.4] fix: yurtadm join can't work when kubernetes version large than v1.27.0 by @github-actions in #1998

Full Changelog: v1.4.1...v1.4.2

v1.4.1

20 Feb 11:34
cb2055b

What's Changed

  • [Backport release-v1.4] fix cache manager panic in yurthub by @github-actions in #1951
  • [Backport release-v1.4] fix: yurtadm join ignorePreflightErrors could not set all by @github-actions in #1954
  • [Backport release-v1.4] Feature: add name-length of dummy interface too long error by @github-actions in #1952
  • [Backport release-v1.4] feat: bookmark and error response should be skipped in yurthub filter (#1868) by @github-actions in #1953

Full Changelog: v1.4.0...v1.4.1

v1.4.0

08 Nov 02:42
c1a4760

What's New

Support for HostNetwork Mode NodePool

When the resources of edge nodes are limited and only simple applications need to run (for instance, when a container network is not needed and applications do not need to communicate with each other), a HostNetwork mode NodePool is a reasonable choice. When creating a NodePool, users only need to set spec.hostNetwork=true to get a HostNetwork mode NodePool.

In this mode, only essential components such as kubelet, yurthub and raven-agent are installed on the nodes in the pool, and Pods scheduled onto these nodes automatically adopt host network mode. This effectively reduces resource consumption while maintaining application performance.
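
For example, a minimal HostNetwork mode NodePool looks like this:

```yaml
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: hostnetwork-pool
spec:
  type: Edge
  # every Pod scheduled onto nodes in this pool is automatically
  # switched to host network mode
  hostNetwork: true
```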

Support for customized configuration at the nodepool level for multi-region workloads

YurtAppOverrider is a new CRD used to customize the configuration of workloads managed by YurtAppSet/YurtAppDaemon. It provides a simple, straightforward way to configure every field of the workload in each NodePool, and it is a fundamental component of the multi-region workload configuration rendering engine (see the sketch below).
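
A sketch of a YurtAppOverrider that swaps a container image for a single NodePool; the subject/entries layout follows the v1alpha1 API, but the exact field names should be verified against the CRD:

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppOverrider
metadata:
  name: demo-overrider
# the YurtAppSet (or YurtAppDaemon) whose workloads are customized
subject:
  apiVersion: apps.openyurt.io/v1alpha1
  kind: YurtAppSet
  name: demo
entries:
- pools:
  - beijing
  items:
  # replace the image of container "nginx" only for workloads in beijing
  - image:
      containerName: nginx
      imageClaim: nginx:1.19
```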

Support for building edgex iot systems by using PlatformAdmin

PlatformAdmin is a CRD that manages IoT systems in OpenYurt NodePools. It has evolved from the previous yurt-edgex-manager, and starting from this version the functionality of yurt-edgex-controller has been merged into yurt-manager. This means users no longer need to deploy any additional components; installing yurt-manager alone provides all the capabilities for managing edge devices.

PlatformAdmin gives users a user-friendly way to deploy a complete EdgeX system in a NodePool. It comes with an optional component library and configuration templates, and advanced users can customize the system's configuration to their needs.

Currently, PlatformAdmin supports all versions of EdgeX from Hanoi to Minnesota. Going forward, it will continue to support upcoming releases quickly via the auto-collector feature, ensuring that PlatformAdmin stays compatible with the latest EdgeX versions as they are released.
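
A minimal PlatformAdmin CR might look as follows; the version string and component name are examples, not an exhaustive configuration:

```yaml
apiVersion: iot.openyurt.io/v1alpha2
kind: PlatformAdmin
metadata:
  name: edgex-sample
spec:
  # EdgeX release to deploy (hanoi through minnesota are supported)
  version: minnesota
  # the NodePool this EdgeX instance serves
  poolName: hangzhou
  # optional components from the component library
  components:
  - name: yurt-iot-dock
```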

Support for deploying yurt-iot-dock as an IoT system component

yurt-iot-dock is a component responsible for managing edge devices in IoT systems. It has evolved from the previous yurt-device-controller. As the bridge between the cloud and edge device-management platforms, yurt-iot-dock abstracts three CRDs: DeviceProfile, DeviceService, and Device. These CRDs represent and manage the corresponding resources on the device-management platform, thereby affecting real-world devices.

By declaratively modifying the fields of these CRs, users can operate and manage complex edge devices in a cloud-native manner. yurt-iot-dock is deployed by PlatformAdmin as an optional IoT component; it synchronizes devices during startup and severs the synchronization relationship when it is terminated or destroyed.

In this version, the deployment and removal of yurt-iot-dock are fully controlled by PlatformAdmin, which makes yurt-iot-dock easier to use (see the sketch below).
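
As an illustration of the declarative model, a Device CR managed by yurt-iot-dock might look roughly like this; the fields are a simplified approximation of the Device CRD, not its exact schema:

```yaml
apiVersion: iot.openyurt.io/v1alpha1
kind: Device
metadata:
  name: virtual-random-boolean-device
spec:
  # the DeviceService that talks to this device on the edge platform
  service: device-virtual
  profile: random-boolean-device
  # managed=true lets yurt-iot-dock sync desired state down to EdgeX
  managed: true
  nodePool: hangzhou
```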

Some Repos are archived

As the OpenYurt architecture has evolved, the functionality of quite a few components has been merged into yurt-manager (e.g. yurt-app-manager, raven-controller-manager), and some repos have been migrated into openyurt for better maintenance (e.g. yurtiotdock). The following repos have been archived:

Other Notable changes

Fixes

Proposals

Contributors

Thank you to everyone who contributed to this release!

And thank you very much to everyone else not listed here who contributed in other ways like filing issues, giving feedback, ...


v1.2.2

12 Jul 06:27
9d5c451

What's Changed

  • [Backport release-v1.2] change access permission to default in general. by @github-actions in #1583
  • backport: feat: add yurtadm binaries release workflow by @rambohe-ch in #1601

Full Changelog: v1.2.1...v1.2.2

v1.1.1

12 Jul 02:40
e059ae0

What's Changed

Full Changelog: v1.1.0...v1.1.1