Kubernetes 1.29 Release Highlights: Mandala

Welcome to KodeKloud! Today, let's talk about the cool stuff in the latest Kubernetes version, 1.29. It's got new features, changes to existing behavior, improved documentation, and some cleanup of deprecated functionality.

This time, it's all about "Mandala (The Universe) ✨🌌." Imagine it like the beautiful Mandala art, showing off perfect patterns. This Kubernetes update is like a new chapter in our project's story. Everyone who helps, uses, or supports it is like a star in our big Kubernetes galaxy, lighting up the way. With each new release, we're building a world full of possibilities together.

In this third release of 2023, Kubernetes 1.29 is bringing some exciting changes with a total of 49 enhancements on the table.

Breaking it down:

  • There are 20 fresh or upgraded Alpha enhancements, giving us a taste of what's new and improved.
  • 18 Beta enhancements are stepping into the spotlight, now enabled by default starting from this release.
  • 11 enhancements have made it to the stable stage, indicating they're considered reliable and robust.

As a reminder, Kubernetes follows a multi-stage feature release process. Each enhancement goes through phases: alpha, then beta, and finally GA (stable). If you're curious about how these enhancements evolve in Kubernetes, be sure to catch our video on Kubernetes Enhancement Proposals (KEPs) for an in-depth look.

In the spotlight for Kubernetes 1.29, here are the top 10 major enhancements:

  1. Sidecar Containers Graduated to Beta: Sidecar containers are now officially in Beta, bringing enhanced capabilities and stability.
  2. In-Place Update of Pod Resources: This major change allows for the seamless update of pod resources, improving flexibility in resource management.

Now, let's dive into the notable graduations:

  1. Improve the reliability of ingress connectivity serviced by Kube-proxy (Beta): Enhancing the reliability of ingress connectivity handled by Kube-proxy, now in Beta.
  2. Priority and Fairness for API Server Requests (Stable): The stability graduation for priority and fairness in API server requests ensures a reliable experience.
  3. Reduce secret-based service account tokens (Beta): An important Beta graduation that enhances security by reducing reliance on secret-based service account tokens.
  4. Support paged LIST queries from the Kubernetes API (Stable): Stable graduation for supporting paged LIST queries from the Kubernetes API, improving efficiency.
  5. ReadWriteOncePod PersistentVolume Access Mode (Stable): Stable graduation for ReadWriteOncePod PersistentVolume Access Mode ensures stable and reliable pod access.

Now, let's explore the fresh and essential features introduced in this release:

  1. Structured Authorization Configuration & Structured Authentication Config (Both Alpha): Introducing structured authorization and authentication configurations in Alpha, providing more flexibility and control.
  2. Transition from SPDY to WebSockets (Alpha): A notable Alpha introduction, transitioning from SPDY to WebSockets for improved communication.
  3. Add support for user namespaces in pods (Alpha): The introduction of user namespaces in pods in Alpha, expanding possibilities in Kubernetes environments.

Sidecar Containers have leveled up and graduated to Beta in Kubernetes 1.29

If you caught our 1.28 release video, you're already acquainted with sidecar containers in Kubernetes. In Kubernetes lingo, containers within a pod share networking and storage. A standard pod usually includes a primary container and, if needed, some helper containers for tasks like logging or monitoring – these are our 'sidecar' containers.

With Kubernetes 1.28, native support for the sidecar container pattern arrived as an alpha feature, a significant move to support auxiliary container roles within a pod.

Now, in version 1.29, we're taking it up a notch by formalizing the sidecar container pattern with two crucial features (a manifest sketch follows the list):

  1. Orderly Initiation and Termination: Sidecars now kick off before the main containers, ensuring they're up and running to provide support throughout the main container's lifecycle. Importantly, they stick around until all main containers have gracefully exited, ensuring uninterrupted assistance.
  2. Sequential Shutdown on Pod Termination: When the pod is wrapping up, sidecars gracefully shut down in the reverse order of their startup. This sequential shutdown is vital for unwinding dependencies properly. Each sidecar container receives a SIGTERM only after all main containers have completed their tasks, ensuring a smooth and orderly shutdown process.
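To make this concrete, here's a minimal sketch of what a sidecar looks like in a manifest (the names and images are placeholders, not an official example): a sidecar is simply an init container with restartPolicy: Always, which tells Kubernetes to start it first and keep it running for the life of the pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  initContainers:
    - name: log-shipper              # the sidecar: starts before the main container and outlives it
      image: fluent/fluent-bit       # placeholder log-forwarding image
      restartPolicy: Always          # this is what marks an init container as a sidecar
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
  containers:
    - name: app                      # the main container
      image: nginx
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
  volumes:
    - name: app-logs
      emptyDir: {}
```

Since the SidecarContainers feature gate is Beta in 1.29, it is enabled by default, so no extra cluster configuration should be needed.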

If you're curious about the new and improved Sidecar Containers in Kubernetes 1.29, head over to our YouTube channel for a quick and easy-to-follow video!

In-Place Update of Pod Resources

Adapting to changing demands is crucial in Kubernetes. Consider scenarios like a sudden surge in traffic, where existing resources fall short, or a significant drop in load, leaving allocated resources unused. The necessity for adjusting resources is also apparent when initial settings prove inaccurate.

In the traditional setup, modifying these resources required recreating the entire pod because container resources in PodSpec were unchangeable. However, this approach presented challenges, particularly for stateful or batch workloads. Pod restarts could result in reduced availability or increased operational costs.

In Kubernetes 1.29, we're not just introducing the "In-Place Update of Pod Resources" feature – we're giving it an upgrade!

This game-changing feature initially made its debut in the 1.27 release.

Now, in 1.29, we've taken it a step further by not only improving its performance but also making it fully compatible with Windows containers. That means you can dynamically tweak your pod's CPU and memory allocations without restarting, and now it's even more efficient and versatile, catering to both Linux and Windows containers.
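As a rough sketch of how this surfaces in a pod spec (assuming the InPlacePodVerticalScaling feature gate is enabled; the container name, image, and values are placeholders), each container can declare a resizePolicy describing whether a resource change needs a restart:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resizable-app
spec:
  containers:
    - name: app
      image: nginx
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired        # CPU can be resized in place, no restart
        - resourceName: memory
          restartPolicy: RestartContainer   # memory changes restart only this container
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "512Mi"
```

With that in place, you can patch the pod's resources directly (for example with kubectl patch) and, where the policy allows it, the kubelet applies the change without recreating the pod.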

Improve the reliability of ingress connectivity serviced by Kube-proxy (Beta)

Ensuring reliable networking in Kubernetes is key, with various components working together. Kube-proxy, running on each node, plays a crucial role in managing network traffic routing within the cluster, ensuring accurate communication between services and external sources.

Ingress, often likened to the front door of Kubernetes services, controls how external traffic reaches services within the cluster, particularly handling HTTP requests.

Enter the Kubernetes Cloud Controller Manager (KCCM), which tailors the management of cloud-specific elements for smooth operation in cloud environments like GCP.

Within this framework, load-balanced services come in two flavors based on how they are health-checked: externalTrafficPolicy: Cluster (eTP:Cluster) and externalTrafficPolicy: Local (eTP:Local). With the default, eTP:Cluster, the load balancer health-checks kube-proxy itself to decide whether a node can receive traffic; with eTP:Local, the health check instead reflects whether the node has a local endpoint for that particular service.
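For context, this policy is just a field on the Service object. A minimal sketch of a LoadBalancer Service using the default eTP:Cluster policy (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # default: any node may forward traffic to a backend pod
  # externalTrafficPolicy: Local   # alternative: only nodes with a local endpoint accept traffic
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```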

Now, in Kubernetes 1.29, a beta enhancement focuses on eTP:Cluster services, refining Kube-proxy's role, especially during node termination. This is crucial for connection draining, ensuring existing connections are handled before a node shuts down, preventing sudden disconnection and service disruptions.

This enhancement changes the semantics of kube-proxy's /healthz endpoint so that it starts failing when the node is terminating, giving load balancers a more precise signal and time to drain existing connections, and adds a new /livez path that keeps the old behavior of reporting only whether kube-proxy itself is alive. With these improvements, particularly in cloud environments, network reliability and efficiency get a significant boost. Excitingly, this feature has graduated to beta, marking its readiness for wider use and increased stability.

Stable Release: Priority and Fairness for API Server Requests

In the intricate landscape of Kubernetes, the API server plays a pivotal role in handling operational requests, ranging from resource creation, modification, and deletion (mutating requests) to fetching data without alterations (read-only requests).

In the past, Kubernetes implemented maximum limits on these requests to prevent server overloads. However, without distinguishing their importance, critical system traffic, like node heartbeats, kubelet and kube-proxy operations, and the cluster's own self-maintenance tasks, risked being crowded out by less important requests.

Now, with the "Priority and Fairness for API Server Requests" feature achieving stability in Kubernetes, this challenge has been addressed. The enhancement introduces a system where requests are categorized and prioritized, ensuring that crucial system maintenance functions and requests vital to cluster health aren't neglected. This not only guards against server overloads but also maintains fairness and optimizes throughput, ensuring a balanced and efficient handling of requests across various Kubernetes operations.

This feature is particularly invaluable in multi-tenant environments, preventing issues like a single buggy tenant or controller overwhelming the system. It marks a significant step forward in achieving fair resource allocation among different users, promoting stability and reliability in Kubernetes operations.
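Under the hood, this is configured through FlowSchema and PriorityLevelConfiguration objects, which reach GA under flowcontrol.apiserver.k8s.io/v1 in 1.29. As a hedged sketch, the FlowSchema below maps read traffic from a hypothetical controller's service account onto the built-in workload-low priority level:

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: my-controller-reads          # hypothetical name
spec:
  priorityLevelConfiguration:
    name: workload-low               # one of the built-in priority levels
  matchingPrecedence: 8000           # lower numbers are evaluated first
  distinguisherMethod:
    type: ByUser                     # fairness is enforced per user within the level
  rules:
    - subjects:
        - kind: ServiceAccount
          serviceAccount:
            name: my-controller      # hypothetical service account
            namespace: my-namespace
      resourceRules:
        - verbs: ["get", "list", "watch"]
          apiGroups: ["*"]
          resources: ["*"]
          clusterScope: true
          namespaces: ["*"]
```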

Beta Feature: Reduce Secret-Based Service Account Tokens

In the intricate workings of Kubernetes, service accounts play a pivotal role in assigning identities to processes within pods, ensuring secure access to the Kubernetes API. Traditionally, service account tokens were stored as Kubernetes secrets, presenting potential security risks due to broader accessibility.

With the evolution of Kubernetes 1.22 and the BoundServiceAccountTokenVolume feature achieving General Availability, there's a transformative shift in how pods manage service account tokens. Instead of the traditional secret-based storage, tokens are now acquired through the TokenRequest API and stored as a projected volume.

In Kubernetes, a projected volume consolidates various volume sources into one directory, allowing seamless projection of secrets, config maps, and service account tokens into a pod. This unified approach enhances security, providing a controlled token management system directly linked to the pod's life cycle.
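Here's a minimal sketch of what that looks like in practice (the service account name, audience, and paths are placeholders): the kubelet obtains the token through the TokenRequest API, mounts it via the projected volume, and rotates it before it expires.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: my-app           # hypothetical service account
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: api-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: api-token
      projected:
        sources:
          - serviceAccountToken:
              path: api-token
              audience: my-api         # intended audience for the token
              expirationSeconds: 3600  # short-lived; refreshed automatically by the kubelet
```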

This beta enhancement marks a significant stride in Kubernetes security. By eliminating auto-generated secret-based tokens and efficiently cleaning up unused tokens, the platform reinforces its commitment to a more robust and secure environment for managing service account credentials. It's a noteworthy step toward fortifying Kubernetes against potential security vulnerabilities.

Stable Release: Support Paged LIST Queries from the Kubernetes API

In the realm of Kubernetes, managing vast datasets, especially when pulling them from the API server, can pose challenges in terms of memory and performance constraints. The conventional method of fetching entire resource lists in one go could strain system resources significantly.

Enter the "Support for Paged LIST Queries from the Kubernetes API" feature, now stable in Kubernetes 1.29. This game-changing feature optimizes the process by allowing API consumers to retrieve large datasets through paginated responses. Breaking down extensive list requests into smaller, manageable page requests drastically reduces the memory allocation impact of these operations. This enhancement is a game-changer for system scalability, making the handling of extensive datasets more efficient and reliable. It's a win for performance and a boost for the overall efficiency of Kubernetes operations.

Stable Release: ReadWriteOncePod PersistentVolume Access Mode

In the complex world of Kubernetes, managing storage is crucial, and it's skillfully handled through PersistentVolumes (PVs) and StorageClasses. PVs represent storage resources in the cluster, and StorageClasses allow administrators to define and efficiently manage different storage types, such as SSDs or slower disks.

Efficiently managing storage for diverse workloads has been a challenge in Kubernetes. Previously, the strictest access mode available, ReadWriteOnce (RWO), only restricted a PersistentVolume (PV) to a single node; there was no way to restrict access to a single pod. This limitation could lead to issues, especially for sensitive workloads. For instance, if a workload with RWO access scaled to more than one pod and those pods ended up on the same node, they could simultaneously modify the storage device, causing conflicts.

Users often had to work around this by scheduling only one pod per node, which wasn't always the most resource-efficient approach.

Now, with the ReadWriteOncePod (RWOP) access mode achieving stability, this issue is effectively addressed. RWOP ensures that a PV can be mounted by only a single pod across the entire cluster, enhancing both security and efficiency in scenarios where data integrity and isolation are crucial. This mode marks a significant advancement in Kubernetes' storage management capabilities, offering a tailored solution for handling sensitive and critical workloads.
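Using it is as simple as requesting the new access mode on a PersistentVolumeClaim (note that ReadWriteOncePod is only supported for CSI volumes; the claim and StorageClass names below are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-data
spec:
  accessModes:
    - ReadWriteOncePod          # only one pod in the entire cluster may use this volume
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast-ssd    # hypothetical CSI-backed StorageClass
```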

Introducing: Structured Authorization Configuration

In the intricate realm of Kubernetes, the kube-apiserver's authorization process is defined by command-line flags like --authorization-*. These flags configure the "authorization chain," a series of steps the API server follows to determine whether a request should be allowed or denied. This setup dictates who can access the Kubernetes API and what actions they can perform.

However, this historical configuration had its limitations, especially in integrating multiple webhooks – external services providing additional authorization decisions. More webhooks meant more complexity in rules and checks before granting access to the Kubernetes API.

Enter the Structured Authorization Configuration for Kubernetes kube-apiserver, introducing a revolutionary approach to defining its authorization chain. Unlike the previous command-line flag setup, administrators can now use a configuration file to specify a sequence of authorization checks, including multiple webhooks. This enhancement brings a new level of flexibility and ease in managing authorization processes, paving the way for more intricate and secure Kubernetes API access configurations.

For example, a config file can now define a primary webhook for broad, first-pass authorization checks, followed by a secondary one for more specialized decisions. This format provides administrators with more control, allowing for ordered authorization modes and pre-filtering based on resources or users, thus avoiding unnecessary processing.
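Here's a hedged sketch of such a configuration file, based on the alpha API in 1.29 (exact field names may differ, and the webhook names and kubeconfig paths are placeholders). The file is handed to the kube-apiserver via the new --authorization-config flag in place of the individual --authorization-* flags:

```yaml
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: primary-policy                 # hypothetical: broad, first-pass checks
    webhook:
      timeout: 3s
      subjectAccessReviewVersion: v1
      failurePolicy: Deny                # if this webhook is unreachable, deny the request
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/primary-webhook.kubeconfig
  - type: Webhook
    name: specialized-policy             # hypothetical: narrower, second-pass checks
    webhook:
      timeout: 3s
      subjectAccessReviewVersion: v1
      failurePolicy: NoOpinion           # fall through to the next authorizer on failure
      connectionInfo:
        type: KubeConfigFile
        kubeConfigFile: /etc/kubernetes/specialized-webhook.kubeconfig
  - type: RBAC
    name: rbac                           # built-in RBAC still gets the final say here
```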

This introduction of Structured Authorization Configuration in Kubernetes opens up possibilities for more complex, real-world authorization scenarios, catering to diverse security requirements. This enhanced flexibility ensures that the API server can efficiently and securely handle various authentication and authorization workflows.

In addition to the Structured Authorization Configuration, Kubernetes has unveiled the Structured Authentication Config in its latest release.

OpenID Connect (OIDC) serves as a straightforward identity layer on top of the OAuth 2.0 protocol, used in Kubernetes to authenticate users against the API server. JSON Web Token (JWT) is a compact and secure means of representing claims, often employed in OIDC for transmitting authenticated user information securely.

In this latest Kubernetes release, the Structured Authentication Config improves how OIDC and JWT are configured. Departing from complex flag-based setups, it introduces a structured, versioned configuration approach. This enhancement supports advanced features such as multiple client IDs, intricate claim mappings, and the utilization of multiple OIDC providers. This streamlining strengthens Kubernetes' authentication mechanisms, making them more accessible and robust.
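As a hedged sketch of the new format (also alpha in 1.29, so field names may change; the issuer URL, audience, and claims below are placeholders), the file is supplied to the kube-apiserver via --authentication-config and replaces the long list of --oidc-* flags:

```yaml
apiVersion: apiserver.config.k8s.io/v1alpha1
kind: AuthenticationConfiguration
jwt:
  - issuer:
      url: https://auth.example.com      # hypothetical OIDC provider
      audiences:
        - kubernetes-api
    claimMappings:
      username:
        claim: email                     # which JWT claim becomes the Kubernetes username
        prefix: "oidc:"
      groups:
        claim: groups
        prefix: ""
```

Because jwt is a list, additional OIDC providers can be added as further entries, which is exactly the multi-provider support mentioned above.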

Transitioning to WebSockets: Enhancing Kubernetes Communication

In the Kubernetes realm, kubectl serves as the command-line tool, facilitating user interaction with the cluster. Commands like kubectl exec empower users to execute actions within a container, while kubectl attach enables interaction with a running container. These commands rely on a connection involving the kubectl client, the API server, and the Kubelet, running on each node to manage containers.

Bi-directional streaming in Kubernetes involves continuous two-way communication between kubectl, the API server, and the Kubelet. This dynamic interaction is pivotal for real-time engagement with pods, enabling tasks like executing commands in a container (kubectl exec) or attaching to a running container (kubectl attach).

While Kubernetes has traditionally used the SPDY protocol for bi-directional communication, the shift to WebSockets is now underway. This transition is particularly crucial for commands like kubectl exec, kubectl attach, and kubectl cp, which rely on seamless streaming between kubectl and the API server. WebSockets offer broader compatibility and future-proof these interactions, replacing SPDY/3.1, which has been deprecated since 2015.

The transition not only covers the communication between kubectl and the API server but extends to communication from the API server to the Kubelet. This comprehensive shift ensures that the entire pathway from kubectl to the Kubelet leverages WebSockets, promising a more consistent and reliable streaming experience.

In the Kubernetes landscape, an L7 proxy or gateway operates at the application layer (Layer 7) of the network, managing and directing traffic based on various factors. This includes URL and message content, providing advanced traffic routing capabilities compared to lower-level proxies.

With the adoption of WebSockets for bi-directional communication, Kubernetes users now enjoy improved compatibility with L7 proxies and gateways. This transition enhances integration and connectivity, particularly in complex network setups or behind managed gateways such as Google's Anthos Connect Gateway. Users executing commands like kubectl exec experience smoother and more reliable connections through these advanced networking components.

New Feature Alert: User Namespace Support in Pods

In the dynamic landscape of Kubernetes, namespaces are pivotal for isolating groups of resources within a cluster. User namespaces, a specialized type, specifically focus on isolating user IDs and group IDs, bolstering security.

The significant enhancement of introducing support for user namespaces in pods marks a transformative step for Kubernetes. Now, processes within a pod can be isolated at the user level, allowing them to run with distinct user and group IDs compared to those on the host. For instance, a privileged process within a pod can operate as an unprivileged user on the host, significantly reducing security risks, especially in scenarios where a process breaks out of a container.

This feature serves as a robust defense against past vulnerabilities, including those highlighted by CVEs such as CVE-2019-5736, CVE-2021-25741, CVE-2017-1002101, CVE-2021-30465, CVE-2016-8867, and CVE-2018-15664. By isolating container privileges from the host, user namespaces mitigate risks associated with overwriting host binaries, root privilege escalations, and TOCTOU race attacks.

Now, with user namespaces seamlessly integrated into Kubernetes pods, a key change is introduced in pod.spec. A new field, pod.spec.hostUsers, is added:

  • When set to true or not specified, pod.spec.hostUsers maintains the current behavior, utilizing the host's user namespace.
  • If set to false, Kubernetes creates a new user namespace dedicated to that pod.

By default, this field remains unset, allowing pods to utilize the host's user namespace. This addition offers users the flexibility to enhance security by isolating pod-level processes from the host, providing a more granular control over their Kubernetes environments.
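Here's a minimal sketch of a pod that opts into its own user namespace (this assumes the UserNamespacesSupport feature gate is enabled and the node's container runtime supports user namespaces):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false        # run this pod in a new user namespace instead of the host's
  containers:
    - name: app
      image: nginx
```

Inside the pod, processes can still appear to run as root, but that root maps to an unprivileged ID on the host.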

As we conclude our exploration of Kubernetes 1.29, it's evident that this release is more than just updates and enhancements – it's a journey toward a more robust, secure, and flexible Kubernetes ecosystem. From the graduation of Sidecar Containers to the introduction of user namespaces in pods, each feature adds a new layer to the Kubernetes universe, opening doors to endless possibilities. Happy deploying!

If you want to dig deeper into Kubernetes, check out our Kubernetes Learning Path here:

Kubernetes Learning Path | Kodekloud
Embark on the Kubernetes learning path. Hone your Kubernetes skills with our study roadmap. Start your Kubernetes journey today.