23 Oct

Upgrade Your EKS Cluster from 1.32 to 1.33: A Guide to Benefiting from the Newest Features Without Disrupting Your Applications

As Kubernetes rapidly evolves, so does Amazon EKS. With version 1.33 now available, the clock is ticking for clusters running on v1.32. For any team running a production EKS cluster, the upgrade to 1.33 is not just a matter of staying current; it’s a necessary step to maintain security, optimise costs, and unlock powerful new features.

However, the prospect of upgrading a live environment—especially one running stateful applications like Kafka, Elasticsearch, or MongoDB—can be daunting. The risks of downtime, data incompatibility, and configuration errors are real.

This guide reframes the upgrade process. It’s not just about avoiding penalties; it’s about unlocking strategic advantages. We’ll outline a safe, phased approach to upgrading your EKS cluster from 1.32 to 1.33, with a special focus on protecting your critical stateful workloads.

The High Cost of Standing Still: Why the 1.32 Upgrade Is Non-Negotiable

Delaying your EKS upgrade from v1.32 introduces significant business and technical risks. Amazon EKS officially supports a limited number of Kubernetes versions. Once a version reaches its end-of-support date, the consequences escalate:

  • Increased AWS Costs: AWS charges a premium for running a control plane on a version that has transitioned to “extended support” (currently $0.60 per cluster-hour, versus $0.10 during standard support).
  • Critical Security Vulnerabilities: Your cluster will no longer receive timely security patches (CVEs), leaving your applications exposed.
  • Add-on and Tool Incompatibility: Essential components like CoreDNS, the VPC CNI plugin, or vital tools like Helm and ArgoCD can begin to fail or behave unpredictably.
  • Forced AMI Migrations: EKS 1.33 is the first version to drop support for new Amazon Linux 2 (AL2) AMIs. Delaying means you’ll eventually face a rushed migration to a newer AMI like Amazon Linux 2023 (AL2023).

(The following table shows the EKS support lifecycle as of July 2025. For the latest dates, always consult the official Amazon EKS documentation.)

Kubernetes Version   EKS Release Date     End of Standard Support
1.33                 May 28, 2025         July 29, 2026
1.32                 January 25, 2025     March 23, 2026
1.31                 September 26, 2024   November 26, 2025
1.30                 May 23, 2024         July 23, 2025

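The runway implied by these dates can be computed directly. A minimal sketch, assuming GNU `date` (Linux); the example dates are taken from the table above:

```shell
#!/bin/sh
# Minimal sketch: days remaining until a lifecycle date (assumes GNU date).
days_until() {
    # $1: target date, e.g. 2026-03-23; $2: optional reference date (default: today, UTC).
    ref=${2:-$(date -u +%Y-%m-%d)}
    echo $(( ( $(date -ud "$1" +%s) - $(date -ud "$ref" +%s) ) / 86400 ))
}

# Example: runway from the 1.32 release date to its end of standard support.
# days_until 2026-03-23 2025-01-25   -> 422
```

Run it against today's date (`days_until 2026-03-23`) to see how long you have before 1.32 leaves standard support.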

Our Three-Phase Blueprint for a Safe EKS Upgrade

An EKS upgrade involves several moving parts—the control plane, managed add-ons, and worker node groups. To manage this complexity, we recommend a three-phase approach.

Phase 1: Reconnaissance and De-Risking

This planning phase is the most critical for preventing surprises.

  1. Check the Lifecycle: Confirm the end-of-life dates for EKS 1.32 and your current node group AMIs.
  2. Inventory Your Cluster: Document all components: node groups, instance types, IAM roles, and all managed add-ons with their current versions.
  3. Verify Application Compatibility (Especially StatefulSets): For every stateful application (e.g., Kafka, MongoDB, Elasticsearch), you must verify its compatibility with Kubernetes 1.33. Ask: Is our Helm chart or operator compatible? Are there known issues with persistent volumes? Have we tested backups on a staging cluster?
  4. Scan for Deprecated APIs: Before upgrading, use tools like pluto or kubent to scan your cluster’s deployments and Helm charts for any APIs that have been removed in v1.33. This proactive check can prevent major upgrade failures.
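
The scan in step 4 can be sketched as follows. pluto and kubent are the third-party scanners named above; the pluto/kubent lines are skipped if the tools are not installed, and the grep fallback only flags apiVersion strings you supply — no authoritative list of 1.33 removals is implied:

```shell
#!/bin/sh
# Sketch: pre-upgrade scan for deprecated/removed APIs.
# Preferred: dedicated scanners, if installed (skipped otherwise).
command -v pluto >/dev/null && pluto detect-helm --target-versions k8s=v1.33.0 || true
command -v kubent >/dev/null && kubent --target-version 1.33.0 || true

# Fallback: grep local manifests for apiVersion strings you consider suspect.
# The caller supplies the list of group/versions to look for.
scan_manifests() {
    dir=$1; shift
    for api in "$@"; do
        grep -rn "apiVersion: $api" "$dir" || true
    done
}
```

For example, `scan_manifests ./k8s-manifests "flowcontrol.apiserver.k8s.io/v1beta3"` (directory and group/version are illustrative) prints every file and line still using that apiVersion.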

Phase 2: Staged Execution with the Blue/Green Method

Never perform an “in-place” upgrade on production nodes. The safest method is to create new infrastructure and migrate workloads gracefully.

  1. Upgrade the Add-ons First: Before touching the control plane, upgrade your managed add-ons—CoreDNS, kube-proxy, and VPC CNI—to versions compatible with both 1.32 and 1.33.
  2. Upgrade the Control Plane: Initiate the control plane upgrade to 1.33 via the AWS Console, CLI, or Terraform. This is handled by AWS without impacting your worker nodes.
  3. Create New Node Groups: Provision a brand new set of worker nodes using the 1.33 EKS-optimised AMI (e.g., Amazon Linux 2023). This is the “Green” part of your new infrastructure.
  4. Safely Drain and Migrate Workloads: Cordon the old (“Blue”) 1.32 nodes to prevent new pods from being scheduled on them. Then drain the old nodes one at a time; PodDisruptionBudgets (PDBs) cap how many replicas of a stateful application can be evicted at once, so pods terminate gracefully and are rescheduled onto the new “Green” nodes without losing quorum.
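
The cordon-and-drain steps above can be sketched as a small script. `kubectl cordon` and `kubectl drain` are the real commands; the node label and the `KUBECTL` override are assumptions for illustration and dry runs:

```shell
#!/bin/sh
# Sketch of the blue/green node migration. The node label "group=blue-132"
# is illustrative; KUBECTL can be overridden (e.g. pointed at a staging
# context, or replaced with a stub for a dry run).
KUBECTL=${KUBECTL:-kubectl}

# The control-plane upgrade (step 2) is typically a single call, e.g.:
#   aws eks update-cluster-version --name <cluster> --kubernetes-version 1.33

migrate_off_blue() {
    label_selector=$1
    # 1. Cordon every old node so no new pods are scheduled onto it.
    for node in $($KUBECTL get nodes -l "$label_selector" -o name); do
        $KUBECTL cordon "$node"
    done
    # 2. Drain each node; eviction pace is governed by your PodDisruptionBudgets.
    for node in $($KUBECTL get nodes -l "$label_selector" -o name); do
        $KUBECTL drain "$node" --ignore-daemonsets --delete-emptydir-data
    done
}

# Usage: migrate_off_blue "group=blue-132"
```

Draining node by node, rather than all at once, keeps PDB-protected quorums intact while pods move to the Green nodes.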

Phase 3: Validation and Final Cutover

Once workloads are running on the new node groups, the job isn’t done.

  1. Monitor Everything: Closely watch your dashboards (CloudWatch, Prometheus, Grafana) for any anomalies in CPU, memory, or application-level metrics.
  2. Validate Core Functionality: Test autoscaling, check ingress connectivity, and ensure logging and alerting pipelines are functioning correctly.
  3. Decommission with Confidence: Only after all tests have passed and the new 1.33 environment has remained stable for an agreed soak period should you decommission the old 1.32 node groups.
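
Before step 3, it helps to confirm programmatically that every node reports the target version. A minimal sketch (the `KUBECTL` override is an assumption for testing; the jsonpath reads each node's kubelet version):

```shell
#!/bin/sh
# Sketch: confirm every node is on the target kubelet version before the
# old node groups are deleted. KUBECTL is overridable for testing.
KUBECTL=${KUBECTL:-kubectl}

all_nodes_on_version() {
    target=$1   # e.g. v1.33
    versions=$($KUBECTL get nodes -o jsonpath='{range .items[*]}{.status.nodeInfo.kubeletVersion}{"\n"}{end}')
    for v in $versions; do
        case $v in
            "$target"*) ;;                        # v1.33.x matches target v1.33
            *) echo "node still on $v" >&2; return 1 ;;
        esac
    done
    return 0
}

# Usage: all_nodes_on_version v1.33 && echo "safe to decommission blue nodes"
```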

Final Thoughts: An Upgrade Is a Strategic Advantage

Approached methodically, the EKS upgrade from 1.32 to 1.33 is not a risk to be feared but a mission-critical practice that enhances your infrastructure’s hygiene, performance, and security. It ensures you can leverage the latest Kubernetes features for cost optimisation and innovation.

By treating the upgrade as a structured project, you can move forward with confidence, knowing your critical applications are safe and your business is ready for the future.
