Kubernetes 1.36 Drops April 22: The 3 Changes Every Platform Team Must Act On Now

Kubernetes v1.36 releases April 22 with HPA scale-to-zero enabled by default, the final death of Ingress NGINX, and security hardening that will break complacent clusters. Here's your 30-minute audit checklist.

RankEdge Team
7 min read

Kubernetes v1.36 drops on April 22, 2026 — three weeks away. Most teams will upgrade without reading the changelog. A handful will catch the breaking changes in staging. The rest will find out in production.

This post covers the three changes that will actually matter for your platform team, plus a checklist to run before you upgrade.


Change 1: HPA Scale-to-Zero Is Finally Default-On

The HPAScaleToZero feature gate has been sitting in alpha since Kubernetes v1.16 — nearly seven years of "coming soon." In v1.36, it's enabled by default.

What this means: your Horizontal Pod Autoscaler can now scale workloads down to zero replicas — not just to one. For dev and staging environments that sit idle nights and weekends, the savings are real. Early benchmarks show dev environment costs dropping from ~$450/month to ~$120/month (73% reduction).

What you need to do

Set minReplicas: 0 in your HPA spec to opt in:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 0   # <-- previously had to be 1
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: queue_depth
        target:
          type: Value
          value: "10"

The catch: scaling from zero requires an external trigger. Native HPA still needs a non-zero metric to wake from zero — which means you'll want KEDA for event-driven workloads (SQS depth, Kafka lag, cron schedules). HPA scale-to-zero works best for internal tooling, preview environments, and batch processors.
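For queue-driven workloads, a KEDA ScaledObject is what provides that external wake-up trigger. A minimal sketch — the Deployment name, queue URL, and region here are hypothetical, and target values should match your workload:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: queue-worker          # hypothetical Deployment name
  minReplicaCount: 0            # KEDA handles the zero-to-one transition itself
  maxReplicaCount: 10
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/jobs  # hypothetical
        queueLength: "10"       # target messages per replica
        awsRegion: us-east-1
```

KEDA watches the queue directly, so it can scale from zero even when no pods exist to emit metrics — exactly the gap native HPA leaves open.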

Mandatory readiness probe: Without one, Kubernetes can't tell when a pod is ready to handle traffic after cold start. Add it or you'll route traffic to pods that aren't ready:

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5

Stabilization window: Prevent flapping — set behavior.scaleDown.stabilizationWindowSeconds: 300 in the HPA spec to require five minutes of sustained idleness before scale-down triggers.
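In the autoscaling/v2 API, that fragment of the HPA spec looks like:

```yaml
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # hold 5 minutes of low load before removing replicas
```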

Who benefits most

  • Ephemeral preview environments (one per PR): kill them at night automatically
  • Dev namespaces: idle 14 hours out of 24 on weekdays, all weekend
  • Batch jobs and async workers: zero replicas until the queue fills

Who should leave minReplicas: 1: anything with latency SLAs where cold-start delay would page someone.


Change 2: Ingress NGINX Is Dead — You Need a Migration Plan Today

The Kubernetes community officially announced in November 2025 that Ingress NGINX would receive best-effort maintenance until March 2026. That window is closed. There are no more releases, no security patches, no bug fixes.

If you're still running ingress-nginx in production, you're now running unmaintained software in a security-critical position — it sits in front of all your cluster traffic. Any open CVEs stay open, and new ones will never be patched.

The migration options, in order of recommendation

Option A: Gateway API (recommended for greenfield or teams with bandwidth)

The Kubernetes Gateway API hit GA in October 2023. It's the designed successor to Ingress, with expressive routing, traffic splitting, and header matching built in. Supported by Envoy-based controllers (Istio, Cilium, kgateway).

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app-route
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: my-app-service
          port: 80
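The my-gateway referenced in parentRefs above is a separate Gateway object. A minimal sketch — the gatewayClassName shown is an assumption and depends on which controller you install (Istio, Cilium, kgateway, etc.):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: istio       # assumption: set to your installed controller's class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Same            # only HTTPRoutes in this namespace may attach
```

This split — infrastructure team owns the Gateway, app teams own HTTPRoutes — is the core design improvement over Ingress.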

Option B: Traefik (easiest drop-in for existing Ingress resources)

Traefik supports native NGINX compatibility mode — the closest thing to a zero-annotation migration. For teams that need to move fast and can't rewrite all their Ingress manifests, this is the pragmatic choice.

Option C: F5 NGINX Ingress Controller

F5 maintains an Apache 2.0 licensed NGINX Ingress Controller with a dedicated engineering team. If your organization is standardized on NGINX tooling and doesn't want to retrain on new APIs, this is a maintained path forward.

Option D: Cloud-managed

On EKS: AWS Load Balancer Controller. On GKE: GKE Ingress or GKE Gateway. Less portable, but fully managed and security-patched.
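On EKS, for example, pointing an existing Ingress at the AWS Load Balancer Controller is mostly a class change. A sketch (service name is hypothetical; annotations vary by setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing  # provision a public ALB
    alb.ingress.kubernetes.io/target-type: ip          # route directly to pod IPs
spec:
  ingressClassName: alb         # handled by the AWS Load Balancer Controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service
                port:
                  number: 80
```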

What to do this week

  1. kubectl get ingress -A | grep nginx — inventory every Ingress resource using the nginx class
  2. Check your cert-manager integrations (cert-manager works with Gateway API, but needs config updates)
  3. Pick your replacement and test in a non-production namespace
  4. Don't wait for v1.36 — this migration should have started in Q1
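For the cert-manager check above: with Gateway API, cert-manager issues certificates by watching annotations on the Gateway itself rather than on Ingress objects, and its Gateway API integration must be explicitly enabled. A sketch — the issuer name and hostname are hypothetical:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # hypothetical ClusterIssuer
spec:
  gatewayClassName: istio       # assumption: depends on your controller
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: app.example.com
      tls:
        mode: Terminate
        certificateRefs:
          - name: app-example-com-tls   # cert-manager populates this Secret
```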

Change 3: Security Hardening That Will Surprise Complacent Clusters

gitRepo volume: removed, not deprecated

The gitRepo volume type has been deprecated since v1.11 — roughly eight years of deprecation warnings. In v1.36, it's gone. If any of your manifests still use gitRepo volumes, those pods will be rejected after the upgrade.

Why it mattered: gitRepo volumes cloned a git repository directly into a container at mount time. The implementation ran git as root on the node, creating a trivial path to node-level code execution. It was never safe.

Migration: Use a git-sync init container (or a git-sync sidecar if you need continuous sync), or bake the repo contents into a container image at build time:

initContainers:
  - name: git-sync
    image: registry.k8s.io/git-sync/git-sync:v4.2.0
    args:
      - --repo=https://github.com/my-org/my-config
      - --root=/repo        # git-sync v4 requires an explicit root directory
      - --depth=1
      - --one-time
    volumeMounts:
      - name: config-volume
        mountPath: /repo
volumes:
  - name: config-volume
    emptyDir: {}            # shared with the main container via the same mount

externalIPs on Services: deprecated, with a timeline

The spec.externalIPs field on Service objects is deprecated in v1.36, with removal planned for v1.43. This field has been a known MitM attack vector (CVE-2020-8554) — it lets any cluster user route arbitrary external IPs to a service, potentially hijacking cluster traffic.

You won't break on v1.36, but you will start seeing deprecation warnings. Use this as your prompt to migrate:

  • LoadBalancer services for cloud-managed external ingress
  • NodePort for simple port exposure
  • Gateway API for production traffic routing

Ephemeral image pull tokens (graduating)

Rather than storing static imagePullSecrets (long-lived credentials that rotate badly), v1.36 continues advancing the move to ephemeral Kubernetes Service Account tokens for authenticating image pulls. Short-lived, pod-scoped, auto-rotating. Less blast radius when a secret leaks.

Platform teams running private registries should plan to adopt this pattern — it will eventually become the default.


Your Pre-Upgrade Checklist

Run this before upgrading to v1.36:

Breaking changes (will fail):

  • grep -r "gitRepo" your-manifests/ — remove any gitRepo volumes
  • Inventory and migrate off ingress-nginx if still in use

Deprecation warnings (won't fail yet, but need a plan):

  • kubectl get svc -A -o json | jq '.items[] | select(.spec.externalIPs != null) | .metadata' — find services using externalIPs
  • Schedule Ingress → Gateway API migration if not already started

Optimization opportunities:

  • Identify dev/staging namespaces where minReplicas: 0 would save cost
  • Add readiness probes to any deployments that lack them (required for scale-to-zero)
  • Review image pull secrets for transition to ephemeral SA tokens

Networking:

  • If using SELinux-enforcing nodes, test pod startup times post-upgrade (SELinux fast labeling is now GA and default — generally faster, but worth validating your specific volumes)

The Bigger Pattern

Each Kubernetes release follows the same arc: alpha flags that sat for years finally graduate to default-on. Deprecated APIs that nobody cleaned up get removed. Security debt from the 2015–2018 era gets paid down.

v1.36 is a "quiet significance" release — no flashy headline features, but HPA scale-to-zero and Ingress NGINX's end-of-life together represent years of accumulated decisions finally landing at the same time.

The teams that stay ahead of this aren't doing anything heroic. They have a runbook, they run the deprecation checks before each upgrade, and they track the CHANGELOG. That's it.


How We Help Platform Teams

At RankEdge, we work primarily with content-heavy B2B SaaS teams on SEO and GEO — but the platform engineering problems we describe above are the same ones our clients' engineering teams face. We've seen what happens when content infrastructure (CDN configs, redirect chains, Core Web Vitals) gets the same "we'll fix it eventually" treatment as deprecated Kubernetes APIs.

If your engineering org is also thinking about content infrastructure and SEO — how your site's technical foundation affects AI citation rates and search visibility — that's exactly what we help with.


Kubernetes v1.36 releases April 22, 2026. The official sneak peek and release information page are the authoritative sources for what ships.
