The Great Kubernetes Ingress Migration

Ari Yonaty

If you haven’t been on Reddit in the past few months, you may have missed the latest bombshell: ingress-nginx is headed for retirement in March 2026.

For years, ingress-nginx was the go-to Ingress implementation. You didn’t even need to think about it; it was the “default” for many, and countless open-source Helm charts supported it out of the box. But with the countdown ticking, Reddit is ablaze. Engineers who haven’t looked at a networking manifest in four years are waking up to discover this “new” thing called the Gateway API.

The suggestions are everywhere: Cilium is touted as the eBPF savior; kgateway is the shiny new tool; NGINX Gateway Fabric is the choice for NGINX purists; and Traefik remains a familiar favorite for those coming from the Docker world.

It’s absolute chaos. First, a reality check: the Gateway API is not new – v1.0 was released back in October 2023. Additionally, while the Kubernetes project has frozen the Ingress API, it still works. There are likely many legacy applications that will continue to leverage the API for years to come.

That said, my recommendation is to either dive directly into the Gateway API or choose a controller that supports it. Which brings me to my favorite: Old Faithful Istio.

Istio: Not as Scary as You Think

When most people hear “Istio,” they immediately think “service mesh,” “overly complex,” or “sidecar hell.” While there is a sliver of truth to those reputations, bear with me. Istio was one of the first to natively adopt the Gateway API and is one of the most battle-tested implementations in existence.

With Istio, you don’t need the whole service mesh: you get the power of the Envoy proxy without having to inject a sidecar into every pod in your cluster.

Enough Talk, Let’s See Some YAML

To start, install the Gateway API CRDs in your cluster. As of early 2026, the latest release is v1.4.1; here we use the experimental channel, which bundles the alpha resources alongside the standard ones:

```bash
kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.1/experimental-install.yaml
```

Once installed, verify the CRDs are listed:

```bash
$ kubectl get crd | grep gateway.networking.k8s.io
gateways.gateway.networking.k8s.io
httproutes.gateway.networking.k8s.io
```

Next, we’ll install Istio using the official Helm charts. First, install the istio-base chart to set up the foundation:

```bash
helm install istio-base base \
    --repo https://istio-release.storage.googleapis.com/charts \
    --namespace istio-system --create-namespace \
    --set defaultRevision=default
```

Then, install the istiod chart, which provides service discovery and configuration:

```bash
helm install istiod istiod \
    --repo https://istio-release.storage.googleapis.com/charts \
    --namespace istio-system
```

Verify the installations:

```bash
$ helm ls -n istio-system
NAME       NAMESPACE    REVISION UPDATED                                 STATUS   CHART         APP VERSION
istio-base istio-system 1        2024-04-17 22:14:45.964722028 +0000 UTC deployed base-1.28.2   1.28.2
istiod     istio-system 1        2024-04-17 22:14:45.964722028 +0000 UTC deployed istiod-1.28.2 1.28.2
```

Create the Kubernetes Gateway

Now, let’s create the actual Gateway. Note that by default, Istio creates a LoadBalancer service. I use the networking.istio.io/service-type annotation to swap to ClusterIP because I typically manage my external LoadBalancer (like an AWS ALB) via Terraform. This allows for cleaner blue/green upgrades at the infrastructure layer.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-gateway
  namespace: istio-system
  annotations:
    networking.istio.io/service-type: ClusterIP
spec:
  gatewayClassName: istio
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All
```
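Once applied, you can check that Istio has programmed the Gateway and provisioned its Envoy deployment. A quick sketch, assuming the names from the manifest above (Istio labels the generated resources with `gateway.networking.k8s.io/gateway-name`):

```shell
# Wait until the Gateway reports the Programmed condition
kubectl wait --for=condition=Programmed gateway/istio-gateway \
    -n istio-system --timeout=120s

# Inspect the auto-provisioned Deployment and (ClusterIP) Service
kubectl get deploy,svc -n istio-system \
    -l gateway.networking.k8s.io/gateway-name=istio-gateway
```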

Note: The default behavior deploys a single Envoy pod. To avoid a single point of failure during node upgrades, remember to configure Pod Disruption Budgets and the Horizontal Pod Autoscaler.
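The note above can be sketched as manifests. This assumes Istio’s auto-provisioned Deployment is named istio-gateway-istio (Istio typically appends `-istio` to the Gateway name) and labels its pods with `gateway.networking.k8s.io/gateway-name`; verify the actual names in your cluster before applying:

```yaml
# Assumption: the generated Deployment is named "<gateway-name>-istio"
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: istio-gateway
  namespace: istio-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-gateway-istio
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
---
# Keep at least one Envoy pod available during node drains
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: istio-gateway
  namespace: istio-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      gateway.networking.k8s.io/gateway-name: istio-gateway
```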

Routing Traffic with HTTPRoute

Finally, we need an HTTPRoute to tell the Gateway where to send traffic:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-server
spec:
  hostnames:
    - echo.example.com
  parentRefs:
    - name: istio-gateway
      namespace: istio-system
  rules:
    - backendRefs:
        - name: echo-server
          port: 8888
```
With DNS for echo.example.com pointing at your load balancer, test it:

```bash
$ curl http://echo.example.com
<response>
```

Just like that, you have a robust Gateway API implementation running on Istio. From here, you can leverage Istio-specific features like RequestAuthentication and AuthorizationPolicy to further harden your network.
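As a taste of that hardening, here is a minimal AuthorizationPolicy sketch attached to the Gateway via `targetRefs` (the policy name is hypothetical; this allows only read methods through the gateway, so adjust the rules for your workloads):

```yaml
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: gateway-read-only
  namespace: istio-system
spec:
  # Attach the policy to the Gateway created earlier
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: istio-gateway
  action: ALLOW
  rules:
    - to:
        - operation:
            methods: ["GET", "HEAD"]
```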

Don’t let the Reddit chaos stress you out. Move to the Gateway API, lean on Istio, and keep your clusters happy.

As always, if you have questions, feel free to reach out on LinkedIn. Happy tinkering!

Stay Tuned

Practical insights and strategies from the world of DevOps and Platform Engineering.