The Story of Gateway API

Earlier this week, Gateway API v1.0 was released, marking the significant milestone of General Availability. This Kubernetes API represents the future of load balancing, routing, and service mesh configuration. It already has more than 20 implementations, including GKE and Istio. In this post, we’ll take a look back at some of the key moments that led to this point, beginning with the proposal that started it all.

Initial Proposal

The core ideas for this new API were initially proposed by Bowei Du (Software Engineer, Google) at KubeCon San Diego in 2019 as “Ingress v2”, the next generation of the Ingress API for Kubernetes. This proposal came as the shortcomings of the original Ingress API were becoming apparent. The community had started to develop alternative APIs, notably including Istio’s VirtualService API and Contour’s IngressRoute API. We had reached an inflection point where the Kubernetes ecosystem was diverging, and Bowei believed it was important to develop a new standard that would expose all these advanced features in a portable way.

The initial proposal for this API provided a great foundation to build on. Specifically, this proposal focused on a role-oriented model that split capabilities into resources that were aligned with 3 different personas. It emphasized both expressiveness and extensibility as core design principles. This early sketch from that proposal closely resembles the API today:

[Figure: Sketch from the early “Ingress v2” proposal]

One of the key limitations of the Ingress API was that it was designed with the lowest common denominator in mind: every feature included in the API needed to be implemented by everyone. This meant that the API surface was very small, and implementations that wanted to support more advanced features either relied on long lists of implementation-specific annotations or developed new custom APIs.

Bowei proposed that this new API could introduce a concept of “support levels.” This would allow us to add features to the API even if not every implementation could support them; for example, “Extended” features would still be fully portable across the implementations that chose to support them.

[Figure: Diagram showing the proposed support levels]

Evolution of the API

Since that initial proposal, the API has evolved significantly, benefiting from the expertise of many in the community. Gateway API has been referred to as the “most collaborative API in Kubernetes history” due to the hundreds of contributors representing dozens of companies that have helped refine the API over the years.

One of the things that makes this API unique is that it is built on top of Custom Resource Definitions (CRDs). This has meant that Gateway API is developed and released outside of the main Kubernetes project, enabling broader collaboration and shorter feedback loops. For example, each new release of this API supports the 5 most recent versions of Kubernetes, covering the vast majority of clusters in use today. So, instead of waiting until they can upgrade to the latest version of Kubernetes, most users will be able to try out these APIs close to the time they’re released.

As the first official Kubernetes API to take this approach, it has developed several unique concepts along the way:

GEPs

Similar to Kubernetes Enhancement Proposals (KEPs), Gateway Enhancement Proposals provide a streamlined approach for proposing significant new enhancements to Gateway API. As the API grew and attracted more contributors, it became critical to have a better way to document key design decisions. The concept of GEPs was initially proposed by Bowei in 2021.

More than 30 of these have already merged, with many more in progress right now. This pattern has been invaluable in keeping track of when and why key design decisions were made. All key parts of the API now have GEPs documenting when and why they were proposed, along with alternatives considered.

Release Channels

In 2021 we proposed a simplified approach to versioning that would introduce the concept of release channels, our own version of Kubernetes’ “feature gates”, which denote the stability of individual fields and features.

All new resources, fields, and features start in the “Experimental” release channel. As the name implies, this channel provides no stability guarantees and can include breaking changes to enable us to iterate more quickly on APIs.

As these experimental APIs stabilize, individual resources, fields, and features can graduate to the “Standard” release channel when they meet predefined graduation criteria. Together, these two channels let us provide a stable and predictable API through the “Standard” release channel while still iterating on new concepts in the “Experimental” release channel.
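
As one concrete touchpoint, the published CRD manifests record which channel they were generated from. The trimmed sketch below illustrates the idea; the annotation names and values shown here are illustrative rather than authoritative, and the full schema is omitted:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: httproutes.gateway.networking.k8s.io
  annotations:
    # Which release channel this CRD was built from ("standard" or "experimental").
    gateway.networking.k8s.io/channel: standard
    # The Gateway API bundle this CRD shipped with (illustrative value).
    gateway.networking.k8s.io/bundle-version: v1.0.0
spec:
  group: gateway.networking.k8s.io
  names:
    kind: HTTPRoute
    plural: httproutes
  scope: Namespaced
  # versions and schemas omitted from this sketch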

Conformance Tests

We added the first conformance tests in 2022, before this API reached beta, and since then these tests have become a key part of every new feature in Gateway API, ensuring that implementations truly provide a portable experience. Before a feature can graduate to the “Standard” release channel, thorough conformance tests need to be developed, and multiple implementations need to pass them.

Service Mesh Support

Earlier this year, mesh support launched in its “Experimental” version, marking the first time a Kubernetes API has officially supported the concept of Service Mesh. In 2022, key Service Mesh projects came together to form the GAMMA initiative (Gateway API for Mesh Management and Administration). The core idea was that Gateway API is sufficiently modular that its routing and policy layers can be used for both mesh and ingress use cases.
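
To illustrate that modularity, a mesh-scoped HTTPRoute attaches to a Service through its parentRefs instead of to a Gateway. The sketch below assumes hypothetical “checkout” and “checkout-canary” Services, and mesh support itself is still “Experimental”:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout-mesh-split
spec:
  parentRefs:
  - group: ""        # the core API group
    kind: Service    # attach to a Service rather than a Gateway
    name: checkout
  rules:
  - backendRefs:
    - name: checkout           # most mesh traffic stays on the stable Service
      port: 8080
      weight: 9
    - name: checkout-canary    # a small share goes to the canary Service
      port: 8080
      weight: 1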

Trying it Out

Gateway API enables great new features on GKE, such as advanced multi-cluster routing. Yesterday GKE announced GA support for multi-cluster Gateways. In the coming weeks, GKE will also be rolling out the v1.0 CRDs for all customers that have enabled Gateway API in their clusters. In the meantime, you can access all of the same features with the v1beta1 CRDs already supported by GKE. For more information on how to get started with the Gateway API on GKE, refer to the GKE Gateway documentation.

If you’re interested in Gateway API’s support for Service Mesh, you can try it out with Anthos Service Mesh.

Alternatively, if you’d like to use this API with another implementation, refer to the open source project’s Getting Started documentation.

By Rob Scott – GKE Networking

Gateway API Graduates to Beta

For many years, Kubernetes users have wanted more advanced routing capabilities to be configurable in a Kubernetes API. With Google’s leadership, Gateway API has been developed to dramatically increase the number of features available. This API enables many new capabilities in Kubernetes, including traffic splitting, header modification, and forwarding traffic to backends in different namespaces.

Since the project was originally proposed, Googlers have helped lead the open source efforts. Two of the top contributors to the project are from Google, and more than 10 engineers from Google have contributed to the API.

This week, the API has graduated from alpha to beta. This marks a significant milestone for the API and reflects its new-found stability. There are now over a dozen implementations of the API, and many are passing a comprehensive set of conformance tests. These tests ensure that users will have a consistent experience when using this API, regardless of environment or underlying implementation.

A Simple Example

To highlight some of the new features this API enables, it may help to walk through an example. We’ll start with a Gateway:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: store-xlb
spec:
  gatewayClassName: gke-l7-gxlb
  listeners:
  - name: http
    protocol: HTTP
    port: 80

This Gateway uses the gke-l7-gxlb GatewayClass, which means a new external load balancer will be provisioned to serve this Gateway. Of course, we still need to tell the load balancer where to send traffic. We’ll use an HTTPRoute for this:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store
spec:
  parentRefs:
  - name: store-xlb
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: store-svc
      port: 3080
      weight: 9
    - name: store-canary-svc
      port: 3080
      weight: 1

This simple HTTPRoute tells the load balancer to route traffic to either the “store-svc” or “store-canary-svc” Service on port 3080. We’re using weights to do some basic traffic splitting here: with weights of 9 and 1, approximately 10% of requests will be routed to our canary Service.

Now, imagine that you want to provide a way for users to opt in or out of the canary service. To do that, we’ll add an additional HTTPRoute with some header matching configuration:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-canary-option
spec:
  parentRefs:
  - name: store-xlb
  rules:
  - matches:
    - headers:
      - name: env
        value: stable
    backendRefs:
    - name: store-svc
      port: 3080
  - matches:
    - headers:
      - name: env
        value: canary
    backendRefs:
    - name: store-canary-svc
      port: 3080

This HTTPRoute works in conjunction with the first route we created. If a request sets the env header to “stable” or “canary”, it will be routed directly to its preferred backend; requests without that header continue to follow the 90/10 weighted split from the first route.
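
The routes above only match on headers, but rules can also modify them. For example, the canary rule could be extended with a RequestHeaderModifier filter so that requests reaching the canary backend carry an extra header. The route name and the x-env header in this sketch are hypothetical:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-canary-stamp
spec:
  parentRefs:
  - name: store-xlb
  rules:
  - matches:
    - headers:
      - name: env
        value: canary
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: x-env      # hypothetical header added to matching requests
          value: canary
    backendRefs:
    - name: store-canary-svc
      port: 3080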

Getting Started

Unlike previous Kubernetes APIs, you don’t need to have the latest version of Kubernetes to use this API. Instead, this API is built with Custom Resource Definitions (CRDs) that can be installed in any Kubernetes cluster, as long as it is version 1.16 or newer (released almost 3 years ago).

To try this API on GKE, refer to the GKE specific installation instructions. Alternatively, if you’d like to use this API with another implementation, refer to the OSS getting started page.

What’s next for Gateway API?

As the core capabilities of Gateway API are stabilizing, new features and concepts are actively being explored. Ideas such as Route Delegation and a new GRPCRoute are deep in the design process. A new service mesh workstream has been established specifically to build consensus among mesh implementations for how this API can be used for service-to-service traffic. As with many open source projects, we’re trying to find the right balance between enabling new use cases and achieving API stability. This API has already accomplished a lot, but we’re most excited about what’s ahead.
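
To give a sense of the direction, a GRPCRoute will likely mirror HTTPRoute’s structure. The sketch below is speculative since the design is still in progress; the API version, gRPC service, and backend names are hypothetical:

apiVersion: gateway.networking.k8s.io/v1alpha2   # hypothetical; the resource is still being designed
kind: GRPCRoute
metadata:
  name: store-grpc
spec:
  parentRefs:
  - name: store-xlb
  rules:
  - matches:
    - method:
        service: store.v1.StoreService    # hypothetical gRPC service
        method: GetItem                   # hypothetical gRPC method
    backendRefs:
    - name: store-grpc-svc                # hypothetical backend Service
      port: 9000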


By Rob Scott – GKE Networking