
Blog

The latest posts and insights from Ambassador Labs: our products, our ecosystem, and voices from across our community.

Traffic Shadowing and Dark Launch

API Gateway

API Gateway vs Service Mesh - Guide

We explain the difference between a service mesh and an API gateway and help you understand which tool you should be using. An API gateway manages traffic coming into your cluster; we call this north-south traffic. A service mesh manages traffic between services within your cluster; we call this east-west traffic. One source of confusion between API gateways and service meshes is that they share some overlapping functionality when you build cloud native applications: for example, you need common semantics around resilience and common functionality around observability.

October 21, 2019 | 1 min read

Kubernetes

Scalability is a great reason to move to Kubernetes, but it’s far from the only one.

When you think of companies that might use Kubernetes, some big names probably come to mind. Kubernetes has become synonymous with scale, and rightfully so. However, scale is just one benefit of running on Kubernetes with Ambassador Edge Stack, and knowing the other advantages well before you need to handle hundreds of requests per second pays off. Running Edge Stack API Gateway on Kubernetes also brings a set of modern, powerful tools into focus, creating an environment that is built to integrate. In this short piece, we explain these benefits through the lens of a smaller company: one whose teams aren't huge yet, but that has established procedures for building and shipping releases and fixes, and a strong incentive to adopt (and stick with) CI/CD workflows.

October 14, 2019 | 6 min read

Kubernetes API Gateway

Benchmarking Envoy Proxy, HAProxy, and NGINX Performance on Kubernetes

Measuring proxy latency in an elastic environment: in a typical Kubernetes deployment, all traffic to Kubernetes services flows through an ingress, which proxies traffic from the Internet to the backend services. As such, the ingress is on your critical path for performance. There are many ways to benchmark and measure performance. Perhaps the most common is raw throughput: increasing amounts of traffic are sent through the proxy, and the maximum amount of traffic the proxy can process is measured, typically reported in requests per second (RPS).
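The full post benchmarks Envoy Proxy, HAProxy, and NGINX with dedicated load-generation tooling; purely to illustrate what a raw-throughput measurement looks like, here is a minimal Python sketch that drives a placeholder endpoint at increasing concurrency levels and reports RPS (the URL and request counts are made-up values, not figures from the benchmark):

```python
# Minimal raw-throughput sketch: send a fixed batch of requests at several
# concurrency levels and report requests per second (RPS) for each level.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://ingress.example.com/backend/"  # hypothetical ingress endpoint
REQUESTS_PER_LEVEL = 500

def fetch(url: str) -> int:
    # One request through the proxy; a short timeout keeps stalls from skewing totals.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status

for concurrency in (10, 50, 100, 200):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(fetch, [TARGET_URL] * REQUESTS_PER_LEVEL))
    elapsed = time.perf_counter() - start
    print(f"concurrency={concurrency:4d}  RPS={REQUESTS_PER_LEVEL / elapsed:8.1f}")
```

In practice a purpose-built load generator gives far more reliable numbers than a simple client like this, which becomes its own bottleneck at higher rates.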

October 1, 2019 | 8 min read

Kubernetes, API Gateway

4 Strategies for Incremental Migration from VMs to Kubernetes using an API Gateway

Strategies for planning and implementing a migration from virtual machines to a cloud-native platform An increasing number of organizations are migrating from a datacenter composed of virtual machines (VMs) to a “next-generation” cloud-native platform that is built around container technologies like Docker and Kubernetes. However, due to the inherent complexity of this move, a migration doesn’t happen overnight. Instead, an organization will typically be running a hybrid multi-infrastructure and multi-platform environment in which applications span both VMs and containers. Beginning a migration at the edge of a system, using functionality provided by a cloud-native API gateway, and working inwards towards the application opens up several strategies to minimize the pain and risk. In a recently published article series on the Ambassador Labs blog, four strategies related to the planning and implementation of such a migration were presented: deploying a multi-platform service discovery system that is capable of routing effectively within a highly dynamic environment; adapting your continuous delivery pipeline to take advantage of best practices and avoid pitfalls with network complexity; using traffic shifting to facilitate an incremental and safe migration; and securing your infrastructure with encryption and network segmentation for all traffic, from end user to service.
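The traffic shifting strategy is typically configured declaratively at the API gateway (for example, with weighted routes), but the underlying idea is just a weighted split between the old and new backends. As a rough Python sketch of that idea (the backend hostnames and weights are placeholders, not values from the article):

```python
# Illustrative weighted traffic split between a legacy VM backend and a new
# Kubernetes service. Raising the weight gradually (1% -> 10% -> 50% -> 100%)
# gives an incremental, reversible migration path.
import random

LEGACY_VM_BACKEND = "http://legacy-app.vm.internal:8080"          # placeholder
KUBERNETES_BACKEND = "http://app.default.svc.cluster.local:8080"  # placeholder

def choose_backend(canary_weight: float) -> str:
    """Pick the backend for one request; canary_weight is the fraction
    (0.0-1.0) of traffic shifted to the Kubernetes service."""
    return KUBERNETES_BACKEND if random.random() < canary_weight else LEGACY_VM_BACKEND

# Example: shift 10% of traffic to the new platform and sanity-check the split.
counts = {LEGACY_VM_BACKEND: 0, KUBERNETES_BACKEND: 0}
for _ in range(10_000):
    counts[choose_backend(0.10)] += 1
print(counts)
```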

September 11, 2019 | 9 min read

Kubernetes

Explore Kubernetes Advantages with Edge Stack Beyond Scalability

When considering a platform that can handle vast amounts of traffic, Kubernetes often comes to mind, and rightly so: Kubernetes is the right tool for the job. However, there are compelling reasons to consider Kubernetes even if scaling isn't a current concern. Adopting Kubernetes with Edge Stack API Gateway offers numerous benefits that can make your workflow faster, safer, and more automated. Here are some key advantages:

May 22, 2019 | 3 min read

Resilience for distributed systems

A cloud native application architecture composed of a number of microservices working together forms a distributed system. Ensuring that the distributed system is available, that is, reducing its downtime, requires increasing the system's resilience. Resilience is the use of strategies that improve availability. Examples of resilience strategies include load balancing, timeouts and automatic retries, deadlines, and circuit breakers. Resilience can be added to a distributed system in more than one way: for example, each microservice's code can call libraries that provide resilience functions, or dedicated network proxies can handle microservice requests and replies. The ultimate goal of resilience is to ensure that failures or degradations of particular microservice instances don't cascade into downtime for the entire distributed system. In the context of a distributed system, resilience means the system can automatically adapt when adverse situations occur so that it continues to serve its purpose.
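In practice these strategies usually come from a library or a proxy, but a small, self-contained Python sketch helps make two of them concrete: automatic retries with timeouts, and a basic circuit breaker. The thresholds, class names, and URL handling below are illustrative assumptions, not any particular library's API.

```python
# Teaching sketch of two resilience strategies: per-request timeouts with
# automatic retries, and a minimal circuit breaker that stops calling a
# failing service for a cool-down period. Not a production library.
import time
import urllib.request

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_after: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # time the circuit opened, or None while closed

    def allow(self) -> bool:
        # While open, reject calls until the cool-down period has elapsed.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return False
            self.opened_at, self.failures = None, 0  # half-open: allow a retry
        return True

    def record(self, success: bool) -> None:
        self.failures = 0 if success else self.failures + 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()

def call_with_retries(url: str, breaker: CircuitBreaker,
                      retries: int = 3, timeout: float = 2.0) -> bytes:
    """Call url with a per-request timeout, retrying with exponential backoff."""
    for attempt in range(retries):
        if not breaker.allow():
            raise RuntimeError(f"circuit open, skipping call to {url}")
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                breaker.record(success=True)
                return resp.read()
        except OSError:  # URLError and socket timeouts are OSError subclasses
            breaker.record(success=False)
            time.sleep(0.1 * 2 ** attempt)  # backoff between retries
    raise RuntimeError(f"all {retries} attempts to {url} failed")
```

Centralizing this logic in a proxy instead avoids re-implementing it in every service, which is the library-versus-proxy trade-off the excerpt above alludes to.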

April 23, 2019 | 18 min read