Kubernetes: Now, With Less Complexity

Make your ADC work for Kubernetes.

Key Takeaway

“Kubernetes is a container orchestration engine composed of several services that handle everything from deployments and system architecture to load balancing, service discovery, and access control. Kubernetes automates much of what you would otherwise spend significant time setting up manually. However, to set up a Kubernetes cluster, an administrator has to coordinate and configure several subsystems, validating each one to make sure it works correctly. Some companies use a centralized application delivery controller (ADC) rather than relying on all the services built directly into Kubernetes. An ADC can help alleviate the burden of this process by providing load-balancing functionality out of the box. In addition, an access-management microservice can handle tasks such as user management and authorization, further reducing deployment complexity while providing touchpoints through which administrators can manage the rest of the Kubernetes cluster.”

Kubernetes is as popular as tulips. It’s a powerful container orchestration engine that has grown enormously in popularity among enterprise users of all kinds, from startups to Fortune 500 companies. The latest figures on the growth of Kubernetes released by the Cloud Native Computing Foundation (CNCF), the organization that maintains Kubernetes, report that 96% of respondent organizations are either running Kubernetes or considering it for future projects.

Yet using Kubernetes is not all rainbows and sunshine. It’s a complicated technology with a lot of moving parts. At the “hello-world” level, adoption might seem straightforward, but it is a winding path with many potential issues.

K8s DNS – Does it have a troubling history?

An essential part of Kubernetes is its internal DNS server, which implements service discovery for the cluster. The internal DNS server is responsible for a lot, and it has had some hurdles along the way. For example, Kubernetes originally shipped with Kube-DNS as its DNS server, and as it turns out, dnsmasq, a core component of Kube-DNS, was single-threaded, so it could only use one CPU core. This caused significant issues when Kube-DNS was serving a large number of services or pods.

In some cases, performance issues within a container orchestration solution are little more than an annoyance you learn to live with. For other companies, DNS-related performance problems like the one described above cost real time and money to solve, which is why Kubernetes 1.14 made CoreDNS the default DNS provider, replacing Kube-DNS.

DNS problems have many causes: everything from poorly configured DNS servers to insufficient hardware to problems with DNS itself. But bear in mind that the internal DNS server is only one component in the Kubernetes control plane. Issues with the network and with load balancing are two other potential triggers.
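Service discovery in Kubernetes ultimately rests on predictable DNS names: every Service gets a record of the form `<service>.<namespace>.svc.<cluster-domain>` that the cluster DNS server (CoreDNS today) resolves. A minimal sketch of that naming convention (the `cluster.local` domain is the common default, but it is configurable per cluster):

```python
def service_fqdn(service: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    """Build the in-cluster DNS name that the cluster DNS resolves for a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# A pod can reach the "my-api" Service in the "prod" namespace at this name:
print(service_fqdn("my-api", "prod"))  # my-api.prod.svc.cluster.local
```

Any component that depends on this resolution path, whether an application pod or an external ADC integrating with the cluster, inherits the health of the DNS server behind it.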

OUT-OF-BOX – Eliminates complexity.

The reality is that effective K8s load balancing is not possible without additional configuration.

A Kubernetes Service exposes a set of pods within a cluster behind a single, stable address. It uses a load balancer to control the traffic to those pods, making sure all requests are spread across the backing pods.
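Conceptually, the load-balancing step is little more than spreading each incoming request across the pod endpoints backing the Service. A toy round-robin sketch (the pod names are invented for illustration; real kube-proxy behavior is more involved):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of a Service spreading requests across its backing pods."""
    def __init__(self, endpoints):
        # Rotate endlessly through the pod endpoints.
        self._endpoints = cycle(endpoints)

    def route(self, request):
        # Each request goes to the next pod in the rotation.
        return next(self._endpoints)

lb = RoundRobinBalancer(["pod-a", "pod-b", "pod-c"])
print([lb.route(f"req-{i}") for i in range(4)])  # ['pod-a', 'pod-b', 'pod-c', 'pod-a']
```

The hard part in production is everything around this loop: keeping the endpoint list current as pods come and go, health-checking, and terminating TLS, which is where the additional configuration comes in.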

It’s not just about the pods! When load balancing, there are a lot of other moving parts involved, like DNS, service meshes, and the like. There are also fundamental security concerns to consider; the most common include DDoS attacks, malicious data injection, and the need for firewall protection. Kubernetes doesn’t have a “Prevent DDoS” switch you can flick. It also doesn’t have one for “Beware of Bad Data.”

I want to talk about a couple of the pitfalls of “Easy Street,” but first, a heads-up: these hazards can be addressed, but doing so requires expertise and a lot of work. Some companies have the necessary expertise on staff; many don’t, so they hire third-party providers, who come with their own set of problems. At first glance it looks like a nice easy stroll down Easy Street, until you realize that it isn’t.

DNS and load balancing are just two of the moving parts of the control plane that can create showstopper issues. And that’s before you venture past the control plane to the worker nodes, where there are even more moving pieces in play, container management among them.

K8s is friendly to container runtimes, but there is a catch.

Kubernetes creates no containers by itself. Instead, it relies on a container runtime to create, monitor, and destroy containers as needed. Kubernetes was designed to be implementation-agnostic, giving enterprises the freedom to use the technology of their choice: they can go with Docker, the typical container runtime, or another runtime such as containerd or frakti. It’s up to the enterprise.

Now imagine what happens when a company that has standardized on Docker gets the order to switch to containerd (and there can be many reasons for such a switch). It’s a sure-fire way to cause difficulties, yet it does happen.

We have observed that, in the real world, it becomes harder and harder to get even simple things to work when you mix and match technologies in an agnostic framework. It’s a double-edged sword. What should you do? Abandon Kubernetes altogether and implement container orchestration using Docker Swarm or Mesosphere Marathon? That would be going back to square one: trying to control technology with many moving parts and countless potential points of failure.

Luckily, there are ways to avoid these pitfalls. For example, companies can use an ADC, an application delivery controller with built-in automation that manages clusters for you.

Key Industry Drivers:

  • Enterprises adopting cloud-native architecture
  • The shift of workloads across two or more public clouds
  • Increased operational complexity and total cost of ownership (TCO)

Public Cloud Adoption for Enterprises

Figure: Worldwide Public Cloud Services End-User Spending Forecast (Millions of U.S. Dollars). Source: Gartner.

As applications move closer to the ‘edge,’ so does the underlying infrastructure complexity: managing multiple environments, manual processes, lack of elasticity, and resource optimization. The ability to deploy and manage applications when and where they are needed will be a differentiator.

Before we delve into ‘why K8s’, let’s understand the problem at hand.

Before containers, application deployments were monolithic in nature, often including a series of library and binary file dependencies that had to be managed on each machine. Deploying new features, updating code, and streamlining DevOps flows became increasingly hard as applications and infrastructure grew in size and complexity.

The challenge was not limited to accelerating application delivery. It also entailed tying legacy IT to a new breed of solutions to make everything work seamlessly, including keeping traditional applications that are critical to the organization running.

With a microservices-based approach, applications are broken down into small, modular entities, isolated from each other. Together with containers, they allow applications to be abstracted from the environment in which they run, so applications can be deployed with ease and consistency across any environment: private datacenter, public cloud, or the edge. Kubernetes (K8s) is an open-source platform that orchestrates and automates container operations for deploying, scaling, and managing containerized applications.

Benefits of Managed Kubernetes:

  • Faster hybrid cloud application deployment
  • Better portability and flexibility
  • Focus on the business (applications rather than infrastructure complexity)
  • Reduced operations overhead and improved OPEX

SIMPLICITY – Enables speed.

An Application Delivery Controller (ADC) reduces the complexity of applying Kubernetes to your infrastructure.

In short, an application delivery controller (ADC) is a technology that helps segment your applications within the network and ensures their safety and efficiency in terms of load optimization, service discovery, and access security.

How we address complexity depends on the types of workloads we run on our clusters, which matters for anyone running Kubernetes at scale. So, what does an ADC have to do with addressing the complexities and problems of running Kubernetes in production?

Simplicity through bundling of features.

An ADC bundles many OSI layer 3–7 services that support load balancing, with features like IP traffic optimization, SSL offload, traffic chaining/steering, DNS system/service discovery, and proxy/reverse-proxy services. ADCs also offer more advanced features such as content redirection and server health monitoring.
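To make “content redirection and server health monitoring” concrete, here is a deliberately simplified sketch of how an ADC might steer a request by path prefix while skipping backends its health probes have marked down (the routing table and backend names are invented for illustration):

```python
# Hypothetical routing table: path prefix -> candidate backend pool.
ROUTES = {
    "/api":    ["api-1", "api-2"],
    "/static": ["cdn-1"],
}
# Health state as reported by periodic probes (here: api-1 is down).
HEALTHY = {"api-1": False, "api-2": True, "cdn-1": True}

def steer(path: str):
    """Return the first healthy backend whose route prefix matches the path."""
    for prefix, backends in ROUTES.items():
        if path.startswith(prefix):
            for backend in backends:
                if HEALTHY.get(backend):
                    return backend
            return None  # all matching backends are down
    return None  # no route matches

print(steer("/api/users"))  # api-2, because api-1 is marked unhealthy
```

A real ADC layers many more policies on top (TLS termination, rate limiting, rewrites), but the core value is exactly this: routing decisions informed by live health and content, made in one place.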

If you want some of these features out of the box in an application, such as service discovery or request routing, you will need a service mesh technology such as Istio or a proxy product such as Envoy. These features are not available in Kubernetes directly, so they must be installed and maintained on the cluster. That’s an added cost, both technically and financially.

One possible solution to this problem has a lot of merit: with an ADC, you bring all of these features to Kubernetes at once, so you incur the expense of installation and maintenance only once rather than many times over.

In-built Application Security.

A common feature that an ADC provides and Kubernetes lacks is a web application firewall (WAF). A WAF filters incoming traffic in a way that is specific to each application.

Ordinarily, a firewall is a set of hardware or software appliances that protects an organization’s network from the internet, filtering both inbound and outbound traffic. It does so in a very general manner, with restrictions typically at the IP address and port number level. For example, a standard firewall can be configured to allow access to a server only on ports 80 and 443, the ports typically used for HTTP traffic, and to allow ingress only from IP addresses within a certain range. Lower-level security control, such as what’s actually inside an HTTP request or other data packets, falls within the realm of the WAF.
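The difference is easy to see in code: a standard firewall only checks connection metadata, while a WAF inspects what the request actually contains. A deliberately simplified sketch (the block patterns are illustrative toys, not a real rule set):

```python
import re

# Illustrative WAF rules: crude signatures for SQL injection and path traversal.
BLOCK_PATTERNS = [
    re.compile(r"(?i)\bunion\s+select\b"),  # SQL injection probe
    re.compile(r"\.\./"),                   # directory traversal attempt
]

def port_firewall_allows(port: int) -> bool:
    """A standard firewall decision: allow only the usual HTTP(S) ports."""
    return port in (80, 443)

def waf_allows(http_payload: str) -> bool:
    """A WAF decision: inspect what is actually inside the HTTP request."""
    return not any(p.search(http_payload) for p in BLOCK_PATTERNS)

print(port_firewall_allows(443))                         # True
print(waf_allows("GET /items?id=1 UNION SELECT * ..."))  # False
```

Note that the port-level check would happily pass the malicious request above, since it arrives on an allowed port; only payload inspection catches it.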

Kubernetes has no built-in WAF. If you run a Kubernetes cluster, you must build a number of precautions directly into the application itself to achieve a comparable degree of safety.

In a perfect world, every web application would enforce authentication exactly the way we want, but embedding a lot of access and security rules directly in the code makes for a brittle application that is hard to scale over time. And some types of vulnerabilities, such as cookie poisoning or forced browsing, can’t be protected against at the coding level.

A WAF, on the other hand, is designed to address exactly this sort of application-level security requirement. Production-tested, quality-checked ADCs with out-of-the-box WAF capability are a must-have for any network. Again, Kubernetes offers nothing comparable out of the box.

COST – Impact.

The cost of not using an ADC 

Understanding the benefits of using an ADC is simple: imagine what would happen if a company did not use one as part of its digital infrastructure. Without an ADC, it’s every application for itself, each needing its own load balancer and service-lookup mechanism to keep its traffic safe and manageable. That’s a lot of duplicated work that doesn’t need to be done for applications or servers to function properly.

Kubernetes provides load balancing, service discovery, and ingress/egress security as part of its feature set; many other enterprise technologies do not. Still, whatever each application needs, IT or DevOps teams will likely find a mix of solutions implemented throughout the enterprise to meet those needs and keep everything functioning. The result is confusion and inconsistency as technology stacks pile on top of one another, so organizations should consider standardizing some components of their stack specifically so that Kubernetes can be deployed without additional compatibility concerns across the board.

Automate 95% of F5 or NGINX Changes

How does an ADC help with the development and deployment of applications? It enables consistent development and deployment because it assumes the burden of ensuring that applications are good citizens that play nicely together, by enforcing consistency. The result is a “one ring to rule them all” approach to application management that lifts some of the burden from developers and deployers.

ADOPTION – Of Kubernetes.

Fewer moving parts make Kubernetes adoption easier.

With an ADC, IT and DevOps managers can deploy applications in Kubernetes without having to worry about the complexities of running a highly scaled system themselves, with one package containing all the benefits of isolation, efficiency, and safety.

To minimize risk when adopting Kubernetes, reduce the number of moving parts that must be managed within the Kubernetes cluster. This also helps the business and software developers operate a distributed application by taking the “system housekeeping” tasks of managing large-scale clusters in production off their plates.

A good ADC will not eliminate the complexity and pain of Kubernetes adoption, but it can reduce them significantly for your organization.

MANAGING – Multi-vendor ADCs.

A single platform to manage all your ADC-related changes – it’s the new normal.

In my previous whitepaper, “Turbocharge Your Application Delivery in a Hybrid/Multi-Cloud Operating World,” I talked about strategies for accelerating digital transformation in a modern, multi-cloud data center with F5, NGINX, and AppViewX. Did you know? AppViewX’s ADC+ has emerged as an uncontested multi-vendor, multi-cloud platform for NetOps and application teams to self-service, manage, automate, and orchestrate applications and security services.

  • Self-Service: Pre-configured ‘service catalog’ for repeatable application deployment by app teams, DevOps, and NetOps
  • Manage: A single platform to manage all your ADCs across a hybrid/multi-cloud environment
  • Automate: Automated workflows for a variety of use cases, including updates, upgrades, migration, CVE remediation, validation, etc.
  • Observability: Application-layer observability and SSL/TLS insights to monitor and troubleshoot application issues
  • Orchestration: Centralized orchestration for faster delivery of applications and security services

For the full feature list, click here

With ADC+, IT and DevOps managers can deploy applications in Kubernetes without worrying about the complexities of running a highly scaled system themselves, with one package containing all the benefits that come from isolation, efficiency, and safety:

  • Cloud agnostic managed Kubernetes support (EKS, AKS, GKE)
  • Cloud agnostic native deployment support (AWS, Azure, GCP)
  • Eliminate operational complexity with a user-friendly platform
  • Enterprise-ready architecture and standardized deployment

Cloud agnostic managed Kubernetes support

ADC+ (an ADC automation platform) takes this technology to the next level. It enables app and I&O teams to manage all their ADCs centrally. It provides visibility into your app’s performance, including the state and status of LTM, GTM, WAF, and more. In addition, it’s designed to scale whether you run F5 or NGINX.

Additional Resources

Use-Cases: Automate Load Balancing Services

Guide: Topology view of load balancing services

Video: Application view of load balancing services

E-Guide: Buyer’s Guide for ADC Management and Automation Tools

Whitepaper: LBaaS as a Service

Tags

  • ADC
  • ADC Automation
  • ADC management
  • Automation of service scaling
  • DevOps
  • DNS service discovery
  • DNS system and service discovery
  • F5 Load Balancer
  • Kubernetes
  • load balancer as a service
  • Web application firewall

About the Author

Tarshant Jain

Explorer & Hustler: ADC+

Helping network engineers and app teams simplify their application delivery with the power of automation, logic, and global wisdom.

