
Addressing common Kubernetes concerns


Kubernetes is now too big to ignore for modern software development teams and companies. It is a technology that has prevailed. If introduced properly, it can bring a range of benefits, including scalability, reliability, cost efficiency, and flexibility.

However, concerns about whether to adopt it are plentiful, especially for small- and medium-sized organizations that do not have a dedicated, skilled team that focuses on these technologies and can support their introduction.

In this post we aim to address some of these concerns and provide a path for managers and engineers to consider adopting Kubernetes as an environment in which to deploy their applications.

Why consider adopting Kubernetes

Kubernetes is a tool that can be used not only in production but across all stages of software development, and this by itself can make it worth the added complexity it brings.

Kubernetes can be especially beneficial for smaller teams for the following reasons:

Standardized deployment

A common scenario for a software engineering team is that developers are working on their local systems with Docker Compose. Once they complete their work — be it a new feature, a bug fix, addition of tests, or some extra functionality — this gets deployed on a testing environment, and sometimes on a staging environment too. These environments might be set up to use a different system, such as:

  • direct deployment on servers (through automation tools or custom-made scripts),
  • a deployment service such as AWS ECS,
  • a PaaS such as Heroku,
  • a serverless environment such as AWS Lambda; or
  • a managed or self-hosted Kubernetes environment.

Any difference between how each of these environments is set up compared to the local development environment leaves room for subtle issues that are often hard to identify and fix. The complexity of modern software engineering stacks means that something that works locally often does not work in the final delivery environment. Having different setups across the several phases of development can bring:

  • mismatched library versions, and
  • untested code paths, since one environment could use Docker Compose files and the other Kubernetes manifests, with volume mounts and networking defined in different ways.

Having an identical environment in which all teams develop, test, and deploy their work minimizes the chance that these problems occur, ensuring that what works locally will work on the production system and that developers will not end up juggling different technologies in different layers.
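To illustrate, the same Kubernetes manifest can be applied unchanged to a local cluster, a staging cluster, and production. The sketch below is a minimal, hypothetical Deployment; the image name, labels, and port are placeholders, not part of any real project:

```yaml
# Hypothetical example: image name, labels, and port are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0
          ports:
            - containerPort: 8000
```

Running `kubectl apply -f deployment.yaml` against this file behaves the same whether the current kubeconfig points at a local kind cluster or a managed one; only the cluster context changes, not the deployment definition.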

Improved developer experience

We’ve learned through experience that the time a developer joining a new team needs to get fully onboarded and become productive keeps increasing, partly because of the complexity of modern tech stacks. Having a ready-to-use Docker Compose environment is preferable to having to set up and monitor each of the services and how they interact.

Using Kubernetes for local development brings the added advantage of developing in an environment that is identical to what is deployed in production. Moreover, it facilitates a scenario that has been gaining ground among developers, where a remote server runs the application while they keep the code locally and edit/debug it there. These days it is not rare for a stack to need many gigabytes of memory just to start all of its services and keep them running, making the developer experience poor and unstable if the local laptop/workstation lacks the resources.

In fact, running the application on a remote environment and editing the code locally is a good option for projects that require a developer’s effort for a short time period.

Consider a project that will require a developer’s effort for one week and involves a heavy stack (multiple services, databases, and systems). Running the stack on a remote environment and having the developer edit code locally, versus having them set up the whole environment locally before they can commit a single line of code, can be the difference between delivering results and spending all the project’s time debugging issues.

Challenges setting up a development environment

Setting up a Kubernetes cluster from scratch poses its challenges. However, for local development we don’t need a full production-grade cluster; we can set up a Kubernetes cluster that lives on our laptop/workstation.

Popular tools for this task are kind and minikube, among others. They can be installed on Linux/Mac/Windows and allow us to create a Kubernetes cluster that runs on a single machine: our own system.

This won’t provide high availability or scalability, but that doesn’t matter for local development. What we do care about is being able to spin up a Kubernetes cluster easily so we can run our application.

All of them are well suited for developing and testing locally.


As an example, setting up a local cluster with kind is a matter of running a few commands. One has to install Docker and Kubectl and run:

user@user:~$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
user@user:~$ chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
user@user:~$ kind create cluster

A Kubernetes cluster is created for us, ready to apply the manifests and deploy our application.
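Assuming the cluster came up, a couple of standard kubectl commands confirm it is reachable before we start deploying (`kind-kind` is the context name kind creates by default):

```shell
# Verify the API server is reachable and the node is ready.
kubectl cluster-info --context kind-kind
kubectl get nodes
```

If `kubectl get nodes` reports a single node in the `Ready` state, the local cluster is good to go.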

Developer workflow concerns

As mentioned above, the tools to set up a local cluster can run on all three major platforms — Linux/Mac/Windows. If developing on a Windows environment, definitely consider running with WSL.

Even for developers with limited resources on their workstations (e.g., low memory), there’s the option to connect to a remote Kubernetes cluster that is running the application. After all, all it requires is a single kubeconfig file to connect to the cluster.

You might be wondering how local development works when we connect to a remote cluster, or even how we can develop on our local Kubernetes cluster at all and what that means. This is where tools such as Skaffold, Tilt, and DevSpace come into the discussion.

All of them are developer tools that enable developing locally with Kubernetes, with the option to deploy applications remotely as well. They provide a single command that can build the application images, push to a remote or local registry, sync code between the local filesystem and deployed containers inside pods, and continually deploy the application.


Skaffold offers a dev mode, started with a single skaffold dev command, that does a few things, including:

  • watch source code files for changes
  • build, tag, and push images
  • deploy the application to one or more clusters
  • sync files to running containers or even rebuild images, when it detects changes
  • stream logs for containers built and deployed
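As a rough sketch of how these behaviors are wired together, a minimal skaffold.yaml might look like the following. This is a hypothetical configuration, not the one the cookiecutter template below generates: the image name, sync pattern, manifest path, and service name are all placeholders.

```yaml
# Hypothetical minimal Skaffold config; names and paths are placeholders.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: myapp                 # image to build and tag
      sync:
        manual:
          - src: "src/**/*.py"     # files synced straight into the running container
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml                 # Kubernetes manifests to apply
portForward:
  - resourceType: service
    resourceName: myapp
    port: 8000
    localPort: 8000
```

With a file along these lines in place, `skaffold dev` watches the source tree, rebuilds or syncs on changes, applies the manifests, and keeps the port forward open.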

A quick way to see this in action is through Six Feet Up’s opinionated cookiecutter template that creates a Python/Django project with Kubernetes manifests and Skaffold configuration ready to use.

To test it in 10 minutes:

1. Install cookiecutter.

user@user:~$ python3 -m pip install --user cookiecutter

2. Create a new project

user@user:~$ cookiecutter https://github.com/sixfeetup/cookiecutter-sixiedjango/

Set skaffold-tester as the project’s name and hit enter through the next few questions to accept the default options. A new project is created in the folder skaffold_tester/

3. Install Docker, Kubectl, and Skaffold

Follow instructions to get Docker, Kubectl and Skaffold installed if you haven’t already installed them on your system.

4. Install kind and create a k8s cluster

user@user:~$ cd skaffold_tester
user@user:~/skaffold_tester$ curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
user@user:~/skaffold_tester$ chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
user@user:~/skaffold_tester$ kind create cluster

5. Run make compile

This step is required by the cookiecutter created project and will compile the list of Python requirements.

user@user:~/skaffold_tester$ make compile

6. Run Skaffold

user@user:~/skaffold_tester$ skaffold dev
Listing files to watch...
- skaffold_tester_local_django
- skaffold_tester_local_frontend
...
- Checking cache...
- skaffold_tester_local_django: Not found. Building
- skaffold_tester_local_frontend: Not found. Building
- Starting build...
- Found [kind-kind] context, using local docker daemon.
- Building [skaffold_tester_local_frontend]...
...

Skaffold informs us that it is going to build the two application images (django and frontend). Once it finishes, it creates the deployments and sets up port forwarding for us, as defined in skaffold.yaml.

Deployments stabilized in 49.07 seconds
Port forwarding service/django in namespace default, remote port 8000 -> http://127.0.0.1:8000
Port forwarding service/frontend in namespace default, remote port 3000 -> http://127.0.0.1:3000
Watching for changes...

Opening http://127.0.0.1:8000 or http://127.0.0.1:3000 on our browser will show the deployed applications.

If we now open the code and make changes, Skaffold syncs the changed files to the deployed application. Moreover, if we make a change that requires a Docker image rebuild, Skaffold rebuilds and redeploys it without requiring any action on our part.


Similar to Skaffold, Tilt and DevSpace provide highly intuitive development experiences on Kubernetes. Moreover, they provide their own UIs with all sorts of information about what is happening in the background.

How developers can interact with Kubernetes and learn it

All we need to interact with a Kubernetes cluster is a kubeconfig file and the kubectl command. Among other things, kubectl can:

  • list, update, create, and delete all sorts of resources,
  • interact with pods: open a shell in them, view their logs, delete them,
  • view configuration for deployments and services, make changes, and apply them,
  • list and configure networking, ingress, and port forwarding; and
  • monitor cluster status.
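For instance, the operations above map onto kubectl invocations like the following (the pod, deployment, and service names here are placeholders for whatever is running in your cluster):

```shell
kubectl get pods                                # list pods in the current namespace
kubectl logs my-pod                             # view a pod's logs
kubectl exec -it my-pod -- sh                   # open a shell inside a pod
kubectl describe deployment my-deployment      # inspect a deployment's configuration
kubectl port-forward service/my-service 8000:8000   # forward a local port to a service
kubectl get nodes                               # check node/cluster status
```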

The number of options kubectl supports in combination with the new concepts that one has to grasp when getting introduced to Kubernetes can make things hard at the beginning.

Also, it might take some time to assemble all arguments and options for what you want to achieve with kubectl. Having a UI tool can be much more user friendly for day-to-day work and there are a good number of tools that provide everything we can do with kubectl via intuitive UIs:

  • Kubernetes Dashboard is a web-based Kubernetes user interface; it requires some initial setup to connect to the cluster before we can use it.
  • Lens is a GUI application that runs across Linux/Mac/Windows. It comes in multiple versions: Lens Pro and Lens Personal, which require a Lens ID and include extras suited for collaborative environments, and OpenLens, which does not require any sort of ID. OpenLens requires a single extra step, installing an extension, before it can interact with pods.
  • Rancher is a container management platform that makes it easy to run Kubernetes.
  • K9s is a curses-based terminal UI.

It is worth starting with Kubernetes Dashboard, Lens, or any other UI tool, and gradually using kubectl more as your knowledge of Kubernetes evolves.

The kubectl Cheat Sheet is a good resource to have handy when using kubectl.


Recap

We have seen some of the concerns small teams and organizations have when evaluating whether to adopt Kubernetes. We hope we have contributed positively with hints and suggestions based on our experience.
