K3s – Top CNCF projects 2023 

February 20, 2023

When we chose our top 10 Cloud Native Computing Foundation (CNCF for short) projects for 2023, K3s was among our favorites. And it is a favorite for many other people too, judging by the 22.3k stars it has on GitHub. So what is K3s? Is it going to replace Kubernetes? What does K3s stand for? And why do people like it so much? Keep reading, because we are about to answer all these questions.

Lightweight Kubernetes

K3s is a lightweight Kubernetes distribution. Kubernetes – also known as K8s – is an orchestration tool that manages workloads running in a cluster. It has become ubiquitous in cloud computing for a handful of good reasons:

  • It allows abstraction from the underlying infrastructure. Applications no longer need to include infrastructure management. This makes them more versatile, lighter, and faster to develop.
  • Applications are isolated from other processes running on the same physical machine. Their specific dependencies do not interfere with those of others, as they remain inside the application’s container – its isolated environment.
  • Kubernetes is in charge of the application’s lifecycle. Containers are expected to eventually crash, and when this happens, K8s automatically spins up a new instance to replace the previous one.
  • It horizontally scales the number of running instances according to the declared desired state and demand, and balances the load between them.
  • Kubernetes can handle rollouts and rollbacks graciously, guaranteeing zero downtime during application updates.
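In practice, these capabilities are driven declaratively through kubectl. A minimal sketch, assuming a running cluster and an existing deployment named `my-app` (a placeholder name):

```shell
# Scale horizontally: declare a desired state of 5 replicas and let Kubernetes converge
kubectl scale deployment my-app --replicas=5

# Trigger a zero-downtime rolling update by changing the container image
kubectl set image deployment/my-app my-app=my-app:v2

# Watch the rollout, and roll back gracefully if something goes wrong
kubectl rollout status deployment/my-app
kubectl rollout undo deployment/my-app
```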

K3s aims to achieve just that in a single binary file under 100MB, which needs less than 512MB of RAM for the control plane (less than 50MB for worker nodes!) and delivers a working cluster in under a minute.

In addition, it has been preconfigured following best practices for security and performance in constrained scenarios. So K3s is a secure-by-default, no-brainer, production-ready Kubernetes.

What does K3s mean?

Kubernetes is usually referred to as K8s for short. It is given this name because “Kubernetes” is a 10-letter word, so between the ‘K’ and the ‘s’ there are 8 letters: K8s. K3s is a lightweight Kubernetes, so we use just half the letters (10/2 = 5), leaving 3 letters between the ‘K’ and the ‘s’. It is a wonderful coincidence that a ‘3’ is also half an ‘8’ symbol. So K3s does not stand for anything in particular – and it does not have an official pronunciation, either.

K3s and the CNCF

K3s was created by Darren Shepherd at Rancher Labs (now part of SUSE) in an attempt to speed up his application testing. Once it was certified by the CNCF, it started gaining popularity and being adopted by hundreds of users. Rancher donated the project to the CNCF in 2020, and it entered the CNCF landscape as a sandbox project, the entry point for early-stage projects.

Kubernetes distribution

K3s is not a fork – it is a K8s distribution. It has passed conformance tests to be fully CNCF certified. This means that a workload that runs in one certified distribution will run in another certified distribution. What you get is a fully-functioning Kubernetes installation that includes all of the necessary components for running Kubernetes on the control plane and the worker nodes.

As it is strongly focused on being highly available, lightweight, and simple, it includes some tweaks to achieve its goal: the binary file includes containerd as the CRI, Flannel for the CNI, SQLite for the datastore, and manifests to install critical resources like CoreDNS and Traefik Proxy as the ingress controller. It also contains a service load balancer that connects Kubernetes Services to the host IP, making it suitable for single-node clusters.
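As a sketch of how those defaults can be adjusted at install time, the install script passes extra flags through to the server (which embedded components you disable, and what you replace them with, is up to you):

```shell
# Install K3s but opt out of the embedded Traefik ingress controller and
# the service load balancer, leaving room to install alternatives later
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable servicelb" sh -
```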

What is K3s for?

If we already have Kubernetes, why would we want a simplified version of it? Well, there are a number of use cases where K3s may come in handy:


Edge computing

Edge computing brings computational power physically closer to the source of data. In the current scenario, where sensors and home devices generate terabytes of data at high speed, it makes sense to avoid consuming so much network bandwidth by processing that data closer to its source. K3s enables the orchestration of artificial intelligence and machine learning services to process all that data on limited edge devices.

The most extreme edge scenario where this Kubernetes distribution has landed, though, must be space. The US Department of Defense is using K3s to improve the processing of satellite imagery, thanks to its high availability in a low-power environment with intermittent connectivity.


Internet of Things

The Internet of Things is growing every day with the rising number of new-generation home appliances and industrial devices that have an internet connection. Cloud-wise, IoT is just a special case of edge computing, and everything we said above applies here. In this article by Bosch, the manufacturer highlights the importance of highly available services and zero-downtime deployments in business-critical systems.

CI testing

Since Kubernetes can take 10 minutes or more to install, creating ephemeral clusters as part of a Continuous Integration pipeline can really slow down testing an app. K3s installs and delivers a functioning cluster in under one minute, which makes it perfect for testing (in fact, that is the reason it was created). If you are interested in setting up K3s in your CI/CD flow, you can check out this tutorial by Tan Nguyen.
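A hypothetical CI job could look roughly like this (the manifest path and test script are placeholders):

```shell
# Spin up a throwaway K3s cluster for the duration of the pipeline
curl -sfL https://get.k3s.io | sh -

# Wait until the node reports Ready before touching the cluster
sudo k3s kubectl wait --for=condition=Ready node --all --timeout=60s

# Deploy the application under test and run the integration suite
sudo k3s kubectl apply -f deploy/app.yaml
./run-integration-tests.sh

# Tear the ephemeral cluster down again
/usr/local/bin/k3s-uninstall.sh
```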


ARM support

ARM64, as well as ARMv7, is supported, with binaries and multiarch images available for both. You could run K3s on a cluster of affordable Raspberry Pi boards for a number of reasons: to run your application on-premises, for educational purposes, or just for fun! If you want to give it a try, check out Alex Ellis’ blog.

Join Napptive

Napptive enables developer self-service. We encourage you to try our playground and experience accelerated cloud-native development. It’s completely free; all you need to do is sign up and get started!

Advantages of K3s

Let’s see what makes K3s so suitable for constrained environments.

1. Lightweight Kubernetes

It is a self-contained, single-binary package of less than 100MB, including all the components for the control plane and worker nodes: CRI, CNI, service load balancer, and ingress controller.

2. Easy and fast deployment

You just need a single command to install and deploy K3s in about 30 seconds.
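That single command is the official install script, which sets K3s up as a system service and starts it:

```shell
# Download and run the K3s install script; it registers and starts the server
curl -sfL https://get.k3s.io | sh -

# K3s bundles kubectl, so you can check on the cluster right away
sudo k3s kubectl get nodes
```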

3. Flexibility

All of the embedded components can be switched off, giving the user the flexibility to install their own alternatives. Given that K3s is a certified Kubernetes distribution, you can use virtually any YAML configuration file suitable for K8s with K3s. The same goes for any Docker image. You just have to place them in a specific directory on the control plane node and they will be loaded on start.
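In current K3s versions, those directories live under /var/lib/rancher/k3s; a quick sketch (the file names are placeholders):

```shell
# Manifests dropped here are applied automatically when the server starts
sudo cp my-app.yaml /var/lib/rancher/k3s/server/manifests/

# Image tarballs placed here are imported into containerd at startup,
# which is handy for air-gapped installs
sudo cp my-images.tar /var/lib/rancher/k3s/agent/images/
```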

4. Smaller attack surface

Thanks to its small size and reduced number of dependencies, it is harder to find a weak point to attack.

5. Ready for production

Unlike other simplified Kubernetes distributions, such as Minikube, K3s is not just instructional but production-ready. It has been tailored to follow best practices and be secure by default.

6. Low cognitive load

You can run it as is: it already contains everything it needs, and its default values are good enough to just let it run.

Find out more about K3s

If you want to know more about K3s, we suggest you visit its official website, where you will gain a more in-depth understanding of this tool. You can also check its page on the CNCF website.

We also invite you to watch Darren Shepherd, K3s creator, explain this Kubernetes distribution at KubeCon / CloudNativeCon Europe 2020.

In case you have not yet tried Napptive, we encourage you to sign up for free and discover how we are helping propel the development of cloud-native apps.
