Buzz about Kubernetes is everywhere, and plenty of large organizations have adopted it. But what about smaller organizations? Is the flexibility Kubernetes offers worth its complexity for much smaller teams?
Kubernetes really pays off at scale
If you’re a large organization, whether you’re moving your infrastructure to the cloud or continuing to manage it strictly on-premises, getting started with Kubernetes is a no-brainer.
Your apps are containerized
Containers are not just a hot trend; they are popular for a reason. The container ecosystem provides tools (like Dockerfiles and Docker Compose) that let you be explicit about the dependencies an application needs and ensure they are met no matter where the app is deployed. This means your development environment will be consistent across your team, and that same environment will persist into production. It also means that your test suite (which also runs in containers) will be exercising something that accurately reflects what will be deployed, reducing the risk of unforeseen problems in production.
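As a concrete illustration, here is a minimal Docker Compose sketch for a hypothetical web app with a database dependency; the service names, images, and ports are placeholders rather than anything from a real project:

```yaml
# docker-compose.yml -- a minimal sketch; service names, images, and ports
# are hypothetical placeholders.
version: "3"
services:
  web:
    build: .              # build the app image from the Dockerfile in this directory
    ports:
      - "8080:8080"       # the app is reachable on localhost:8080 in every environment
    depends_on:
      - db                # the database dependency is stated explicitly
  db:
    image: postgres:10    # a pinned image version, so dev, CI, and production all match
    environment:
      POSTGRES_PASSWORD: example
```

Because the same file drives local development and CI, everyone on the team runs their code and tests against the same explicitly declared dependencies.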
You have tools to make your apps resilient to failure
There are about a million articles out there on the power of Kubernetes as an orchestrator, and it does do a great job of managing your application deployments. It was built to make it easy to scale your apps and roll out changes, and Kubernetes workloads are self-healing, so that if one container fails your application remains available. There are other orchestrators on the market (such as Mesos/Marathon, Docker Swarm, Nomad, and Rancher), but Kubernetes is the leading choice because it is extremely flexible, has the best tooling support, and is supported by all major cloud platforms.
You have a standard language for defining deployments
Every team in your organization defines the Kubernetes objects used in its application deployments with YAML resource definitions. This gives your organization an infrastructure-as-code approach in which the configuration management is part of the very platform your applications are deployed to. That, in turn, helps ensure that what you have in your configuration is what gets deployed in production.
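For example, a basic Deployment definition looks roughly like the sketch below; the app name, image, and port are hypothetical placeholders rather than anything prescribed by Kubernetes itself:

```yaml
# deployment.yaml -- a minimal sketch; the app name, image, and port are
# hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # desired state: keep three copies of the app pod running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0.0   # a pinned image tag for the containerized app
          ports:
            - containerPort: 8080
```

Checked into version control, a file like this is both documentation of the intended state and the artifact that actually gets applied to the cluster (for example with `kubectl apply -f deployment.yaml`).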
You have access to a lively and burgeoning ecosystem of partner products
Do you want custom monitoring? Check out Prometheus. Do you want to secure your service-to-service communication with a Kubernetes-native service mesh, one that even makes it easy to do blue-green deployments? Check out Istio. Need beautiful graphs of your monitoring data, to create dashboards for SREs or management? Grafana plays nicely with Prometheus, and Alertmanager can even let you know when a problem is brewing. And all of these projects are open source! These are just a few projects on a long and growing list of Kubernetes-native products. To see a longer list of cloud-native projects that help with different phases of the SDLC, check out the Cloud Native Computing Foundation’s CNCF Landscape.
Kubernetes has perks for small organizations, too
But what if you’re working in a small organization – a startup or small business that has 5, 20, or 40 engineers working on its product?
If you’re working at a company with a small engineering team, you are probably already deploying your apps to the cloud. But unless you’re comfortable with containers, you probably haven’t seriously considered using Kubernetes to deploy your apps, even though there are easy-to-try hosted versions like Google Kubernetes Engine (GKE). As described above, there is a strong case to be made for getting familiar with containers if you aren’t already, and a good place to start if you’re in that boat is the tutorials provided by Docker.
If you have used containers, especially if you’ve used them in production with Docker or even Docker Compose, then what are you waiting for? Kubernetes (especially the hosted versions) can start to make your operations easier.
Strong community support
Kubernetes is an open source project that does its development in the open and encourages active participation from its community. You can browse its community repo, request architectural changes, and get involved with the CNCF to learn about ongoing work related to the project. In non-official channels there are tons of additional resources available, from tutorials and blog posts to Awesome Kubernetes, a curated list of Kubernetes-related resources. In addition, all major cloud vendors have managed Kubernetes offerings so that you don’t have to worry about being locked in to any one vendor.
Keep your infrastructure-as-code with an open source tool
Since Kubernetes is open source software, you can avoid vendor lock-in while still having infrastructure-as-code that specifies how your apps should auto-scale, the desired state of your deployments (for example, how many replicas of your application pod should be maintained), and how your services should be load balanced. Kubernetes manifests make it easy to ensure that failed app containers are replaced automatically and that your apps scale with demand, as in the sketch below.
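For instance, a HorizontalPodAutoscaler can be layered on top of a Deployment to keep the replica count within a desired range. This sketch assumes the hypothetical my-app Deployment from the earlier example, and the bounds and CPU target are illustrative values, not recommendations:

```yaml
# hpa.yaml -- a sketch assuming a Deployment named my-app exists;
# the replica bounds and CPU target are illustrative values only.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                       # the workload whose replica count is managed
  minReplicas: 3                       # never drop below three pods
  maxReplicas: 10                      # cap the scale-out
  targetCPUUtilizationPercentage: 70   # add pods when average CPU utilization passes 70%
```

The Deployment’s controller handles the self-healing side: if a pod crashes or its node disappears, a replacement is scheduled so the actual state keeps matching the declared one.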
Kubernetes has a super active ecosystem, with new and improved products available every day
There are a LOT of companies out there building software for Kubernetes deployments, which means that small organizations have a variety of potential solutions, often free, for almost any problem they run into.
You don’t need to deploy Kubernetes yourself
Kubernetes can be complex to deploy, but the good news is that you don’t need to deploy it yourself. Google’s hosted Kubernetes (GKE) has been generally available for years, and this year both Amazon (EKS) and Microsoft Azure (AKS) have made their own hosted versions available as well. Most of the companies offering software for the Kubernetes ecosystem are also looking to make it as easy as possible to deploy their products to Kubernetes; check out some of the recent work we’ve been doing with Conjur OSS and Helm charts.
You don’t have to choose between Kubernetes and Serverless
We are reaching the point where serverless containers are a reality: you can spin up a large number of containers on demand to handle requests and shut them down once the work is done. Amazon introduced Fargate in November 2017 to let you run containers on demand without having to manage a cluster at all, and Google released Knative in July 2018 to open source the tools it uses to bring serverless to GCP. In addition, Google currently has a serverless add-on for GKE available in early access preview. Expect more developments on this front in the near term.
What you should be aware of
Given all of these potential advantages, what’s left to weigh if you’re a small organization considering Kubernetes for deploying your apps?
Kubernetes and its ecosystem are relatively new
Kubernetes was the first project to graduate from the CNCF, in March 2018, and the supporting ecosystem is relatively young and in flux. Important decisions are still being made about best practices for securely deploying apps with Kubernetes. If you’re at a small organization, you might face a big cost if you need to revise your architecture in response to changing best practices. That being said, Kubernetes is definitely production-ready, and you can check out some case studies of how it is used in practice on the project’s website.
There is a limited selection of managed services (but it’s growing)
Though there is a very active ecosystem of products that enhance the experience of deploying apps to Kubernetes, there are fewer Kubernetes-native managed services because the ecosystem around Kubernetes is so new. Apps deployed to Kubernetes can still interact with services like managed databases and queues. However, there aren’t yet managed services that can support each stage of the full software development lifecycle in Kubernetes, which raises the barrier to adoption for a small organization.
Conclusion
It’s no surprise that larger organizations deploying apps in the cloud are increasingly choosing Kubernetes, and it would be great to see the ecosystem continue to evolve into a welcoming space for smaller organizations as well. To get there, however, we need to keep working toward a clear definition of cloud-native best practices in Kubernetes, and managed services and serverless technologies need to become more readily available and easier to integrate with Kubernetes.
If you are from a smaller organization and are interested in learning more, it’s easy to try deploying apps to Kubernetes locally by installing Minikube. The Kubernetes documentation has great tutorials to help you learn the basics and choose the right solution for your Kubernetes deployment.
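Once Minikube is running, a first experiment might be to apply a Deployment like the earlier sketch and expose it with a Service; as before, the names and ports are hypothetical placeholders:

```yaml
# service.yaml -- a sketch that exposes the hypothetical my-app Deployment
# from the earlier examples inside a local Minikube cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort        # NodePort makes the app reachable from outside the cluster,
                        # for example via `minikube service my-app`
  selector:
    app: my-app         # route traffic to pods carrying this label
  ports:
    - port: 80          # port the Service listens on
      targetPort: 8080  # container port the traffic is forwarded to
```

So give it a shot and see what the fuss is all about!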
Geri Jennings, PhD, is an Engineering Manager on the Conjur team. She enjoys learning new things, and usually comes out with a blog post when there’s an idea she can’t shake. Follow her on Twitter at @izgerij.