Since I left VMware, multiple people have pinged me to ask what I am working on. While the startup is still in stealth, I can talk a bit about some related work. Kubernetes is actually not new to me — we used it at VMware. With that said, I have never blogged about Kubernetes, so I figured the best place to start would be with a quick introduction. Read on to learn more!
What is Kubernetes?
Kubernetes, also known as “k8s” (a k, then eight letters, then an s), is the Greek word for “helmsman.” As it turns out, most of the initiatives in the Cloud Native space (more on Cloud Native in a bit) are named after Greek sailing words. K8s is an open-source system for automating the deployment, scaling, and management of containerized applications — I consider it an orchestration (and management) engine for containers. (In VMware terms, k8s is like vSphere + vRealize Automation.)
Why use Kubernetes?
Thanks to its architecture, k8s can be leveraged in both greenfield and brownfield environments. It is open source and portable (meaning it can run in many different environments). Today, it is the most popular and fastest-growing orchestration engine available (ahead of OpenStack, Apache Mesos, Docker Swarm, vSphere, etc.).
Kubernetes Concepts
At a minimum, you should understand the following foundational concepts:
- Nodes: The k8s equivalent of a host. A node hosts one or more pods.
- Pods: The k8s equivalent of a VM. A grouping of one or more containers.
- Deployments: Definition and desired state of one or more pods.
- Services: Networking connectivity to pods.
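To make these concepts concrete, here is a minimal sketch of a deployment and a service; the names (`hello-web`) and the container image are placeholders for illustration, not anything from a real environment:

```yaml
# Deployment: desired state -- run two pod replicas of a web container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.21
        ports:
        - containerPort: 80
---
# Service: a stable networking endpoint in front of the pods above,
# matched via the app=hello-web label
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web
  ports:
  - port: 80
    targetPort: 80
```

Notice the deployment does not reference the service (or vice versa) by name; the service finds the pods through their labels.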
Of course, these are not the only concepts you need to know, but they are the foundational pieces. Beyond them, you should also be aware of:
- Labels: Key/value tags used to select and group objects.
- Jobs/CronJobs: Single-run processes or scheduled processes.
- RBAC: Role-Based Access Control policies.
- ReplicaSets: Scaling of deployments.
- Volumes: How to configure persistent storage. Note there are many terms here; this is an advanced topic.
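As a quick illustration of two of these, here is a sketch of a Job carrying a label; again, the names and image are made up for the example:

```yaml
# Job: run a single pod to completion (here, a one-off command)
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
  labels:
    purpose: demo        # labels are arbitrary key/value tags
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox:1.35
        command: ["echo", "hello from k8s"]
      restartPolicy: Never
```

A CronJob wraps the same template in a schedule, much like crontab.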
You should be aware that all configuration is done via YAML files (yes, JSON is also an option, but everyone uses YAML). While you will need to become familiar with the syntax used in the YAML files, to get started, you will likely use existing YAML files you find online.
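To get a feel for the syntax, one of the smallest useful YAML files defines a single pod (the pod name and image below are placeholders):

```yaml
# A minimal pod definition: one container, nothing else
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: web
    image: nginx:1.21
```

Every object follows this same shape: an API version, a kind, metadata (at least a name), and a spec describing the desired state.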
Finally, you should be aware these are the BASIC concepts. K8s has A LOT of concepts, and these barely scratch the surface. With the above information, you can definitely get started, but you will NOT be production-ready.
Deploying Kubernetes
Deploying k8s requires a blog post of its own. What I would like to say in this post is that you have a lot of options for deploying k8s:
- Locally using something like minikube — I will cover this in the next post
- Via a cloud provider (e.g., AWS = EKS, Azure = AKS, GCP = GKE, VMW = VKS, etc.)
- Through a template (e.g., Heptio’s CloudFormation template)
- Manually using utilities such as kops or kubeadm
Interacting with Kubernetes
Once you have k8s up and running, you will want to interact with k8s and eventually the workloads on top of it. The most important command for doing this is `kubectl`. By default, nothing deployed on k8s is publicly accessible, so the `kubectl proxy` and `kubectl port-forward` commands will be important. I will cover interactions with k8s in a future post.
Summary
As I mentioned, this post was meant to be an introduction to k8s. It explored what k8s is, why you might consider using it, the basic concepts, and some basic information about deploying and interacting with k8s. In the next post, I will cover how to run k8s locally on your system.
© 2018 – 2021, Steve Flanders. All rights reserved.