Introduction to Kubernetes (K8s)

Definition - Kubernetes is an open-source orchestration engine for automating the deployment, scaling, and management of containerized applications. It was originally created by Google and is now actively maintained by the Cloud Native Computing Foundation (CNCF).

Kubernetes facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, which simply means that support, services, and tools are widely available.

Going back in time

Looking back at how deployment has evolved helps us understand why Kubernetes is so important in the modern era of computing.

Image credit: Kubernetes docs

[infra.png]

Traditional deployment era

In the past, organizations ran applications on physical servers (On-Premises) without a way to set resource limits for each application. This often resulted in resource monopolization by one application in a multi-application environment. A potential solution, running each application on separate servers, proved costly and difficult to scale.

Virtualized deployment era

Virtualization enables running multiple Virtual Machines (VMs) on a single physical server's CPU, effectively isolating applications in distinct VMs. This approach enhances resource utilization and scalability. Each VM operates as a complete unit with its own operating system and a full set of components, running on virtualized hardware.

Container deployment era

Containers are like lightweight versions of Virtual Machines (VMs), sharing the host's Operating System (OS) while maintaining separate file systems, CPU, and memory allocations. This shared OS approach provides efficiency but with less isolation than VMs. Containers are highly portable across different cloud environments and OS distributions.

The Need for Kubernetes and Its Capabilities

Kubernetes is a powerful tool that simplifies the management of applications that are split into smaller parts, called containers, and run across different computers or servers. This system is especially useful because it can handle a variety of tasks that are crucial for keeping applications running smoothly and efficiently.

One of the key strengths of Kubernetes is its ability to automatically adjust the number of containers based on how many are needed at any given time. This means it can handle increases or decreases in the number of people using an application without any manual effort. It’s like having a smart assistant that knows exactly when to bring in more resources or scale back to save on costs, ensuring that the application runs efficiently.

Additionally, Kubernetes is like a safety net for applications. It keeps an eye on them and if something goes wrong, like a failure or crash, it quickly steps in to fix the issue. This might mean restarting a failed container or moving it to a healthier environment. This ensures that the application is always available to users, even in case of unexpected issues.

Kubernetes also comes with built-in strategies for updating applications. For instance, it can roll out updates in a controlled way, such as using a canary deployment. This approach updates only a small part of the application first, making sure everything works fine before updating the rest. This reduces the risk of introducing a problem that could affect all users.
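
As a concrete sketch of a controlled rollout, a Deployment can declare its update strategy directly in its manifest. The application name, image, and replica count below are illustrative placeholders, not values from this post:

```yaml
# Illustrative Deployment showing a controlled rolling update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical application name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod during the update
      maxUnavailable: 1    # at most one Pod down at any time
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25   # changing this tag triggers a rolling update
```

With this strategy, Kubernetes replaces old Pods gradually, one at a time, instead of taking the whole application down at once.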

Kubernetes provides you with:

  1. Service Discovery and Load Balancing: Kubernetes can expose a container using a DNS name or its own IP address. If a container gets a lot of traffic, Kubernetes can spread the traffic evenly to keep everything running smoothly.

  2. Storage Orchestration: You can set up Kubernetes to automatically connect your containers to the storage system you prefer, like local storage or cloud services.

  3. Automated Rollouts and Rollbacks: Kubernetes lets you control how your containerized applications are updated. It can automatically replace old containers with new ones, ensuring your deployment always matches your desired setup.

  4. Automatic Bin Packing: Kubernetes helps you use your servers' resources efficiently. You tell it how much CPU and memory your containers need, and it organizes them to make the most of what's available.

  5. Self-healing: If containers fail or have problems, Kubernetes can fix them by restarting or replacing them. It also waits to send traffic to containers until they are fully ready and working.

  6. Secret and Configuration Management: Manage sensitive data like passwords and keys safely with Kubernetes. You can update secrets and app settings without having to rebuild your containers or risk exposing sensitive data.

  7. Batch Execution: Kubernetes can also handle batch processing and continuous integration workloads, automatically replacing containers that don't work as expected.

  8. Horizontal Scaling: Easily change the number of containers running your application, either manually, through a user interface, or automatically based on how much work they're doing.

  9. IPv4/IPv6 Dual-stack: Supports both IPv4 and IPv6 addresses for containers and services, allowing for more flexible network addressing.

  10. Designed for Extensibility: You can add new functionalities to your Kubernetes cluster without needing to modify the core source code.
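
To make the first capability concrete, here is a minimal sketch of a Service manifest. The name, selector, and ports are assumptions for illustration; inside the cluster the Service gets a stable DNS name, and traffic to it is load-balanced across all Pods matching the selector:

```yaml
# Illustrative Service providing discovery and load balancing.
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # hypothetical name; resolvable in-cluster via DNS
spec:
  selector:
    app: web-app           # traffic is spread across Pods with this label
  ports:
    - port: 80             # port the Service exposes
      targetPort: 8080     # port the containers listen on
```

Clients talk to the stable Service address, so individual Pods can come and go without anyone updating connection details.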

Key Uses of Kubernetes

  1. Container Orchestration: It automates the deployment, scaling, management, and networking of containers.

  2. Load Balancing: Kubernetes can distribute network traffic so that the deployment is stable.

  3. Self-healing: It can restart containers that fail, and replace and reschedule containers when nodes die.

  4. Automated Rollouts & Rollbacks: Kubernetes progressively rolls out changes to your application or its configuration, monitoring the application's health to prevent any downtime.

Kubernetes Cluster Architecture

In Kubernetes, the cluster architecture is typically divided into two main types of nodes: Master Nodes and Worker Nodes. Each type of node has a distinct role and set of responsibilities within the Kubernetes cluster:

  1. Master Nodes (also known as Control Plane Nodes):

    • The Master Nodes are responsible for managing the state of the Kubernetes cluster. They make global decisions about the cluster (such as scheduling), and they detect and respond to cluster events (such as starting up a new pod when a deployment's replicas field is unsatisfied).

    • Components of Master Nodes include:

      • kube-apiserver: The API server acts as the front end for the Kubernetes control plane.

      • etcd: A consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data.

      • kube-scheduler: Responsible for scheduling pods to run on various worker nodes.

      • kube-controller-manager: Runs controller processes, the background control loops that handle routine tasks in the cluster.

      • cloud-controller-manager (optional): Embeds cloud-specific control logic and lets you link your cluster into your cloud provider's API.

  2. Worker Nodes:

    • Worker Nodes are the machines where containers (workloads) are deployed. They are managed by the Master Nodes and do the actual work of running applications.

    • Components of Worker Nodes include:

      • kubelet: An agent that runs on each worker node and ensures that containers are running in a Pod.

      • kube-proxy: Maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.

      • Container Runtime: The software responsible for running containers (e.g., Docker, containerd, CRI-O).

In a Kubernetes cluster, you can have one or more Master Nodes and one or more Worker Nodes. The Master Nodes coordinate the cluster, and the Worker Nodes run the actual applications. For high availability, it is common to have multiple Master Nodes in a production environment.

Kubernetes Core Services

Kubernetes provides a variety of services and features to manage containerized applications in a clustered environment. Some of the key services and features include:

  1. Pods: The smallest deployable units in Kubernetes that can contain one or more containers.

  2. ReplicaSets: Ensures that a specified number of pod replicas are running at any given time.

  3. Deployments: Manages the deployment and scaling of a set of Pods, and provides updates to Pods along with a lot of other useful features.

  4. StatefulSets: Used for managing stateful applications, like databases.

  5. Services: An abstract way to expose an application running on a set of Pods as a network service.

  6. Ingress: Manages external access to the services in a cluster, typically HTTP.

  7. ConfigMaps and Secrets: Used for managing configuration data and sensitive information, such as passwords, OAuth tokens, and SSH keys.

  8. Volumes: Provides a way to persist data in Pods.

  9. PersistentVolume (PV) and PersistentVolumeClaim (PVC): Provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed.

  10. Jobs and CronJobs: Manage run-to-completion and scheduled tasks.

  11. Namespaces: Provides a mechanism for isolating groups of resources within a single cluster.

  12. DaemonSets: Ensures that all (or some) Nodes run a copy of a Pod.

  13. Horizontal Pod Autoscaler (HPA): Automatically scales the number of Pods in a replication controller, deployment, or replica set based on observed CPU utilization.

  14. Network Policies: Specifies how groups of Pods are allowed to communicate with each other and other network endpoints.

  15. Role-Based Access Control (RBAC): Controls access to Kubernetes resources based on roles.

These services and features enable Kubernetes to handle container orchestration, ensuring that the state of the cluster matches the user's intentions and that applications run efficiently and reliably.
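
As a minimal example of the smallest deployable unit listed above, a single-container Pod can be declared as follows. The names and image are placeholders chosen for illustration:

```yaml
# Illustrative single-container Pod, the smallest deployable unit.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # hypothetical Pod name
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.25    # any container image works here
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods; a Deployment or ReplicaSet manages Pods like this one so they are replaced automatically if they fail.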

Essential Kubernetes Commands

  1. kubectl: It's a command-line tool that allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

    Examples:

    • kubectl get pods: List all pods in the namespace.

    • kubectl create -f my-deployment.yaml: Create a deployment described in my-deployment.yaml.

    • kubectl describe pod my-pod: Get detailed information about the pod my-pod.

  2. minikube: It's a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop, which suits users looking to try out Kubernetes or develop with it day-to-day.

    • minikube start: Starts a local Kubernetes cluster.

    • minikube dashboard: Access the Kubernetes Dashboard running within the Minikube cluster.

  3. eksctl: A simple CLI tool for creating clusters on Amazon EKS. It's the fastest and easiest way to get started with Amazon EKS.

    • eksctl create cluster: Create an EKS cluster.

    • eksctl delete cluster: Delete an EKS cluster.
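
For reference, a my-deployment.yaml like the one passed to kubectl create -f in the examples above might look like this minimal sketch; the labels, image, and replica count are assumptions, not prescribed values:

```yaml
# my-deployment.yaml -- minimal Deployment for the kubectl example above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app          # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25   # placeholder image
```

After creating it, kubectl get pods should show two Pods managed by this Deployment.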

Getting Started with Minikube

To get your hands dirty with Kubernetes, Minikube is a great place to start. Here’s a quick guide on how to set it up:

  1. Install Minikube: Follow the instructions on the Minikube GitHub page to install Minikube on your machine.

  2. Start Minikube: Run minikube start. This command creates and configures a Virtual Machine that runs a single-node Kubernetes cluster.

  3. Deploy an Application: Once your cluster is running, you can deploy an application using kubectl, the command line interface for running commands against Kubernetes clusters.

  4. Access the Application: You can interact with your application (deploy services, view logs, etc.) using various kubectl commands.

  5. Explore Kubernetes Dashboard: Run minikube dashboard to open the Kubernetes dashboard in your default web browser. It provides a user-friendly interface to manage your cluster.


As we draw the curtains on our latest blog post, thank you for reading. We want to leave you with a sense of anticipation and excitement for what lies ahead. Remember, this is not just the end; it's the beginning of an even more fascinating journey filled with insightful articles and in-depth explorations into the topics that matter most to you.

We invite you to stay connected and join us as we continue to delve into a world of knowledge and discovery. Each post is a step further in our shared journey of learning and growth. So, keep your curiosity kindled and your passion for learning alive, as there's much more to come.

Thank you for being a part of our community, and for taking this journey with us. You can join the community by following the blog and liking this article. Until next time, keep exploring, keep questioning, and keep learning!