My K8s⚙️ Adventure: Leveling Up My DevOps Game

  1. Introduction

  2. Stepping into Kubernetes Territory

  3. The Architecture of Kubernetes

  4. About Nodes

  5. Understanding Pods

    • Storage

    • Networking

    • Talking to the Outside World

  6. Hands-on Experience

    • First Cluster Deployment

    • Useful Commands

  7. Next Steps in the Journey

Introduction

Hey there, tech enthusiasts! Welcome to the fifth article in my journey to become a DevOps Engineer 🚀. I'm currently studying in a DevOps Bootcamp by Roxs, and I recently signed up for a KodeKloud annual subscription (let's see how it goes, but so far I'm enjoying it 🙃).

Now it's time to level up and tackle the elephant in the room: Kubernetes ⚙️.

You know that feeling when you think you've got containers figured out, and then someone mentions Kubernetes 🧐? Yeah, that mix of excitement and "what have I gotten myself into?" That's exactly where I am right now in my DevOps bootcamp journey. So grab your coffee☕️ (or your preferred debugging beverage, in my case it’s Mate 🧉🇦🇷❤️), and let's dive into the world of container orchestration together.

Stepping into Kubernetes Territory

🚀🧑‍🚀 Remember my spaceship analogy for Docker containers in my previous post? Well, imagine you're managing a vast space fleet 🚀🚀🚀. You have multiple spaceships (containers) on different missions, space stations coordinating traffic🚦, and maintenance crews👩‍🚀 handling repairs 🛠️. Now, picture yourself trying to manually coordinate everything, ensuring enough ships are patrolling each sector, quickly deploying replacements when a ship needs repairs, or reorganizing entire fleets when a cosmic storm hits. Overwhelming, right?

This is where Kubernetes⚙️ comes in, your automated galactic command center ✨.

Think of Kubernetes as your advanced AI fleet commander who:

  • Maintains optimal ship deployment across all sectors

  • Launches replacement vessels when ships need maintenance

  • Balances mission loads across the fleet

  • Ensures smooth operations during meteor showers

  • Handles emergencies without compromising mission objectives

In technical terms, Kubernetes⚙️ is a container orchestration platform🔥. But what does that really mean?

Just like a starship's central computer manages all its systems, Kubernetes manages containerized applications, but at scale, ensuring every vessel performs its mission perfectly.

Why do we need it?🤔 Well, modern applications are like complex space missions with many moving parts. You might need:

  • Multiple identical ships running parallel missions

  • Automatic fleet expansion when new territories need coverage

  • Quick deployment of backup ships if one gets damaged

  • Smart distribution of cosmic traffic

  • Secure storage of classified mission data

Kubernetes handles all of this automatically🫶. It's like having an intelligent command center that:

  • Clones successful ships when mission demand increases

  • Instantly launches replacement vessels

  • Reorganizes fleet formations for maximum efficiency

  • Maintains top-secret clearance protocols

  • Ensures all cargo reaches its destination safely

For space commanders (developers) and space federations (companies), this means focusing on improving mission capabilities rather than worrying about fleet logistics. It's the difference between manually coordinating every ship in your fleet and having a system handle the complexity for you.

The Architecture of Kubernetes

A running Kubernetes cluster is coordinated by its control plane, our galactic headquarters, which contains the following components (we'll peek at them with a quick command right after the list):

API Server 📡

  • Central communication hub for all galactic operations

  • All space fleets, stations, and planetary systems report here

  • The only component that talks directly to our galactic database

  • Acts as a security checkpoint for outside communications

etcd 💾

  • Stores all critical information about our federation

  • Keeps track of where every spaceship and station is located

  • Maintains configuration data for the entire galaxy

  • Quickly notifies command when changes occur

Controller Manager 🎯

  • Maintains proper fleet size and health

  • Removes damaged space stations

  • Ensures enough ships are deployed in each sector

  • Manages ship lifecycle (launch, maintenance, retirement)

Scheduler 📝

  • Decides which planets host new space stations

  • Considers resources, mission requirements, and strategic positioning

  • Ensures balanced distribution of our fleet across the galaxy

  • Respects special mission requirements (like keeping certain ships away from each other)
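By the way, once you have a cluster running (we'll spin one up with Minikube later), you can see these components for yourself. In most setups they run as Pods in the kube-system namespace:

kubectl get pods -n kube-system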

About Nodes

Well, in our Kubernetes galaxy, Nodes are like planetary systems that host our space stations (Pods) and ships (containers). Think of them as dedicated star systems where all our space operations happen. In technical terms, a Node is a worker machine, virtual or physical, that provides the CPU, memory, and networking our workloads need; each Node runs a kubelet (the local agent that reports back to the API Server) and a container runtime.
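To see the planetary systems in your own cluster (with a default Minikube setup there's just one), you can run:

kubectl get nodes
kubectl describe node <node-name>   # Detailed info about a specific node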

Understanding Pods

What are Pods? Let’s break it down!

If containers are like individual spaceships, then Pods are like specialized space stations 🚉. Each Pod is a self-contained unit that can host one or more spaceships (containers) that need to work closely together. Think of it as a mini space dock where related missions happen: all containers inside share the same resources and communication systems, and they coordinate their operations as one unit.

A Pod is the smallest mission unit in the Kubernetes⚙️ universe. Just like how a space station can host either a single important spacecraft or a small fleet of vessels that need to work together, a Pod typically houses one container but can host multiple if they need to.

Here's what makes Pods special 🫰:

  • They're like best friends who share everything: network, storage, and living space.

  • If you move a Pod, all its containers move together.

  • They work as one unit: if something goes wrong with the Pod, Kubernetes treats all containers inside it as a package deal.

  • Be careful! Remember, Pods are temporary.

Following the spaceship analogy, imagine Pods as disposable space stations: when one fails, Kubernetes launches a brand-new replacement (with a new IP address) rather than repairing the old one.

The following is an example of a Pod that consists of a single container running the image nginx:1.14.2:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
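If you save this as nginx-pod.yaml (a filename chosen here just for illustration), you can create the Pod and check on it with:

kubectl create -f nginx-pod.yaml
kubectl get pods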

Storage 💾

Just like how a space station has common storage areas that all crew members can access, Pods can have shared storage💾 (there's a small sketch after this list). This means:

  • All containers in your Pod can access these shared spaces

  • If one spacecraft needs maintenance (container restart), the cargo stays safe

  • It's like having a shared supply room that survives even if one crew member gets replaced
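Here's a minimal sketch of that shared supply room, assuming a hypothetical Pod with two busybox containers (writer and reader are names I made up) mounting the same emptyDir volume. An emptyDir volume lives as long as the Pod does, so it survives individual container restarts:

apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-demo
spec:
  volumes:
    - name: shared-data      # the shared supply room
      emptyDir: {}
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "while true; do date >> /data/log.txt; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "while true; do cat /data/log.txt 2>/dev/null; sleep 10; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data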

Networking 📟

Imagine containers as crew members living in the same space station (see the sketch after this list):

  • They all share the same address (IP) and communication system

  • They can talk to each other using the station's internal comm system (localhost)

  • It's like walking down the hall to talk to your crewmate, quick and direct!
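A minimal sketch of that internal comm system, assuming a hypothetical Pod (the names web and crewmate are made up here) where a sidecar container reaches nginx over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
    - name: web
      image: nginx:1.14.2
      ports:
        - containerPort: 80
    - name: crewmate
      image: busybox
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null && echo 'talked to web over localhost'; sleep 10; done"]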

Talking to the Outside World 📡

When communicating with other space stations (Pods), the rules change a little (there's a handy command after this list):

  • Each station has its own unique communication frequency (IP address)

  • Crew members need to coordinate when using external communication channels (ports)

  • It's like having one radio system for the whole station - everyone needs to coordinate its use.
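Once the cluster is up, we can actually list each station's frequency. This shows every Pod along with its IP address and the node it landed on:

kubectl get pods -o wide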

Hands-on Experience

Okay, it's time for some fun 🤩! I'm going to show you how to deploy a microservices demo app developed by Docker called the Voting App. Here's the GitHub repository if you want to check it out.

For this example, I’m going to use Minikube ✨🛠️.

Minikube is a tool that allows us to run Kubernetes locally. It enables you to run an all-in-one or multi-node local Kubernetes cluster on your personal computer, whether it's Windows, macOS, or Linux. Please head over to the Minikube Docs for the setup.

A microservices architecture refers to an architectural style for developing applications. Microservices allow a large application to be separated into smaller independent parts, with each part having its own responsibility 👩🏽‍💻.

Let's take a look at our sample app for this demo:

The application consists of a web app written in Python🐍, where you can vote between two options. It includes a Redis database that collects your votes, a worker written in .NET that processes the votes and stores them in a Postgres database supported by a Docker volume, and finally, a results interface written in Node.js that displays the voting results in real time.

What's the plan? The goal is to deploy these applications as containers in a Kubernetes⚙️ cluster and ensure they can connect with each other and the databases. Additionally, we want to enable external access for the apps that face the outside world: the voting app and the results app.

✅ First, we need to deploy the applications as Pods.

✅ Second, to enable connectivity between the services, we need to identify which applications need access to which services:

  • The Redis database is accessed by the Voting app (which saves votes to Redis) and by the Worker app (which reads votes from Redis).

  • The Postgres database is accessed by the Worker app (to update the total vote count) and by the Results app (to read the total vote count).

  • The Voting app and the Results app should be accessible to external users.

  • The Worker app is not accessed by anyone.

Each container has a Service that listens on its own port, except for the Worker app‼️. The Worker app doesn't have a Service because it only functions as a worker and isn't accessed by any other service.

To let the Voting app access the Redis database, we won't use the IP address of the Redis Pod: it can change whenever the Pod is recreated, and it doesn't scale🧐. Instead, we'll use a Service. A Service can expose an application to other applications, or even to users for external access.

To do this, we will create a Service called "redis", which will be accessible anywhere in the cluster by that name. But why is this name important? Let's take a look at the source code:

# Python app: app.py
def get_redis():
    if not hasattr(g, 'redis'):
        g.redis = Redis(host="redis", db=0, socket_timeout=5)
    return g.redis

// Worker app: Program.cs
var redisConn = OpenRedisConnection("redis");

As we can see, the name "redis" is hardcoded in the source code. We know it's not good practice to hardcode values like this; instead, we could use environment variables, for example (see the sketch below).
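Just as a hedged illustration (REDIS_HOST is a hypothetical variable name, not something from the actual repo), the Python app could read the hostname from the environment like this:

# Hypothetical version of get_redis() using an environment variable
import os

from flask import g
from redis import Redis

def get_redis():
    if not hasattr(g, 'redis'):
        # Fall back to "redis" when REDIS_HOST isn't set
        redis_host = os.environ.get("REDIS_HOST", "redis")
        g.redis = Redis(host=redis_host, db=0, socket_timeout=5)
    return g.redis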

For the Postgres database service, similar to Redis, we should also review the source code. Let's examine the source code of the result app:

// Result app: server.js
var pool = new Pool({
  connectionString: 'postgres://postgres:postgres@db/postgres'
});

// Worker app: Program.cs
var pgsql = OpenDbConnection("Server=db;Username=postgres;Password=postgres;");

We can see that they are looking for a database at the address "db," so we should name the service "db" as well.

✅ The third step is to allow external access for the voting app and the result app. To do this, we'll use two Services of type NodePort, which exposes an app on a static port (by default in the 30000-32767 range) on every node's IP. Meanwhile, we'll create Services of type ClusterIP (the default type, reachable only from inside the cluster) for the Redis and Postgres databases, since they don't need external access.

First Cluster Deployment

Enough with the theoretical explanations; it's time for some action ✨🏃‍♀️‍➡️

Here’s what the deployments and services would look like (you can also check my GitHub repository):

✅ Postgres:

# postgres-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deploy
  labels:
    name: postgres-deploy
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: postgres-pod
      app: demo-voting-app

  template:
    metadata:
      name: postgres-pod
      labels:
        name: postgres-pod
        app: demo-voting-app
    spec:
      containers:
        - name: postgres
          image: postgres
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              value: "postgres"
            - name: POSTGRES_PASSWORD
              value: "postgres"
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
# postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    name: postgres-service
    app: demo-voting-app
spec:
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    name: postgres-pod
    app: demo-voting-app

✅ Redis:

# redis-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deploy
  labels:
    name: redis-deploy
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: redis-pod
      app: demo-voting-app

  template:
    metadata:
      name: redis-pod
      labels:
        name: redis-pod
        app: demo-voting-app
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
# redis-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    name: redis-service
    app: demo-voting-app
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    name: redis-pod
    app: demo-voting-app

✅ Result app:

# result-app-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: result-app-deploy
  labels:
    name: result-app-deploy
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: result-app-pod
      app: demo-voting-app

  template:
    metadata:
      name: result-app-pod
      labels:
        name: result-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: result-app
          image: kodekloud/examplevotingapp_result:v1
          ports:
            - containerPort: 80
# result-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: result-service
  labels:
    name: result-service
    app: demo-voting-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: result-app-pod
    app: demo-voting-app

✅ Voting:

# voting-app-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-app-deploy
  labels:
    name: voting-app-deploy
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: voting-app-pod
      app: demo-voting-app

  template:
    metadata:
      name: voting-app-pod
      labels:
        name: voting-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: voting-app
          image: kodekloud/examplevotingapp_vote:v1
          ports:
            - containerPort: 80
# voting-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: voting-service
  labels:
    name: voting-service
    app: demo-voting-app
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: voting-app-pod
    app: demo-voting-app

✅ Worker:

# worker-app-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker-app-deploy
  labels:
    name: worker-app-deploy
    app: demo-voting-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: worker-app-pod
      app: demo-voting-app

  template:
    metadata:
      name: worker-app-pod
      labels:
        name: worker-app-pod
        app: demo-voting-app
    spec:
      containers:
        - name: worker-app
          image: kodekloud/examplevotingapp_worker:v1

Now that we have created all the deployment and service YAML files, we need to use the kubectl command line to deploy our app. How do we do that? Let's run the following commands in the console:

Start Minikube ✨

minikube start

You should see Minikube start the cluster and finish with a message saying that kubectl is now configured to use the "minikube" cluster.

Deploy the app 🙌

Now that Minikube has started, we are ready to apply our first commands to deploy this app:

Make sure you are in the correct location by typing pwd in the command line to see the current directory, or ls to check the files in that directory.

Let's deploy the Postgres Deployment and Service:

kubectl create -f postgres-deploy.yaml
kubectl create -f postgres-service.yaml

Now it’s time for Redis:

kubectl create -f redis-deploy.yaml
kubectl create -f redis-service.yaml

Result app:

kubectl create -f result-app-deploy.yaml
kubectl create -f result-app-service.yaml

Voting:

kubectl create -f voting-app-deploy.yaml 
kubectl create -f voting-app-service.yaml

And finally, the Worker deployment:

kubectl create -f worker-app-deploy.yaml
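As a side note, since all the YAML files live in the same directory, you could also create everything in one shot:

kubectl apply -f .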

To check if the app was deployed correctly, we can execute the following commands to get the URLs to access from the browser (just for testing purposes):

minikube service voting-service --url

minikube service result-service --url

And there you have it, we successfully deployed our first app on a Kubernetes Cluster! 🍾🎉

Useful Commands

To debug a deployment on Kubernetes, there are a few helpful commands you can use to find out what might be causing an error.

To check the existing Pods, services, and deployments that have been created, we can run:

kubectl get pods,svc,deployments

This way, you can see an overview of the status, ports, and other useful information about our Kubernetes cluster, all in one command.

To check a specific resource, you can run the following command:

kubectl describe pod <pod-name>

Here’s a cheatsheet that you might find useful 🙌:

# Get deployment status and details
kubectl get deployments
kubectl describe deployment <deployment-name>

# Check pods status and details
kubectl get pods
kubectl get pods -o wide                    # Shows more details including node allocation
kubectl describe pod <pod-name>
kubectl get pod <pod-name> -o yaml         # Get pod configuration in YAML
kubectl logs <pod-name>                    # Get pod logs
kubectl logs <pod-name> -c <container>     # For multi-container pods
kubectl logs <pod-name> --previous         # Get previous container logs if it crashed
kubectl logs -f <pod-name>                 # Stream logs in real-time

# Check resource usage
kubectl top pods
kubectl top nodes

# Debug networking issues
kubectl get services
kubectl describe service <service-name>
kubectl get endpoints <service-name>
kubectl exec -it <pod-name> -- curl <service-name>
kubectl exec -it <pod-name> -- nslookup <service-name>

# Debug configuration and secrets
kubectl get configmaps
kubectl describe configmap <configmap-name>
kubectl get secrets
kubectl describe secret <secret-name>

# Interactive debugging
kubectl debug <pod-name> -it --copy-to=debug-pod  # Create copy of pod for debugging
kubectl exec -it <pod-name> -- /bin/sh            # Get shell access to container
kubectl port-forward <pod-name> 8080:80           # Forward local port to pod

# Check cluster health
kubectl get nodes
kubectl describe node <node-name>
kubectl cluster-info
kubectl get componentstatuses              # Deprecated in newer Kubernetes versions

# Useful troubleshooting flags
kubectl get pods -o wide --show-labels          # Show labels and node allocation
kubectl get events --field-selector type=Warning   # Show warning events
kubectl logs <pod-name> --all-containers=true   # Show logs from all containers

# Advanced debugging
kubectl rollout history deployment/<deployment-name>   # Check deployment history
kubectl rollout status deployment/<deployment-name>    # Check rollout status
kubectl rollout undo deployment/<deployment-name>      # Rollback to previous version

# Network policy debugging
kubectl get networkpolicies
kubectl describe networkpolicy <policy-name>

# Resource quota debugging
kubectl get resourcequota
kubectl describe resourcequota <quota-name>

Next Steps in the Journey

I really hope this article helped you navigate through these concepts (see what I did there with the space reference? 😄). I had a blast writing it and tried something different this time by turning our tech adventure into a cosmic journey, because honestly, who doesn't love a good space analogy?😂

My next mission in my DevOps journey is to explore CI/CD pipelines with Jenkins👩🏽‍💻. Pretty excited to discover what it's all about and how it can level up my DevOps game! 🌟

Got questions? Drop me a message, I'd love to hear from you!

Catch you in the next post! 🩵