Sunday, August 13, 2023

How to Stop Running Workloads in Kubernetes

In the Kubernetes API there is no verb "stop"; technically speaking, it is not possible to "stop" something in Kubernetes. Instead, we can set the number of replicas to zero. This instructs the deployment controller to delete all the existing pods of a given deployment. After that, no new pods will be created unless the replica count is raised back above zero. Applying this setting is the practical equivalent of stopping a deployment.

kubectl --namespace default scale deployment my-deployment --replicas 0

Now, to get the deployment name, the command below can be used:

kubectl get deploy -o wide

NAME           READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                                             IMAGES                  SELECTOR

nginx-webapp   5/5     5            5           8h    sidecar-container1,sidecar-container2,main-container   busybox,busybox,nginx   app=nginx-webapp

So the command to stop the deployment in this scenario is:

kubectl --namespace default scale deployment nginx-webapp --replicas 0
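
To start the workload again later, scale the replica count back up, for example:

kubectl --namespace default scale deployment nginx-webapp --replicas 5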

Stop Multiple Deployments

Kubectl allows you to perform the same operation on multiple objects at once. By doing some Linux shell magic we can obtain a list of all deployments and scale all of them using a single command.

To stop all Kubernetes deployments, run the following kubectl command (the awk filter NR>1 skips the NAME header row of the kubectl output):

kubectl --namespace default scale deployment $(kubectl --namespace default get deployment | awk 'NR>1 {print $1}') --replicas 0

Deployments are not the only resources that manage Kubernetes workloads; there are also StatefulSets. The same approach scales them down:

kubectl --namespace default scale statefulset --replicas 0 $(kubectl --namespace default get statefulset | awk 'NR>1 {print $1}')
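
Alternatively, kubectl's scale command supports an --all flag that selects every object of a given type in the namespace, with no shell plumbing required:

kubectl --namespace default scale deployment --all --replicas 0

kubectl --namespace default scale statefulset --all --replicas 0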

If you want to perform a complete cleanup of your Kubernetes cluster, you can delete all your resources at once:

kubectl delete all --all --namespace default


references:

https://yourdevopsmentor.com/blog/how-to-stop-all-kubernetes-deployments/


What is Kind in Kubernetes

In Kubernetes, the kind field is used to specify the type of Kubernetes resource being described in a YAML or JSON configuration file. The kind field determines how the Kubernetes API server should interpret and handle the resource. Here are some common values for the kind field along with brief explanations of each:

Pod:

Represents a single instance of a running process in a cluster. Pods are the smallest deployable units in Kubernetes and can contain one or more containers.

Service:

Provides network connectivity to a set of pods. Services enable load balancing and DNS-based discovery for pods.

ReplicationController:

Ensures a specified number of pod replicas are running at all times. If a pod fails or is deleted, the ReplicationController replaces it.

Deployment:

Provides declarative updates to applications. It allows you to define desired state and manages the deployment and scaling of pods.

StatefulSet:

Manages the deployment and scaling of a set of pods with unique identities. It is useful for applications that require stable network identities and persistent storage.

DaemonSet:

Ensures that a copy of a specified pod is running on each node in the cluster. Used for running background tasks or agents.

Job:

Represents a single task or batch job. Jobs create one or more pods and run the specified command to completion.

CronJob:

Creates jobs on a schedule defined using the Cron format. Useful for running jobs periodically.

Namespace:

Provides a way to logically partition resources within a cluster. Namespaces help in organizing and managing resources.

ConfigMap:

Stores configuration data as key-value pairs that can be used by pods or other resources.

Secret:

Stores sensitive information, such as passwords, tokens, or API keys, securely.

PersistentVolume:

Represents a storage resource in the cluster that can be used by pods. PersistentVolumes decouple storage from individual pods.

PersistentVolumeClaim:

Requests a specific amount of storage from a PersistentVolume. Pods use PersistentVolumeClaims to access storage resources.

ServiceAccount:

Represents an identity for pods and allows fine-grained access control to Kubernetes API resources.

Ingress:

Manages external access to services within a cluster, typically for HTTP-based applications.

ClusterRole:

Defines a set of permissions for accessing cluster-level resources. ClusterRoles are used with RoleBindings and ClusterRoleBindings.

CustomResourceDefinition (CRD):

Extends Kubernetes API with custom resources. It allows you to define your own API objects.

ServiceMonitor:

Custom resource used by monitoring tools like Prometheus Operator to discover and monitor services.

These are just a few examples of the possible values for the kind field in Kubernetes resource definitions. Each value corresponds to a specific type of resource that Kubernetes manages. The kind field is crucial for proper interpretation and functioning of the resource within the Kubernetes cluster.
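
For example, the manifest below declares its kind as ConfigMap; the metadata name and the data entry are purely illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info

Changing the kind (together with a matching apiVersion) is all it takes for the API server to treat the same file as a different type of object.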


References:

ChatGPT 


What is NodePort in Kubernetes

In Kubernetes, a NodePort is a type of service that exposes an application running inside a cluster to be accessible from outside the cluster. It allows you to expose your application to a specific port on each node in the cluster, making the application accessible from the public internet or other networks.

Here's how the NodePort type works:

Internal Cluster Communication: Your application runs as a set of pods inside the Kubernetes cluster. These pods communicate with each other using their own IP addresses and ports within the cluster's internal network.

NodePort Service: When you create a NodePort service, Kubernetes allocates a port (the NodePort) on each node in the cluster. Any traffic that arrives at this port on any node is forwarded to the corresponding port on the selected pods.

External Access: This means that the service can be accessed from outside the cluster by connecting to any node's IP address on the specified NodePort.

Port Range: The NodePort is usually in the range 30000-32767. You can specify the NodePort explicitly when creating the service, or let Kubernetes choose an available port within this range.
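
As a minimal sketch (the service name, selector label, and port numbers here are illustrative), a NodePort service manifest looks like this:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

With this in place, the application is reachable from outside the cluster at any-node-ip:30080.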

References:

ChatGPT 

What is Kubernetes Deployment YAML

A Kubernetes user or administrator specifies data in a YAML file, typically to define a Kubernetes object. The YAML configuration is called a “manifest”, and when it is “applied” to a Kubernetes cluster, Kubernetes creates an object based on the configuration.


A Kubernetes Deployment YAML specifies the configuration for a Deployment object—this is a Kubernetes object that can create and update a set of identical pods. Each pod runs specific containers, which are defined in the spec.template field of the YAML configuration. 


The Deployment object not only creates the pods but also ensures the correct number of pods is always running in the cluster, handles scalability, and takes care of updates to the pods on an ongoing basis. All these activities can be configured through fields in the Deployment YAML. 


The following YAML configuration creates a Deployment object that runs 5 replicas of an NGINX container.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 5
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80



spec.replicas: specifies how many pods to run.

spec.strategy.type: specifies which deployment strategy should be used. In this case and in the following example we select RollingUpdate, which means new versions are rolled out gradually to pods to avoid downtime.

spec.template.spec.containers: specifies which container image to run in each of the pods and which ports to expose.



Below is how to specify resource requests and limits:


spec:
  containers:
    - name: nginx
      image: nginx
      resources:
        limits:
          memory: 200Mi
        requests:
          cpu: 100m
          memory: 200Mi
      ports:
        - containerPort: 80



limits: each container is not allowed to consume more than 200Mi of memory.

requests: each container requires 100m of CPU resources and 200Mi of memory on its node.
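
To apply the full Deployment manifest above (assuming it is saved as deployment.yaml) and confirm the rollout, run:

kubectl apply -f deployment.yaml

kubectl rollout status deployment nginx-deployment

kubectl get deployment nginx-deployment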



references:

https://codefresh.io/learn/kubernetes-deployment/kubernetes-deployment-yaml/


Kubernetes Sidecar Pattern

A pod is the basic building block of a Kubernetes application. Kubernetes manages pods instead of containers, and pods encapsulate containers. A pod may contain one or more containers, storage, IP addresses, and options that govern how containers should run inside the pod.


A pod that contains one container is referred to as a single-container pod, and it is the most common Kubernetes use case. A pod that contains multiple co-related containers is referred to as a multi-container pod. There are a few patterns for multi-container pods; one of them is the sidecar container pattern.



What are Sidecar Containers

Sidecar containers are containers that run along with the main container in the pod. The sidecar pattern extends and enhances the functionality of existing containers without changing them.



All the containers are executed in parallel, and the whole functionality works only if both types of containers are running successfully. Most of the time these sidecar containers are simple and small, consuming fewer resources than the main container.


Below is a sample pod YAML file:


apiVersion: v1
kind: Pod
metadata:
  name: sidecar-container-demo
spec:
  containers:
  - image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) 'Hi I am from Sidecar container' >> /var/log/index.html; sleep 5; done"]
    name: sidecar-container
    resources: {}
    volumeMounts:
    - name: var-logs
      mountPath: /var/log
  - image: nginx
    name: main-container
    resources: {}
    ports:
      - containerPort: 80
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/html
  dnsPolicy: Default
  volumes:
  - name: var-logs
    emptyDir: {}




// create the pod
kubectl create -f pod.yml

// list the pods
kubectl get po

// exec into the main container
kubectl exec -it sidecar-container-demo -c main-container -- /bin/sh

# apt-get update && apt-get install -y curl
# curl localhost
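
To confirm that the sidecar is actually writing to the shared volume, you can also read the generated file from the sidecar container itself:

kubectl exec -it sidecar-container-demo -c sidecar-container -- cat /var/log/index.html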




references:

https://medium.com/bb-tutorials-and-thoughts/kubernetes-learn-sidecar-container-pattern-6d8c21f873d

Saturday, August 12, 2023

Installing and running Minikube on Mac

Step 1:

To check if virtualization is supported on macOS, run the following command on your terminal.

sysctl -a | grep -E --color 'machdep.cpu.features|VMX'

If you see VMX in the output (it should be colored), the VT-x feature is enabled on your machine.


Make sure you have kubectl installed. You can install kubectl using the command below:


brew install kubectl

Verify the kubectl version:

kubectl version


Install a Hypervisor

If you do not already have a hypervisor installed, install one of these now:


• HyperKit

• VirtualBox

• VMware Fusion


We will install HyperKit to run our Minikube

brew install hyperkit

Verify that you installed kubectl and HyperKit successfully on your Mac using:

brew list


Install Minikube

The easiest way to install Minikube on macOS is with Homebrew:


brew install minikube


minikube version

We have successfully set up Minikube on our Mac; now we are ready to start it.


Start Minikube using the command below:

minikube start

If you observe the output above, Minikube chose hyperkit as the default driver, which is the hypervisor we installed.
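
If Minikube picks a different default driver on your machine, you can request HyperKit explicitly:

minikube start --driver=hyperkit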


Once Minikube has started successfully, we can verify its status:

minikube status

After you have confirmed whether Minikube is working with your chosen hypervisor, you can continue to use Minikube or you can stop your cluster. To stop your cluster, run:

minikube stop

Delete minikube

minikube delete


references:

https://medium.com/@javatechie/kubernetes-tutorial-install-run-minikube-in-mac-os-k8s-cluster-369b25b0c3f0

Friday, August 4, 2023

What is Sidecar container

A Kubernetes sidecar container is an additional container that is deployed alongside the main container within the same Kubernetes Pod. The term "sidecar" is derived from the sidecar attached to a motorcycle, which provides additional support and functionality. Similarly, a sidecar container in Kubernetes enhances the capabilities of the main container by providing complementary functionality or services.

The primary purpose of sidecar containers is to support the main application container by sharing the same network namespace, storage, and other resources within the Pod. This allows sidecar containers to closely interact with the main container and work together seamlessly.

Some common use cases for Kubernetes sidecar containers include:

Logging and Monitoring: A sidecar container can be used to collect logs from the main container or forward them to a centralized logging system. It can also handle metrics and send them to monitoring solutions.

Security and Encryption: A sidecar container can handle tasks related to security, such as managing SSL certificates, handling encryption/decryption, or authenticating requests.

Data Synchronization: A sidecar container can perform data synchronization or caching tasks, making data readily available to the main container.

Adapters and Proxies: A sidecar container can act as an adapter or proxy, modifying requests and responses before they reach the main container.

Backup and Restore: A sidecar container can handle backup and restore operations for the main container's data or configurations.

Using sidecar containers has several advantages, including:

Separation of Concerns: Sidecar containers allow you to keep specific functionalities or services separate from the main application, promoting a modular and maintainable architecture.

Reuse and Scalability: Sidecar containers can be easily reused across different applications, promoting code reuse and reducing duplication.

Easy Integration: Sidecar containers can integrate seamlessly with the main container within the same Pod, simplifying communication and coordination.

When defining sidecar containers in a Kubernetes Pod, ensure that the sidecar and main containers have clearly defined roles and responsibilities. Keep in mind that each container within a Pod shares the same network namespace, so they can communicate using localhost and ports without the need for exposing them externally.
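
As a minimal sketch of this localhost sharing (the pod name, container names, and the curlimages/curl image are assumptions for illustration), a sidecar here reaches the main nginx container without any Service in between:

apiVersion: v1
kind: Pod
metadata:
  name: localhost-demo
spec:
  containers:
  - name: main-container
    image: nginx              # listens on port 80 inside the pod
    ports:
    - containerPort: 80
  - name: sidecar-container
    image: curlimages/curl    # small image that ships with curl
    command: ["sh", "-c", "while true; do curl -s http://localhost:80 >/dev/null && echo 'main container reachable'; sleep 10; done"]

Because both containers share the pod's network namespace, localhost:80 inside the sidecar resolves to the nginx container.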

Overall, Kubernetes sidecar containers are a powerful pattern to extend and enhance the functionality of your applications, enabling you to build more robust and feature-rich containerized solutions.

references:
OpenAI