Wednesday, August 27, 2025

Kubernetes: What are the three main processes on worker nodes?

Below are the details:


1. Container Runtime (e.g., containerd, CRI-O, Docker in older setups)

Actually runs the containers inside Pods.

Pulls images, starts/stops containers, manages container lifecycle.

2. Kubelet

The primary Kubernetes agent running on each Node.

Talks to the API Server to get Pod specs.

Ensures that the described containers (via Pod specs) are running in the Container Runtime.

Collects resource usage stats and reports back to the control plane.

Handles liveness & readiness probes.

3. Kube-proxy

Handles Service networking on each Node.

Implements iptables / IPVS rules so traffic to a Service IP gets forwarded to the right Pod.

Provides basic L4 load balancing.

By default, traffic is spread across all healthy endpoints of a Service; with features such as topology-aware routing or an internal traffic policy, traffic (e.g., a Pod connecting to a DB Service) can be kept on an endpoint on the same node, reducing cross-node networking overhead.



🔹 Elaborated View

When the API Server assigns a Pod to a Node:

The Kubelet gets the PodSpec.

Kubelet tells the Container Runtime to pull the image and run the container.

Kube-proxy ensures that when traffic comes to a Service, it gets routed to the Pod correctly, even if it is across Nodes.

This triad ensures Pods run, stay healthy, and can communicate seamlessly.



🔹 Communication Flow (Sequence Diagram)


Paste this in Mermaid Live Editor:


sequenceDiagram

    participant APIServer as Kubernetes API Server (Control Plane)

    participant Kubelet as Kubelet (Node Agent)

    participant CR as Container Runtime (containerd / CRI-O)

    participant KubeProxy as Kube-proxy (Service Routing)

    participant Pod as Pod/Container (App + DB)


    APIServer->>Kubelet: Assign PodSpec (App Pod needs to run)

    Kubelet->>CR: Pull image & start container

    CR->>Kubelet: Container started successfully

    Kubelet->>APIServer: Pod status updated (Running)


    Note over Kubelet,CR: Kubelet ensures Pod lifecycle <br> (start, monitor, restart if needed)


    Pod->>KubeProxy: Request DB Service

    KubeProxy->>Pod: Routes to local DB Pod if available

    KubeProxy->>OtherNode: Or forwards traffic to DB Pod on another node

    OtherNode-->>Pod: Response from DB


    Note over KubeProxy,Pod: Kube-proxy provides L4 load balancing <br> and Service-to-Pod resolution

✅ Summary:

Kubelet = makes sure containers match what’s declared in API server.

Container Runtime = actually runs the containers.

Kube-proxy = makes sure services → pods traffic is routed correctly across nodes, while trying to optimize locality.




Kubernetes: What is Ingress and why is an Ingress Controller required?

 


This is one of the most confusing (but important) distinctions in Kubernetes networking. Let’s break it down clearly:



🔹 What is an Ingress?

An Ingress is a Kubernetes API object that manages external HTTP/HTTPS access to services in a cluster.

It works at Layer 7 (Application Layer).

Ingress lets you define routing rules like:

example.com/app1 → routes to Service A

example.com/app2 → routes to Service B

api.example.com → routes to Service C

It also supports:

SSL/TLS termination

Path-based routing

Hostname-based routing


Think of Ingress as the “rules” that describe how requests should be routed.
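
A minimal Ingress manifest sketch of such rules (the hostnames, Service names, and TLS Secret are hypothetical, and it assumes an NGINX Ingress Controller is installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx          # which Ingress Controller should implement these rules
  tls:
  - hosts:
    - example.com
    secretName: example-tls        # hypothetical Secret holding the TLS certificate/key
  rules:
  - host: example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: service-a        # hypothetical Service A
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: service-b        # hypothetical Service B
            port:
              number: 80
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-c        # hypothetical Service C
            port:
              number: 80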



🔹 What is an Ingress Controller?

An Ingress Controller is the actual implementation that reads Ingress rules and configures a reverse proxy / load balancer.

Common Ingress Controllers:

NGINX Ingress Controller (most popular)

Traefik

HAProxy

Cloud-specific (AWS ALB Ingress, GCP Load Balancer Ingress, etc.)

Without an Ingress Controller, the Ingress object is just a definition — it won’t actually route traffic.



🔹 How is Ingress different from Services / LoadBalancers?


| Feature | Service (ClusterIP / NodePort / LoadBalancer) | Ingress |
|---|---|---|
| Layer | Layer 4 (Transport Layer) | Layer 7 (Application Layer) |
| Purpose | Stable access to Pods, load-balancing across them | Advanced routing for external traffic (HTTP/HTTPS) |
| External exposure | LoadBalancer Service provisions a cloud LB per Service | Typically one external LB that multiplexes traffic to many Services |
| Routing | Routes by IP:Port | Routes by hostname/path (e.g. /api, /app) |
| SSL/TLS | Not handled directly | Handled at the Ingress (TLS termination) |
| Use case | Simple service access (DBs, gRPC, internal APIs) | Complex web app routing (multi-service apps, APIs, microservices) |




🔹 How They Work Together

1. Service: Provides stable IP + DNS inside the cluster, load balances across Pods.

2. Ingress + Ingress Controller:

Accepts external HTTP/HTTPS traffic.

Applies rules to map traffic → Services.

Reduces need for one LoadBalancer per Service (saves cost).



🔹 Example


Without Ingress:

Each Service exposed externally needs its own LoadBalancer (expensive, redundant).


With Ingress:

You use one LoadBalancer (e.g., AWS ELB) → forwards to Ingress Controller → Ingress rules distribute to multiple Services.



✅ In short:

Service/LoadBalancer = basic connectivity + load balancing (L4).

Ingress = smarter traffic routing + SSL + path/host-based rules (L7).

Ingress Controller = the “engine” that makes Ingress work (usually NGINX or Traefik).






Below is a sequence diagram showing the communication flow:


sequenceDiagram

    participant User as External User (Browser/API Client)

    participant CloudLB as Cloud Provider Load Balancer (AWS ELB / GCP LB / Azure LB)

    participant IngressCtl as Ingress Controller (NGINX / Traefik)

    participant Ingress as Ingress Rules (Host/Path Routing)

    participant ServiceA as Service A (ClusterIP)

    participant ServiceB as Service B (ClusterIP)

    participant PodA as Pod A (on Node1)

    participant PodB as Pod B (on Node2)


    User->>CloudLB: Request https://app.example.com/api

    CloudLB->>IngressCtl: Forward traffic to Ingress Controller


    IngressCtl->>Ingress: Check rules (e.g., /api → Service A)

    Ingress->>ServiceA: Route request to Service A


    ServiceA->>PodA: Forward traffic (Pod on Node1)

    ServiceA->>PodB: Or forward traffic (Pod on Node2)


    Note over IngressCtl,Ingress: Ingress Controller enforces Ingress rules <br> (path/host based, SSL termination)

    Note over ServiceA,PodA: Service does L4 load balancing <br> across Pods

    Note over CloudLB,IngressCtl: Cloud LB exposes cluster externally <br> only 1 LB needed for many services




Kubernetes: What are internal and external Services in Kubernetes? Is a Service attached to a Pod?

What are internal and external Services in Kubernetes? Is a Service attached to a Pod? Does the same Service handle routing when there are multiple nodes, with traffic routed to whichever node the target Pod is on? Is this like a load balancer? If Services act as load balancers, then what is the role of the load balancers provided by cloud providers?

Let’s break it down clearly:

🔹 Internal vs External Services in Kubernetes

Internal Service (ClusterIP)

Default type.

Exposes the service only within the cluster (accessible by other pods).

Gets a virtual IP (cluster IP).

Useful for microservice-to-microservice communication.

External Service

Exposes the service outside the cluster.

Types:

NodePort → Exposes service on a port of every node (e.g. <NodeIP>:<NodePort>).

LoadBalancer → Integrates with cloud provider’s load balancer (AWS ELB, GCP LB, Azure LB).

ExternalName → Maps service to an external DNS name.



🔹 Is a Service attached to a Pod?

Not exactly.

A Service is attached to a set of Pods (selected via labels and selectors).

Pods are ephemeral (they can die, restart, or move to other nodes), but the Service stays stable and always points to healthy pods.
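
A minimal Service sketch showing the label-selector link (names and ports are hypothetical): the Service forwards to whichever healthy Pods currently carry the matching label, wherever they run.

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP            # default (internal); NodePort or LoadBalancer would expose it externally
  selector:
    app: my-app              # matches Pods labeled app=my-app
  ports:
  - port: 80                 # port the Service exposes
    targetPort: 8080         # port the container listens on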



🔹 Does the same Service act as routing across multiple nodes?


✅ Yes.

Service routes traffic to pods regardless of which node they are on.

Kubernetes uses kube-proxy (iptables or IPVS) to distribute traffic across pods.

From the client’s perspective, you just talk to the Service — Kubernetes handles finding the right Pod across nodes.



🔹 Is this like a Load Balancer?

Internally, yes → Kubernetes Service acts as a layer-4 load balancer across pods.

But, externally, a Service of type LoadBalancer just provisions a cloud provider’s load balancer to expose your service outside the cluster.



🔹 Then what is the role of Cloud Provider Load Balancers?

Kubernetes internal Service load balancing happens inside the cluster only.

Cloud provider external load balancers are needed to:

Expose the service to the internet or external clients.

Provide global load balancing, SSL termination, WAF, health checks, autoscaling integration, etc.

Example: In AWS → an ELB distributes traffic to your Kubernetes nodes, then kube-proxy forwards traffic to the right pod.



✅ In summary:

Kubernetes Services = Internal cluster service discovery + load balancing across pods.

Cloud Load Balancers = External exposure, internet traffic routing, enterprise-grade LB features.

They work together: cloud LB sends traffic into cluster → Service distributes across pods.






Below is a sequence diagram describing this flow:


sequenceDiagram

    participant User as External User

    participant CloudLB as Cloud Provider Load Balancer (AWS ELB / GCP LB / Azure LB)

    participant Node1 as Kubernetes Node 1

    participant Node2 as Kubernetes Node 2

    participant Service as Kubernetes Service (ClusterIP / NodePort)

    participant PodA as Pod A (on Node1)

    participant PodB as Pod B (on Node2)


    User->>CloudLB: HTTP Request to app.example.com

    CloudLB->>Node1: Forward traffic (round-robin / LB logic)

    CloudLB->>Node2: Or forward traffic (depending on LB)


    Node1->>Service: kube-proxy routes request

    Node2->>Service: kube-proxy routes request


    Service->>PodA: Forward traffic (if Pod selected)

    Service->>PodB: Forward traffic (if Pod selected)


    Note over Service,PodA: Service load-balances traffic <br> internally across Pods

    Note over CloudLB,Service: Cloud LB exposes service <br> to internet, forwards to cluster





Kubernetes: How a ReplicaSet Works

A ReplicaSet is designed to manage stateless applications. It focuses on ensuring a consistent number of identical pods are running.


1.  **Desired State:** You define a ReplicaSet with a pod template and a desired replica count (e.g., `replicas: 3`).

2.  **Pod Creation:** The ReplicaSet controller looks at the current number of running pods that match the template. If the count is less than the desired count, it creates new pods.

3.  **No Stable Identity:** The pods are given random, unique names (`app-1234-abc`, `app-1234-xyz`). They are considered interchangeable.

4.  **No Persistent Storage:** The pods do not have dedicated persistent storage. They are disposable; if a pod is terminated, the ReplicaSet creates a new one, but the data from the terminated pod is lost.

5.  **Scaling and Self-Healing:** If a pod crashes or is terminated, the ReplicaSet controller detects the change and automatically creates a new pod to maintain the desired count. If you manually scale up or down the ReplicaSet, it will add or remove pods accordingly.
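
A minimal ReplicaSet manifest sketch of the desired state described above (image and labels are hypothetical):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: app
spec:
  replicas: 3                 # desired count the controller keeps enforcing
  selector:
    matchLabels:
      app: my-app
  template:                   # Pod template used to create (and recreate) replicas
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: nginx:1.27     # hypothetical image
        ports:
        - containerPort: 80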



Below is a sequence diagram showing this:

sequenceDiagram

    participant User

    participant kubectl

    participant Kubernetes Master

    participant ReplicaSet Controller

    participant Kubelet

    participant Node

    participant Pod


    User->>kubectl: Apply ReplicaSet manifest (replicas: 3)

    kubectl->>Kubernetes Master: Create ReplicaSet

    Kubernetes Master->>ReplicaSet Controller: ReplicaSet is created

    ReplicaSet Controller->>Kubernetes Master: Creates Pod 1, Pod 2, Pod 3

    Kubernetes Master-->>Kubelet: Schedules Pods to Nodes

    Kubelet->>Node: Launches Pods

    Pod->>ReplicaSet Controller: Reports Ready

    

    Note over Pod: Pods are random, disposable

    

    loop Monitoring and Self-Healing

        Kubelet->>Kubernetes Master: Pod 2 dies

        Kubernetes Master->>ReplicaSet Controller: Pod 2 is missing

        ReplicaSet Controller->>Kubernetes Master: Creates a new Pod 4

        Kubernetes Master-->>Kubelet: Schedules Pod 4

        Kubelet->>Node: Launches Pod 4

    end




Kubernetes: How do StatefulSets work in Kubernetes?

A StatefulSet is designed for stateful applications that require stable network identity and persistent storage.


1.  **Desired State:** You define a StatefulSet with a pod template, a desired replica count, and a `volumeClaimTemplates` to manage storage.

2.  **Ordered Pod Creation:** The StatefulSet controller creates pods in a strict, ordered sequence, starting from index `0`. The first pod created will be `web-0`, followed by `web-1`, and so on.

3.  **Stable Identity:** Each pod gets a stable, predictable name and hostname based on its ordinal index (`web-0`, `web-1`).

4.  **Unique Persistent Storage:** For each pod, the StatefulSet controller creates a unique `PersistentVolumeClaim` (PVC) based on the `volumeClaimTemplates`. This PVC ensures that `web-0` always gets the same storage volume, and `web-1` gets its own, separate volume.

5.  **Ordered Scaling and Self-Healing:** The StatefulSet guarantees that pods are created and terminated in order. When scaling down, it terminates pods from the highest ordinal index first (`web-2`, then `web-1`). If a pod dies, the StatefulSet will replace it with a new pod that has the **same identity and connects to the same persistent volume**, ensuring data is not lost. 
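
A minimal StatefulSet manifest sketch matching the description above (image, size, and names are hypothetical); the controller derives one PVC per Pod from `volumeClaimTemplates` (data-web-0, data-web-1, ...):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web            # headless Service giving Pods stable DNS names (web-0.web, web-1.web, ...)
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27     # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:       # one dedicated PVC per Pod, reattached on rescheduling
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

The sequence diagram below shows the ordered creation and scale-down flow.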


sequenceDiagram

    participant User

    participant kubectl

    participant Kubernetes Master

    participant StatefulSet Controller

    participant Kubelet

    participant PersistentVolume Provisioner

    participant Storage

    participant Pod


    User->>kubectl: Apply StatefulSet manifest (replicas: 3)

    kubectl->>Kubernetes Master: Create StatefulSet

    Kubernetes Master->>StatefulSet Controller: StatefulSet is created

    

    StatefulSet Controller->>Kubernetes Master: Creates Pod 0

    Kubernetes Master->>PersistentVolume Provisioner: Request Persistent Volume for Pod 0

    PersistentVolume Provisioner->>Storage: Creates new volume

    Storage-->>PersistentVolume Provisioner: Volume created

    PersistentVolume Provisioner-->>Kubernetes Master: PVC created

    Kubernetes Master-->>Kubelet: Schedules Pod 0

    Kubelet->>Node: Launches Pod 0, mounts volume

    Pod->>StatefulSet Controller: Pod 0 Ready

    

    StatefulSet Controller->>Kubernetes Master: Creates Pod 1

    Note over Pod: Waits for Pod 0 to be ready before creating Pod 1

    Kubernetes Master->>PersistentVolume Provisioner: Request Persistent Volume for Pod 1

    PersistentVolume Provisioner->>Storage: Creates new volume

    Storage-->>PersistentVolume Provisioner: Volume created

    PersistentVolume Provisioner-->>Kubernetes Master: PVC created

    Kubernetes Master-->>Kubelet: Schedules Pod 1

    Kubelet->>Node: Launches Pod 1, mounts volume

    Pod->>StatefulSet Controller: Pod 1 Ready

    

    Note right of Pod: Pods have stable identity (pod-0, pod-1) and storage

    

    loop Scaling Down

        User->>kubectl: Scale down to 1 replica

        kubectl->>Kubernetes Master: Update StatefulSet

        Kubernetes Master->>StatefulSet Controller: Scale to 1

        StatefulSet Controller->>Kubernetes Master: Deletes Pod 1

        Kubernetes Master->>Kubelet: Terminates Pod 1

        Kubelet->>Node: Unmounts and deletes Pod 1

        Note over Pod: Pods are terminated in reverse ordinal order

    end



Does the data in a StatefulSet get replicated across nodes?

The data in a **StatefulSet does not get replicated across nodes by Kubernetes itself**. This is a common point of confusion. The StatefulSet's job is to ensure that each pod has a unique, stable identity and that its associated storage is persistent and unique to that pod.


To achieve data replication, you need to use a separate, application-specific mechanism, such as:


* **Database-level Replication:** For databases like MySQL or PostgreSQL, you configure them to replicate data from a primary instance to one or more secondary instances.

* **Distributed File Systems:** Using a system like Ceph or GlusterFS, which handles the data replication and synchronization across multiple storage nodes.

* **Cloud Provider Features:** Many cloud-native databases offer built-in replication and high-availability features.


In your diagram, the pods would all be configured to use a remote volume for their database, but the **database application itself**, running inside the pods, would be responsible for synchronizing data between the instances. The StatefulSet simply ensures that each pod can reliably find and attach to its correct persistent volume, even if it's restarted on a different node.


Kubernetes: What is the difference between a ReplicaSet and a StatefulSet in Kubernetes?

 


The core difference between a Kubernetes ReplicaSet and a StatefulSet is how they handle the identity and storage of their pods.


ReplicaSet: The Stateless Workhorse

A ReplicaSet is designed for stateless applications. Its primary goal is to maintain a stable number of identical pod replicas.


Identity: Pods managed by a ReplicaSet have no stable identity. They are given random names (e.g., web-server-8b94f6c7c-j2bxb) and are considered interchangeable. If a pod dies, a new one is created to replace it, and the new pod will have a different, random name.


Storage: ReplicaSets are not concerned with persistent storage. Pods they create can use volumes, but these are typically temporary (emptyDir) or read-only. For persistent data, you'd need to manage the volumes separately, as the data is not tied to a specific pod's identity.


StatefulSet: The Stateful Specialist

A StatefulSet is specifically designed for stateful applications, such as databases or message queues, that require a stable identity and persistent storage.


Identity: Pods managed by a StatefulSet have a stable, unique identity. They are given ordinal names (e.g., db-0, db-1, db-2) that persist even if the pod is rescheduled or restarted. This stable identity allows you to track and manage each pod individually.


Storage: StatefulSets manage persistent storage by creating a unique PersistentVolumeClaim (PVC) for each pod. This means db-0 will always have its own dedicated storage volume, and if it dies and is rescheduled, it will reconnect to the exact same volume, preserving its data.


What are Pods and Containers in Kubernetes?

Every Pod has a unique IP address, and this IP address is reachable from all other Pods in the cluster.

One challenge in a distributed services architecture is how to allocate ports without conflicts.

When a Pod is created on a node, it gets its own virtual network namespace and a virtual Ethernet connection to the underlying network.

A Pod can be thought of as a host on the network: it has its own IP address and a range of ports to allocate to its containers.

This keeps container IP and port management inside the Pod itself.

A Pod usually runs one main application container, optionally alongside a few helper containers.

This architecture allows ten microservices to each listen on port 8080 inside ten different Pods without any port conflict, because each Pod is its own self-contained, isolated environment.

It also allows the container runtime to be swapped without changing the Kubernetes configuration, because the runtime details are encapsulated inside the Pods.

Helper containers in a Pod are often called sidecar containers. Containers within a Pod can communicate with each other over localhost.

When a Pod dies, it is recreated and gets a new IP address. To mitigate this, a Service is introduced. A Service has a stable IP address, and the lifecycles of the Pod and the Service are not connected.

A Service provides a stable IP address (and DNS name) that fronts one or more Pods.

An external Service allows access from outside the cluster; it opens communication to external clients.

An internal Service does not allow external communication and is reachable only within the cluster.

The type of Service is specified at the time of creation.

A ConfigMap holds the external configuration of an application; Pods can read its properties.

Secrets are similar to ConfigMaps, but they are intended for sensitive data and are stored base64-encoded.

Note that base64 encoding is not encryption; encryption of Secrets at rest is not enabled by default.

The contents of a ConfigMap can be consumed as environment variables or mounted as a properties file.
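
A minimal sketch of consuming a ConfigMap and a Secret as environment variables (names and values are hypothetical; the Secret value is simply "password" base64-encoded):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: db-service            # plain-text configuration value
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=      # base64-encoded, not encrypted
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: nginx:1.27            # hypothetical image
    env:
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: DB_HOST
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: DB_PASSWORD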



Monday, August 25, 2025

What is Docker Model Runner?

Local development for applications powered by LLMs is gaining momentum, and for good reason. It offers several advantages on key dimensions such as performance, cost, and data privacy. But today, local setup is complex.  


Developers are often forced to manually integrate multiple tools, configure environments, and manage models separately from container workflows. Running a model varies by platform and depends on available hardware. Model storage is fragmented because there is no standard way to store, share, or serve models. 


The result? Rising cloud inference costs and a disjointed developer experience. With our first release, we’re focused on reducing that friction, making local model execution simpler, faster, and easier to fit into the way developers already build.


Docker Model Runner is designed to make AI model execution as simple as running a container. 


With Docker Model Runner, running AI models locally is now as simple as running any other service in your inner loop. Docker Model Runner delivers this by including an inference engine as part of Docker Desktop, built on top of llama.cpp and accessible through the familiar OpenAI API. No extra tools, no extra setup, and no disconnected workflows. Everything stays in one place, so you can test and iterate quickly, right on your machine.



Enabling GPU acceleration (Apple silicon)

GPU acceleration on Apple silicon helps developers get fast inference and the most out of their local hardware. By using host-based execution, we avoid the performance limitations of running models inside virtual machines. This translates to faster inference, smoother testing, and better feedback loops.



https://www.youtube.com/watch?v=rGGZJT3ZCvo

What is Fabric.so?

Fabric.so is an AI-powered workspace and file explorer that acts as a centralized hub for your digital content, connecting your notes, files, links, and cloud services into a single, searchable "second brain". It helps individuals and teams organize information, capture ideas instantly, search using natural language, and collaborate by replacing multiple productivity apps with one unified tool.

Key Features and Functionality

Centralized Organization:

Fabric connects to your existing cloud storage (like Dropbox) and allows you to save any type of content—text, images, websites, files, and voice notes—to its universal workspace. 

Instant Capture:

Users can quickly save information from any device with a single click or by recording voice notes, which are automatically transcribed and become searchable. 

Powerful AI Search:

The platform features an AI-powered search that can find information across all connected apps and services using natural language, eliminating the need to switch between multiple tools. 

AI-Powered Assistant:

A built-in AI assistant can search, summarize, and even help brainstorm content within your Fabric workspace. 

Self-Organizing Workspace:

Fabric is designed to be a self-organizing system, reducing manual organization efforts and creating a system that works "just like your mind". 

Collaboration:

It provides a space for individuals and teams to collaborate, share ideas, and work on projects together. 

Replaces Other Apps:

By consolidating various productivity tools, Fabric aims to save users time and money by reducing the number of subscriptions needed


Kubernetes Hands on

 Ingress configuration : https://www.youtube.com/watch?v=80Ew_fsV4rM










Sunday, August 24, 2025

What is AI Algorithmic Red Teaming?

AI Algorithmic Red Teaming is the practice of stress-testing AI systems by deliberately probing, attacking, and evaluating them to find weaknesses, biases, vulnerabilities, or potential harmful behaviors before real users encounter them.


It’s inspired by red teaming in cybersecurity, where a “red team” plays the role of an adversary to uncover flaws, while a “blue team” defends. In AI, the red team doesn’t just focus on security, but also on ethics, fairness, robustness, and safety.



🔑 Key Aspects of AI Algorithmic Red Teaming

1. Bias & Fairness Testing

Checking if an AI system produces biased or unfair outputs across different demographic groups.

Example: Does a hiring algorithm rank resumes differently by gender or race?

2. Robustness & Adversarial Attacks

Testing if AI can be tricked with small perturbations (adversarial examples).

Example: Slightly modified stop sign images fooling a self-driving car.

3. Security Vulnerabilities

Prompt injection attacks in LLMs (e.g., tricking a chatbot into revealing hidden instructions).

Data poisoning: inserting malicious examples into training datasets.

4. Misinformation & Safety Risks

Evaluating whether AI spreads false information, harmful content, or unsafe instructions.

5. Explainability Gaps

Checking if the AI provides misleading or inconsistent explanations for its predictions.



🔧 Methods Used in AI Red Teaming

Adversarial input generation → generating tricky or edge-case inputs.

Stress testing with synthetic data → feeding rare or extreme scenarios.

Fairness probing → running systematic demographic tests.

Prompt injection & jailbreaks (for LLMs) → seeing if hidden instructions can override safety.

Monitoring drift over time → ensuring deployed AI doesn’t degrade or start behaving unexpectedly.



📌 Example in Practice

A fraud detection model → red team might simulate adversaries who generate fake accounts with patterns designed to bypass detection.

A medical AI → red team may test rare diseases, ambiguous imaging cases, or adversarially crafted medical notes.

A chatbot (like GPT) → red team tries to make it generate unsafe instructions, harmful stereotypes, or disallowed content.



🟢 Why It Matters

Increases trustworthiness of AI.

Helps comply with AI regulations (like EU AI Act, NIST AI Risk Management Framework).

Prevents real-world harm by finding vulnerabilities before deployment.

Essential in safety-critical AI (finance, healthcare, autonomous systems).


What is Drift in Structured & Unstructured Data?

Data drift (also called distribution shift) means the statistical properties of input features change over time compared to training data.

Example: A fraud detection model trained on transaction patterns from 2023 may see very different spending patterns in 2025.

Types:

Covariate shift: Change in feature distributions (e.g., average age of customers rises from 30 → 45).

Prior probability shift: Change in target distribution (e.g., fraud rate increases).

Concept drift: Relationship between features and target changes (e.g., fraudsters use new methods).

How Drift is Measured (Structured Data)

You compare the distribution of features (train vs. current data) using statistical tests or divergence metrics.

Common methods:

1. Population Stability Index (PSI)

Used heavily in credit risk / finance.

Measures how much a variable’s distribution has shifted over time.

Rule of thumb:

PSI < 0.1 → no drift

0.1–0.25 → moderate drift

> 0.25 → significant drift

2. Kullback–Leibler Divergence (KL Divergence)

Measures how one probability distribution diverges from another.

Asymmetric → KL(P‖Q) ≠ KL(Q‖P).

3. Jensen–Shannon Divergence (JS Divergence)

Symmetric version of KL divergence.

Outputs bounded values (0–1).

4. Kolmogorov–Smirnov Test (KS Test)

Non-parametric test comparing cumulative distributions of two samples.

Often used in fraud detection / credit scoring.


PSI, KL divergence, JS divergence, and the KS test are all useful for structured data drift detection.
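
For reference, PSI over a set of bins is typically computed as:

PSI = Σ (actual% − expected%) × ln(actual% / expected%)

where expected% is the share of records in a bin in the training data, actual% is the share in current production data, and the sum runs over all bins.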

Drift in Unstructured Data

Unstructured data = text, images, audio, video. Drift here is harder to measure because distributions are not just numbers.

Methods:

1. Text Drift

Compare embeddings of text using cosine similarity.

Measure drift in word distributions (TF-IDF, BERT embeddings).

2. Image Drift

Use feature embeddings (CNN, CLIP) → compare with KL/JS divergence or Maximum Mean Discrepancy (MMD).

3. Audio Drift

Extract spectrogram features / embeddings → compare distributions.


So, for unstructured data, embedding-based drift detection is common.

Benefits of Calculating Drift

1. Model Monitoring → Ensures model is still valid in production.

2. Early Warning System → Detect changes in customer behavior, fraud, medical conditions.

3. Data Quality Assurance → Spot broken pipelines (e.g., a column suddenly all zeros).

4. Regulatory Compliance → Finance/healthcare require continuous monitoring.

5. Reduce Business Risk → Prevent degraded predictions causing revenue loss.

Summary

Drift = change in statistical distribution of data between training & production.

Structured data drift → measured via PSI, KL, JS, KS, etc.

Unstructured data drift → embeddings + divergence tests.

Benefits = monitoring, risk management, compliance, early alerts.


What is Blue / Green deployment in Kubernetes? What are best practices for this?

Details about Kubernetes namespaces and Blue / Green deployment 


Namespaces provide isolation and multi-tenancy. Teams can be restricted to their namespace.

Most Kubernetes resources (Pods, Deployments, Services, ConfigMaps, Secrets) are namespace-scoped.

Some resources (Nodes, PVs, ClusterRoles, StorageClasses) are cluster-scoped.

Services are namespaced, but accessible across namespaces using FQDN.

Blue/Green deployment in Kubernetes typically uses two Deployments and a single Service to switch traffic.

Blue/Green does not require separate namespaces, but namespaces can be used if teams want strict separation.

Tools like kubens make namespace management easier.


Expanding on Blue/Green Deployment in Kubernetes


Blue/Green Deployment is a strategy where you run two parallel environments:

- Blue → the current running version

- Green → the new version


After verification, traffic is switched from Blue → Green.


How it works in Kubernetes:

- Typically, two Deployments (blue + green) run in the same namespace.

- Both versions exist simultaneously (e.g., my-app-blue, my-app-green).

- A Service acts as a stable entry point and is switched from pointing to Blue Pods → Green Pods.
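
A minimal sketch of that switching Service (names are hypothetical): both Deployments label their Pods with app: my-app plus a version label, and cutover happens by updating the Service selector (for example by re-applying the manifest or using kubectl patch).

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue          # change to "green" to switch all traffic to the new version
  ports:
  - port: 80
    targetPort: 8080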


Are Blue/Green deployments categorized by namespaces?

- Not necessarily.

- They are usually implemented within the same namespace (e.g., prod) to simplify Service routing.

- But some organizations use separate namespaces (blue-ns, green-ns) for stricter isolation. In that case, Service discovery uses cross-namespace FQDNs (e.g., my-app.green-ns.svc.cluster.local).


Are underlying resources the same between Blue/Green?

- No, Blue and Green typically have separate resources (Pods, ConfigMaps, Secrets, PVCs if needed).

- Shared cluster-wide resources like Nodes, PVs, Network Policies may be reused.

- Whether you duplicate configs or not depends on your CI/CD pipeline.


How namespaces help in Blue/Green?

- If you use separate namespaces: you get clean isolation (configs, secrets, RBAC).

- If you use the same namespace: switching traffic is simpler (Service just updates its selector).


What is streamlit_agraph

streamlit_agraph is a Streamlit custom component that allows you to create and display interactive, visual graphs within a Streamlit application. Think of it as a tool that bridges the gap between your data and a compelling, interactive network visualization.

Key Features and Use Cases

Instead of just showing a table or a simple chart, streamlit_agraph lets you render a graph with nodes (the entities) and edges (the relationships between them). This is especially useful for a variety of tasks where data connections are important:

Knowledge Graphs: Visualizing the connections between concepts, people, or events. For example, a graph showing authors and the books they've written.

Social Network Analysis: Mapping relationships between users, showing who follows whom or who is friends with whom.

Bioinformatics: Displaying protein interaction networks or gene regulatory pathways.

The component is built on top of a JavaScript library, which gives it rich interactivity. You can drag nodes around, zoom in and out, and even click on nodes to trigger actions in your Python code. It also offers a high degree of customization, allowing you to control the size, color, and labels of your nodes and edges.

This is built on top of vis.js library. 




Saturday, August 23, 2025

What Is Vibe Coding?

Definition:

Vibe coding is an AI-powered style of programming where you describe your requirements in natural language, and a Large Language Model (LLM) generates working code in response—no manual line-by-line coding required. You guide, refine, and test iteratively


The term was coined by Andrej Karpathy, former AI head at Tesla and co-founder of OpenAI, who described the idea poetically.


Why It’s Gaining Attention

Rapid prototyping & MVPs: You can spin up working features or apps almost instantly by prompting an AI.   

Lowering entry barriers: Even non-coders or learners can create tools and apps by describing what they want.    

Enterprise excitement: Businesses now use it to quickly build interfaces and prototypes. Gartner forecasts that 40% of new business software will use AI-assisted techniques in the near future.  

Boosting productivity: Small engineering teams can achieve output typical of far larger teams. Y Combinator’s Garry Tan suggests this may reshape startup paths.  

AI as a coding partner: AWS leadership views vibe coding tools as collaborative—helping developers focus on problem-solving rather than boilerplate.




How It Works


Vibe coding is essentially a dialogue between the developer and the LLM. It goes like this:

1. You express your goal in plain English.

2. The AI generates initial code.

3. You test it, ask for tweaks or debugging, and repeat.    


It’s exploratory and often iterative, leaning into creative flow over formal structure.  



Limitations & Risks

Code quality & maintainability: Auto-generated code may be buggy, insecure, or hard to understand long-term.     

Scaling challenges: Best suited for small projects or throwaway prototypes—not complex, production-grade systems.    

Over-reliance on AI: Blindly accepting AI output can lead to critical flaws. Human oversight remains essential.     

Loss of understanding: Developers may lose deep insight into what the code really does.   

Enterprise governance concerns: Without proper guardrails, AI-generated code may pose security or compliance risks



Wednesday, August 20, 2025

What is LangExtract


That’s where LangExtract comes in. It’s a free, open-source Python tool from Google that does the grunt work for you. It lives on GitHub, runs locally or with cloud AI models, and honestly feels like a friend who’s really good at highlighting exactly what matters.


You hand LangExtract a chunk of text, tell it what to look for, and it hands back a neat list of details — all linked to where they came from in the original.


Shows you exactly where it found each detail

Lets you guide it with a quick example so it knows your style

Handles giant documents without choking

Even makes a clickable webpage so you can explore your results

Works on pretty much anything — fiction, medical notes, contracts, whatever

I’ve been writing about tech for a decade, and this is one of those tools you try once and instantly get hooked on.




Why bother?

Because text is everywhere — emails, reports, books — and it’s rarely tidy. Picking through it manually is slow, boring, and easy to mess up. LangExtract is the shortcut. It’s tweakable, lightweight, and built by people who actually get that not everyone wants to wrestle with overcomplicated software.


pip install langextract


echo 'LANGEXTRACT_API_KEY=your-key' > .env


import langextract as lx


prompt = "Find characters, emotions, and relationships. Use exact words from the text."

examples = [

    lx.data.ExampleData(

        text="ROMEO. But soft! What light through yonder window breaks? It is the east, and Juliet is the sun.",

        extractions=[

            lx.data.Extraction("character", "ROMEO", {"mood": "amazed"}),

            lx.data.Extraction("emotion", "But soft!", {"feeling": "soft wonder"}),

            lx.data.Extraction("relationship", "Juliet is the sun", {"type": "poetry"})

        ]

    )

]

text = "Lady Juliet looked up at the stars, her heart racing for Romeo"

result = lx.extract(

    text_or_documents=text,

    prompt_description=prompt,

    examples=examples,

    model_id="gemini-2.5-flash"

)

lx.io.save_annotated_documents([result], "juliet_stuff.jsonl")

html = lx.visualize("juliet_stuff.jsonl")

with open("juliet_viz.html", "w") as f:

    f.write(html)


Want to run it on the entire Romeo and Juliet text from Project Gutenberg?


result = lx.extract(

    text_or_documents="https://www.gutenberg.org/files/1513/1513-0.txt",

    prompt_description=prompt,

    examples=examples,

    model_id="gemini-2.5-flash",

    extraction_passes=3,

    max_workers=20,

    max_char_buffer=1000

)



It’s not an official Google product — it’s Apache 2.0 licensed


Monday, August 18, 2025

Kubernetes - script for setting up master and worker

 


We’ll create two scripts:

1. k8s-node-setup.sh → run on all nodes (control-plane + workers)

2. k8s-master-init.sh → run only on the control-plane to initialize the cluster



1. k8s-node-setup.sh (all nodes)


This script prepares Ubuntu for Kubernetes, installs containerd, kubeadm, kubelet, kubectl.


#!/bin/bash

set -e


echo "[Step 0] Updating system..."

sudo apt-get update -y


echo "[Step 1] Disabling swap..."

sudo swapoff -a

sudo sed -ri '/\sswap\s/s/^/#/' /etc/fstab


echo "[Step 2] Loading kernel modules..."

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf

overlay

br_netfilter

EOF


sudo modprobe overlay

sudo modprobe br_netfilter


echo "[Step 3] Setting sysctl params..."

cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf

net.bridge.bridge-nf-call-iptables  = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.ipv4.ip_forward                 = 1

EOF

sudo sysctl --system


echo "[Step 4] Installing containerd..."

sudo apt-get install -y containerd

sudo mkdir -p /etc/containerd

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

sudo systemctl enable --now containerd

sudo systemctl restart containerd


echo "[Step 5] Installing kubeadm, kubelet, kubectl..."

sudo apt-get install -y apt-transport-https ca-certificates curl gpg

sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key \

  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg


echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] \

https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /" \

| sudo tee /etc/apt/sources.list.d/kubernetes.list


sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl


echo "✅ Node prep complete. Ready for kubeadm init (control-plane) or join (workers)."




2. k8s-master-init.sh (control-plane only)

This initializes the control-plane with Calico networking.


#!/bin/bash

set -e


POD_CIDR="192.168.0.0/16"

API_ADVERTISE_IP=$(hostname -I | awk '{print $1}')


echo "[Step 1] Initializing Kubernetes control-plane..."

sudo kubeadm init \

  --pod-network-cidr=${POD_CIDR} \

  --apiserver-advertise-address=${API_ADVERTISE_IP}


echo "[Step 2] Setting up kubeconfig for current user..."

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config


echo "[Step 3] Installing Calico CNI..."

kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

kubectl apply -f https://docs.projectcalico.org/manifests/custom-resources.yaml


echo "✅ Control-plane initialized. Workers can now join using the kubeadm join command printed above."


3. Worker join command

After running the master init script, copy the kubeadm join ... line that is printed and run it on each worker node.

If you need a new token later:


sudo kubeadm token create --print-join-command


What are the steps involved in setting up a multi-node Kubernetes cluster on Ubuntu VMs using kubeadm?

0) Plan & prerequisites (all nodes)

Give each VM a unique hostname and ensure full network connectivity between them.

Recommended (comfortable) sizes: control-plane ≥2 vCPU / 4 GB RAM; workers ≥2 GB RAM.

Make sure your firewall or cloud security groups allow the required Kubernetes ports (you can adjust later).



Open these ports (typical defaults):

Control-plane inbound: 6443/tcp (API server), 2379-2380/tcp (etcd), 10250/tcp (kubelet), 10257/tcp (controller), 10259/tcp (scheduler).  

Workers inbound: 10250/tcp (kubelet), 10256/tcp (kube-proxy), 30000-32767/tcp,udp (NodePort services). 


1) System prep (run on all nodes)


# Update OS

sudo apt-get update -y


# 1.1 Disable swap (kubeadm default expects swap off)

sudo swapoff -a

sudo sed -ri '/\sswap\s/s/^/#/' /etc/fstab



Kernel modules & sysctls for container networking


# 1.2 Load required modules on boot

cat <<'EOF' | sudo tee /etc/modules-load.d/k8s.conf

overlay

br_netfilter

EOF

sudo modprobe overlay

sudo modprobe br_netfilter


# 1.3 Allow bridged traffic to be seen by iptables and enable forwarding

cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf

net.bridge.bridge-nf-call-iptables  = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.ipv4.ip_forward                 = 1

EOF

sudo sysctl --system


(These sysctls and modules are the standard container runtime prerequisites.)


2) Install and configure containerd (all nodes)


# Install containerd

sudo apt-get install -y containerd


# Generate a default config and switch to systemd cgroups (recommended)

sudo mkdir -p /etc/containerd

containerd config default | sudo tee /etc/containerd/config.toml >/dev/null

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml


# Enable and restart

sudo systemctl enable --now containerd

sudo systemctl restart containerd




3) Install kubeadm, kubelet, kubectl (all nodes)


# Add Kubernetes APT keyring & repo (Kubernetes v1.33 line shown here)

sudo apt-get install -y apt-transport-https ca-certificates curl gpg

sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key \

  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg


echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] \

https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' \

| sudo tee /etc/apt/sources.list.d/kubernetes.list


sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl


4) Initialize the control-plane (control-plane node only)


Pick a Pod CIDR that matches your CNI choice. Two popular options:

Calico defaults to 192.168.0.0/16

Flannel defaults to 10.244.0.0/16


Below shows Calico (you can swap to Flannel later—see Step 6):


# Replace the advertise-address with this node's primary IP

POD_CIDR=192.168.0.0/16

API_ADVERTISE_IP=$(hostname -I | awk '{print $1}')


sudo kubeadm init \

  --pod-network-cidr=${POD_CIDR} \

  --apiserver-advertise-address=${API_ADVERTISE_IP}



When it completes, it prints two important things:

A kubeadm join ... command for workers

A note to set up your kubeconfig for kubectl on this node


Set kubeconfig for your current user:


mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown "$(id -u)":"$(id -g)" $HOME/.kube/config


# Verify

kubectl get nodes


(Init/join workflow per kubeadm docs.) 



5) Install a CNI network plugin (control-plane)

You need a CNI so Pods can talk to each other. Choose one:


Option A — Calico (NetworkPolicy-capable)


# Install the Tigera operator

kubectl apply -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

# Create a default Calico installation (uses 192.168.0.0/16 by default)

kubectl apply -f https://docs.projectcalico.org/manifests/custom-resources.yaml


# Wait for calico pods to be Ready

kubectl get pods -n tigera-operator

kubectl get pods -n calico-system


Option B — Flannel (simple & lightweight)


If you prefer Flannel, ensure you used --pod-network-cidr=10.244.0.0/16 in step 4, then:


kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml


(Official Flannel manifest.)  


Give the CNI a minute to roll out. kubectl get nodes should show the control-plane Ready once CNI is settled.


6) Join worker nodes (run on each worker)


On each worker VM, paste the kubeadm join ... command that kubeadm init printed.

It looks like:


sudo kubeadm join <API_SERVER_IP>:6443 \

  --token <token> \

  --discovery-token-ca-cert-hash sha256:<hash>


If you lost it, re-create a fresh join command on the control-plane:


sudo kubeadm token create --print-join-command


(Join procedure is part of standard kubeadm workflow.)  


Verify from the control-plane:


kubectl get nodes -o wide


7) (Optional) Basic sanity tests


# Test DNS & scheduling with a simple deployment and NodePort service

kubectl create deploy hello --image=nginx

kubectl expose deploy hello --port=80 --type=NodePort

kubectl get svc hello -o wide  # Note the NodePort to test via workerIP:nodePort



8) Firewalls and security groups (recap)


If you run a host firewall (ufw, firewalld) or cloud SGs, ensure the required ports from step 0 are open; otherwise, components may be NotReady. The official port list is in the Kubernetes documentation.


Common gotchas

Swap not fully disabled: kubelet won’t start cleanly. Re-run the swap commands.  

Cgroups mismatch: If kubelet logs complain about cgroups, ensure SystemdCgroup = true in /etc/containerd/config.toml, then systemctl restart containerd and systemctl restart kubelet.  

CNI not installed: Nodes stay NotReady. Install Calico/Flannel as in step 5 and wait for pods to be Ready.   

Ports blocked: API at :6443 unreachable or workers can’t join—open the ports listed earlier.




Sunday, August 17, 2025

What is Flowise?

Flowise is an open-source, low-code platform that enables users to build AI applications, particularly those involving large language models (LLMs), using a drag-and-drop interface. It simplifies the process of creating and customizing LLM flows, AI agents, and other related applications by providing a visual, modular, and highly flexible environment. 

Here's a more detailed look:

Key Features and Capabilities:

Visual, Drag-and-Drop Interface:

Flowise uses a visual interface where users can connect pre-built blocks (like LLM blocks, function blocks, memory blocks, etc.) to create complex AI workflows. 

Low-Code/No-Code Approach:

It reduces the need for extensive coding, making it accessible to users with varying levels of programming expertise. 

LLM Integration:

Flowise seamlessly integrates with various components of LLM applications, including language models, memory, data loaders, and tools. 

AI Agent Building:

It facilitates the creation of both single and multi-agent systems, enabling the development of conversational agents and other complex AI applications. 

Flexibility and Customization:

Flowise allows for customization and fine-tuning of workflows, making it suitable for a wide range of use cases. 

LangChain and LlamaIndex Integration:

It leverages the capabilities of LangChain and LlamaIndex, popular libraries for building LLM-powered applications, to provide a more robust and versatile platform. 

Open Source:

Being open-source, Flowise is freely available for both personal and commercial use, encouraging community contributions and continuous development. 

Use Cases:

Chatbots and Virtual Assistants: Flowise can be used to build conversational interfaces for various applications. 

Automation Solutions: It can be employed to automate tasks and workflows using AI agents. 

Data Analysis Tools: Users can create agents that can analyze datasets and provide insights. 

NLP Applications: Flowise can be utilized to build applications that involve natural language processing tasks. 

RAG (Retrieval-Augmented Generation) Systems: Flowise can be used to build systems that combine retrieval and generation capabilities. 

In essence, Flowise provides a powerful and user-friendly platform for building and deploying a wide variety of AI applications, especially those leveraging the capabilities of large language models. 


Saturday, August 16, 2025

What is the difference between Copyright and Copyleft?

📜 Copyright

What it is:

A legal right automatically given to the creator of an original work (book, music, software, etc.).

It gives the creator exclusive rights to use, distribute, modify, and license the work.

How it works in software:

When you write code, you own the copyright by default.

You can then decide:

Keep it private,

Sell licenses,

Or open-source it under a license (MIT, GPL, etc.).


👉 Copyright = ownership + control.



🔄 Copyleft

What it is:

A licensing strategy that uses copyright law in reverse: instead of restricting sharing, it enforces sharing.

Introduced by Richard Stallman (FSF, GNU project).

How it works:

Copyleft licenses (like GPL, AGPL) say:

“You can use, modify, and distribute this software freely — but if you distribute or offer it as a service, you must also share your modifications under the same license.”

Effect:

Ensures the software and all its derivatives remain open-source.

Prevents companies from taking open-source software, improving it, and releasing it as closed-source.


👉 Copyleft = open-source with mandatory sharing.



⚖️ Example Contrast


| Case | Copyright | Copyleft |
|---|---|---|
| You write software | You automatically own the copyright | You can choose to apply a copyleft license |
| If others use it | They need your permission (license) | They can use it, but must share improvements |
| Closed-source use | Allowed (if you sell / license it as proprietary) | Not allowed — derivatives must stay open-source |
| Example licenses | Proprietary EULAs, MIT, Apache | GPL, AGPL, LGPL |




🔑 Simple Analogy

Copyright is like “No one may copy or use my book without my permission.”

Copyleft is like “You may copy and modify my book, but if you publish it, you must also let others copy and modify your version.”



✅ So:

Copyright = protection & exclusivity.

Copyleft = freedom, but with the condition that freedom must continue downstream.




Friday, August 15, 2025

What is Docling Parser

Docling parses PDF, DOCX, PPTX, HTML, and other formats into a rich unified representation including document layout, tables etc., making them ready for generative AI workflows like RAG. This integration provides Docling's capabilities via the DoclingLoader document loader.

Docling is an open-source document parsing library developed by IBM, designed to extract information from various document formats like PDFs, Word documents, and HTML. It excels at converting these documents into formats like Markdown and JSON, which are suitable for use in AI workflows like Retrieval Augmented Generation (RAG). Docling utilizes fine-tuned table and structure extractors, and also provides OCR (Optical Character Recognition) support, making it effective for handling scanned documents. 

Here's a more detailed breakdown:

Document Parsing:

Docling is built to parse a wide range of document types, including PDF, DOCX, PPTX, XLSX, HTML, and even images. 

Output Formats:

It can convert these documents into Markdown or JSON, making them easily usable in AI pipelines. 

AI Integration:

Docling integrates with popular AI tools like LangChain, Hugging Face, and LlamaIndex, enabling users to build AI applications for document understanding. 

RAG Applications:

Docling is particularly useful for Retrieval Augmented Generation (RAG) workflows, where the ability to accurately extract information from complex documents is crucial. 

Key Features:

Docling's key features include layout analysis, OCR, and object recognition, which help maintain the original document's structure during the parsing process. 



How to work with the GitLab Container Registry?

 Here’s the step-by-step procedure for building a Docker image, pushing it to GitLab Container Registry, and then using it in a Kubernetes YAML.

1️⃣ Prepare GitLab for Container Registry

Make sure your GitLab project has Container Registry enabled.

In GitLab: Settings → General → Visibility, project features, permissions → enable Container Registry.

2️⃣ Log in to GitLab Container Registry

Get your GitLab credentials (username = GitLab username or CI_JOB_TOKEN in CI/CD, password = Personal Access Token or GitLab password).

Replace:

registry.gitlab.com with your GitLab registry host (usually registry.gitlab.com for SaaS)

NAMESPACE/PROJECT with your GitLab project path.

docker login registry.gitlab.com

Example:

docker login registry.gitlab.com

Username: your_gitlab_username

Password: your_access_token

3️⃣ Build Your Docker Image

In your local environment:

docker build -t registry.gitlab.com/<namespace>/<project>/<image-name>:<tag> .

Example:

docker build -t registry.gitlab.com/mygroup/myproject/webex-bot:latest .

4️⃣ Push Image to GitLab Registry

docker push registry.gitlab.com/<namespace>/<project>/<image-name>:<tag>

Example:

docker push registry.gitlab.com/mygroup/myproject/webex-bot:latest

You can now see the image in your GitLab project under Packages & Registries → Container Registry.

5️⃣ Use the Image in Kubernetes Deployment YAML

You’ll reference the full registry path in your Deployment manifest.

Example deployment.yaml:


apiVersion: apps/v1

kind: Deployment

metadata:

  name: webex-bot

spec:

  replicas: 2

  selector:

    matchLabels:

      app: webex-bot

  template:

    metadata:

      labels:

        app: webex-bot

    spec:

      containers:

      - name: webex-bot

        image: registry.gitlab.com/mygroup/myproject/webex-bot:latest

        ports:

        - containerPort: 8080

      imagePullSecrets:

      - name: gitlab-registry-secret




6️⃣ Create Kubernetes Image Pull Secret


Since GitLab registry requires authentication, create a pull secret in Kubernetes:


kubectl create secret docker-registry gitlab-registry-secret \

  --docker-server=registry.gitlab.com \

  --docker-username=your_gitlab_username \

  --docker-password=your_access_token \

  --docker-email=you@example.com


This secret matches the imagePullSecrets entry in your Deployment YAML.



7️⃣ Deploy to Kubernetes


kubectl apply -f deployment.yaml




✅ Final Flow Recap:

1. Enable Container Registry in GitLab.

2. Login to GitLab registry (docker login).

3. Build Docker image with GitLab registry path.

4. Push to GitLab registry.

5. Reference image in Kubernetes Deployment YAML.

6. Create image pull secret.

7. Deploy to Kubernetes.



references:

ChatGPT