Wednesday, April 1, 2026

What is Amazon Rekognition (quick context)

Amazon Rekognition is a pre-trained AI service that can:

  • Detect objects, scenes, faces

  • Perform image moderation (unsafe content)

  • Extract text from images

  • Do facial comparison, etc.


🎯 1. Important Clarification (Your Question)

“Is it about training a model with images to produce a new model?”

👉 Answer: It depends—there are TWO modes


✅ Mode 1 — Pre-trained (Most common)

  • No training needed

  • Works out-of-the-box

Examples:

  • Detect labels → “Car”, “Tree”, “Person”

  • Detect unsafe content

  • Face detection

👉 This is what most people use


✅ Mode 2 — Custom Training (Rekognition Custom Labels)

  • You train your own model using your images

  • Used for domain-specific use cases

Example:

  • Detect:

    • “Damaged equipment”

    • “Specific device types”

    • “Company logos”

👉 This is closer to what you described


🖥️ 2. How to Use Rekognition from AWS Console (UI)


🟢 A. Using Pre-trained Features (No Training)

Step-by-step:

1️⃣ Go to Rekognition Console

  • Open AWS Console

  • Navigate to Rekognition


2️⃣ Choose Feature

You’ll see options like:

  • Label detection

  • Face analysis

  • Text detection

  • Content moderation


3️⃣ Upload Image

  • Upload from local OR select from:

    • Amazon S3


4️⃣ View Results

  • Labels with confidence scores

  • Bounding boxes

  • Detected text

  • Moderation flags


👉 That’s it—no model training required


🧠 3. Using Custom Models (Custom Labels)

This is where training happens


🔁 Workflow:

Upload Images → Label Data → Train Model → Evaluate → Deploy → Use for Prediction

Step-by-step:

1️⃣ Prepare Dataset

  • Upload images to S3

  • Organize:

    • Train set

    • Test set


2️⃣ Create Project

  • Go to “Custom Labels” in Rekognition

  • Create project


3️⃣ Label Images

  • Use built-in labeling tool

  • Draw bounding boxes

  • Assign labels


4️⃣ Train Model

  • Click “Train”

  • AWS trains model automatically


5️⃣ Evaluate Model

  • Check:

    • Accuracy

    • Precision/Recall


6️⃣ Deploy Model

  • Start model endpoint

  • (Billing starts here)


7️⃣ Use Model

  • Call API to classify new images


⚙️ 4. How Prediction Works

For pre-trained:

Image → Rekognition API → Labels + Confidence

For custom:

Image → Your Custom Model → Domain-specific labels
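In code, both prediction paths are a single boto3 call. A minimal Python sketch, assuming configured AWS credentials; the bucket, key, and model ARN are placeholder values, and `detect_custom_labels` only succeeds while your trained Custom Labels model is running:

```python
def _rekognition():
    import boto3  # imported lazily so the helpers below can be read/tested without AWS deps
    return boto3.client("rekognition")

def detect_labels(bucket: str, key: str, min_confidence: float = 80.0) -> list:
    """Pre-trained mode: no training, just call the API on an S3 image."""
    resp = _rekognition().detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=min_confidence,
    )
    return resp["Labels"]

def detect_custom_labels(bucket: str, key: str, model_arn: str) -> list:
    """Custom Labels mode: requires a running (deployed) model version."""
    resp = _rekognition().detect_custom_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        ProjectVersionArn=model_arn,  # placeholder: your trained model's ARN
    )
    return resp["CustomLabels"]

def names_above(labels: list, threshold: float) -> list:
    """Keep only label names whose confidence clears a threshold."""
    return [label["Name"] for label in labels if label["Confidence"] >= threshold]
```

The `names_above` helper mirrors what the console shows: labels filtered by your own confidence cutoff.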

🔗 5. Integration Options

Rekognition integrates with:

  • Amazon S3 → image storage

  • AWS Lambda → processing

  • Amazon API Gateway → expose APIs


Example architecture:

Upload Image → S3 → Lambda → Rekognition → Result → App/UI
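That architecture can be sketched as a Lambda handler wired to S3 event notifications. A hedged sketch — the moderation call follows the standard Rekognition API, and the bucket/key come out of the S3 event payload:

```python
import json

def _rekognition():
    import boto3  # lazy import: only needed when the handler actually runs in Lambda
    return boto3.client("rekognition")

def bucket_and_key(event: dict) -> tuple:
    """Pull the uploaded object's location out of an S3 event notification.
    Note: in real events the key may be URL-encoded (urllib.parse.unquote_plus)."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

def handler(event, context):
    bucket, key = bucket_and_key(event)
    resp = _rekognition().detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    flags = [label["Name"] for label in resp["ModerationLabels"]]
    return {"statusCode": 200, "body": json.dumps({"key": key, "flags": flags})}
```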

🖼️ 6. Common Use Cases

✅ Pre-trained:

  • Image tagging

  • Content moderation

  • Face detection

  • OCR (basic)


✅ Custom Labels:

  • Manufacturing defect detection

  • Logo detection

  • Device classification

  • Medical imaging (basic cases)


⚠️ 7. Key Limitations

  • Custom training requires:

    • Labeled data

    • Time & cost

  • Not as flexible as full ML frameworks like:

    • Amazon SageMaker


🧠 8. When to Use What

| Use Case | Recommendation |
| :--- | :--- |
| General object detection | Pre-trained |
| Moderation | Pre-trained |
| Domain-specific detection | Custom Labels |
| Complex ML problem | SageMaker |

🏁 Final Answer

👉 Using Rekognition from UI does NOT always involve training

  • ✅ Most features → ready-to-use (no training)

  • ✅ Custom Labels → lets you train your own model


💬 One-line takeaway:

Rekognition is both a plug-and-play vision API and a lightweight custom model training platform—depending on your use case.


If you want, I can:

  • Show sample API code (Python / Node.js)

  • Compare Rekognition vs SageMaker for custom vision

  • Or design a pipeline using Rekognition + A2I (human review) 👍

What is Amazon Lex?

Amazon Lex is AWS’s service for building chatbots and voice bots using natural language understanding (NLU) and automatic speech recognition (ASR).

👉 In simple terms:

It lets users interact with applications using natural language (text or voice)—similar to Alexa (Lex is built on the same deep-learning technology that powers Alexa).


🧠 1. How Amazon Lex Works

Core building blocks:

🔹 Intents

  • What the user wants to do

  • Example: “Book a ticket”, “Check order status”


🔹 Utterances

  • Different ways users express an intent

  • Example:

    • “I want to book a flight”

    • “Reserve a ticket”


🔹 Slots

  • Parameters required to fulfill intent

  • Example:

    • Date

    • Location

    • Ticket type


🔹 Fulfillment

  • What happens after intent is understood

  • Typically:

    • Call backend API (via Lambda)

    • Return response


🔹 Dialog Management

  • Lex automatically:

    • Prompts for missing slots

    • Handles conversation flow


๐Ÿ” 2. End-to-End Flow

User → Lex Bot → Intent Recognition → Slot Filling → Lambda/API → Response → User

Example:

User: “Book a flight to Delhi tomorrow”

  • Intent → BookFlight

  • Slots → Destination = Delhi, Date = tomorrow

  • Lambda → processes booking

  • Response → “Your flight is booked”
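The same flow can be driven from code via the Lex V2 runtime. A minimal boto3 sketch, assuming an existing bot; `bot_id`, `alias_id`, and the locale below are placeholders:

```python
def recognize_text(bot_id: str, alias_id: str, session_id: str, text: str) -> dict:
    import boto3  # lazy import so the helper below is testable without AWS deps
    lex = boto3.client("lexv2-runtime")
    return lex.recognize_text(
        botId=bot_id,          # placeholder
        botAliasId=alias_id,   # placeholder
        localeId="en_US",
        sessionId=session_id,  # any stable per-user id
        text=text,
    )

def interpreted_intent(response: dict) -> str:
    """Name of the intent Lex matched (first interpretation is ranked highest)."""
    return response["interpretations"][0]["intent"]["name"]
```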


🛠️ 3. Creating a Chatbot using Amazon Lex (Console)

Step-by-step using AWS Console:


1️⃣ Create Bot

  • Go to Amazon Lex console

  • Click Create bot

  • Choose:

    • Blank bot OR template

  • Configure:

    • Language (e.g., English)

    • IAM role


2️⃣ Create Intents

  • Add intent (e.g., BookHotel)

  • Add utterances:

    • “Book a hotel”

    • “Reserve a room”


3️⃣ Define Slots

  • Example:

    • Location

    • Check-in date

  • Define slot types:

    • Built-in OR custom


4️⃣ Configure Prompts

  • Ask user:

    • “Which city?”

    • “What date?”


5️⃣ Fulfillment (Backend Integration)

  • Connect to:

    • AWS Lambda


6️⃣ Build & Test

  • Click Build

  • Test in console chat window


7️⃣ Deploy (Alias)

  • Create bot version + alias

  • Use alias in applications


🔗 4. Integration with Other Applications

✅ Option 1 — Web Application (Most common)

Embed chatbot UI using:

  • Lex Web UI

  • JavaScript SDK


Architecture:

Web App (Angular/React)
        ↓
   Lex API (SDK)
        ↓
    Lex Bot
        ↓
   Lambda / Backend

✅ Option 2 — Mobile Apps

  • iOS / Android SDK

  • Voice + text support


✅ Option 3 — Backend Integration

Call Lex using APIs:

  • RecognizeText

  • RecognizeUtterance


✅ Option 4 — Messaging Platforms

Integrate with:

  • Slack

  • Facebook Messenger

  • Twilio (SMS)


✅ Option 5 — Voice Assistants

  • Telephony systems

  • Contact center bots


🔌 5. Example Integration (Angular App)

Since you’re working with Angular:

👉 You can:

  • Use AWS SDK

  • Call Lex runtime APIs


Flow:

Angular UI → API Gateway → Lambda → Lex → Response → Angular UI

👉 Or directly:

Angular UI → AWS SDK → Lex Runtime API

⚙️ 6. Backend Integration (Important)

Lex typically integrates with:

  • AWS Lambda → business logic

  • Databases (DynamoDB / RDS)

  • External APIs


Example:

Intent → “Check device status”
→ Lambda calls your network API
→ Returns result to user
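A fulfillment Lambda for an intent like this can be sketched as follows. The event and response shapes follow Lex V2's Lambda format; the `CheckDeviceStatus` intent name and hard-coded status are illustrative stand-ins for your real network API call:

```python
def close(intent: dict, message: str) -> dict:
    """Minimal Lex V2 'Close' response that ends the dialog with a message."""
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent["name"], "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": message}],
    }

def handler(event, context):
    intent = event["sessionState"]["intent"]
    if intent["name"] == "CheckDeviceStatus":
        # placeholder: call your real network API here
        status = "online"
        return close(intent, f"Device is {status}")
    return close(intent, "Sorry, I can't handle that yet")
```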


🧩 7. Advanced Features

🔹 Multi-turn conversations

  • Context-aware dialogs


🔹 Built-in slot types

  • Dates, numbers, locations


🔹 Custom slot types

  • Domain-specific (e.g., device IDs)


🔹 Confidence scores

  • Helps fallback handling


🔹 Fallback intents

  • Handle unknown queries


🚀 8. Where Lex Fits (Important for YOU)

Given your work (GenAI, planners, network automation):

👉 Lex can act as:

“Conversational Interface Layer”

Example:

User → Lex → Intent → Planner Agent → Execution → Response

👉 You can combine:

  • Lex → intent detection

  • LLM → reasoning

  • APIs → execution


๐Ÿ Final Summary

  • Amazon Lex = chatbot + voice bot builder

  • Uses:

    • NLU (intent detection)

    • Slot filling

  • Built using:

    • Intents, utterances, slots

  • Integrates via:

    • Web apps, mobile apps, APIs, messaging platforms

  • Backend handled via:

    • Lambda or APIs


💬 One-line takeaway:

Lex is a managed conversational interface layer that connects user language → backend execution.


If you want, I can:

  • Show a real Angular + Lex integration code

  • Compare Lex vs Dialogflow vs LLM-based chatbot

  • Or design a GenAI + Lex hybrid architecture (very powerful for your use case)

RTK - Rust Token Killer

RTK (Rust Token Killer) is a fascinating tool that fits perfectly into your blog's second part about **system-level optimizations**. Here’s a high-level overview and a practical example you can include.


### 🧠 How RTK Works: High-Level Overview


RTK acts as a **transparent CLI proxy** that intercepts commands run by AI coding tools (like Claude Code, Cursor, or Copilot) and filters their output **before** it enters the LLM’s context window.


**Four Core Strategies:**

1.  **Smart Filtering** – Removes noise (comments, whitespace, boilerplate) from command outputs like `ls`, `git status`, or `cargo test`.

2.  **Grouping** – Aggregates similar items (e.g., files by directory, errors by type) to show structure without repetition.

3.  **Truncation** – Keeps only the most relevant context (e.g., first/last N lines, signatures of functions).

4.  **Deduplication** – Collapses repeated log lines into a single line with a count.


**The Result:** The AI tool receives the same *information* but uses **60–90% fewer tokens**. This directly translates to lower API costs, faster context processing, and less chance of hitting context limits.


### ⚙️ Example: Optimizing a `cargo test` Command


This is one of the most impactful use cases. A failed test in a medium-sized Rust project can output hundreds of lines, consuming thousands of tokens. Here’s how RTK transforms it:


**Without RTK (Standard Output)** – Sends ~25,000 tokens

```bash

$ cargo test

   Compiling myproject v0.1.0 (/Users/dev/myproject)

   ...

running 15 tests

test utils::test_parse ... ok

test utils::test_format ... ok

test api::test_login ... ok

test api::test_logout ... ok

test db::test_connection ... ok

test db::test_query ... ok

test auth::test_password_hash ... ok

test auth::test_token_verify ... ok

test handlers::test_index ... ok

test handlers::test_submit ... FAILED

test handlers::test_delete ... ok

test models::test_user ... ok

test models::test_session ... ok

test middleware::test_auth ... ok

test middleware::test_logging ... ok


failures:

---- handlers::test_submit stdout ----

thread 'handlers::test_submit' panicked at 'assertion failed: `(left == right)`

  left: `Some(ValidationError)`,

 right: `None`', src/handlers.rs:42:9

note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:

    handlers::test_submit


test result: FAILED. 14 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out

```


**With RTK (`rtk test cargo test`)** – Sends ~2,500 tokens (90% reduction!)

```bash

$ rtk test cargo test

running 15 tests

FAILED: 1/15 tests

  handlers::test_submit: panicked at src/handlers.rs:42:9 - assertion failed: left == right

```


### 🔧 How to Demonstrate in Your Blog


You can show a **before/after token count** using RTK’s built-in analytics. For example, after running a session with RTK, you can run:


```bash

rtk gain --graph

```


This would produce a simple ASCII graph showing token savings per command, which makes for a compelling visual in a blog post.


RTK is a perfect example of an **infrastructure-level optimization** that sits between the application and the model, dramatically improving efficiency without changing the application’s logic—a key theme for your Part 2.

Tuesday, March 31, 2026

What is Amazon Kendra?

Amazon Kendra is an AI-powered document search service from AWS.

👉 In simple terms:

It lets you index documents from multiple sources into a central repository and enables natural language search over them.

Unlike basic keyword search, Kendra uses ML/NLP to understand intent and return context-aware answers.


📚 1. Kendra as a Document Search Service

Kendra acts like:

“Google for your enterprise documents”

Key capabilities:

  • Centralized document indexing

  • Natural language querying

  • Extracts answers (not just links)

  • Role-based access filtering


🧠 2. Does it create a central index?

👉 Yes — this is core to Kendra

  • You create an Index

  • All documents are ingested into this index

  • Search queries run against this index


Architecture:

Data Sources → Kendra Index → Search API → Application / UI

📄 3. Supported Document Types

Kendra supports a wide range of formats:

📁 Common formats:

  • PDF

  • Word (DOC, DOCX)

  • Excel (XLS, XLSX)

  • PowerPoint (PPT, PPTX)

  • HTML

  • XML

  • JSON

  • Plain text


🧾 Structured + semi-structured:

  • FAQs

  • Knowledge base articles

  • Wiki pages

  • Emails (via connectors)


🖼️ Images?

  • Not directly searchable

  • But can be indexed if:

    • Text is extracted using:

      • Amazon Textract


💬 4. Natural Language Search

👉 One of Kendra’s strongest features

Example queries:

  • “What is the leave policy for contractors?”

  • “How to reset VPN password?”

  • “Show SLA for premium customers”


What happens internally:

  • Query understanding (NLP)

  • Semantic matching (not just keywords)

  • Ranking based on relevance


👉 Output:

  • Direct answers (highlighted)

  • Ranked documents


🔗 5. Integrations (Very Powerful)

Kendra integrates with many enterprise systems:


📦 AWS-native sources:

  • Amazon S3

  • Amazon RDS

  • Amazon DynamoDB


๐Ÿข SaaS / enterprise tools:

  • SharePoint

  • OneDrive

  • Google Drive

  • Confluence

  • Salesforce

  • ServiceNow

👉 (via built-in connectors)


🔌 Custom sources:
  • Use:

    • Kendra APIs

    • Custom connectors


🖥️ 6. How to Use from AWS Console

Step-by-step:

1️⃣ Create Index

  • Go to Kendra → Create index

  • Configure:

    • Name

    • IAM role

    • Capacity


2️⃣ Add Data Sources

  • Choose connector:

    • S3 / SharePoint / etc.

  • Configure access

  • Start sync


3️⃣ Indexing

  • Documents are:

    • Crawled

    • Parsed

    • Indexed


4️⃣ Search

  • Use:

    • Console search UI

    • API (Query API)


5️⃣ Build Application

  • Integrate search into:

    • Web apps

    • Chatbots

    • Internal tools
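The Query API mentioned above is a single call in boto3. A minimal sketch, assuming an existing index; `index_id` is a placeholder:

```python
def search(index_id: str, question: str) -> list:
    import boto3  # lazy import so the helper below is testable without AWS deps
    kendra = boto3.client("kendra")
    resp = kendra.query(IndexId=index_id, QueryText=question)
    return resp["ResultItems"]

def direct_answers(result_items: list) -> list:
    """Kendra marks extracted direct answers with Type == 'ANSWER';
    other items are ranked documents or FAQ matches."""
    return [
        item["DocumentExcerpt"]["Text"]
        for item in result_items
        if item.get("Type") == "ANSWER"
    ]
```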


๐Ÿ” 7. Authentication & Security

Kendra supports multiple auth mechanisms:


๐Ÿ”‘ 1. IAM (Primary)

  • Access via:

    • AWS SDK / CLI

  • Controlled via IAM roles & policies


🧑‍💼 2. User Context Filtering

  • Document-level permissions

  • Integrated with:

    • Active Directory

    • SSO systems

👉 Ensures:

Users only see documents they are allowed to


๐ŸŒ 3. API Access

  • Signed requests (SigV4)

  • Used by applications


๐Ÿ” 4. Identity Providers

  • SAML-based SSO

  • Integration with enterprise identity systems


⚙️ 8. How Kendra Works Internally (Simplified)

Ingestion → Parsing → NLP Enrichment → Indexing → Query Engine

  • Extracts metadata

  • Understands document structure

  • Builds semantic index


🧩 9. Advanced Features

🔹 FAQ support

  • Direct Q&A matching


🔹 Relevance tuning

  • Boost certain documents


🔹 Custom metadata

  • Filter search results


🔹 Incremental sync

  • Only updates changed documents


🚀 10. Where Kendra Fits (Important Insight)

Given your background (RAG, GenAI, document parsing):

👉 Kendra can replace parts of your pipeline:

Instead of:

Parsing → Chunking → Embedding → Vector DB → Retrieval

You can use:

Kendra Index → Query API → Results

👉 Or combine:

🔥 Kendra + LLM (Best pattern)

  • Kendra → retrieval

  • LLM → summarization / reasoning
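That pattern can be sketched with Kendra's Retrieve API feeding a prompt builder; the LLM call itself is left out, since any model works here. `index_id` is a placeholder:

```python
def retrieve_passages(index_id: str, question: str, top_k: int = 3) -> list:
    import boto3  # lazy import so the prompt builder is testable without AWS deps
    kendra = boto3.client("kendra")
    # Retrieve returns semantically relevant passages, well suited for RAG
    resp = kendra.retrieve(IndexId=index_id, QueryText=question, PageSize=top_k)
    return [item["Content"] for item in resp["ResultItems"]]

def build_prompt(question: str, passages: list) -> str:
    """Ground the LLM on retrieved passages (numbered for easy citation)."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the passages below.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
```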


๐Ÿ Final Summary

  • Amazon Kendra = intelligent document search engine

  • Creates a central index

  • Supports:

    • Multiple document formats

    • Natural language queries

  • Integrates with:

    • AWS + enterprise tools

  • Access via:

    • Console

    • APIs

    • IAM / SSO


💬 One-line takeaway:

Kendra is a managed enterprise search + semantic retrieval system, ideal for building internal knowledge search and RAG-style applications.


If you want, I can:

  • Compare Kendra vs OpenSearch vs Vector DB (very useful for your use case)

  • Show architecture for Kendra + LLM chatbot

  • Or suggest when NOT to use Kendra (cost/performance tradeoffs)

Monday, March 30, 2026

What is Amazon Augmented AI (A2I)?

Amazon Augmented AI (A2I) is a service that adds human review into ML workflows.

👉 In simple terms:

It lets you automatically send low-confidence predictions to humans, and then return a validated result to your application.


๐Ÿ” 1. Typical A2I Pipeline (Your understanding is almost correct)

End-to-end flow:

Input Data
   ↓
ML Model / AWS AI Service
   ↓
Confidence Score Check (your logic or built-in)
   ↓
Amazon A2I (if needed)
   ↓
Human Review (Mechanical Turk / private workforce)
   ↓
Aggregated Result
   ↓
Client Application

More detailed breakdown:

1️⃣ Input Data

  • Image / document / text / video

  • Example:

    • Invoice image

    • Moderation image

    • Form data


2️⃣ Prediction Layer

Can be:

✅ AWS AI services:

  • Amazon Rekognition (image moderation, labels)

  • Amazon Textract (OCR, forms)

  • Amazon Comprehend (text analysis)

✅ OR custom model via:

  • Amazon SageMaker


3️⃣ Confidence Check

Two ways:

🔹 Built-in (for AWS services)

Example:

  • Textract confidence < 90% → trigger human review

🔹 Custom logic (SageMaker)

You define:

if confidence < threshold:
    send_to_A2I()

4️⃣ A2I Human Loop

A2I creates a Human Loop:

  • Task is sent to human workers

  • Workers review UI (HTML template)


5️⃣ Human Workforce Options

  • Amazon Mechanical Turk (public workforce)

  • Private workforce (your employees)

  • Vendor workforce


6️⃣ Aggregation

  • Multiple humans review

  • A2I aggregates responses

  • Final result returned


7️⃣ Output to Client

  • Final validated prediction

  • Stored in S3 / returned via API


🧠 2. How A2I Integrates with SageMaker

👉 Yes—SageMaker is the primary way to use A2I with custom models


Flow with SageMaker:

Client → API Gateway → Lambda → SageMaker Endpoint
                                      ↓
                           Confidence evaluation
                                      ↓
                             A2I Human Loop
                                      ↓
                             Final result → Client

Key components:

🔹 1. SageMaker Endpoint

  • Hosts your model

🔹 2. Flow Definition (A2I core config)

Defines:

  • When to trigger human review

  • Workforce

  • UI template


🔹 3. Human Task UI

  • HTML template

  • Defines what humans see


🔹 4. Output location

  • S3 bucket


⚙️ 3. How to Access / Use A2I

Step-by-step:

1️⃣ Create Workforce

  • MTurk OR private workforce


2️⃣ Create Flow Definition

Using:

  • AWS Console OR SDK

Includes:

  • Human task UI

  • Role (IAM)

  • S3 output path


3️⃣ Integrate with:

Option A — AWS AI services (easy mode)

Example:

  • Textract + A2I (built-in integration)

Option B — SageMaker (custom)

  • Call:

    start_human_loop()
    

4️⃣ Monitor

  • Human loop status

  • Results in S3
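The confidence-gate plus human-loop step looks like this in code. A sketch using the `sagemaker-a2i-runtime` client; the loop name, flow definition ARN, and threshold are placeholders you would configure:

```python
import json

def needs_review(confidence: float, threshold: float = 0.90) -> bool:
    """The 'confidence check' step: below threshold → route to humans."""
    return confidence < threshold

def send_to_a2i(loop_name: str, flow_definition_arn: str, payload: dict) -> dict:
    import boto3  # lazy import so the gate above is testable without AWS deps
    a2i = boto3.client("sagemaker-a2i-runtime")
    return a2i.start_human_loop(
        HumanLoopName=loop_name,                 # must be unique per loop
        FlowDefinitionArn=flow_definition_arn,   # placeholder: your flow definition
        HumanLoopInput={"InputContent": json.dumps(payload)},
    )
```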


🖼️ 4. Can A2I do Image Moderation?

👉 Yes—but indirectly.

A2I itself doesn’t “detect” anything.

Instead:

Flow:

  1. Amazon Rekognition detects:

    • Nudity

    • Violence

    • Unsafe content

  2. If confidence is low:
    → Send to A2I

  3. Human verifies:

    • “Is this image safe?”


👉 So:

A2I = validation layer
Rekognition = detection layer


📄 5. Example Use Cases

🧾 Document Processing

  • Textract extracts fields

  • Low confidence → human verifies


🖼️ Image Moderation

  • Rekognition flags content

  • Human confirms edge cases


🧠 Custom ML

  • Fraud detection

  • Medical classification

  • Network anomaly detection (your domain!)


๐Ÿ” 6. Internals — What services are used?

A2I itself orchestrates:

  • S3 → input/output storage

  • IAM → permissions

  • CloudWatch → logs

  • SageMaker → integration layer


With AWS AI services:

| Use Case | Service Used |
| :--- | :--- |
| OCR | Amazon Textract |
| Image analysis | Amazon Rekognition |
| NLP | Amazon Comprehend |

🚀 7. Advanced Insight (Important for YOU)

Given your GenAI + planner background:

👉 You can extend A2I into:

“AI + Human + Agent loop”

Example:

LLM → uncertain → A2I → human correction
         ↓
Store correction → improve model → feedback loop

๐Ÿ Final Summary

  • A2I = human-in-the-loop orchestration service

  • Works with:

    • AWS AI services (Textract, Rekognition)

    • Custom models (SageMaker)

  • Pipeline:

    • Prediction → Confidence → Human review → Final output

  • Yes, supports:

    • Image moderation (via Rekognition)

    • Document processing

    • Custom ML validation


If you want, I can:

  • Show sample code (SageMaker + A2I integration)

  • Design a real architecture for your use case

  • Or compare A2I vs fully automated LLM pipelines 👍

What is Agent Skill and what are the files required for creating skills?

## What is an Agent Skill?


An **Agent Skill** is a modular, reusable package of instructions, scripts, and resources that gives an AI agent specialized expertise for specific tasks. Think of it as a **"job manual" or "SOP"** for your AI assistant.


### Key Analogy


| Concept | Analogy |

| :--- | :--- |

| **Traditional Prompt** | Giving a new employee a 50-page manual to memorize before starting work |

| **Agent Skill** | Giving the employee a shelf of reference guides they can pull down only when needed |


Instead of cramming every possible instruction into the AI's system prompt (causing context bloat and confusion), Agent Skills let the AI **dynamically load expertise on demand**. The agent scans skill names and descriptions at the start, then loads the full instructions only when it identifies a relevant task.


---


## The Core Innovation: Progressive Disclosure


Agent Skills use a **three-stage "progressive disclosure"** architecture that dramatically reduces token consumption:


| Stage | What Loads | Token Cost | When |

| :--- | :--- | :--- | :--- |

| **L1: Metadata** | Skill name + description (from YAML frontmatter) | Very low (<1%) | Always - at every session start |

| **L2: Instructions** | Full `SKILL.md` body | Medium (5-10%) | Only when the skill is triggered |

| **L3: Resources** | Reference docs, scripts, assets | Variable | Only when explicitly referenced |


**Result:** Studies show this reduces context token consumption by **60-80%** while significantly improving instruction-following accuracy for complex tasks.


---


## Required Files for an Agent Skill


A skill is simply a **directory** containing a mandatory `SKILL.md` file plus optional supporting files.


### Standard Directory Structure


```

skill-name/                    # Any name (lowercase, hyphens only)

├── SKILL.md                   # REQUIRED - The skill definition file

├── scripts/                   # OPTIONAL - Executable code

│   └── helper.py

├── references/                # OPTIONAL - Reference docs (loaded on demand)

│   └── api_documentation.md

└── assets/                    # OPTIONAL - Templates, images, fonts

    └── report-template.docx

```


### The SKILL.md File Format


Every `SKILL.md` must contain **YAML frontmatter** (metadata) followed by **Markdown content** (instructions):


```markdown

---

name: expense-report

description: File and validate employee expense reports according to company policy. Use when asked about expense submissions, reimbursement rules, or spending limits.

license: Apache-2.0

compatibility: Requires python3

metadata:

  author: finance-team

  version: "2.1"

---


# Expense Report Skill


You are now an expense report specialist.


## Instructions


1. Ask the user for: date, amount, category, receipt

2. Validate against policy in [references/policy.md](references/policy.md)

3. If amount > $500, require manager approval

4. Generate report using [assets/template.docx](assets/template.docx)


## Scripts


Run validation: `python scripts/validate.py --file {receipt_path}`


## Edge Cases


- Missing receipts: Flag as "needs follow-up"

- International currency: Convert using daily exchange rate

```


### Required Frontmatter Fields


| Field | Required | Description |

| :--- | :--- | :--- |

| `name` | **Yes** | Max 64 chars. Lowercase letters, numbers, and hyphens only. Must match parent directory name. |

| `description` | **Yes** | Max 1024 chars. What the skill does AND when to use it. Critical for routing! |

| `license` | No | License name or reference |

| `compatibility` | No | Environment requirements (Python version, network access, etc.) |

| `metadata` | No | Any custom key-value pairs (author, version, etc.) |


> ⚠️ **Critical:** The `description` field is how the agent decides whether to load your skill. Use specific keywords that match real user queries.


---


## How the Agent Processes Skills


### Step 1: Discovery


The agent scans predefined directories for skill folders containing `SKILL.md`. Common locations:


| Level | Path | Scope |

| :--- | :--- | :--- |

| **Project-level** | `./.claude/skills/` or `./.codeartsdoer/skills/` | Specific to current project |

| **User-level** | `~/.claude/skills/` or `~/.codeartsdoer/skills/` | Across all projects |

| **System-level** | Built-in skills | Provided by the tool vendor |


### Step 2: Registration & Metadata Injection


At the start of every session, the agent:

1. Recursively scans skill directories (up to 2 levels deep)

2. Reads only the `name` and `description` from each `SKILL.md` frontmatter

3. Injects a compact **skills manifest** into the system prompt 


**What the agent sees at start:**

```

Available skills:

- expense-report: File and validate employee expense reports according to company policy...

- pdf-processor: Extract text, tables, and form data from PDF documents...

- code-review: Review Python code for style, security, and performance issues...

```
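Steps 1–2 can be sketched as a small scanner that reads only the frontmatter of each `SKILL.md` and builds that compact manifest. A self-contained sketch with a deliberately tiny frontmatter parser (a real implementation would use a proper YAML library):

```python
from pathlib import Path

def read_frontmatter(skill_md: Path) -> dict:
    """Parse top-level keys from the YAML block between the first two '---' lines."""
    meta, in_block = {}, False
    for line in skill_md.read_text().splitlines():
        if line.strip() == "---":
            if in_block:
                break  # end of frontmatter: stop without loading the body (L2)
            in_block = True
            continue
        # only top-level "key: value" pairs; nested keys (indented) are skipped
        if in_block and ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def build_manifest(skills_root: Path) -> str:
    """L1 view: only name + description, one line per discovered skill."""
    entries = []
    for skill_md in sorted(skills_root.glob("*/SKILL.md")):
        meta = read_frontmatter(skill_md)
        entries.append(f"- {meta.get('name')}: {meta.get('description')}")
    return "Available skills:\n" + "\n".join(entries)
```

Running `build_manifest(Path("./.claude/skills"))` would yield exactly the kind of compact listing shown above, without paying the token cost of any skill body.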


### Step 3: Intent Matching & Loading


When you ask a question, the agent:

1. Compares your query against skill descriptions

2. If a match is found, calls the `load_skill` tool to retrieve the **full SKILL.md body** 

3. The full instructions are injected into the current context


**Example flow:**

```

User: "Process this PDF and extract all tables"

  ↓

Agent scans: "pdf-processor" description matches

  ↓

Agent calls: load_skill("pdf-processor")

  ↓

Full SKILL.md loads with specific extraction instructions

  ↓

Agent executes using referenced scripts/ and references/

```


### Step 4: Resource Loading (On-Demand)


If the skill instructions reference external files (e.g., `See [references/policy.md](references/policy.md)`), the agent:

1. Reads those files **only when needed** 

2. Injects their content into context at that moment

3. Does NOT keep them loaded afterward


### Step 5: Script Execution (Optional)


Skills can include executable scripts (Python, Bash, etc.) that run in a **sandboxed environment**. The agent:

- Executes the script when instructed

- Passes parameters as needed

- Receives output (stdout/stderr)

- Uses output to inform the final response


---


## Skills vs. Rules vs. Commands


Understanding the distinction is crucial for effective implementation:


| Concept | Who Triggers | Best For | Context Cost | Example |

| :--- | :--- | :--- | :--- | :--- |

| **Rules** | The tool (always applied) | Non-negotiable requirements | Always paid | "Never commit .env files" |

| **Commands** | You (explicit intent) | Repeatable workflows | Paid when used | `/deploy` to trigger deployment |

| **Skills** | The agent (automatic) | Task-specific expertise | Paid when needed | PDF processing, code review |


### Litmus Test


> **"Would you want this instruction to apply even when you're not thinking about it?"**

> - Yes → Make it a **Rule**

> - No → Make it a **Skill** 


---


## Agent Skills vs. MCP (Model Context Protocol)


These are complementary, not competing:


| Aspect | MCP (Model Context Protocol) | Agent Skill |

| :--- | :--- | :--- |

| **Role** | Data pipeline | Cognitive schema |

| **Question** | "How does data get here?" | "How is data used?" |

| **Example** | Fetch live stock prices from Yahoo Finance | Format analysis as professional research report |

| **Output** | Raw JSON data | Structured, formatted response following guidelines |


---


## Tools That Support Agent Skills


| Tool/Platform | Support Level | Notes |

| :--- | :--- | :--- |

| **Claude Code** | Native | Originator of the Skills standard  |

| **Microsoft Agent Framework** | Full support | `FileAgentSkillsProvider` class, C# and Python SDKs  |

| **Huawei CodeArts** | Full support | Project-level and user-level skills  |

| **Builder.io** | Full support | Uses `.builder/` or `.claude/` directories  |

| **Minion (open source)** | Full compatibility | Open-source implementation, LLM-agnostic  |

| **OpenAI** | Similar concept | Uses different implementation (package-manager style)  |


---


## Best Practices for Creating Skills


### ✅ Do's


1. **Write descriptions for routing, not reading** 

   - Bad: "Helps with documents"

   - Good: "Extract tables from PDF files. Use when user mentions PDF, tables, or form extraction."


2. **Keep SKILL.md focused (under 500 lines)** 

   - Move detailed references to `references/` folder

   - Keep only core instructions in the main file


3. **Use progressive disclosure naturally**

   - L1: Metadata (name + description)

   - L2: Core workflow in SKILL.md

   - L3: Detailed policies in `references/`


4. **Include concrete examples** in the instructions

   - Show input/output formats

   - Demonstrate edge case handling


### ❌ Don'ts


1. **Don't stuff everything into one file** - Reference external docs instead

2. **Don't write vague descriptions** - The agent will never find your skill

3. **Don't include sensitive data** - Skills are plain text files in your repo

4. **Don't make skills that are really rules** - Use the litmus test above


---


## Example: Complete Skill for PDF Processing


```

project-root/

└── .claude/

    └── skills/

        └── pdf-analyzer/

            ├── SKILL.md

            ├── scripts/

            │   └── extract_tables.py

            └── references/

                └── table_formats.md

```


**SKILL.md:**

```markdown

---

name: pdf-analyzer

description: Extract text, tables, and form data from PDF documents. Use when user asks about PDF files, form extraction, or table parsing.

license: MIT

compatibility: Requires python3, tabula-py, pypdf2

---


# PDF Analyzer Skill


You are a PDF processing specialist.


## Instructions


1. Locate the PDF file path from user input

2. Determine extraction type:

   - Text: Use pypdf2

   - Tables: Use tabula-py

   - Forms: Use pdfplumber


3. Run the appropriate script from `scripts/`


## Table Extraction


Run: `python scripts/extract_tables.py --input {pdf_path} --output {csv_path}`


Refer to [references/table_formats.md](references/table_formats.md) for handling complex multi-page tables.


## Edge Cases


- Scanned PDFs: Flag as "needs OCR" and suggest alternative tool

- Password-protected: Ask user for password before proceeding

```


---


## Summary


| Question | Answer |

| :--- | :--- |

| **What is an Agent Skill?** | A modular package of instructions + resources giving AI specialized expertise |

| **What files are required?** | `SKILL.md` with YAML frontmatter (name + description) and Markdown instructions |

| **What optional files exist?** | `scripts/` (executable code), `references/` (docs), `assets/` (templates) |

| **How does the agent process skills?** | L1 metadata (always) → L2 instructions (on match) → L3 resources (on reference) |

| **What's the key benefit?** | Reduces context token usage by 60-80%, improves instruction following |
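
The L1 → L2 → L3 loading order in the table above can be sketched in miniature: a hypothetical loader reads only the YAML frontmatter of each `SKILL.md` (L1), deferring the long instruction body until the skill actually matches. This parser is illustrative only, not the agent's actual implementation:

```python
# Minimal sketch (hypothetical, not the real agent loader) of the L1 step:
# read only the frontmatter of a SKILL.md so the full instruction body
# never enters context until the skill matches.

SKILL_MD = """---
name: pdf-analyzer
description: Extract text, tables, and form data from PDF documents.
---

# PDF Analyzer Skill
(long instruction body ...)
"""

def read_frontmatter(text: str) -> dict:
    """Parse the block between the first two '---' fences into a flat dict."""
    _, raw, _body = text.split("---", 2)   # _body (L2) is deliberately ignored
    meta = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

print(read_frontmatter(SKILL_MD)["name"])   # pdf-analyzer
```

Only the `name` and `description` ever reach the agent's context at this stage, which is where the token savings come from.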




## AWS CloudFormation: Practical Steps

Here is a detailed elaboration of each point you listed about AWS CloudFormation, explaining the concepts and the workflow you described.

### CloudFormation Workflow: From Template to Stack and Updates

Your points accurately capture a practical CloudFormation workflow. Here’s a breakdown of each step with more context and technical detail.

---

#### 1. Create a stack. There can be multiple templates; sample templates are also available.


- **What is a Stack?** A stack is the fundamental unit of deployment in CloudFormation. It represents a collection of AWS resources (like EC2 instances, VPCs, Security Groups) that you create and manage as a single group. When you create a stack, CloudFormation provisions all the resources defined in a template.

- **Templates:** A template is the blueprint (YAML or JSON file) that defines *what* resources you want. You can have many different templates for different purposes (e.g., one for a web app, another for a database cluster). You can also reuse the same template to create multiple stacks (e.g., a dev stack and a prod stack).


#### 2. The resource YAML file can have a Resource block where you need to give the Instance name (e.g., MyInstance).


- **`Resources` Block:** This is the **only required** section in a CloudFormation template. It's where you declare each AWS component you want to provision.

- **Logical ID (Instance Name):** Inside the `Resources` block, you give each resource a **Logical ID** (like `MyInstance`). This is a name you use *within the template* to refer to that resource. It's not the actual name of the EC2 instance in the AWS console (though it can be similar).

- **Example:**

  ```yaml

  Resources:

    MyInstance:   # <-- This is the Logical ID

      Type: AWS::EC2::Instance

      Properties:

        ImageId: ami-0abcdef1234567890

        InstanceType: t2.micro

  ```
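
Since `Resources` is the only required top-level section, the structural rule can be sketched as a tiny check. This is a hypothetical illustration; CloudFormation's real server-side validation is far more thorough:

```python
# Hypothetical sketch of the minimal structural rule stated above:
# a template must have a Resources block, and every resource needs a Type.

def validate_template(template: dict) -> list[str]:
    errors = []
    resources = template.get("Resources")
    if not resources:
        return ["Template must contain a non-empty Resources block"]
    for logical_id, resource in resources.items():
        if "Type" not in resource:
            errors.append(f"Resource {logical_id} is missing a Type")
    return errors

template = {
    "Resources": {
        "MyInstance": {                       # the Logical ID
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-0abcdef1234567890",
                           "InstanceType": "t2.micro"},
        }
    }
}
print(validate_template(template))   # []
```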


#### 3. This file can then be uploaded, and this can be viewed in Application Composer.


- **Uploading:** When you create or update a stack via the AWS Console, you upload your YAML/JSON template file. CloudFormation validates the syntax and structure.

- **Application Composer:** As we discussed earlier, AWS Infrastructure Composer is a visual tool. You can open your uploaded template in Composer, and it will generate a **visual diagram** of all the resources and their relationships. This is extremely helpful for complex templates.


#### 4. It gives a visual understanding of the YAML file.


- **Visual Benefits:** The diagram shows icons for each resource (e.g., an EC2 logo, a VPC icon). Arrows indicate connections, like which Security Group is attached to an EC2 instance. This makes it far easier to understand the architecture at a glance compared to reading raw YAML. It also helps spot errors (e.g., a resource not connected where it should be).


#### 5. Now you can proceed to creating the stack.


- **Stack Creation Wizard:** After uploading the template, you proceed through the CloudFormation console wizard. Key steps include:

    - Specifying a **Stack name** (e.g., `MyWebAppStack`).

    - Entering any **Parameters** (if your template uses them for custom inputs).

    - Adding **Tags** (key-value pairs for cost tracking or organization).

    - Reviewing **Capabilities** (you must explicitly acknowledge if the template creates IAM roles or macros).

- **Creation Process:** Clicking "Create stack" starts the provisioning. CloudFormation manages dependencies (e.g., creating a Security Group *before* the EC2 instance that uses it).


#### 6. CloudFormation then provisions the resource. Now we can see that MyInstance is running. The AMI ID will be the one specified in the template.


- **Result:** Once the stack creation completes successfully, you will see an EC2 instance running in the EC2 console. Its configuration (AMI ID, instance type, security groups, etc.) will exactly match what you wrote in the template's `Properties` for the `MyInstance` resource.


#### 7. By default, a set of tags will be applied by this process.


- **Default Tags:** CloudFormation automatically applies a standard set of stack-level tags to all resources it creates. The most important ones are:

    - `aws:cloudformation:stack-name` : The name of your stack.

    - `aws:cloudformation:stack-id` : The unique ID of the stack.

    - `aws:cloudformation:logical-id` : The Logical ID from your template (e.g., `MyInstance`).

- **Why?** These tags are crucial for **resource management**. They allow you to see which stack owns a resource, troubleshoot by correlating resources to a stack, and manage costs by grouping resources from the same stack.


#### 8. We can specify a security group, etc., in the resource YAML file.


- **Example Security Group:** You add a `SecurityGroup` resource and then reference it in the EC2 instance's `SecurityGroupIds` property.

  ```yaml

  Resources:

    MyInstanceSecurityGroup:

      Type: AWS::EC2::SecurityGroup

      Properties:

        GroupDescription: Allow SSH and HTTP

        SecurityGroupIngress:

          - IpProtocol: tcp

            FromPort: 22

            ToPort: 22

            CidrIp: 0.0.0.0/0

          - IpProtocol: tcp

            FromPort: 80

            ToPort: 80

            CidrIp: 0.0.0.0/0


    MyInstance:

      Type: AWS::EC2::Instance

      Properties:

        ImageId: ami-0abcdef1234567890

        InstanceType: t2.micro

        SecurityGroupIds:

          - !Ref MyInstanceSecurityGroup   # Reference the security group

  ```


#### 9. We can assign an Elastic IP and open specific ports via the security group.


- **Elastic IP (EIP):** You can add an `AWS::EC2::EIP` resource and associate it with your instance using the `AWS::EC2::EIPAssociation` resource. This gives your instance a static public IP address.

- **Port Security:** As shown above, you define port access rules inside the `SecurityGroupIngress` block for your Security Group (e.g., opening port 22 for SSH or port 443 for HTTPS).
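
A sketch of the Elastic IP pieces in template form, assuming the `MyInstance` logical ID from the earlier example (the `AllocationId`/`InstanceId` wiring shown is the standard pattern for VPC EIPs):

```yaml
Resources:
  MyEIP:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc

  MyEIPAssociation:
    Type: AWS::EC2::EIPAssociation
    Properties:
      AllocationId: !GetAtt MyEIP.AllocationId
      InstanceId: !Ref MyInstance   # the instance defined earlier
```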


#### 10. Server security group can be applied.


- This essentially repeats point #8. You define the Security Group resource and then apply it to the EC2 instance by referencing it in the instance's `SecurityGroupIds` property.


#### 11. We can now then apply the modified template.


- **Making Changes:** You edit your local YAML file (e.g., changing the `InstanceType` from `t2.micro` to `t3.micro`, or adding a new tag). Then you initiate an **Update** operation on the existing stack using the modified template.


#### 12. This will give a Change Set which gives an idea of what changes are going to be applied.


- **Change Sets:** Before actually applying any modifications, CloudFormation generates a **Change Set**. This is a preview that lists:

    - **What will be added** (new resources).

    - **What will be modified** (which properties of which resources).

    - **What will be replaced** (the old resource is deleted, a new one is created).

    - **What will be removed** (resources no longer in the template).

- **Crucial Safety Step:** Reviewing the Change Set allows you to catch unintended consequences (e.g., accidentally replacing a database instance, causing data loss) *before* executing the update.
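
Conceptually, a Change Set is a diff between the current stack state and the new template. The toy sketch below illustrates the Add/Modify/Remove classification; CloudFormation computes this server-side with much richer property-level rules (including replacement detection):

```python
# Toy sketch of what a Change Set computes: diff the current stack's
# resources against the new template's resources.

def change_set(current: dict, desired: dict) -> dict:
    return {
        "Add":    sorted(set(desired) - set(current)),
        "Remove": sorted(set(current) - set(desired)),
        "Modify": sorted(k for k in set(current) & set(desired)
                         if current[k] != desired[k]),
    }

current = {"MyInstance": {"InstanceType": "t2.micro"}}
desired = {"MyInstance": {"InstanceType": "t3.micro"},
           "MyInstanceSecurityGroup": {"GroupDescription": "Allow SSH and HTTP"}}

print(change_set(current, desired))
# {'Add': ['MyInstanceSecurityGroup'], 'Remove': [], 'Modify': ['MyInstance']}
```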


#### 13. When the update is executed, CloudFormation internally understands exactly what needs to be done. If a replacement is required, it removes the previous EC2 instance automatically.


- **CloudFormation's Logic:** When you execute an update from a Change Set, CloudFormation intelligently compares the current stack's state with the desired state in the new template.

- **Replacement vs. Modification:** For some property changes (like changing an EC2 instance's `ImageId` or `InstanceType` for certain instance families), CloudFormation knows it cannot modify the resource in place. It therefore **automatically orchestrates a replacement**:

    1.  Creates the new resource (e.g., a new EC2 instance with the new AMI).

    2.  If successful, deletes the old resource (the previous instance).

    3.  This is all done without you having to manually terminate anything. **Warning:** This will cause downtime and data loss on the replaced resource unless you have external backups (like EBS snapshots).
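
For resources where data loss matters, CloudFormation provides the `DeletionPolicy` and `UpdateReplacePolicy` resource attributes. A sketch for an EBS volume (illustrative property values; `Snapshot` is supported for volumes and some database resources, not for EC2 instances themselves):

```yaml
Resources:
  MyDataVolume:
    Type: AWS::EC2::Volume
    DeletionPolicy: Snapshot        # take a final EBS snapshot before deletion
    UpdateReplacePolicy: Snapshot   # ...and before replacement during updates
    Properties:
      Size: 8
      AvailabilityZone: us-east-1a  # example AZ
```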


#### 14. Now if we view the stack in Infrastructure Composer, we can see the new configuration is applied.


- **Verification:** After the update completes successfully, you can open the stack again in the AWS Infrastructure Composer (or the CloudFormation Designer) and see the updated visual diagram reflecting the new state. The EC2 console will show the new instance running with the updated configuration.


#### 15. To remove things, it is recommended to go via the template itself rather than doing anything manually.


- **Why This is Critical (Drift):** This is a core best practice of Infrastructure as Code.

    - **Manual Deletion (Bad):** If you manually delete a resource (e.g., terminate an EC2 instance in the console) that was created by CloudFormation, the stack becomes "out of sync". CloudFormation's record says the resource should exist, but it doesn't. This state is called **Drift**.

    - **Consequences of Drift:** Future stack operations (updates, deletions) can fail because CloudFormation expects the resource to be there. The only way to fix it is to manually re-import the resource or delete the entire stack, which can be messy.

    - **Correct Way (Good):** To remove a resource, you **remove its definition from the template's `Resources` block** and perform a **stack update**. CloudFormation will then automatically and cleanly delete that resource for you. The stack remains the single source of truth.
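
Conceptually, drift is simply the gap between the stack's record and what actually exists. A toy sketch (real drift detection also compares individual resource properties, not just existence):

```python
# Toy sketch of drift: the stack's record vs. reality after a manual
# console deletion.

def detect_drift(stack_record: set, actual: set) -> dict:
    return {
        "Deleted": sorted(stack_record - actual),    # recorded but gone
        "Unmanaged": sorted(actual - stack_record),  # exists outside the stack
    }

record = {"MyInstance", "MyInstanceSecurityGroup"}
after_manual_delete = {"MyInstanceSecurityGroup"}

print(detect_drift(record, after_manual_delete))
# {'Deleted': ['MyInstance'], 'Unmanaged': []}
```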


### Summary Table of Your Workflow


| Step | Action | CloudFormation Concept |

| :--- | :--- | :--- |

| **1-4** | Write & visualize YAML template | Authoring IaC; using Infrastructure Composer for clarity |

| **5-7** | Upload and create stack | Stack creation; automatic tagging for governance |

| **8-10** | Add security, networking (EIP, SG) | Defining full resource context in the template |

| **11-13** | Modify template and update stack | Change Sets for safe, previewed updates; automatic resource replacement logic |

| **14-15** | Verify update and delete correctly | Visual verification; always modify/delete via template to avoid Drift |


This workflow is the essence of managing infrastructure reliably and repeatably with CloudFormation.