Tuesday, March 17, 2026

How to serve a local OCR model and run inference

vllm serve nanonets/Nanonets-OCR2-3B

import base64

from openai import OpenAI

# vLLM's OpenAI-compatible server ignores the key, but the client requires one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
model = "nanonets/Nanonets-OCR2-3B"


def encode_image(image_path):
    """Read an image file and return its contents as a base64 string."""
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")


def infer(img_base64):
    """Send the base64-encoded image to the model and return the extracted text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{img_base64}"},
                    },
                    {
                        "type": "text",
                        "text": "Extract the text from the above document as if you were reading it naturally.",
                    },
                ],
            }
        ],
        temperature=0.0,
        max_tokens=15000,
    )
    return response.choices[0].message.content
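The only non-obvious part of the payload is the base64 data URL carrying the image. A quick self-contained check of that format (no server needed; the PNG signature bytes stand in for a real file):

```python
import base64

# Encode a small byte string the same way encode_image() does,
# then build the data URL placed in the image_url message part.
raw = b"\x89PNG\r\n\x1a\n"  # PNG signature bytes, standing in for a real image
img_base64 = base64.b64encode(raw).decode("utf-8")
url = f"data:image/png;base64,{img_base64}"
```

Decoding `img_base64` recovers the original bytes, which is what the server does on its side before handing the image to the model.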



What are various OCR services?

Beyond OCR: Advanced Document Intelligence

Visual Document Retrieval

Retrieve the most relevant documents when given a text query. You can build multimodal RAG pipelines by combining these with vision language models.


Document Question Answering

Instead of converting documents to text and passing to LLMs, feed your document and query directly to advanced vision language models like Qwen3-VL to preserve all context, especially for complex layouts.
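Under the same OpenAI-compatible serving setup as above, document question answering only changes the text part of the message: the user's question replaces the fixed transcription instruction. A minimal sketch of building that payload (`build_doc_qa_messages` and the sample question are illustrative, not a real API):

```python
def build_doc_qa_messages(img_base64, question):
    # Same multimodal payload shape as a transcription request, but the
    # text part carries the user's question instead of an OCR instruction.
    return [
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{img_base64}"},
                },
                {"type": "text", "text": question},
            ],
        }
    ]

# These messages would be passed to client.chat.completions.create()
# with the served VLM's model name (e.g. a Qwen3-VL checkpoint).
msgs = build_doc_qa_messages("QUFB", "What is the invoice total?")
```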


The Future is Open

The past year has seen an incredible wave of new open OCR models, with organizations like AllenAI releasing not just models but also the datasets used to train them. This openness accelerates innovation across the community.


However, we need more open training and evaluation datasets to unlock even greater advances. Promising approaches include:


Synthetic data generation

VLM-generated transcriptions filtered manually or through heuristics

Using existing OCR models to generate training data for new, more efficient models

Leveraging existing corrected datasets

Tuesday, March 10, 2026

What is OpenStack

OpenStack is a popular open-source cloud computing platform used to build and manage private and public clouds, acting as an Infrastructure-as-a-Service (IaaS) solution. It pools, provisions, and manages large-scale computing, storage, and networking resources across data centers via APIs, providing a flexible, scalable alternative to proprietary cloud services. [1, 2, 3, 4]


Key Aspects of OpenStack:
  • Functionality: It functions like a "cloud operating system," controlling diverse hardware resources (virtual machines, bare-metal, containers) to create a self-service, on-demand IT environment.
  • Key Components:
    • Nova: Computing power.
    • Neutron: Networking services.
    • Swift: Object storage.
    • Cinder: Block storage.
    • Keystone: Identity and authentication services.
    • Horizon: Dashboard interface.
  • Origins & Benefits: Launched by NASA and Rackspace in 2010, it offers high scalability, no vendor lock-in, and cost-effective management for large-scale IT infrastructure.
  • Use Cases: Ideal for telecommunications, NFV (Network Functions Virtualization), edge computing, and high-performance computing tasks. [1, 3, 5, 6, 7, 8, 9]
While powerful, it is known for a steep learning curve and high complexity in setup and management, particularly for complex deployments. [5, 6, 10]
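The components above map onto what a client has to configure. For illustration, a minimal `clouds.yaml` as read by the official openstacksdk, with Keystone as the authentication endpoint (every value is a placeholder):

```yaml
# clouds.yaml - read by openstacksdk; all values below are placeholders
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com:5000/v3   # Keystone endpoint
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

With this in place, Python code can call `openstack.connect(cloud="mycloud")` and then talk to Nova, Neutron, and the rest through one connection object.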





Sunday, March 1, 2026

What is OpenClaw

OpenClaw is a viral, open-source autonomous AI agent designed to act as a proactive personal assistant. Unlike traditional chatbots that only respond to prompts, OpenClaw runs continuously in the background and can execute real-world tasks on your behalf.

Core Functionality
  • "The AI that does things": It can manage emails, schedule calendar events, book flights, and browse the web autonomously.
  • Persistent Memory: It stores conversation history and user preferences locally (as Markdown files), allowing it to "remember" and learn your patterns over time.
  • Proactive "Heartbeat": It features a "wake-up" loop that allows it to initiate actions—like alerting you to an urgent email—without being prompted first.
  • Messaging Interface: You interact with it through everyday apps like WhatsApp, Telegram, Discord, and Slack rather than a dedicated website.
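The proactive "heartbeat" is conceptually just a polling loop. A minimal sketch with hypothetical `check_inbox` and `notify` stand-ins (this is not OpenClaw's actual code):

```python
def check_inbox():
    # Hypothetical stand-in for a real mail-reading skill.
    return ["weekly newsletter", "urgent: server down"]

def notify(message):
    # Hypothetical stand-in for a WhatsApp/Telegram/Slack integration.
    print(f"[agent] {message}")

def heartbeat_tick():
    # One iteration of the wake-up loop: scan for new events and act
    # proactively, without waiting for a user prompt.
    alerts = [m for m in check_inbox() if m.startswith("urgent")]
    for m in alerts:
        notify(f"Heads up: {m}")
    return alerts

# A real agent would run this on a timer, roughly:
#   while True:
#       heartbeat_tick()
#       time.sleep(300)
```

The security concerns below follow directly from this shape: the loop reads untrusted input and is empowered to act on it without human review.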
Technical Setup
  • Self-Hosted: It runs on your own hardware (Mac, Windows, Linux) or a private server (VPS), giving you control over your data.
  • Model Agnostic: It acts as a "harness" for Large Language Models; you "bring your own key" for models like Claude, GPT-4, or DeepSeek, or run local models via Ollama.
  • Skill Ecosystem: It supports over 100 community-built "AgentSkills" through the ClawHub registry to extend its capabilities.
History & Renaming
The project was created by developer Peter Steinberger (founder of PSPDFKit) in late 2025. It underwent two rapid rebrands due to trademark concerns:
  1. Clawdbot: Original name (Nov 2025).
  2. Moltbot: Second name (Jan 2026).
  3. OpenClaw: Final name (Jan 30, 2026).
Critical Security Warnings
Because OpenClaw requires deep system access (shell access, file reading/writing), it is considered high-risk for non-technical users.
  • "Lethal Trifecta": Security researchers warn that it can see sensitive data, read untrusted external info (like emails), and take actions, making it vulnerable to prompt injection.
  • Malicious Skills: A significant percentage of community-contributed skills have been found to contain vulnerabilities or malware.
  • Isolation is Required: Experts recommend running it only in a dedicated Virtual Machine or an isolated "disposable" device rather than your primary computer.