Wednesday, March 13, 2024

A sample of defining rule-based logic using an LLM

import requests

def prompt_llm(prompt):
  # Replace with your LLM access method (e.g., API call)
  response = requests.post("https://your-llm-api/generate", json={"prompt": prompt})
  response.raise_for_status()
  return response.json()["text"]


def retrieve_data(data_source):
  # Placeholder: replace with logic to access your data source
  # (e.g., query a ticket database or load a JSON file)
  return []


def generate_rules(data_source, domain):
  # Retrieve the data to analyze
  data = retrieve_data(data_source)

  # Prompt the LLM to analyze the data and suggest rules
  prompt = (
    f"Analyze the provided data related to {domain} and suggest a set of "
    f"'if-then' rules for automating tasks.\nData: {data}"
  )
  llm_response = prompt_llm(prompt)

  # Parse the LLM response and extract rules
  # (replace with logic for handling structured output)
  rules = []
  for line in llm_response.splitlines():
    if line.strip().lower().startswith("if "):
      rules.append(line.strip())

  return rules


# Example usage
data_source = "your_data_source"  # Replace with actual data source
domain = "customer_support"  # Replace with specific domain
rules = generate_rules(data_source, domain)

print(f"Generated rules for {domain}:")
for rule in rules:
  print(rule)



We define the following functions:

prompt_llm: sends a prompt to an LLM and returns its response (replace with your actual LLM access method).

retrieve_data: retrieves data from the data source (replace with your actual data retrieval logic).

generate_rules: takes a data source and a domain as input. It retrieves the data, builds a prompt for the LLM that includes the data and the domain, sends the prompt to the LLM, parses the response to extract candidate rules (replace with logic for handling structured output), and returns the list of generated rules.

The example usage demonstrates how to call generate_rules with a specific data source and domain; the extracted rules are then printed.
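One way to make the rule-extraction step more robust is to instruct the LLM to return a JSON array of rule strings and parse it directly, falling back to line scanning for free-text responses. This is a sketch under that assumption; real model output may still need cleanup:

```python
import json

def parse_rules(llm_response):
  # Attempt to parse the LLM response as a JSON array of rule strings;
  # fall back to line-based extraction if the response is not valid JSON.
  try:
    rules = json.loads(llm_response)
    if isinstance(rules, list):
      return [str(rule).strip() for rule in rules]
  except json.JSONDecodeError:
    pass
  return [line.strip() for line in llm_response.splitlines()
          if line.strip().lower().startswith("if ")]

# Structured response: parsed directly as JSON
structured = '["if category is billing then reset the password"]'
print(parse_rules(structured))

# Free-text response: falls back to scanning for "if ..." lines
free_text = "Here are some rules:\nif category is subscription then process the cancellation"
print(parse_rules(free_text))
```

The fallback keeps the simple line-based heuristic from the main sample, so the function degrades gracefully when the model ignores the formatting instruction.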


Important Considerations:


This is a simplified conceptual example. Real-world LLM integration involves specific APIs or access methods.

The quality of generated rules depends on the LLM's training data and prompting strategies.

Extracting structured rules from the LLM response might require additional Natural Language Processing (NLP) techniques.

Human review and refinement are crucial before deploying AI-generated rules in a production system.
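As a minimal illustration of the extraction point above, a regular expression can pull condition/action pairs out of free-text "if ... then ..." sentences. This is a deliberately simple sketch; real LLM output will vary and may need more robust NLP:

```python
import re

def extract_if_then(text):
  # Capture "if <condition> then <action>" pairs from free text.
  # This simple pattern assumes one rule per sentence.
  pattern = re.compile(r"if\s+(.+?)\s+then\s+(.+?)(?:\.|$)", re.IGNORECASE)
  return [(cond.strip(), action.strip()) for cond, action in pattern.findall(text)]

rules = extract_if_then(
  "If the ticket category is billing then route it to the billing queue. "
  "If the description mentions '404' then check for broken links."
)
for condition, action in rules:
  print(f"WHEN {condition} DO {action}")
```

Structured pairs like these are easier to review by a human and to translate into executable automation than raw rule strings.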


Sample data could look like the following:


[
  {
    "ticket_id": 123,
    "category": "billing",
    "description": "Customer cannot access their online invoice.",
    "resolution": "Reset customer password and sent a new link to access invoices."
  },
  {
    "ticket_id": 456,
    "category": "technical issue",
    "description": "Error message '404 Not Found' when trying to access a product page.",
    "resolution": "Identified a broken link and redirected the customer to the correct product page."
  },
  {
    "ticket_id": 789,
    "category": "subscription",
    "description": "Customer wants to cancel their subscription.",
    "resolution": "Processed customer's request for subscription cancellation."
  }
  # ... (more data samples)
]
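Tickets like these can be serialized into the prompt with json.dumps. This is a sketch; in practice you would sample or summarize large datasets to fit the model's context window:

```python
import json

tickets = [
  {"ticket_id": 123, "category": "billing",
   "description": "Customer cannot access their online invoice.",
   "resolution": "Reset customer password and sent a new link to access invoices."},
  {"ticket_id": 456, "category": "technical issue",
   "description": "Error message '404 Not Found' when trying to access a product page.",
   "resolution": "Identified a broken link and redirected the customer to the correct product page."},
]

domain = "customer_support"
prompt = (
  f"Analyze the provided data related to {domain} and suggest a set of "
  f"'if-then' rules for automating tasks.\n"
  f"Data: {json.dumps(tickets, indent=2)}"
)
print(prompt)
```

Serializing the records as JSON keeps field names visible to the model, which tends to produce rules that reference the same fields (category, description, resolution).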

