Using **AWS AppSync with Lambda resolvers** is a flexible way to integrate GraphQL with **Amazon Bedrock**. While AppSync now supports direct integration with Bedrock (no-code), using a Lambda resolver is still preferred when you need to perform **data validation, prompt engineering, or complex post-processing** before returning the AI's response to the client.
### The Architectural Flow
1. **Client Request:** A user sends a GraphQL query or mutation (e.g., `generateSummary(text: String!)`) to the AppSync endpoint.
2. **AppSync Resolver:** AppSync identifies the field and triggers the associated **Lambda Data Source**.
3. **Lambda Function:** The function receives the GraphQL arguments, constructs a prompt, and calls the **Bedrock Runtime API**.
4. **Bedrock Inference:** Bedrock processes the prompt and returns a JSON response.
5. **Return to Client:** Lambda parses the result and returns it to AppSync, which maps it back to the GraphQL schema.
---
### Step-by-Step Implementation
#### 1. Define the GraphQL Schema
In the AppSync console, define the types and the mutation that will trigger the AI.
```graphql
type AIResponse {
  content: String
  usage: String
}

type Mutation {
  askBedrock(prompt: String!): AIResponse
}
```
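For reference, the client invokes this mutation over plain HTTPS against the AppSync GraphQL endpoint. The sketch below assumes API-key authorization; the endpoint URL and key are placeholders.

```javascript
// Minimal client-side call (API-key auth assumed; endpoint and key are placeholders)
const query = /* GraphQL */ `
  mutation AskBedrock($prompt: String!) {
    askBedrock(prompt: $prompt) {
      content
      usage
    }
  }
`;

const res = await fetch("https://<api-id>.appsync-api.us-east-1.amazonaws.com/graphql", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": "<api-key>",
  },
  body: JSON.stringify({ query, variables: { prompt: "Summarize GraphQL in one sentence." } }),
});

const { data, errors } = await res.json();
console.log(data?.askBedrock?.content ?? errors);
```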
#### 2. Create the Lambda Resolver (Node.js Example)
The Lambda function acts as the "middleman." It uses the `@aws-sdk/client-bedrock-runtime` to communicate with the foundation models.
```javascript
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

export const handler = async (event) => {
  // Extract the prompt from the AppSync 'arguments' object
  const { prompt } = event.arguments;

  const input = {
    modelId: "anthropic.claude-3-haiku-20240307-v1:0",
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      anthropic_version: "bedrock-2023-05-31",
      max_tokens: 500,
      messages: [{ role: "user", content: prompt }],
    }),
  };

  try {
    const command = new InvokeModelCommand(input);
    const response = await client.send(command);

    // Decode and parse the binary response body
    const responseBody = JSON.parse(new TextDecoder().decode(response.body));

    return {
      content: responseBody.content[0].text,
      usage: "Success",
    };
  } catch (error) {
    console.error(error);
    throw new Error("Failed to invoke Bedrock");
  }
};
```
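To sanity-check the handler locally, you can stub the event AppSync would send. With Direct Lambda Resolvers, AppSync passes the full resolver context object; only `arguments` is read above, and the other fields shown here are illustrative.

```javascript
// Stubbed AppSync event for a local smoke test (appended to the handler file).
const testEvent = {
  arguments: { prompt: "Write a haiku about GraphQL." },
  identity: null, // populated when the caller is authenticated
  info: { parentTypeName: "Mutation", fieldName: "askBedrock" },
};

handler(testEvent).then((result) => console.log(result.content));
```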
#### 3. Configure IAM Permissions
Your Lambda function's execution role **must** have permission to call the specific Bedrock model.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
    }
  ]
}
```
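If you define your infrastructure with the AWS CDK rather than hand-writing policies, the same grant can be attached in code. A minimal sketch, assuming `fn` is the `lambda.Function` construct for the resolver:

```javascript
// CDK sketch: grant the function's execution role access to a single Bedrock model.
const { PolicyStatement } = require("aws-cdk-lib/aws-iam");

fn.addToRolePolicy(
  new PolicyStatement({
    actions: ["bedrock:InvokeModel"],
    resources: [
      "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
    ],
  })
);
```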
---
### Why use Lambda instead of AppSync's Direct Bedrock Integration?
While AppSync can now talk to Bedrock directly through its native Bedrock data source integration, the **Lambda approach** is better for:
* **Prompt Orchestration:** You can fetch additional data from DynamoDB or a vector database (like Pinecone or OpenSearch) to augment the prompt (**RAG architecture**) before sending it to Bedrock; a minimal sketch follows this list.
* **Response Sanitization:** You can filter the AI's output for PII (Personally Identifiable Information) or toxic content before it reaches the user.
* **Logging & Auditing:** You can easily log exact input/output tokens to CloudWatch for cost tracking and performance monitoring.
* **Error Handling:** You can provide custom "fallback" responses if the AI service is throttled or the prompt violates safety filters.
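To make the prompt-orchestration point concrete, here is a rough sketch of augmenting the prompt with a DynamoDB lookup before the Bedrock call. The table name, key schema, and attribute are hypothetical:

```javascript
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

const ddb = new DynamoDBClient({ region: "us-east-1" });

// Naive RAG: fetch stored context for a user and prepend it to the prompt.
// "UserContext", "userId", and "notes" are hypothetical names.
async function buildAugmentedPrompt(userId, prompt) {
  const { Item } = await ddb.send(
    new GetItemCommand({
      TableName: "UserContext",
      Key: { userId: { S: userId } },
    })
  );
  const context = Item?.notes?.S ?? "";
  return `Context:\n${context}\n\nQuestion:\n${prompt}`;
}
```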
### Handling Long-Running Queries
AppSync enforces a **30-second timeout** on request execution, so a standard Lambda-backed query fails if the model (such as Claude 3 Opus) takes longer to generate its full response. In those cases, it is recommended to invoke the model asynchronously and use **AppSync Subscriptions** to stream the response back to the client token by token, as sketched below.
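One common pattern is to have the mutation kick off an asynchronous Lambda that streams tokens from Bedrock via `InvokeModelWithResponseStreamCommand` and publishes each chunk through a small mutation that clients subscribe to. A rough sketch of the streaming side, where `publishChunk` is a hypothetical helper that fires such a mutation:

```javascript
import { BedrockRuntimeClient, InvokeModelWithResponseStreamCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

export const streamHandler = async (event) => {
  const response = await client.send(
    new InvokeModelWithResponseStreamCommand({
      modelId: "anthropic.claude-3-haiku-20240307-v1:0",
      contentType: "application/json",
      accept: "application/json",
      body: JSON.stringify({
        anthropic_version: "bedrock-2023-05-31",
        max_tokens: 500,
        messages: [{ role: "user", content: event.arguments.prompt }],
      }),
    })
  );

  // The response body is an async iterable; each event carries a binary chunk.
  for await (const item of response.body) {
    if (!item.chunk?.bytes) continue;
    const payload = JSON.parse(new TextDecoder().decode(item.chunk.bytes));
    // Claude's streaming format emits partial text in content_block_delta events.
    if (payload.type === "content_block_delta") {
      await publishChunk(payload.delta.text); // hypothetical: triggers the subscription via a mutation
    }
  }
};
```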