Mastering Amazon Bedrock: A Full Hands-On Guide to Intelligent Agents and Knowledge Bases

Ayushmaan Srivastav
Mar 10, 2024


Generative AI works like a creative assistant, producing new content such as images or stories from what it has learned. A prompt is the input cue that guides the AI toward the text or other output you want. Generative AI Ops applies the same intelligence to operations, helping identify and address issues, predict problems, and improve efficiency with minimal human intervention. Companies use generative AI to automate tasks such as content writing and design creation, saving cost.

Artificial intelligence (AI) is the broader effort to make computers think and learn on their own so they can perform tasks that typically require human intelligence. Natural language is ordinary human communication in words and sentences, and models in AI are simplified representations that simulate real-world processes. Large Language Models (LLMs) and Generative Pre-trained Transformers (GPTs) are advanced AI programs that understand and generate human-like text; depending on its training data, a GPT can produce text, images, or music. Unsupervised learning lets computers find patterns in data without explicit guidance, and models such as Anthropic's Claude handle natural language tasks like text completion and dialogue generation. Foundation models, trained on vast amounts of data to understand language and generate text, form the basis for specialized AI applications.

Amazon Bedrock is a fully managed AWS service that provides access to these foundation models, giving you the tools and infrastructure to build and manage generative AI applications reliably and securely.

Step-by-Step Guide: Getting Started with AWS Bedrock

1. Account Setup:

  • If you don’t have an AWS account, sign up for one on the AWS website.
  • Log in to your AWS Management Console.

2. Navigate to AWS Bedrock:

  • In the AWS Management Console, search for “AWS Bedrock” in the service search bar.

3. Access AWS Bedrock Service:

  • Click on the AWS Bedrock service from the search results to enter the Bedrock dashboard.

4. Explore Dashboard Sections:

  • Familiarize yourself with the different sections on the Bedrock dashboard, such as Model Access, Playgrounds, and Activity Logs.

5. Request Model Access:

  • In the Model Access section, click on “Manage Model Access.”
  • Select the models you need access to and click “Save Changes.”

6. Utilize Playgrounds for Testing:

  • Navigate to the Playgrounds section to test and experiment with foundation models in a controlled environment.
  • Click on ‘Select Model’ to choose the desired model for your testing purposes.

7. Request Model Access in Activity Page:

  • Log into the Activity Page within AWS Bedrock.
  • Click on “Manage Model Access” to request access to specific models.

8. Test Models and Retrieve Responses:

  • Once model access is granted, input prompts or requests in the designated areas.
  • Click on “Run” to execute the prompt and receive responses generated by the selected model.

9. Control Creativity with Parameters and Temperature:

  • Understand the concept of parameters and temperature in generative AI.
  • Experiment with different values to control the randomness and creativity of the model’s output.
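
Following step 9, here is a minimal sketch of passing those parameters programmatically. It assumes the boto3 bedrock-runtime client and the Amazon Titan Text Express model; the exact request-body fields differ from one model provider to another.

import json
import boto3

# Assumes AWS credentials and a region where Bedrock is available are configured.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "inputText": "Write a two-line poem about the cloud.",
    "textGenerationConfig": {
        "temperature": 0.9,   # higher values -> more random, creative output
        "topP": 0.9,
        "maxTokenCount": 200
    }
}

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID; use one you have access to
    body=json.dumps(body)
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])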

10. Access Additional AWS Bedrock Features:

  • Explore additional models available within AWS Bedrock, such as Amazon's Titan Text G1 and Stability AI's SDXL 1.0.

11. Explore Natural Language Understanding:

  • Dive into natural language understanding with models like Claude by Anthropic.
  • Test its capabilities by inputting prompts and analyzing the generated responses.

12. Save and Share Outputs:

  • Save the outputs generated by the models for future reference or sharing.
  • Explore options for exporting or integrating the results into your applications.

13. Stay Informed:

  • Keep an eye on AWS Bedrock updates and announcements for new features, models, and improvements.

14. Troubleshooting:

  • If you encounter any issues or have questions, refer to the documentation or AWS support resources for assistance.

Retrieval-Augmented Generation (RAG): A Brief Overview

Enhancing Accuracy and Relevance: RAG is a cutting-edge technique that combines information retrieval with text generation, addressing the limitations of large language models (LLMs). In Amazon Bedrock, RAG ensures that responses are not only accurate and up-to-date but also tailored to specific domains or tasks. This is achieved by allowing LLMs to access external knowledge bases, making them more precise and contextually relevant.

Runtime Execution: At runtime, RAG utilizes an embedding model to convert user queries into vectors. These vectors are then used to query a vector index, retrieving semantically similar chunks from external knowledge bases. The user prompt is then enriched with this additional context before being sent to the model for response generation. This dynamic process ensures real-time, context-aware interactions.
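
As a minimal sketch of the first stage of that pipeline, the sample below converts a query into a vector, assuming the boto3 bedrock-runtime client and the Titan Embeddings G1 - Text model; the vector-index lookup itself is left to whichever vector store you use.

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

query = "What documents are required to settle an open claim?"

# Convert the user query into an embedding vector (assumed model ID).
response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    body=json.dumps({"inputText": query})
)

embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # dimensionality of the vector (1536 for this model)

# The vector would then be sent to a vector index (e.g., OpenSearch Serverless)
# to retrieve semantically similar chunks, which are appended to the prompt.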

Improving Transparency: RAG doesn’t just focus on results; it prioritizes transparency and explainability. Users are provided with information about the sources used to generate responses, empowering them to understand the model’s decision-making process and assess the credibility of the information provided.

Why Knowledge Bases Matter:

Knowledge bases serve as a crucial component for applications seeking to enhance responses through the integration of retrieved information. By associating a knowledge base with an agent, users can augment their queries and prompts, providing a richer context for more informed responses.

Prerequisites for setting up a Knowledge Base in Amazon Bedrock

Prepare Information Files:

  • Before creating a knowledge base, gather the information you want it to have. Imagine these are like chapters in a book. Once you’ve got these “chapters” ready, upload them to an Amazon S3 bucket, which is like a virtual storage space.

Vector Store (Optional):

  • Think of a vector store as a special library where your knowledge base can find related information efficiently. If you don’t have a preference, the AWS Management Console can set up this library for you automatically. It’s like letting someone else organize your bookshelf.

IAM Service Role (Optional):

  • Your knowledge base needs certain permissions to do its job effectively. This is like giving your knowledge base a key to access the information. You can either create a custom IAM service role with the right permissions or let the AWS Management Console handle it for you. If you’re not into making keys, the console has got you covered.

Extra Security Configurations (Optional):

  • For an extra layer of protection, you can set up some security configurations. It’s like adding a lock to your already secured door. If you prefer a little extra security, follow the steps for encrypting your knowledge base resources. If not, feel free to skip this step.

So, get your information ready, upload it to the virtual storage (S3 bucket), decide if you want a special library (vector store), make sure your knowledge base has the right key (IAM service role), and add an extra lock if you want (security configurations). Now, your knowledge base is good to go!

Steps to Set Up Your Knowledge Base:

Step 1: Accessing the Amazon Bedrock Console

  1. Navigate to the Amazon Bedrock console.
  2. Log in with your IAM user credentials; note that a root user cannot create a knowledge base.

Step 2: Initiating Knowledge Base Creation

  1. In the left navigation pane, select “Knowledge base.”
  2. Click on “Create knowledge base.”

Step 3: Providing Knowledge Base Details

  1. Knowledge Base Details Section:
  • Provide a name and description for your knowledge base.
  • In the IAM Permissions section, choose an IAM role with the necessary permissions for Amazon Bedrock. You can either let Bedrock create the role or select a custom role.
  • Optionally, add tags for better organization.
  • Click “Next.”

Step 4: Setting Up Data Sources

Data Source Details:

  • Provide a name for the data source and the URI of the Amazon S3 object containing your data.
  • Optionally, if your S3 data is encrypted with a customer-managed key, select “Add customer-managed AWS KMS key for Amazon S3 data” and choose a KMS key.
  • Choose a chunking strategy (default, fixed size, or no chunking).
  • Click “Next.”

Embeddings Model Section:

  • Choose an embeddings model for converting knowledge base data into embeddings (e.g., Titan Embeddings G1 - Text).
  • In the Vector Database section, choose to either:
  • Quick create a new vector store (let Amazon Bedrock manage it).
  • Choose an existing vector store that you have created.
  • Configure additional settings as needed.
  • Click “Next.”
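
If you prefer the API over the console for this step, here is a minimal sketch using the boto3 bedrock-agent client, with an assumed knowledge base ID, bucket ARN, and a fixed-size chunking strategy chosen purely for illustration:

import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Assumed identifiers for illustration only.
response = bedrock_agent.create_data_source(
    knowledgeBaseId="KB12345678",
    name="claims-documents",
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::my-knowledge-base-bucket"}
    },
    vectorIngestionConfiguration={
        "chunkingConfiguration": {
            "chunkingStrategy": "FIXED_SIZE",
            "fixedSizeChunkingConfiguration": {"maxTokens": 300, "overlapPercentage": 20}
        }
    }
)

print(response["dataSource"]["dataSourceId"])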

Step 5: Review and Create

Reviewing Configuration:

  • Check all details on the “Review and create” page.
  • Select “Edit” in any section if modifications are needed.

Initiating Knowledge Base Creation:

  • When satisfied, click “Create knowledge base.”
  • The creation process starts, and the status changes to “In progress.”

Completion:

Once completed, a green success banner appears, and the status becomes “Ready.”

Sync to ingest your data sources into the knowledge base

Step 1: Access the Amazon Bedrock Console

  • Sign in to the AWS Management Console and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

Step 2: Select Your Knowledge Base

  • In the left navigation pane, click on “Knowledge base.”
  • Choose the specific knowledge base where you want to ingest data.

Step 3: Access Data Source Section

  • Within the knowledge base, locate the “Data source” section.

Step 4: Initiate Data Ingestion (Sync)

  • Click on “Sync” to begin the data ingestion process.

Step 5: Monitor Ingestion Progress

  • Wait for the process to complete. A green success banner will appear if the ingestion is successful.

Step 6: Check for Warnings (if needed)

  • If there are any issues or warnings during the data ingestion, you can click on “View warnings” to understand why a particular job failed.

And that’s it! You’ve successfully ingested your data sources into the knowledge base using the Amazon Bedrock console.

Note: Remember to ensure that your files are in a supported format and do not exceed the maximum file size. Additionally, if you make changes to the files in the S3 bucket, you’ll need to sync again to update the knowledge base incrementally.
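
The same sync can also be triggered programmatically. Below is a minimal sketch using the boto3 bedrock-agent client with hypothetical knowledge base and data source IDs:

import time
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Assumed IDs for illustration.
job = bedrock_agent.start_ingestion_job(
    knowledgeBaseId="KB12345678",
    dataSourceId="DS12345678"
)["ingestionJob"]

# Poll until the ingestion job finishes.
while job["status"] not in ("COMPLETE", "FAILED"):
    time.sleep(10)
    job = bedrock_agent.get_ingestion_job(
        knowledgeBaseId="KB12345678",
        dataSourceId="DS12345678",
        ingestionJobId=job["ingestionJobId"]
    )["ingestionJob"]

print(job["status"])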

Test your knowledge base using the Amazon Bedrock console:

Step 1: Access Amazon Bedrock Console

  • Sign in to the AWS Management Console and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

Step 2: Navigate to Knowledge Base

  • In the left navigation pane, click on “Knowledge base.”

Step 3: Choose Knowledge Base for Testing

  • In the “Knowledge bases” section, either:
  • Choose the radio button next to the knowledge base you want to test and select “Test knowledge base.”
  • Alternatively, choose the knowledge base directly, and a test window will expand from the right.

Step 4: Configure Test Settings (Optional)

  • Optionally, click on the configurations icon to open up “Configurations.”
  • If you are using an Amazon OpenSearch Serverless vector store with a filterable text field, you can modify the “Search type” according to your preferences (Default, Hybrid, or Semantic).

Step 5: Choose Response Generation Settings

  • Decide whether you want to generate responses for your query or not.
  • If you turn on “Generate responses,” Amazon Bedrock generates an answer based on the text retrieved from your data sources and cites its sources with footnotes.
  • If you turn it off, Amazon Bedrock returns the relevant text chunks directly from your data sources, in order of relevance.

Step 6: Model Selection (If generating responses)

  • If generating responses, choose “Select model” to pick a model for response generation.
  • Click “Apply” to confirm your selection.

Step 7: Enter and Run Query

  • In the chat window, enter your query in the text box.
  • Click on “Run” to execute the query and receive responses.

Step 8: Examine Responses

  • If you didn’t generate responses, text chunks are returned directly in order of relevance.
  • If generating responses, you can:
  • Select a footnote to see an excerpt from the cited source.
  • Choose the link to navigate to the S3 object containing the file.
  • Click “Show result details” to see the chunks cited for each footnote.

Step 9: Additional Actions in Chat Window

  • While using the chat window to test your knowledge base, you can:
  • Select “Change model” to switch to a different model for response generation.
  • Switch between generating responses and returning direct quotations by selecting or clearing “Generate responses.”
  • Clear the chat window by selecting the broom icon.
  • Copy all the output in the chat window by selecting the copy icon.

And there you have it! You’ve successfully tested your knowledge base using the Amazon Bedrock console.

Harnessing the Power of Your Knowledge Base:

Retrieval and Generation (RAG):

Configure your RAG application to use the RetrieveAndGenerate API for querying and generating responses.
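
A minimal sketch of such a call, assuming the boto3 bedrock-agent-runtime client, a hypothetical knowledge base ID, and the Claude v2 foundation model ARN as the generator:

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "Which documents are still missing for claim claim-857?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB12345678",  # assumed ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"
        }
    }
)

print(response["output"]["text"])               # generated answer
for citation in response.get("citations", []):  # source attributions
    print(citation)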

Agent Integration:

  • Associate your knowledge base with an agent to enable RAG capabilities, aiding in reasoning through user queries.

Custom Orchestration:

  • Create a custom flow in your application by utilizing the Retrieve API to directly retrieve information from the knowledge base.
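
For this custom-orchestration path, here is a corresponding minimal sketch of the Retrieve API, again assuming the boto3 bedrock-agent-runtime client and a hypothetical knowledge base ID; your application then handles generation itself:

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="KB12345678",  # assumed ID
    retrievalQuery={"text": "policy holder contact details"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}}
)

for result in response["retrievalResults"]:
    print(result["content"]["text"], result["location"])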

Contextual Prompts:

  • Knowledge base responses can enhance prompts for foundation models, providing context and citations for users to verify information.

Understanding Amazon Bedrock Agents

At its core, an agent is your virtual assistant, orchestrating interactions between foundation models, data sources, applications, and user conversations. It seamlessly handles tasks, making API calls, and even taps into knowledge bases to enhance its capabilities. Think of it as your AI sidekick, ready to assist your end-users without the hassle of managing infrastructure or writing custom code.

Tasks Performed by Agents: Amazon Bedrock Agents are multitaskers. They extend foundation models, understand user requests, engage in natural conversations to gather additional information, make API calls to fulfill requests, and boost performance by querying data sources.

How to Use an Agent:

  1. Create a Knowledge Base (Optional): Store private data in a knowledge base for added intelligence.
  2. Configure Your Agent: Define your use case, add actions, and write Lambda functions to guide your agent’s behavior.
  3. Associate a Knowledge Base: Enhance your agent’s performance by linking it to a knowledge base.
  4. Customize Agent Behavior (Optional): Tailor your agent’s behavior by modifying prompt templates for various steps.
  5. Test Your Agent: Use the Amazon Bedrock console or API calls to test and trace your agent’s reasoning process.
  6. Deploy Your Agent: When ready, create an alias pointing to your agent’s version for seamless deployment.
  7. Set Up Your Application: Make API calls to your agent alias from your application.

Benefits of Amazon Bedrock Agents:

  • No need to provision capacity or manage infrastructure.
  • Handles prompt engineering, memory, monitoring, encryption, and user permissions.
  • Accelerates generative AI application delivery.

Prerequisites for using Amazon Bedrock Agents:

Permissions for IAM Role:

  • Make sure your IAM role (Identity and Access Management) has the right permissions. This is like giving your agent the green light to do its job.

Action Groups:

  • Think of action groups as the to-do list for your agent. It tells the agent what actions it can help users with, like making calls to certain APIs, and how to handle those actions. If you’re not ready to decide on this to-do list now, no worries — you can always add it later.

Knowledge Bases:

  • Imagine knowledge bases as your agent’s encyclopedia. It’s a collection of information that the agent can use to answer customer questions and improve its responses. If you have specific data you want your agent to know, like private information, you’ll need to set up at least one knowledge base. But again, if you’re not ready for this step, you can skip it.

IAM Service Role:

  • Your agent needs a special role to do its job effectively. It’s like giving your agent a superhero cape. You can either create a custom IAM service role with the right powers, or you can let the AWS Management Console automatically set it up for you. If you’re not sure, the console can handle it.

So, in a nutshell, make sure your agent has permission, decide on its to-do list (action groups), consider giving it some knowledge (knowledge bases), and make sure it has the right superhero role (IAM service role). Now, your agent is all set to help your customers!

Creating an Amazon Bedrock Agent: Step-by-Step Guide

Step 1: Access Amazon Bedrock Console

Open the Console:

  • Sign in to the AWS Management Console and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

Navigate to Agents:

  • In the left navigation pane, select “Agents.”

Start Agent Creation:

  • Click on “Create Agent” to begin setting up your new agent.

Step 2: Agent Configuration

Provide Agent Details:

  • Enter a name and an optional description for your agent.
  • Choose whether the agent can request more information from the user if needed.

IAM Permissions:

  • Select or create an IAM service role for your agent. This is like giving your agent the right permissions to do its job.

Encryption Settings (Optional):

  • Decide if you want to use your own encryption key for added security.

Idle Session Timeout:

  • Set a duration. If the user hasn’t responded within this time, the conversation history won’t be maintained.

Tags (Optional):

  • Add tags if you want to associate specific labels with your agent.

Proceed to the Next Step:

  • Click “Next” when you’ve completed the agent details.

Step 3: Select Model

Choose a Model:

  • Select a model provider and the specific model your agent will use.

Instructions for the Agent:

  • Provide details to guide your agent’s behavior. It’s like giving your agent a script to follow.

Proceed to the Next Step:

  • Click “Next” when you’re done configuring the foundation model.

Step 4: Add Action Groups (Optional)

Add Action Groups:

  • If your agent needs to perform specific actions, add action groups.
  • Enter a name and description for the action group.
  • Choose a Lambda function that contains the business logic for the action.
  • Provide the API schema if needed.

Add More Action Groups (Optional):

  • If your agent requires multiple action groups, click “Add another action group.”

Proceed to the Next Step:

  • Click “Next” when you’ve added all necessary action groups.

Step 5: Add Knowledge Bases (Optional)

Add Knowledge Bases:

  • If your agent needs additional knowledge to enhance responses, associate knowledge bases.
  • Choose an existing knowledge base or create a new one.

Knowledge Base Instructions:

  • Provide instructions on how the agent should use the knowledge base.

Add More Knowledge Bases (Optional):

  • If your agent requires multiple knowledge bases, click “Add another knowledge base.”

Proceed to the Next Step:

  • Click “Next” when you’ve added all necessary knowledge bases.

Step 6: Review and Create

Review Configuration:

  • Check all the configurations you’ve made for your agent.

Edit Any Section (Optional):

  • If you need to make changes, click “Edit” for the respective section.

Create the Agent:

  • When you’re ready, click “Create” to initiate the agent creation process.

Confirmation:

  • Wait for the process to finish. A banner will appear at the top confirming the successful creation of your agent.
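
The same agent can also be created through the API. Here is a minimal sketch using the boto3 bedrock-agent client with assumed names, role ARN, and model ID:

import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

response = bedrock_agent.create_agent(
    agentName="insurance-claims-agent",     # assumed name
    foundationModel="anthropic.claude-v2",  # assumed model ID
    instruction="You help insurance agents track open claims and missing documents.",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # assumed role
    idleSessionTTLInSeconds=600
)

print(response["agent"]["agentId"], response["agent"]["agentStatus"])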

Creating Action Groups with AWS

Step 1: Set Up OpenAPI Schema

Define API Structure: Start by outlining the structure of your API. Specify the required parameters, responses, and any other relevant details. Consider using the OpenAPI specification for consistency.

Create OpenAPI Schema:

  • Option 1: Create a JSON or YAML OpenAPI schema file and upload it to an Amazon S3 bucket.
  • Ensure your schema adheres to OpenAPI standards.
  • Upload the file to an S3 bucket.
  • Option 2: Use the AWS Management Console.
  • If your agent is already created, go to the AWS Management Console.
  • Navigate to the action group settings and use the inline OpenAPI schema editor to define your API.

Step 2: Develop Lambda Function

Lambda Function Creation:

  • Go to the AWS Lambda service in the AWS Management Console.
  • Click on “Create function” and choose “Author from scratch.”
  • Configure the function details (name, runtime, etc.) and click “Create function.”

Implement Business Logic:

  • In the Lambda function editor, write the business logic for your action group.
  • Utilize the parameters received from the API call to perform the desired action.
  • Ensure the function returns a response according to the API schema.

Step 3: Link Action Group with Lambda Function

Configure Action Group:

  • Navigate to the Amazon Bedrock console.
  • Locate your agent and access the settings for the action group.
  • Add the OpenAPI schema either by uploading the S3 file or using the inline editor.

Associate Lambda Function:

  • In the action group settings, link the Lambda function you created to handle the business logic.

Step 4: Test Your Action Group

Test in AWS Console:

  • Use the Test window in the Amazon Bedrock console to simulate a conversation with your agent.
  • Invoke the action group to verify that the API is called and the Lambda function executes correctly.

Monitoring and Debugging:

  • Monitor your Lambda function’s logs for any errors or unexpected behavior.
  • Utilize AWS CloudWatch for detailed logging and debugging.
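
If you prefer to wire the action group through the API instead of the console, here is a minimal sketch using the boto3 bedrock-agent client, with an assumed agent ID, Lambda ARN, and schema location:

import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

response = bedrock_agent.create_agent_action_group(
    agentId="AGENT12345",   # assumed agent ID
    agentVersion="DRAFT",   # action groups are edited on the working draft
    actionGroupName="insurance-claims-actions",
    actionGroupExecutor={
        "lambda": "arn:aws:lambda:us-east-1:123456789012:function:claims-handler"  # assumed ARN
    },
    apiSchema={
        "s3": {
            "s3BucketName": "my-openapi-schemas",  # assumed bucket and key
            "s3ObjectKey": "insurance-claims.json"
        }
    }
)

print(response["agentActionGroup"]["actionGroupId"])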

Step-by-Step Guide to Creating Action Groups in Amazon Bedrock with OpenAPI Schemas

In Amazon Bedrock, creating action groups involves defining API operations using OpenAPI schemas. This guide will walk you through the process of creating OpenAPI schemas for three action groups: PDF, RSS, and Insurance Claims Automation. We’ll focus on the Insurance Claims Automation example for detailed illustration.

Prerequisites

Before you begin, ensure you have:

  • An AWS account with access to Amazon Bedrock and AWS Lambda.
  • Basic knowledge of OpenAPI specifications.

Step 1: Understanding OpenAPI Schema Basics

To start, let’s understand the basic structure of an OpenAPI schema. Below is a template:

{
  "openapi": "3.0.0",
  "paths": {
    "/path": {
      "method": {
        "description": "string",
        "operationId": "string",
        "parameters": [ … ],
        "requestBody": { … },
        "responses": { … }
      }
    }
  }
}

  • openapi: The version of OpenAPI, must be "3.0.0" or higher.
  • paths: Contains relative paths to individual endpoints.
  • method: Defines the HTTP method (GET, POST, etc.).
  • description: Describes the API operation.
  • operationId: Unique identifier for the operation.
  • parameters: Information about parameters.
  • requestBody: Fields in the request body.
  • responses: Properties that the agent returns.

Step 2: Example OpenAPI Schema for Insurance Claims Automation

{
  "openapi": "3.0.0",
  "info": {
    "title": "Insurance Claims Automation API",
    "version": "1.0.0",
    "description": "APIs for managing insurance claims…"
  },
  "paths": {
    "/claims": {
      "get": {
        "summary": "Get a list of all open claims",
        "description": "Get the list of all open insurance claims…",
        "operationId": "getAllOpenClaims",
        "responses": {
          "200": {
            "description": "Gets the list of all open insurance claims…",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "properties": {
                      "claimId": {
                        "type": "string",
                        "description": "Unique ID of the claim."
                      },
                      "policyHolderId": {
                        "type": "string",
                        "description": "Unique ID of the policy holder…"
                      },
                      "claimStatus": {
                        "type": "string",
                        "description": "The status of the claim…"
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    },
    "/claims/{claimId}/identify-missing-documents": { … },
    "/send-reminders": { … }
  }
}

In this example, we’ve defined the getAllOpenClaims operation. Now, let's break down the structure.

Understanding Example API Schema

/claims Endpoint (GET Method):

  • summary: Brief summary of the operation.
  • description: Detailed description of the operation.
  • operationId: Unique identifier.
  • responses: Describes the expected response.

Response Schema (200 OK):

  • content: Describes the content type.
  • schema: Defines the data type of the response body.
  • properties: Fields in the response.

Step 3: Creating Additional Endpoints

For brevity, we’ve shown only one endpoint. Repeat the process for the /claims/{claimId}/identify-missing-documents and /send-reminders endpoints following a similar structure.

You’ve successfully created an OpenAPI schema for an action group in Amazon Bedrock. Repeat these steps for your specific action groups, and don’t forget to test your APIs in the AWS console to ensure they work as expected. Feel free to explore more advanced features based on your use case.

For more examples of OpenAPI schemas, see https://github.com/OAI/OpenAPI-Specification/tree/main/examples/v3.0 on the GitHub website.

Define Lambda functions for your agent’s action groups in Amazon Bedrock

Step 1: Setting Up Your Lambda Function

1.1 Create a new Lambda Function:

  • Log in to the AWS Management Console.
  • Navigate to Lambda service and click “Create Function.”
  • Choose a function name, runtime (e.g., Python), and set up basic configurations.

1.2 Write the Lambda Handler:

  • In the “Function code” section, replace the default code with the following Python example:

def lambda_handler(event, context):
    # Your custom logic here: route on event['apiPath'] / event['httpMethod']
    # and call your business code, then build the response below.

    return {
        'messageVersion': '1.0',
        'response': {
            # Echo the routing fields so Bedrock can match the response to the request.
            'actionGroup': event['actionGroup'],
            'apiPath': event['apiPath'],
            'httpMethod': event['httpMethod'],
            'httpStatusCode': 200,
            'responseBody': {
                'application/json': {
                    'body': 'sample response'
                }
            }
        },
        # Pass session state back so the conversation context is preserved.
        'sessionAttributes': event['sessionAttributes'],
        'promptSessionAttributes': event['promptSessionAttributes']
    }

1.3 Save and Deploy:

  • Save the Lambda function, and click “Deploy” to make it accessible.

Step 2: Understanding the Input Event

2.1 Lambda Input Event Structure:

  • Familiarize yourself with the structure of the Lambda input event.
  • Understand key fields like actionGroup, apiPath, httpMethod, and parameters.

2.2 Customize Business Logic:

  • Use input event fields to manipulate business logic within the Lambda function.
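
For orientation, here is a hypothetical example of the event shape described in this step; all names and values are illustrative placeholders, not the exact payload your agent will send.

# Hypothetical input event that Amazon Bedrock sends to the action-group Lambda function.
sample_event = {
    "messageVersion": "1.0",
    "agent": {"name": "insurance-claims-agent", "id": "AGENT12345"},  # assumed values
    "sessionId": "session-001",
    "actionGroup": "insurance-claims-actions",
    "apiPath": "/claims",
    "httpMethod": "GET",
    "parameters": [
        {"name": "claimId", "type": "string", "value": "claim-857"}
    ],
    "sessionAttributes": {},
    "promptSessionAttributes": {}
}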

Step 3: Crafting the Lambda Response

3.1 Lambda Response Event Structure:

  • Learn about the structure of the Lambda response event.
  • Key fields include actionGroup, apiPath, httpMethod, and responseBody.

3.2 Building the Response:

  • Customize the Lambda function to create a response with relevant information.
  • Include session attributes and prompt session attributes as needed.

Step 4: Attach Resource-Based Policy

4.1 Grant Permissions:

  • Navigate to the Lambda function’s “Permissions” tab.
  • Attach a resource-based policy to grant Amazon Bedrock the necessary permissions.
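
A minimal sketch of granting that permission programmatically, assuming boto3, a hypothetical function name, and an assumed agent ARN; the same statement can be added from the Lambda console’s Permissions tab.

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Allow Amazon Bedrock to invoke the action-group Lambda function (assumed names/ARNs).
lambda_client.add_permission(
    FunctionName="claims-handler",
    StatementId="allow-bedrock-agent",
    Action="lambda:InvokeFunction",
    Principal="bedrock.amazonaws.com",
    SourceArn="arn:aws:bedrock:us-east-1:123456789012:agent/AGENT12345"
)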

Step 5: Testing Your Custom Action Group

5.1 Invoke the Lambda Function:

  • Use the sample input event to test your Lambda function.
  • Observe the response and ensure it aligns with your business logic.

5.2 Debugging and Iteration:

  • If needed, adjust your Lambda function based on debugging insights.
  • Re-test until you achieve the desired behavior.

You’ve successfully created a Lambda function to handle custom action groups in Amazon Bedrock. This guide has walked you through the setup, customization, and testing processes, empowering you to extend the functionality of your conversational agent.

Testing Amazon Bedrock Agents: Console and API

Console Method:

Step 1: Accessing Amazon Bedrock Console

  1. Sign in to the AWS Management Console.
  2. Open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

Step 2: Selecting and Accessing Your Agent

  1. Navigate to the left navigation pane and select “Agents.”
  2. Choose the desired agent from the Agents section.

Step 3: Opening the Test Window

  1. In the Agents section, click on the link for the chosen agent.
  2. The Test window will appear on the right. If closed, reopen it by selecting “Test” at the top of the agent details page.

Step 4: Preparing the Agent

  1. After creating an agent, prepare it by selecting “Prepare” in the Test window or in the Working draft page.
  2. Always check the Last prepared time to ensure you’re testing with the latest configurations.

Step 5: Choosing Alias and Version

  1. Use the dropdown menu at the top of the Test window to choose an alias and associated version.
  2. By default, TestAlias: Working draft is selected.

Step 6: Testing the Agent

  1. Enter a message and click “Run” to test the agent.
  2. Options during and after response generation:
  • Show trace: View detailed step-by-step reasoning process.
  • Click footnotes to view S3 object links.
  • Start a new conversation, view the Trace window, or close the Test window.

Step 7: Enabling/Disabling Action Groups or Knowledge Bases

  1. In the Working draft section, choose the link for the Working draft.
  2. In Action groups or Knowledge bases, hover over the State and select the edit icon.
  3. Choose Enabled or Disabled as needed.
  4. Use the Test window to troubleshoot your agent.

Step 8: Applying Changes

  1. Click “Prepare” to apply changes before testing.

API Method:

Step 1: Sending Input with InvokeAgent

  1. Use the working draft (DRAFT version) with InvokeAgent.
  2. Send input using the test alias (TSTALIASID) or another alias pointing to a static version.

Step 2: Viewing Trace with API

  1. To troubleshoot, view the trace during a session using the Agents for Amazon Bedrock API.
  2. Understand the step-by-step reasoning process for each interaction.

With these steps, you can confidently test your Amazon Bedrock agent using either the console or the API. Troubleshoot effectively, view detailed traces, and iterate on your agent’s configuration until you achieve the desired results.

Deploying Your Amazon Bedrock Agent

Console Method:

Step 1: Accessing Amazon Bedrock Console

  1. Sign in to the AWS Management Console.
  2. Open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

Step 2: Selecting and Accessing Your Agent

  1. Navigate to the left navigation pane and select “Agents.”
  2. Choose the desired agent from the Agents section.

Step 3: Creating an Alias

  1. In the Aliases section, click “Create.”
  2. Enter a unique name for the alias and provide an optional description.

Step 4: Choose Alias Options

  1. To create a new version, select “Create a new version” and associate it with this alias.
  2. Alternatively, to use an existing version, choose “Use an existing version” from the dropdown menu and select the version to associate with the alias.

Step 5: Confirm and Create Alias

  1. Click “Create alias.”
  2. A success banner will appear at the top, indicating the successful creation of the alias.

Step 6: Deploying Your Agent

  1. Deploy your agent by setting up your application to make an InvokeAgent request.
  2. Specify the agentAliasId field with the ID of the alias pointing to the version you want to use.
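
A minimal sketch of that request, assuming the boto3 bedrock-agent-runtime client and hypothetical agent and alias IDs; the response is a stream of chunks that you concatenate into the final answer.

import uuid
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = bedrock_agent_runtime.invoke_agent(
    agentId="AGENT12345",          # assumed agent ID
    agentAliasId="PRODALIAS01",    # assumed alias ID pointing at the deployed version
    sessionId=str(uuid.uuid4()),   # reuse the same ID to keep conversation context
    inputText="List all open claims with missing documents."
)

completion = ""
for event in response["completion"]:   # streaming response
    chunk = event.get("chunk")
    if chunk:
        completion += chunk["bytes"].decode("utf-8")

print(completion)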

API Method:

Step 1: Sending Input with InvokeAgent

  1. Use the working draft (DRAFT version) with InvokeAgent.
  2. Send input using the test alias (TSTALIASID) or another alias pointing to a static version.

Step 2: Viewing Trace with API

  1. To troubleshoot, view the trace during a session using the Agents for Amazon Bedrock API.
  2. Understand the step-by-step reasoning process for each interaction.

You’ve successfully deployed your Amazon Bedrock agent. With this guide, you can confidently create aliases and versions and integrate your agent into your applications, whether you prefer the console or the API.

Understanding Amazon Bedrock Quotas:

If you’re diving into the world of Amazon Bedrock, it’s crucial to be aware of the default quotas that come with your AWS account. These quotas, formerly known as limits, are specific to each AWS service and region within your account.

Adjustable Quotas:

Before we explore specific quotas for Amazon Bedrock, it’s essential to understand the Adjustable column in the tables:

  • Yes: You can request an increase for the quota through the Service Quotas console, as described in the Service Quotas User Guide.
  • No: The quota cannot be adjusted there, but you can submit a request through the limit increase form for consideration.

Runtime Quotas:

When it comes to model inference, consider the following runtime quotas:

  • Requests processed per minute
  • Tokens processed per minute

Here are some examples:

  • AI21 Labs Jurassic-2 Mid: 400 requests/min, 300,000 tokens/min (Not Adjustable)
  • Amazon Titan Embeddings G1 — Text: 2,000 requests/min, 300,000 tokens/min (Not Adjustable)

Model-Specific Inference Quotas:

For specific Amazon Titan Text models, check out the dedicated tabs for detailed quotas.

Batch Inference Quotas:

Running batch inference? Keep these quotas in mind based on the modality of input and output data.

  • Text to embeddings: 75 MB to 500 MB
  • Text to text: 20 MB to 150 MB
  • Text/image to image: 1 MB to 50 MB

Knowledge Base Quotas:

For the Knowledge base in Amazon Bedrock, consider the following quotas:

  • Knowledge bases per account per region: Maximum 50 (Not Adjustable)
  • Data source file size: Maximum 50 MB (Not Adjustable)
  • Data source chunk size: Various sizes (Not Adjustable)

Agent Quotas:

Agents play a crucial role, and here are some key quotas to keep in mind:

  • Agents per account: Maximum 50 (Adjustable)
  • Aliases per Agent: Maximum 10 (Not Adjustable)
  • Characters in Agent instructions: Maximum 1,200 (Not Adjustable)

Model Customization Quotas:

If you’re into model customization, these quotas matter:

  • Scheduled training jobs per account: Maximum 2 (Not Adjustable)
  • Custom models per account: Maximum 100 (Adjustable)

Explore model-specific quotas for training and validation datasets on different foundation models.

Provisioned Throughput Quotas:

Finally, for Provisioned Throughput, be aware of these quotas:

  • Model units for a base model: Default 0 (Adjustable)
  • Model units for a custom model: Default 2 (Adjustable)

Understanding and managing these quotas is essential for optimizing your experience with Amazon Bedrock. Whether it’s runtime, batch inference, knowledge base, agents, model customization, or throughput, knowing the limits empowers you to make the most of this powerful AWS service.

Conclusion: This blog has unveiled the intricate world of Amazon Bedrock, showcasing its capabilities for building intelligent agents and knowledge bases. From setting up knowledge bases to creating agents and orchestrating complex tasks, the step-by-step guide ensures a thorough understanding of the platform’s potential. Whether you’re navigating the console, creating action groups, or testing agents through the console and API, the blog covers it all. It lays the foundation for harnessing the power of knowledge bases, explores the multitasking nature of agents, and delves into the creation of action groups using OpenAPI schemas and Lambda functions. With insights into testing methodologies and deployment procedures, it serves as a comprehensive resource for anyone embarking on the Amazon Bedrock journey.
