An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals.
Amazon Bedrock Agents enable generative AI applications to automate multistep tasks by seamlessly connecting with company systems, APIs, and data sources.
Amazon Bedrock Agents use the reasoning of foundation models (FMs), APIs, and data to break down user requests, gather relevant information, and efficiently complete tasks.
Amazon Bedrock supports multi-agent collaboration, allowing multiple specialized agents to work together on complex business challenges.
An agent is composed of the following key components:
Foundation Model (FM) – You select a foundation model that the agent leverages to interpret user input and subsequent prompts throughout its orchestration process. The agent also relies on the FM to generate responses and determine follow-up steps.
Instructions – You define the agent's purpose through written instructions. By using advanced prompts, you can further refine these instructions at each stage of orchestration and incorporate Lambda functions to parse outputs from different steps.
At least one of the following:
Action Groups – These define the actions the agent should perform on behalf of the user. You provide the following resources:
One of the following schemas to specify the parameters the agent needs to gather from the user (each action group can use a different schema):
An OpenAPI schema to define API operations the agent can invoke, including the required parameters.
A function detail schema that outlines the parameters the agent should gather from the user, which can then be used for further orchestration or within your application.
(Optional) A Lambda function (a handler sketch follows this component list) that processes:
Input – API operations and/or parameters identified during orchestration.
Output – The response from the API invocation.
Knowledge Bases – You can associate knowledge bases with the agent, enabling it to query them for additional context. This enhances response generation and informs various steps within the orchestration process.
Prompt Templates – These templates serve as the foundation for creating prompts provided to the FM. Amazon Bedrock Agents offers four default base prompt templates for different stages, including pre-processing, orchestration, knowledge base response generation, and post-processing. You can modify these templates to fine-tune the agent's behavior at each stage or disable specific steps for troubleshooting or optimization. For more details, see Enhance agent accuracy using advanced prompt templates in Amazon Bedrock.
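As a hedged illustration of the optional Lambda function mentioned above, here is a minimal sketch of a handler for an action group that uses function details. The event and response shapes follow the Bedrock Agents Lambda contract as I understand it; the function name `get_inventory_level`, its `sku` parameter, and the business logic are hypothetical placeholders, not a definitive implementation.

```python
import json

def lambda_handler(event, context):
    """Handle an action group invocation from a Bedrock agent (function-details style)."""
    # Input: the function and parameters the agent resolved during orchestration.
    function_name = event.get("function")
    params = {p["name"]: p["value"] for p in event.get("parameters", [])}

    # Hypothetical business logic; replace with calls to your own systems.
    if function_name == "get_inventory_level":
        result = {"sku": params.get("sku"), "quantity_on_hand": 42}
    else:
        result = {"error": f"Unknown function: {function_name}"}

    # Output: the response the agent folds back into its orchestration.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "function": function_name,
            "functionResponse": {
                "responseBody": {"TEXT": {"body": json.dumps(result)}}
            },
        },
        "sessionAttributes": event.get("sessionAttributes", {}),
        "promptSessionAttributes": event.get("promptSessionAttributes", {}),
    }
```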
During the build phase, all these components come together to generate base prompts that guide the agent’s orchestration until the user’s request is fulfilled. Advanced prompts allow you to refine these base prompts with additional logic and few-shot examples to enhance accuracy at each step. These templates include instructions, action descriptions, knowledge base references, and conversation history, all of which can be customized to align the agent with your specific needs.
Once the agent is prepared, all components—including security configurations—are packaged together, making it ready for testing in a runtime environment.
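To make the build phase concrete, here is a minimal sketch using the boto3 bedrock-agent client, assuming you already have an IAM service role for the agent and a Lambda function for the action group. All names, ARNs, and the function schema below are illustrative placeholders, not a prescribed setup.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Create the agent from an FM choice and natural-language instructions.
agent = bedrock_agent.create_agent(
    agentName="inventory-agent",  # placeholder name
    foundationModel="anthropic.claude-3-sonnet-20240229-v1:0",
    instruction=(
        "You are an inventory management agent that determines "
        "product availability in the inventory system."
    ),
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder
)["agent"]

# Attach an action group: a function-details schema plus the Lambda that fulfils it.
bedrock_agent.create_agent_action_group(
    agentId=agent["agentId"],
    agentVersion="DRAFT",
    actionGroupName="inventory-actions",
    actionGroupExecutor={
        "lambda": "arn:aws:lambda:us-east-1:123456789012:function:inventory-handler"  # placeholder
    },
    functionSchema={
        "functions": [
            {
                "name": "get_inventory_level",
                "description": "Look up how many units of a SKU are in stock.",
                "parameters": {
                    "sku": {"type": "string", "description": "Product SKU", "required": True}
                },
            }
        ]
    },
)

# Package the components (the "prepare" step) so the agent can be tested at runtime.
bedrock_agent.prepare_agent(agentId=agent["agentId"])
```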
First, select a model and write a few instructions in natural language, for example: "You are an inventory management agent that determines product availability in the inventory system."
Agents orchestrate and analyze the task and break it down into the correct logical sequence using the FM’s reasoning abilities.
Agents automatically call the necessary APIs to transact with the company systems and processes to fulfill the request, determining along the way if they can proceed or if they need to gather more information.
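A hedged sketch of what invoking a prepared agent looks like at runtime with the bedrock-agent-runtime client; the agent and alias IDs and the question are placeholders, and the answer is streamed back as a sequence of chunk events.

```python
import uuid
import boto3

runtime = boto3.client("bedrock-agent-runtime")

# Ask the agent a question; it decides which action groups and knowledge bases to use.
response = runtime.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId=str(uuid.uuid4()),    # reusing the same sessionId keeps conversation state across turns
    inputText="Is product SKU-1234 currently in stock?",
)

# The completion is streamed; concatenate the text chunks.
answer = ""
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        answer += chunk["bytes"].decode("utf-8")

print(answer)
```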
Foundation model – You choose a foundation model (FM) that the agent invokes.
Instructions – You write instructions that describe what the agent is designed to do.
At least one of the following:
Action groups – You define the actions that the agent should perform for the user by providing a schema and, optionally, a Lambda function.
Knowledge bases – Associate knowledge bases with an agent.
Prompt templates – Prompt templates are the basis for creating prompts to be provided to the FM.
Pre-processing – Manages how the agent contextualizes and categorizes user input
Orchestration – Interprets the user input, invokes action groups and queries knowledge bases, and returns output to the user or as input to continued orchestration.
This loop continues until the agent returns a response to the user or until it needs to prompt the user for extra information.
Post-processing – The agent formats the final response to return to the user. This step is turned off by default.
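As an illustrative sketch, these default prompt templates can be adjusted through the promptOverrideConfiguration parameter when creating or updating an agent. The example below only enables the post-processing step (off by default) while keeping the default template text; the names and role ARN are placeholders, and the field names follow the Bedrock Agents API as I understand it, so verify them against the current documentation.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Enable the post-processing step (disabled by default) without overriding the
# base template text; promptCreationMode "DEFAULT" reuses the built-in template.
bedrock_agent.create_agent(
    agentName="inventory-agent",  # placeholder
    foundationModel="anthropic.claude-3-sonnet-20240229-v1:0",
    instruction="You are an inventory management agent.",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder
    promptOverrideConfiguration={
        "promptConfigurations": [
            {
                "promptType": "POST_PROCESSING",
                "promptState": "ENABLED",
                "promptCreationMode": "DEFAULT",
            }
        ]
    },
)
```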
Amazon Bedrock Flows is designed as a Directed Acyclic Graph (DAG) system rather than an orchestrator for cyclic agentic architectures.
Key characteristics of Bedrock Flows:
It follows a DAG structure where nodes represent specific tasks/components, and edges represent the flow of information between them.
The "acyclic" part is crucial - it means flows must progress forward without loops or cycles. Each task can only be executed once per flow run.
Bedrock Flows is optimized for predictable, deterministic sequences where the path through components is known in advance.
This design differs from what's needed for true agentic architectures, which often require:
Cyclic/iterative flows where an agent might return to previous steps
Dynamic decision-making about which components to invoke next
Self-reflection and adjustment of workflow based on intermediate results
If you need to implement more complex agentic patterns with Bedrock, you'd typically need to use AWS Step Functions or build custom orchestration logic to manage iterative behaviors that aren't natively supported by Bedrock Flows' DAG structure.
Let's see an example where we would use a combination of Step Functions and Bedrock Flows.
Research Assistant Agent for Scientific Literature Review
Imagine an AI system that helps researchers conduct comprehensive literature reviews. The agent would need to:
Initial Query Processing: Parse the researcher's request for a specific scientific topic.
Search & Retrieval: Find relevant papers from academic databases.
Analysis & Synthesis: Read each paper and extract key findings.
Gap Identification: Identify missing information or contradictions in the current set of papers.
Query Refinement: Based on gaps found, reformulate search queries to find additional papers.
Relevance Check: Determine if the newly found papers are relevant to the original request.
Decision Point: Choose one of the following:
Return to step 2 (search) with refined queries
Continue analyzing more papers from step 3
Proceed to final synthesis
Knowledge Integration: Combine insights from all papers, highlighting agreements and disagreements.
Report Generation: Create a comprehensive literature review.
This requires a cyclic architecture because:
The agent must repeatedly loop back to previous steps based on its own findings
The path isn't predetermined but depends on what's discovered during execution
The agent needs to make dynamic decisions about when to continue searching vs. when to finalize
The stopping condition isn't a fixed number of iterations but based on reaching sufficient coverage
A DAG system like Bedrock Flows couldn't handle this because it doesn't support the cyclical "search → analyze → refine → search again" pattern that's essential for thorough literature review. An orchestrator that allows for conditional looping, dynamic component selection, and state tracking across iterations would be necessary.
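To make the cyclic requirement concrete, here is a minimal Python sketch of the kind of conditional loop such an orchestrator would run. The helper functions are hypothetical stubs standing in for calls to Bedrock, academic search APIs, and your own gap-analysis logic; only the loop structure is the point.

```python
def search_papers(query):
    """Stub: query an academic database and return paper metadata."""
    return [{"title": f"Paper about {query}", "text": "..."}]

def analyze_paper(paper):
    """Stub: extract key findings, e.g., by prompting an FM on Bedrock."""
    return {"title": paper["title"], "findings": ["finding A"]}

def identify_gaps(findings):
    """Stub: compare findings and return unanswered questions or contradictions."""
    return [] if len(findings) >= 5 else ["need more evidence on method X"]

def refine_query(original_query, gaps):
    """Stub: reformulate the search query based on the gaps found."""
    return f"{original_query} {gaps[0]}"

def literature_review(topic, max_iterations=5):
    query, findings = topic, []
    for _ in range(max_iterations):                     # safety bound, not the real stop condition
        papers = search_papers(query)                   # step 2: search & retrieval
        findings += [analyze_paper(p) for p in papers]  # step 3: analysis & synthesis
        gaps = identify_gaps(findings)                  # step 4: gap identification
        if not gaps:                                    # decision point: sufficient coverage reached
            break
        query = refine_query(query, gaps)               # step 5: query refinement, then loop back
    return {"topic": topic, "papers_analyzed": len(findings), "findings": findings}

print(literature_review("protein folding with deep learning"))
```

The stopping condition here is "no remaining gaps" with an iteration cap as a safeguard, which is exactly the kind of data-dependent loop a strict DAG cannot express.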
Here's a sub-scenario from the Research Assistant Agent example that could be implemented using a DAG structure with Bedrock Flows:
Paper Analysis Pipeline (DAG Component)
This would be a well-defined subprocess within the larger cyclic architecture that handles the analysis of individual papers once they've been retrieved:
Document Processing: Convert the PDF/document to a text format.
Section Identification: Identify the abstract, introduction, methodology, results, and conclusion sections.
Key Finding Extraction: Extract the main findings and contributions from the paper.
Methodology Analysis: Identify the research methods, sample sizes, and experimental design.
Citation Network Analysis: Extract the paper's references and identify frequently cited works.
Statistical Validation: Verify statistical methods and results for soundness.
Data Visualization Detection: Identify charts, graphs, and tables that represent key data.
Limitation Recognition: Extract author-stated limitations of the research.
Summary Generation: Create a concise summary of the paper's contributions.
How a Combined Approach Would Work:
The cyclic orchestrator would handle the overall literature review process, including search iterations and decision-making, while delegating the analysis of each individual paper to this Bedrock Flows DAG:
Orchestrator (Cyclic): Decides which papers to search for and retrieve.
Bedrock Flows (DAG): For each paper, runs the complete analysis pipeline from document processing to summary generation.
Orchestrator (Cyclic): Receives the analysis results, integrates them into the growing knowledge base, identifies gaps, and decides whether to:
Trigger another search iteration
Process more papers from the current batch
Finalize the literature review
This combined approach leverages the strengths of both architectures:
The predictable, step-by-step paper analysis process uses Bedrock Flows' DAG structure
The dynamic, iterative research process that requires loops and conditional branching uses a cyclic orchestrator
This gives an example of how tasks that have well-defined, linear subcomponents can use DAGs even within a larger system that requires cyclic behaviors.
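A hedged sketch of the delegation pattern: the cyclic orchestrator (here a plain Python loop; in production this could be AWS Step Functions) hands each retrieved paper to the DAG-style analysis pipeline via the Bedrock InvokeFlow API. The flow and alias IDs, input node name, and document shape are placeholders, and the exact response-event structure should be checked against the current bedrock-agent-runtime documentation.

```python
import boto3

runtime = boto3.client("bedrock-agent-runtime")

def analyze_paper_with_flow(paper_text):
    """Run the DAG-style paper-analysis pipeline defined as a Bedrock flow."""
    response = runtime.invoke_flow(
        flowIdentifier="FLOW_ID",             # placeholder
        flowAliasIdentifier="FLOW_ALIAS_ID",  # placeholder
        inputs=[{
            "nodeName": "FlowInputNode",      # assumed name of the flow's input node
            "nodeOutputName": "document",
            "content": {"document": paper_text},
        }],
    )
    # Collect whatever the flow's output nodes emit from the response stream.
    outputs = []
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            outputs.append(event["flowOutputEvent"]["content"]["document"])
    return outputs

# Cyclic orchestrator: decide what to retrieve, delegate per-paper analysis to the flow,
# then decide whether to search again or finalize (decision logic elided here).
papers = ["full text of paper 1...", "full text of paper 2..."]
summaries = [analyze_paper_with_flow(p) for p in papers]
print(summaries)
```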
Amazon Bedrock Agents sits in an interesting position relative to both DAG-based flows and cyclic orchestrators. Here's how it compares:
Bedrock Agents vs. Bedrock Flows vs. Cyclic Orchestrators
Bedrock Agents:
Designed for task-oriented conversations and actions
Uses a reasoning-planning-action loop that allows for some iterative behavior
Can call APIs and access knowledge bases to complete tasks
Works within a single conversation/session context
Supports a form of self-correction through observation of API responses
Can dynamically select which actions to take based on user requests
Key Differences:
Scope and Purpose:
Bedrock Flows: Pipeline-oriented, focused on sequential data processing
Bedrock Agents: Task-oriented, focused on conversational problem-solving
Cyclic Orchestrators: Process-oriented, focused on complex multi-stage workflows
Iteration Capability:
Bedrock Flows: No native iteration (strictly acyclic)
Bedrock Agents: Limited iteration within a session (can make multiple API calls based on responses)
Cyclic Orchestrators: Full iteration with complex decision loops
State Management:
Bedrock Flows: Minimal state management between steps
Bedrock Agents: Maintains conversation state and memory within a session
Cyclic Orchestrators: Comprehensive state management across multiple processes
Decision Making:
Bedrock Flows: Predetermined branching paths
Bedrock Agents: LLM-based reasoning for next action selection
Cyclic Orchestrators: Complex conditional logic with feedback loops
Bedrock Agents is more powerful than pure DAG-based flows for interactive tasks but has limitations compared to full cyclic orchestrators for complex multi-stage processes that require extensive iteration and replanning.
A good way to think about it: Bedrock Agents is better suited for helping a single user accomplish specific tasks in a conversational context, while more complex cyclic orchestrators would be needed for autonomous long-running processes that require extensive iteration and self-correction.
A screwdriver metaphor helps show how the different AI architectures are suited to different purposes:
Bedrock Flows (DAG) - Like a standard screwdriver:
Best for straightforward, linear processes
Works well for predictable workflows like customer ticket processing
One-way flow without loops
Simple but effective for the right tasks
Cyclic Orchestrator - Like a power screwdriver:
Designed for complex, iterative work
Perfect for processes requiring repeated loops and refinement
Handles dynamic decision-making
Great for research tasks that need multiple passes
Bedrock Agents - Like a multi-bit screwdriver:
Versatile for conversational task completion
Offers some flexibility with limited iteration
Maintains context within a session
Balances simplicity and adaptability
Combined Approach - Like a toolkit:
Uses cyclic orchestrators for the overall process
Embeds DAG components for well-defined subtasks
Leverages the strengths of each architecture
Perfect for complex workflows with predictable components
Just as you wouldn't use a Phillips head on a flathead screw, choosing the right AI architecture for your specific use case ensures the most efficient and effective solution.