An artificial intelligence (AI) agent is a software program that can interact with its environment, collect data, and use the data to perform self-determined tasks to meet predetermined goals. Humans set goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals.
Strands Agents is a development framework for building and managing multi-agent systems, providing tools and APIs to create, deploy, and orchestrate agent interactions and workflows. It supports both Amazon Bedrock and external LLMs (e.g., Anthropic Claude) and comes with 20+ prebuilt tools that you do NOT have to code. For example:
current_time (gives current time)
use_aws (uses boto3 under the hood to list AWS resources)
http_request (makes HTTP requests to external APIs)
Alternatively, you can use frameworks like LangGraph, CrewAI, or LlamaIndex.
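As a sketch of what "prebuilt tools you don't have to code" means in practice (assuming the open-source strands-agents and strands-agents-tools packages are installed; the prompt is illustrative):

```python
# Sketch: a Strands agent using prebuilt tools -- no tool code written.
# Assumes `strands-agents` and `strands-agents-tools` are installed.
from strands import Agent
from strands_tools import current_time, http_request

# Hand the agent the prebuilt tools; it decides when to call them.
agent = Agent(tools=[current_time, http_request])

# The agent should pick current_time under the hood to answer this.
agent("What time is it right now in UTC?")
```

The point is that tool selection and invocation happen inside the framework; you only declare which tools the agent may use.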
Amazon Bedrock AgentCore is an enterprise-grade suite of services designed to accelerate moving your agentic applications from POC to production. AgentCore enables you to deploy and operate agents securely, at scale. AgentCore services can be used together or independently and work with any framework including Strands Agents, LangGraph, CrewAI, and LlamaIndex, as well as any foundation model in or outside of Amazon Bedrock, giving you the ultimate flexibility.
It serves developers and enterprises who need
robust, secure, and scalable infrastructure to support dynamic execution paths at runtime
controls to monitor behavior
powerful tools to enhance agents, and
the flexibility to adapt as the landscape evolves.
Amazon Bedrock AgentCore services are composable and work with popular open-source frameworks and any model, so you don’t have to choose between open-source flexibility and enterprise-grade security and reliability.
AgentCore includes foundational tools required by agents to execute real-world workflows:
AgentCore Gateway. Enable agents to seamlessly discover and securely connect to tools, data, and other agents.
AgentCore Memory. Allow agents to retain both short-term and long-term memory with high accuracy.
AgentCore Runtime. Deploy agents securely at scale. Built for dynamic agentic workload demands including the longest session runtime in the industry for asynchronous workloads.
AgentCore Identity. Enables AI agents to securely access AWS services and third-party tools on behalf of users or autonomously with pre-authorization.
AgentCore Observability. Gives developers complete visibility into agent workflows to trace, debug, and monitor AI agents' performance in production environments. With support for OpenTelemetry compatible telemetry and detailed visualizations of each step of the agent workflow, AgentCore enables developers to easily gain visibility into agent behavior and maintain quality standards at scale.
Browser tool: Provides a fast, secure, cloud-based browser runtime to enable AI agents to interact with websites at scale.
Code Interpreter: Enables AI agents to write and execute code securely in sandbox environments, enhancing their accuracy and expanding their ability to solve complex end-to-end tasks.
The Amazon Bedrock AgentCore Python SDK acts as a wrapper that:
Transforms your agent code into AgentCore's standardized protocols
Handles HTTP and MCP server infrastructure automatically
Lets you focus on your agent's core functionality
Supports two protocol types:
HTTP Protocol: Traditional request/response REST API endpoints
MCP Protocol: Model Context Protocol for tools and agent servers
The SDK automatically:
Hosts your agent on port 8080
Provides two key endpoints:
/invocations: Primary agent interaction (JSON input → JSON/SSE output)
/ping: Health check for monitoring
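Because the SDK exposes plain HTTP endpoints, you can exercise an agent locally before deploying it. A minimal sketch (assuming your agent is already running locally on port 8080; the payload key is illustrative):

```python
# Sketch: call a locally running AgentCore agent on port 8080.
# Assumes the agent process was already started (e.g. `python my_agent.py`).
import json
import urllib.request

# Health check endpoint used for monitoring
with urllib.request.urlopen("http://localhost:8080/ping") as resp:
    print(resp.status)  # 200 when the agent is healthy

# Primary invocation endpoint: JSON input -> JSON (or SSE) output
req = urllib.request.Request(
    "http://localhost:8080/invocations",
    data=json.dumps({"prompt": "Hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```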
You can convert your existing agent function into an Amazon Bedrock AgentCore-compatible service with just four steps:
Import the Runtime App with from bedrock_agentcore.runtime import BedrockAgentCoreApp
Initialize the App in your code with app = BedrockAgentCoreApp()
Decorate the invocation function with the @app.entrypoint decorator
Let AgentCore Runtime control the running of the agent with app.run()
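Put together, the four steps look roughly like this (a trivial echo function stands in for your real agent logic; the invoke name and payload keys are illustrative):

```python
from bedrock_agentcore.runtime import BedrockAgentCoreApp  # Step 1: import

app = BedrockAgentCoreApp()  # Step 2: initialize the app

@app.entrypoint  # Step 3: mark the invocation function
def invoke(payload):
    # `payload` is the JSON sent to /invocations; key names are illustrative
    prompt = payload.get("prompt", "")
    return {"result": f"You said: {prompt}"}

app.run()  # Step 4: let AgentCore Runtime control execution
```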
You can invoke the agent using the InvokeAgentRuntime operation:
import boto3
import json
# Initialize the Bedrock AgentCore client
agent_core_client = boto3.client('bedrock-agentcore', region_name="us-east-1")
# Prepare the payload
payload = json.dumps({"prompt": prompt}).encode()
# Invoke the agent
response = agent_core_client.invoke_agent_runtime(
    agentRuntimeArn=agent_arn,
    runtimeSessionId=session_id,
    payload=payload
)
# Read and parse the JSON response from the returned streaming body
response_data = json.loads(response["response"].read())
Function Calling
The ability for AI models to invoke external functions, APIs, or tools during conversations, enabling agents to perform actions beyond text generation like database queries or API calls.
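A framework-agnostic sketch of the pattern (the tool name and call-schema shape here are illustrative, not any specific provider's API): the model emits a structured tool call instead of plain text, and the application dispatches it to a real function and feeds the result back.

```python
import json

def get_weather(city: str) -> str:
    # Stand-in for a real API call or database query.
    return f"Sunny in {city}"

# Registry of functions the model is allowed to invoke.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and run the matching function."""
    call = json.loads(tool_call_json)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# A model might emit this structured call during a conversation:
result = dispatch('{"name": "get_weather", "arguments": {"city": "Seattle"}}')
print(result)  # Sunny in Seattle
```

The dispatch result is what gets returned to the model so it can compose its final answer.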
These technologies work together to create sophisticated AI systems where specialized agents can collaborate, use external tools, and be managed through unified orchestration platforms on AWS.
Multi-agent orchestration is the broader architectural pattern and technical framework for coordinating multiple AI agents. It encompasses the underlying infrastructure, protocols, APIs, and coordination logic that manages how agents communicate, share tasks, resolve conflicts, and work together. It's platform-agnostic and can work with various types of agents from different providers.
AgentCore Runtime.
AgentCore Runtime is a secure, serverless runtime purpose-built for deploying and scaling dynamic AI agents and tools using any open-source framework (including Strands Agents, LangGraph, and CrewAI), any protocol, and any model. Runtime was built to work for agentic workloads with industry-leading extended runtime support, fast cold starts, true session isolation, built-in identity, and support for multi-modal payloads. Developers can focus on innovation while Amazon Bedrock AgentCore Runtime handles infrastructure and security—accelerating time-to-market.
Step 1 : Adapt strands code to run on Agent Core runtime
Add a couple of lines; the majority of your code stays the same. If you take the above Strands code, you need to make the following changes:
Import the necessary library (pretty standard), then instantiate a Bedrock AgentCore app
Encapsulate your code in a function (in this case, invoke()) and add the @app.entrypoint decorator above it. This simply tells AgentCore which function is the entry point when your program is invoked
Use the standard app.run() function to execute the code
Step 2 : Containerize your code (You don't have to write Dockerfile!)
You simply run the command "agentcore configure" and it creates a Dockerfile for you!
AgentCore creates the Dockerfile, which packages the code and the requirements.txt file
Step 3 : Deploy and Run your Agent
You run the command "agentcore launch" and a lot of things happen:
AgentCore copies the Python code, requirements.txt, and Dockerfile from your local machine to an S3 bucket (it creates the bucket for you)
It runs the Dockerfile in AWS CodeBuild, which builds a container image for the agent code
It saves the image in an ECR repo (it creates the repo for you)
It starts the container in AgentCore Runtime, ready to be invoked
It sets up logging and tracing for you; AWS even introduced a new Gen AI Observability experience for this
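The whole configure/launch flow, as a command sketch (exact flags may vary by starter-toolkit version; the entrypoint filename is illustrative):

```shell
# Generate a Dockerfile and deployment config for your agent code
agentcore configure --entrypoint my_agent.py

# Build in CodeBuild, push the image to ECR, deploy to AgentCore Runtime
agentcore launch

# Invoke the deployed agent with a JSON payload
agentcore invoke '{"prompt": "Hello"}'
```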
Below are the characteristics of AgentCore:
Agents run on Serverless microVMs managed by AWS
Out of the box Gen AI logging and tracing
Convert any Lambda function or API into MCP tools
Supports third-party agent frameworks such as CrewAI and LangGraph; if you have written your agents on these frameworks, you can still utilize AgentCore to run them on AWS
Easy to add AuthN/Z
No need to write and manage an individual Lambda function for each tool, unlike Amazon Bedrock Agents