To evaluate the network performance of an Amazon EC2 instance (such as a c5g instance) with enhanced networking, you can use standard Linux commands and tools to monitor and analyze the network. Enhanced networking on AWS uses the Elastic Network Adapter (ENA) or the Intel SR-IOV virtual function interface to provide higher bandwidth, lower latency, and lower jitter. The steps and tools below let you measure and compare network performance between AWS EC2 and Azure VMs:
Before measuring performance, ensure that enhanced networking is enabled on your EC2 instance.
Check for the ENA driver:
ethtool -i eth0
Look for driver: ena in the output.
Check for SR-IOV (if applicable):
lspci | grep -i ethernet
Look for Virtual Function in the output.
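If you want to script this check, here is a minimal sketch (assuming the primary interface is named eth0 and that ethtool is installed; the function names are illustrative):

```python
import subprocess

def parse_driver(ethtool_output):
    """Extract the 'driver:' field from `ethtool -i` output."""
    for line in ethtool_output.splitlines():
        if line.startswith("driver:"):
            return line.split(":", 1)[1].strip()
    return None

def ena_enabled(interface="eth0"):
    """True if the interface is backed by the ENA driver."""
    out = subprocess.run(["ethtool", "-i", interface],
                         capture_output=True, text=True).stdout
    return parse_driver(out) == "ena"
```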
Use the following tools to measure network performance:
a. Ping (Latency)
Measure latency between the EC2 instance and a target (e.g., the S3 endpoint). Note that some endpoints rate-limit or drop ICMP, so results may be incomplete:
ping -c 20 s3.amazonaws.com
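For repeatable numbers, the summary line that Linux ping prints at the end of a run can be parsed programmatically; a minimal sketch (the function name is illustrative):

```python
import re

def parse_ping_rtt(ping_output):
    """Extract (min, avg, max, mdev) in ms from the summary line
    printed by Linux ping, e.g.
    'rtt min/avg/max/mdev = 0.543/0.587/0.632/0.042 ms'."""
    m = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms",
        ping_output)
    if m is None:
        return None
    return tuple(float(x) for x in m.groups())
```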
b. iPerf3 (Bandwidth and Throughput)
Install iPerf3:
sudo apt-get install iperf3 # For Ubuntu/Debian
sudo yum install iperf3 # For CentOS/RHEL
Run iPerf3 in server mode on one instance:
iperf3 -s
Run iPerf3 in client mode on another instance (e.g., Azure VM):
iperf3 -c <server-ip>
This will measure bandwidth and throughput between the two instances.
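iPerf3 can also emit machine-readable JSON with the -J flag, which is easier to compare across runs and platforms than the human-readable output. A minimal parsing sketch for TCP tests (assuming the standard iperf3 JSON layout; the function name is illustrative):

```python
import json

def throughput_gbps(iperf3_json):
    """Extract receiver-side throughput in Gbps from the output of
    `iperf3 -c <server-ip> -J` (TCP test)."""
    result = json.loads(iperf3_json)
    bps = result["end"]["sum_received"]["bits_per_second"]
    return bps / 1e9
```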
c. S3 Performance Testing
To test S3 performance specifically, use tools like s3bench or s3cmd:
Install s3cmd:
sudo apt-get install s3cmd # For Ubuntu/Debian
sudo yum install s3cmd # For CentOS/RHEL
Configure s3cmd with your AWS credentials:
s3cmd --configure
Test upload/download performance:
s3cmd put largefile.txt s3://your-bucket/
s3cmd get s3://your-bucket/largefile.txt
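s3cmd does not report sustained throughput directly, so you may want to time the transfer yourself. A rough boto3-based sketch (bucket and key names are placeholders; boto3 is imported lazily so the pure helper works without it installed):

```python
import time

def mb_per_s(nbytes, seconds):
    """Throughput in MB/s (MiB-based, matching common tool output)."""
    return (nbytes / (1024 * 1024)) / seconds

def time_s3_download(bucket, key, local_file):
    """Time a full-object download and print the achieved throughput."""
    import boto3  # imported here so mb_per_s works without boto3
    s3 = boto3.client("s3")
    size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]
    start = time.monotonic()
    s3.download_file(bucket, key, local_file)
    elapsed = time.monotonic() - start
    print(f"{size} bytes in {elapsed:.1f}s = {mb_per_s(size, elapsed):.1f} MB/s")
```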
d. Netstat (Network Statistics)
Monitor network connections and performance:
netstat -s
e. SAR (System Activity Reporter)
Monitor network interface performance over time:
Install sysstat:
sudo apt-get install sysstat # For Ubuntu/Debian
sudo yum install sysstat # For CentOS/RHEL
Start sar to collect network statistics:
sar -n DEV 1 10 # Monitor network devices every 1 second, 10 times
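The per-interface counters that sar reads are also exposed in /proc/net/dev, so you can sample packets-per-second directly; a sketch (assuming eth0 and the standard /proc/net/dev column layout):

```python
import time

def parse_proc_net_dev(text, iface):
    """Return (rx_bytes, rx_packets, tx_bytes, tx_packets) for one
    interface from the contents of /proc/net/dev."""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(iface + ":"):
            fields = line.split(":", 1)[1].split()
            # Columns: rx bytes, packets, errs, drop, fifo, frame,
            # compressed, multicast, then the same eight for tx
            return int(fields[0]), int(fields[1]), int(fields[8]), int(fields[9])
    raise ValueError(f"interface {iface} not found")

def sample_rx_pps(iface="eth0", interval=1.0):
    """Approximate RX packets per second over one interval."""
    with open("/proc/net/dev") as f:
        _, rx1, _, _ = parse_proc_net_dev(f.read(), iface)
    time.sleep(interval)
    with open("/proc/net/dev") as f:
        _, rx2, _, _ = parse_proc_net_dev(f.read(), iface)
    return (rx2 - rx1) / interval
```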
Check system logs and configurations to ensure optimal network performance:
a. Network Interface Configuration
Check MTU size:
ip link show eth0
AWS supports an MTU of 9001 (jumbo frames) for traffic within a VPC; traffic that leaves the VPC (e.g., over the internet) falls back to the standard 1500.
Check TCP window scaling:
sysctl net.ipv4.tcp_window_scaling
b. Kernel Logs
Check kernel logs for network-related errors:
dmesg | grep -i eth0
c. Cloud-Init Logs
Check cloud-init logs for network configuration during boot:
cat /var/log/cloud-init.log
AWS EC2 instances with enhanced networking benefit from:
Higher Packets Per Second (PPS): ENA delivers higher PPS rates and supports up to 100 Gbps of bandwidth on supported instance types.
Lower Latency: SR-IOV bypasses the hypervisor for direct network access.
Jumbo Frames: AWS supports MTU 9001 for improved throughput.
To verify these settings:
Check ENA driver version:
modinfo ena
Check SR-IOV settings:
lspci -v | grep -i ethernet
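Recent iproute2 versions can emit JSON (ip -j), which makes the MTU check scriptable; a sketch (assuming eth0 and an iproute2 build with JSON support; the function names are illustrative):

```python
import json
import subprocess

def mtu_from_ip_json(ip_json):
    """Extract the MTU from `ip -j link show <iface>` JSON output."""
    links = json.loads(ip_json)
    return links[0]["mtu"]

def jumbo_frames_enabled(iface="eth0"):
    """True if the interface MTU is set to AWS's jumbo-frame size."""
    out = subprocess.run(["ip", "-j", "link", "show", iface],
                         capture_output=True, text=True).stdout
    return mtu_from_ip_json(out) == 9001
```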
To compare AWS EC2 and Azure VM performance:
Use the same tools (e.g., iPerf3, ping, s3bench) on both platforms.
Ensure both instances are in the same region as the S3 bucket.
Compare metrics like latency, throughput, and PPS.
Network Stack: AWS uses ENA/SR-IOV, while Azure uses Accelerated Networking with SR-IOV.
S3 Integration: AWS has native integration with S3, reducing latency and improving throughput compared to Azure accessing S3 over the public internet.
Region Proximity: Ensure both instances are in the same region as the S3 bucket for a fair comparison.
By using the above tools and methods, you can analyze and compare the network performance of AWS EC2 and Azure VMs when accessing S3.
# Simple Download with Range Support
import boto3
from boto3.s3.transfer import TransferConfig

# Basic Download Manager
def download_large_file(bucket, key, local_file):
    s3_client = boto3.client('s3')

    # Get file size (useful for logging and validation)
    response = s3_client.head_object(Bucket=bucket, Key=key)
    file_size = response['ContentLength']
    print(f"Downloading {file_size} bytes")

    # Using TransferConfig (handles ranges automatically)
    config = TransferConfig(
        multipart_threshold=1024 * 1024,  # 1MB
        multipart_chunksize=1024 * 1024,  # 1MB chunks
        max_concurrency=10                # Number of concurrent threads
    )

    # Download with automatic range handling
    s3_client.download_file(
        Bucket=bucket,
        Key=key,
        Filename=local_file,
        Config=config
    )
More Advanced Example with Progress:
import boto3
from boto3.s3.transfer import TransferConfig

# Download with progress tracking
def download_with_progress(bucket, key, local_file):
    s3 = boto3.client('s3')

    # Configure chunked download
    config = TransferConfig(
        multipart_threshold=1024 * 1024,  # 1MB
        multipart_chunksize=1024 * 1024,  # 1MB per chunk
        max_concurrency=10,               # 10 threads
        use_threads=True                  # Enable multi-threading
    )

    # Get file size
    file_size = s3.head_object(Bucket=bucket, Key=key)['ContentLength']

    # Progress callback: boto3 passes the bytes transferred per call,
    # so accumulate to get a running total
    transferred = 0
    def progress(bytes_transferred):
        nonlocal transferred
        transferred += bytes_transferred
        percentage = (transferred * 100) / file_size
        print(f"Downloaded: {percentage:.2f}%")

    # Download file
    s3.download_file(
        Bucket=bucket,
        Key=key,
        Filename=local_file,
        Config=config,
        Callback=progress
    )
Key Points:
SDK Handles:
- Range calculations
- Chunking
- Reassembly
- Concurrent downloads
- Error retry
- Progress tracking
The TransferConfig parameters control:
- multipart_threshold: file size above which multipart transfer is used (1MB here)
- multipart_chunksize: size of each chunk (1MB)
- max_concurrency: number of parallel downloads (10)
- use_threads: enable/disable threading
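As a quick sanity check on these parameters, the number of ranged GET requests issued for a given object size is roughly (a simplified model, not the SDK's exact internal accounting):

```python
import math

def chunk_count(file_size, multipart_threshold=1024 * 1024,
                multipart_chunksize=1024 * 1024):
    """Approximate number of ranged requests for a download: below the
    threshold it is a single GET, otherwise one request per chunk."""
    if file_size < multipart_threshold:
        return 1
    return math.ceil(file_size / multipart_chunksize)
```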