Use Runpod’s S3-compatible API to access and manage your network volumes.
| Datacenter | Endpoint URL |
|---|---|
| EUR-IS-1 | https://s3api-eur-is-1.runpod.io/ |
| EU-RO-1 | https://s3api-eu-ro-1.runpod.io/ |
| EU-CZ-1 | https://s3api-eu-cz-1.runpod.io/ |
| US-KS-2 | https://s3api-us-ks-2.runpod.io/ |
## Create a network volume

Create a network volume in the Runpod console, noting the datacenter it's deployed in (you'll need the matching endpoint URL from the table above).
## Create an S3 API key

In the Runpod console, create an S3 API key. Save the access key (e.g., `user_***...`) and secret (e.g., `rps_***...`) to use in the next step.

## Configure AWS CLI
Run `aws configure` in your terminal and enter the access key and secret from the previous step when prompted. In the console, the key's name looks like `Shared Secret for user_2f21CfO73Mm2Uq2lEGFiEF24IPw 1749176107073`, where `user_2f21CfO73Mm2Uq2lEGFiEF24IPw` is the user ID (yours will be different). For the default output format, you can enter `json`. The AWS CLI stores these credentials locally (in `~/.aws/credentials`).
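A configuration session might look like this (the keys shown are placeholders, not real credentials):

```bash
$ aws configure
AWS Access Key ID [None]: user_***...
AWS Secret Access Key [None]: rps_***...
Default region name [None]: EU-RO-1
Default output format [None]: json
```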
Once configured, you can manage your network volume with AWS CLI `s3` commands. Core operations such as `ls`, `cp`, `mv`, `rm`, and `sync` function as expected.
When running `aws s3` commands, you must pass in the endpoint URL for your network volume using the `--endpoint-url` flag and the datacenter ID using the `--region` flag.
The `--region` flag is case-sensitive. For instance, `--region EU-RO-1` is a valid input, whereas `--region eu-ro-1` will be rejected. Object names containing special characters (such as `#`) may need to be URL-encoded to ensure proper processing.
Use `ls` to list objects in a network volume directory:
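A sketch of the full command, with the network volume ID as a placeholder (the endpoint and region must match your volume's datacenter):

```bash
aws s3 ls s3://[NETWORK_VOLUME_ID]/ \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```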
`ls` and `ListObjects` operations will list empty directories.
`ls` operations may take a long time when used on a directory containing many files (over 10,000) or large amounts of data (over 10GB), or when used recursively on a network volume containing either.

Use `cp` to copy a file to a network volume:
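For example (the volume ID and file paths are placeholders):

```bash
aws s3 cp local-dir/file.txt s3://[NETWORK_VOLUME_ID]/remote-dir/ \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```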
Use `cp` to copy a file from a network volume to a local directory:
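The same command in the download direction, again with placeholder names:

```bash
aws s3 cp s3://[NETWORK_VOLUME_ID]/remote-dir/file.txt local-dir/ \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```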
Use `rm` to remove a file from a network volume:
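For example (placeholder volume ID and key):

```bash
aws s3 rm s3://[NETWORK_VOLUME_ID]/remote-dir/file.txt \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1
```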
If requests fail intermittently, increase the AWS CLI's retry limit by setting the environment variable `AWS_MAX_ATTEMPTS` to 10 or more (e.g., `export AWS_MAX_ATTEMPTS=10`).

You can also use `aws s3api` commands (instead of `aws s3`) to interact with the S3-compatible API.
For example, here's how you could use `aws s3api get-object` to download an object from a network volume:
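A sketch of the command (the volume ID and key are placeholders; the output file is the final positional argument):

```bash
aws s3api get-object \
    --bucket [NETWORK_VOLUME_ID] \
    --key remote-dir/my-file.txt \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1 \
    [LOCAL_FILE]
```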
Replace `[LOCAL_FILE]` with the desired path and name of the file after download, for example: `~/local-dir/my-file.txt`.
For a list of available `s3api` commands, see the AWS s3api reference.
Set the following environment variables so your client can authenticate:

- `AWS_ACCESS_KEY_ID`: Should be set to your Runpod S3 API key's access key (e.g., `user_***...`).
- `AWS_SECRET_ACCESS_KEY`: Should be set to your Runpod S3 API key's secret (e.g., `rps_***...`).
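A minimal Boto3 sketch of a `put_objects` upload helper that reads these environment variables (the endpoint URL, region, and volume ID below are illustrative, not the exact code from this guide):

```python
import os

import boto3


def put_objects(file_path: str, object_name: str) -> None:
    """Upload a local file to a network volume via Runpod's S3-compatible API."""
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3api-eu-ro-1.runpod.io/",  # must match your volume's datacenter
        region_name="EU-RO-1",
        aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    )
    # The "bucket" name is your network volume ID.
    s3.upload_file(file_path, "[NETWORK_VOLUME_ID]", object_name)
```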
When running the `put_objects` method above, you must specify these arguments:

- `file_path`: The local source file (e.g., `local_directory/file.txt`).
- `object_name`: The remote destination file to be created on the network volume (e.g., `remote_directory/file.txt`).

| Operation | Supported | CLI Command | Notes |
|---|---|---|---|
| CopyObject | ✅ | `aws s3 cp`, `aws s3api copy-object` | Copy objects between locations |
| DeleteObject | ✅ | `aws s3 rm`, `aws s3api delete-object` | Remove individual objects |
| GetObject | ✅ | `aws s3 cp`, `aws s3api get-object` | Download objects |
| HeadBucket | ✅ | `aws s3 ls`, `aws s3api head-bucket` | Verify bucket exists and permissions |
| HeadObject | ✅ | `aws s3api head-object` | Retrieve object metadata |
| ListBuckets | ✅ | `aws s3 ls`, `aws s3api list-buckets` | List available network volumes |
| ListObjects | ✅ | `aws s3 ls`, `aws s3api list-objects` | List objects in a bucket (includes empty directories) |
| ListObjectsV2 | ✅ | `aws s3 ls`, `aws s3api list-objects-v2` | Enhanced version of ListObjects |
| PutObject | ✅ | `aws s3 cp`, `aws s3api put-object` | Upload objects (<500MB) |
| DeleteObjects | ❌ | `aws s3api delete-objects` | Planned |
| RestoreObject | ❌ | `aws s3api restore-object` | Not supported |
| Operation | Supported | CLI Command | Notes |
|---|---|---|---|
| CreateMultipartUpload | ✅ | `aws s3api create-multipart-upload` | Start multipart upload for large files |
| UploadPart | ✅ | `aws s3api upload-part` | Upload individual parts |
| CompleteMultipartUpload | ✅ | `aws s3api complete-multipart-upload` | Finish multipart upload |
| AbortMultipartUpload | ✅ | `aws s3api abort-multipart-upload` | Cancel multipart upload |
| ListMultipartUploads | ✅ | `aws s3api list-multipart-uploads` | View in-progress uploads |
| ListParts | ✅ | `aws s3api list-parts` | List parts of a multipart upload |
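The operations above combine into a three-step flow. A sketch for a single-part upload (volume ID, upload ID, and ETag values are placeholders captured from each command's JSON response):

```bash
# 1. Start the upload and note the UploadId in the response.
aws s3api create-multipart-upload \
    --bucket [NETWORK_VOLUME_ID] --key big-file.bin \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ --region EU-RO-1

# 2. Upload each part (part numbers start at 1); note the ETag in each response.
aws s3api upload-part \
    --bucket [NETWORK_VOLUME_ID] --key big-file.bin \
    --part-number 1 --body part-1.bin --upload-id [UPLOAD_ID] \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ --region EU-RO-1

# 3. Complete the upload, listing every part's number and ETag.
aws s3api complete-multipart-upload \
    --bucket [NETWORK_VOLUME_ID] --key big-file.bin --upload-id [UPLOAD_ID] \
    --multipart-upload 'Parts=[{ETag="[ETAG]",PartNumber=1}]' \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ --region EU-RO-1
```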
| Operation | Supported | CLI Command | Notes |
|---|---|---|---|
| CreateBucket | ❌ | `aws s3api create-bucket` | Use the Runpod console to create network volumes |
| DeleteBucket | ❌ | `aws s3api delete-bucket` | Use the Runpod console to delete network volumes |
| GetBucketLocation | ❌ | `aws s3api get-bucket-location` | Datacenter info available in the Runpod console |
| GetBucketVersioning | ❌ | `aws s3api get-bucket-versioning` | Versioning is not supported |
| PutBucketVersioning | ❌ | `aws s3api put-bucket-versioning` | Versioning is not supported |
| Operation | Supported | CLI Command | Notes |
|---|---|---|---|
| GetBucketAcl | ❌ | N/A | ACLs are not supported |
| PutBucketAcl | ❌ | N/A | ACLs are not supported |
| GetObjectAcl | ❌ | N/A | ACLs are not supported |
| PutObjectAcl | ❌ | N/A | ACLs are not supported |
| GetBucketPolicy | ❌ | N/A | Bucket policies are not supported |
| PutBucketPolicy | ❌ | N/A | Bucket policies are not supported |
| Operation | Supported | CLI Command | Notes |
|---|---|---|---|
| GetObjectTagging | ❌ | N/A | Object tagging is not supported |
| PutObjectTagging | ❌ | N/A | Object tagging is not supported |
| DeleteObjectTagging | ❌ | N/A | Object tagging is not supported |
| Operation | Supported | CLI Command | Notes |
|---|---|---|---|
| GetBucketEncryption | ❌ | N/A | Encryption is not supported |
| PutBucketEncryption | ❌ | N/A | Encryption is not supported |
| GetObjectLockConfiguration | ❌ | N/A | Object locking is not supported |
| PutObjectLockConfiguration | ❌ | N/A | Object locking is not supported |
When you run `aws s3 ls` or `ListObjects` on a directory with many files or large amounts of data (typically over 10,000 files or 10 GB of data) for the first time, it may run very slowly, or you may encounter an error. This happens because the `ListObjects` request must wait until the checksum is ready.
Workarounds:
The `CopyObject` and `UploadPart` actions do not check for available free space beforehand and may fail if the volume runs out of space.

Object names containing special characters (such as `#`) may need to be URL-encoded to ensure proper processing.

During multipart uploads, parts are stored temporarily in a `.s3compat_uploads/` folder. This folder and its contents are automatically cleaned up when you call `CompleteMultipartUpload` or `AbortMultipartUpload`.

Requests may time out during the `CompleteMultipartUpload` operation. To resolve this, increase the timeout settings in your AWS tools:
For `aws s3` and `aws s3api` commands, use the `--cli-read-timeout` parameter, or set the equivalent option in `~/.aws/config`:
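For example (the timeout value and file names are illustrative):

```bash
# Allow up to 300 seconds for socket reads on a single command:
aws s3 cp my-large-file.bin s3://[NETWORK_VOLUME_ID]/ \
    --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
    --region EU-RO-1 \
    --cli-read-timeout 300
```

Alternatively, add `cli_read_timeout = 300` to the relevant profile section of `~/.aws/config` to apply the timeout to every command.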