
Multi-bucket access using temp AWS credentials #564

Open
bsatoriu wants to merge 1 commit into develop from feat/multi-bucket-access

Conversation


@bsatoriu bsatoriu commented Mar 2, 2026

[Preview of new doc page version]

Access workspace bucket data with temporary AWS credentials

When logged into the ADE, you can issue temporary S3 credentials using the maap-py function maap.aws.workspace_bucket_credentials().

This command issues a set of AWS credentials that grant full read/write access to your own user folder within the workspace bucket, as well as any additional S3 buckets your organization has been granted access to.

The response includes an authorized_s3_paths array listing all S3 paths accessible with the returned credentials:

  • Your default workspace path (e.g. s3://maap-ops-workspace/maap_user)
  • Any custom organization-level S3 bucket/prefix grants configured by an admin

1. Retrieve temporary credentials

import json
from maap.maap import MAAP
maap = MAAP()

creds = maap.aws.workspace_bucket_credentials()
print(json.dumps(creds, indent=2))
{
  "authorized_s3_paths": [
    "s3://maap-ops-workspace/maap_user",
    "s3://shared-project-bucket/team-data"
  ],
  "aws_access_key_id": "...",
  "aws_secret_access_key": "...",
  "aws_session_expiration": "...",
  "aws_session_token": "..."
}
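Because these credentials are temporary, long-running jobs may want to check aws_session_expiration before using them. A minimal sketch, assuming the expiration value is an ISO 8601 timestamp (the exact format returned is not documented here):

```python
from datetime import datetime, timezone

def seconds_remaining(expiration: str) -> float:
    """Seconds until the session expires; negative if already expired.

    Assumes an ISO 8601 timestamp, e.g. "2026-03-02T18:00:00+00:00".
    """
    expires_at = datetime.fromisoformat(expiration.replace("Z", "+00:00"))
    if expires_at.tzinfo is None:
        # Treat naive timestamps as UTC (an assumption, not a documented guarantee)
        expires_at = expires_at.replace(tzinfo=timezone.utc)
    return (expires_at - datetime.now(timezone.utc)).total_seconds()
```

You could then re-issue credentials when, say, fewer than five minutes remain: if seconds_remaining(creds["aws_session_expiration"]) < 300, call maap.aws.workspace_bucket_credentials() again.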

2. Create a boto3 session from the credentials

Use the temporary credentials to create a boto3 session. This session can be used to interact with any of the buckets listed in authorized_s3_paths.

import boto3

session = boto3.Session(
    aws_access_key_id=creds["aws_access_key_id"],
    aws_secret_access_key=creds["aws_secret_access_key"],
    aws_session_token=creds["aws_session_token"],
)
s3 = session.client("s3")

3. List objects in an authorized bucket

from urllib.parse import urlparse

# Pick any path from the authorized list
s3_url = urlparse(creds["authorized_s3_paths"][0])
bucket = s3_url.netloc
prefix = s3_url.path.lstrip("/")

response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix + "/", MaxKeys=10)
for obj in response.get("Contents", []):
    print(obj["Key"])

4. Download (GET) an object

s3.download_file(
    Bucket=bucket,
    Key=f"{prefix}/my_file.csv",
    Filename="my_file.csv",
)

5. Upload (PUT) an object

s3.upload_file(
    Filename="local_results.csv",
    Bucket=bucket,
    Key=f"{prefix}/local_results.csv",
)

6. Working with an organization shared bucket

If your organization has been granted access to additional buckets, they appear as extra entries in authorized_s3_paths. Use them the same way:

# Example: access the second authorized path (an org-shared bucket)
shared_url = urlparse(creds["authorized_s3_paths"][1])
shared_bucket = shared_url.netloc
shared_prefix = shared_url.path.lstrip("/")

# List files
response = s3.list_objects_v2(Bucket=shared_bucket, Prefix=shared_prefix + "/", MaxKeys=10)
for obj in response.get("Contents", []):
    print(obj["Key"])

# Download a file
s3.download_file(
    Bucket=shared_bucket,
    Key=f"{shared_prefix}/shared_dataset.tif",
    Filename="shared_dataset.tif",
)

# Upload a file
s3.upload_file(
    Filename="my_output.tif",
    Bucket=shared_bucket,
    Key=f"{shared_prefix}/my_output.tif",
)
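The URL parsing repeated in steps 3 and 6 can be factored into one small helper. This is a convenience sketch, not a maap-py function:

```python
from urllib.parse import urlparse

def split_s3_path(s3_uri: str) -> tuple[str, str]:
    """Split an "s3://bucket/prefix" URI into a (bucket, prefix) pair."""
    parsed = urlparse(s3_uri)
    return parsed.netloc, parsed.path.lstrip("/")

# Build (bucket, prefix) pairs for everything the credentials can reach:
# targets = [split_s3_path(p) for p in creds["authorized_s3_paths"]]
```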

