It looks like you are uploading a 100GB file using aws s3 cp. Per the S3 docs, this will automatically be converted to a multipart upload. Internally, Ozone allocates one block for each part of the upload, and IIRC the default part size the AWS CLI uses is quite small, a single-digit number of MBs. Can you try the test with a larger part size, like 256MB, which aligns with Ozone's default block size? You could also try the test with an Ozone client directly, using ozone sh or ozone fs (or HDFS, which also uses ofs like ozone fs).
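As a concrete sketch of the suggestions above (the bucket/volume names, endpoint, and OM host here are hypothetical placeholders; adjust to your cluster):

```shell
# Raise the AWS CLI multipart chunk size so each part (and thus each
# Ozone block) is 256MB instead of the small default.
aws configure set default.s3.multipart_chunksize 256MB

# Re-run the upload through the Ozone S3 gateway (default port 9878);
# aws s3 cp will now split the 100GB file into far fewer, larger parts.
aws s3 cp ./bigfile s3://bucket1/bigfile --endpoint-url http://s3g-host:9878

# Alternatively, exercise Ozone directly, bypassing the S3 gateway:
ozone sh key put /vol1/bucket1/bigfile ./bigfile
ozone fs -put ./bigfile ofs://om-host/vol1/bucket1/bigfile
```

Comparing the timing of the direct `ozone sh`/`ozone fs` puts against the S3 gateway path should show whether the small multipart part size is what's slowing the upload down.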

This is actually a known issue, but I can't seem to find a Jira corresponding to it. Here's a technical breakdown of what's happening if you are interested:

Internal…

Answer selected by ivandika3