Say you have 10 WAL files ready for archiving and maxParallel is set to 3.
- PG runs `archive_command` for WAL file 1:
  - barman-plugin looks at the ready files and runs the archiver for WAL 1, 2 and 3.
  - Those 3 files are uploaded to the archive.
  - The command exits successfully and PG marks WAL 1 as done.
- Then PG runs `archive_command` for WAL file 2:
  - barman-plugin looks at the ready files and runs the archiver for WAL 2, 3 and 4.
  - WAL files 2 and 3 are already archived.
  - WAL file 4 is uploaded to the archive.
  - The command exits successfully and PG marks WAL 2 as done.
- And so on, PG archives WAL file 3 and barman-plugin uploads WAL file 5.
This results in only ever uploading a single new WAL file per invocation, which is quite slow and contrary to what the documentation says: “Number of WAL files to be […] archived in parallel”.
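To make the pattern concrete, here is a small standalone simulation (a toy model, not the plugin's actual code; the file names and the windowing logic are illustrative): each invocation's batch overlaps the previous one by `maxParallel - 1` files, so after the first call only one new file is uploaded per invocation.

```go
package main

import "fmt"

// simulate models the observed behavior: the invocation for the i-th ready
// WAL gathers that file plus the next maxParallel-1 ready files, but files
// already present in the archive are detected and skipped, so only the one
// newly visible file actually gets uploaded.
func simulate(ready []string, maxParallel int) []int {
	archived := map[string]bool{}
	newUploads := make([]int, 0, len(ready))
	for i := range ready {
		end := i + maxParallel
		if end > len(ready) {
			end = len(ready)
		}
		n := 0
		for _, w := range ready[i:end] {
			if !archived[w] {
				archived[w] = true
				n++
			}
		}
		newUploads = append(newUploads, n)
	}
	return newUploads
}

func main() {
	ready := []string{"WAL1", "WAL2", "WAL3", "WAL4", "WAL5"}
	fmt.Println(simulate(ready, 3)) // prints [3 1 1 0 0]
}
```

With a steady stream of incoming WALs the trailing zeros never occur: the window keeps sliding by one, and every call uploads exactly one new file, which is the slow pattern described above.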
What I would expect to happen is:
- PG runs `archive_command` for WAL file 1:
  - WAL files 1, 2 and 3 are uploaded.
  - WAL file 1 is marked as done.
- PG runs `archive_command` for WAL file 2:
  - WAL files 4, 5 and 6 are uploaded.
  - WAL file 2 is marked as done.
- And so on, until there are no more files to upload and the `archive_command` simply exits successfully without doing anything.
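The expectation above can be sketched the same way (again a toy model, not plugin code): each invocation gathers up to `maxParallel` files that are not yet archived, so successive batches are disjoint and every call does a full batch of useful work until the backlog is drained.

```go
package main

import "fmt"

// expected models the behavior the report asks for: each invocation picks
// up to maxParallel files that have not been archived yet, so successive
// batches never overlap and the backlog drains maxParallel files per call.
func expected(ready []string, maxParallel int) []int {
	archived := map[string]bool{}
	newUploads := make([]int, 0, len(ready))
	for range ready { // PG still invokes archive_command once per WAL
		n := 0
		for _, w := range ready {
			if n == maxParallel {
				break
			}
			if !archived[w] {
				archived[w] = true
				n++
			}
		}
		newUploads = append(newUploads, n)
	}
	return newUploads
}

func main() {
	ready := []string{"WAL1", "WAL2", "WAL3", "WAL4", "WAL5", "WAL6"}
	fmt.Println(expected(ready, 3)) // prints [3 3 0 0 0 0]
}
```

The later invocations that upload nothing correspond to the `archive_command` calls that simply exit successfully, which is cheap; the point is that the backlog is cleared `maxParallel` files at a time rather than one at a time.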
I think this happens because `internalRun` does not pass WALs that have already been archived to `GatherReadyWALFiles`; only the WAL currently being requested is skipped:

```go
SkipWALs: []string{baseWalName},
```
(plugin-barman-cloud/internal/cnpgi/common/wal.go, line 206 at 376e178)
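One conceivable direction for a fix, sketched below purely as an illustration (the `GatherOptions` type and the `alreadyUploaded` bookkeeping are hypothetical stand-ins, not the plugin's real API): remember which WALs earlier invocations already uploaded and pass all of them in `SkipWALs`, instead of only the WAL currently being requested.

```go
package main

import "fmt"

// GatherOptions is a stand-in for the real options struct; only SkipWALs
// mirrors the field shown in the snippet above.
type GatherOptions struct {
	SkipWALs []string
}

// buildSkipList is hypothetical: it combines the WAL currently requested by
// PostgreSQL with every WAL a previous invocation already uploaded, so the
// gatherer would only ever select genuinely new files.
func buildSkipList(baseWalName string, alreadyUploaded []string) GatherOptions {
	skip := append([]string{baseWalName}, alreadyUploaded...)
	return GatherOptions{SkipWALs: skip}
}

func main() {
	opts := buildSkipList("000000010000000000000002", []string{
		"000000010000000000000002", // uploaded ahead of time by the call for WAL 1
		"000000010000000000000003",
	})
	fmt.Println(len(opts.SkipWALs)) // prints 3
}
```

How the "already uploaded" set is tracked (in memory in the plugin sidecar, or by re-checking the archive) is an open design question; the sketch only shows the shape of the skip list.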