Create 'heavy' queue for large conversion tasks #302
Merged
Conversation
Collaborator (Author)
I'm taking a look and reviewing #293 right now. I think that PR should be merged before this one. I can resolve any conflicts that arise in this one.
Collaborator
Agreed :)
I love this PR so much! All the ideas you have implemented here should significantly improve v2c reliability 👍
(force-pushed from e19f4a8 to be4ae06)
Collaborator (Author)
It looks like the JS build validator doesn't verify the edge case where the file was removed prior to committing 🫠
Description
Resolves #284
This patch creates a new queue, called 'heavy', for processing memory-intensive tasks. Its purpose is to route tasks that require more than 50% of a worker's memory (>8 GB) to instances that process a single task at a time, preventing OOM errors from interrupting encoding tasks and bringing down workers.
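The routing decision described above could be sketched as follows. This is only an illustration of the idea, not the actual v2c code: the function name `pick_queue`, the memory estimate, and the 16 GB worker size are assumptions.

```python
# Sketch of queue selection based on an estimated memory footprint.
# The names and the 16 GB worker assumption are hypothetical;
# the real v2c code may decide this differently.

HEAVY_MEMORY_THRESHOLD_MB = 8 * 1024  # >50% of an assumed 16 GB worker


def pick_queue(estimated_mb):
    """Route memory-intensive conversions to the single-task 'heavy' queue."""
    if estimated_mb > HEAVY_MEMORY_THRESHOLD_MB:
        return 'heavy'
    return 'celery'  # Celery's default queue name


# Usage with a hypothetical Celery task:
# encode_task.apply_async(args=[...], queue=pick_queue(estimated_mb))
```

With Celery, the queue can be chosen per call via the `queue` option of `apply_async`, so no static routing table is needed for this.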
Implementation
vcodec=copy. I believe those videos can technically be processed in the default queue without issue, but that is something I would have to test. For now, anything fitting the criteria is sent to the heavy queue regardless of whether it can be copied or not.
Changes
encoding05 and encoding06 are set up as the heavy queue workers. We can discuss and adjust this as needed. encoding05 and encoding06 are currently configured to process tasks from both queues so that those workers don't sit idle while there are lighter tasks waiting to be processed.
Deployment
The deployment process using the script remains the same: the setup of the 'heavy' workers is done in the Puppet manifest by checking each worker's hostname and tweaking the Celery parameters as necessary. To verify that the deployment was successful, you can check whether the workers' command-line arguments have the correct values, assuming the workers have had the opportunity to finish their tasks and restart first, of course.
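For illustration, the kind of worker invocation the manifest might produce, and the check described above, could look like this. The app name `v2c` and the concrete flag values are assumptions, not taken from the actual manifest; only the `--queues`/`--concurrency` flags themselves are standard Celery options.

```shell
# Hypothetical heavy-queue worker (encoding05/encoding06): consumes both
# queues, but only one task at a time, so a single >8 GB task cannot
# starve or OOM concurrent encodes on the same host.
celery -A v2c worker --queues celery,heavy --concurrency 1

# Hypothetical regular worker: default queue only, several concurrent tasks.
celery -A v2c worker --queues celery --concurrency 4

# Verify after deployment: the running worker processes should show the
# expected --queues/--concurrency arguments.
ps aux | grep '[c]elery.*worker'
```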