telco: PP: configure the isolated and reserved cpus on gcp #73835
Conversation
670dae8 to 29b76e0 Compare
/pj-rehearse list

/pj-rehearse pull-ci-openshift-cluster-node-tuning-operator-main-e2e-gcp-pao-updating-profile

@shajmakh: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.

The rehearsal must-gather shows the new CPU values are in effect:
29b76e0 to d044543 Compare
/pj-rehearse list

@shajmakh: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.

/pj-rehearse max

@shajmakh: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.

[REHEARSALNOTIFIER]
Interacting with pj-rehearse. Once you are satisfied with the results of the rehearsals, comment: /pj-rehearse ack
The GCP cluster profile uses the ipi-gcp flow, which by default uses 6 vCPUs for compute machines (see `step-registry/ipi/conf/gcp/ipi-conf-ref.yaml`). The performance profile suite configures a profile with `reserved: "0"` and `isolated: "1-3"` (see openshift/cluster-node-tuning-operator#909), unless environment variables are specified. In general it is good practice to include all of a node's CPUs in the PP cpu section, but the reason we need this now is that new tests require most of the CPUs to be distributed via the PP (see openshift/cluster-node-tuning-operator#1432 (comment)).
Note: this is subject to change should the CPU specifications on GCP get modified.
Signed-off-by: Shereen Haj <shajmakh@redhat.com>
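For context, here is a minimal sketch of the kind of PerformanceProfile CPU section this change aims for on a 6-vCPU GCP worker. The exact reserved/isolated split and the nodeSelector are assumptions for illustration; only the idea of covering all of the node's CPUs comes from the description above.

```yaml
# Illustrative only: a PerformanceProfile whose cpu section spans all 6 vCPUs
# of the GCP compute machines. The concrete values applied by the job may differ.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: performance
spec:
  cpu:
    reserved: "0"    # CPUs kept for housekeeping / OS daemons
    isolated: "1-5"  # remaining CPUs made available to latency-sensitive pods
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""  # assumed selector, illustration only
```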
[REHEARSALNOTIFIER]
Interacting with pj-rehearse. Once you are satisfied with the results of the rehearsals, comment: /pj-rehearse ack
/lgtm but the moment the config changes (e.g. the plan becomes cheap enough, or OCP decides to switch to 8-vCPU machines, say) it will break subtly.
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: ffromani, shajmakh
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
True, but that is not worse than the situation before: prior to this update the PP was configured with an improper CPU set, whereas now, if the CPUs change, there is at least a somewhat higher (though still small) chance of catching it earlier.

/pj-rehearse ack

@shajmakh: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.
@shajmakh: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard.
Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Merged 643840f into openshift:master
The GCP cluster profile uses the ipi-gcp flow, which by default uses 6 vCPUs for compute machines (see `step-registry/ipi/conf/gcp/ipi-conf-ref.yaml`). The performance profile suite configures a profile with `reserved: "0"` and `isolated: "1-3"` (see openshift/cluster-node-tuning-operator#909), unless environment variables are specified. In general it is good practice to include all of a node's CPUs in the PP cpu section, but the reason we need this now is that new tests require most of the CPUs to be distributed via the PP (see openshift/cluster-node-tuning-operator#1432 (comment)). In this commit we update only the affected job on which the test runs; later we will need to add this setting to all other jobs that consume the ipi-gcp cluster configuration.
Note: this is subject to change should the CPU specifications on GCP get modified.
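As a rough illustration of the follow-up mentioned above, a ci-operator test stanza that passes the CPU layout to the job could look something like the sketch below. The environment variable names (`RESERVED_CPUS`, `ISOLATED_CPUS`), values, and workflow are assumptions for illustration and are not taken from the actual step registry.

```yaml
# Hypothetical sketch of a ci-operator config entry wiring the CPU layout into
# the GCP PAO job. Names and values are illustrative assumptions; check the
# real step-registry parameters before relying on them.
tests:
- as: e2e-gcp-pao-updating-profile
  steps:
    cluster_profile: gcp
    env:
      RESERVED_CPUS: "0"    # assumed variable name: housekeeping CPUs
      ISOLATED_CPUS: "1-5"  # assumed variable name: CPUs handed to the profile
    workflow: openshift-e2e-gcp  # illustrative workflow; the real job uses its own
```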