vmoperator v0.65.0
I just applied a change to upgrade the image tags for all three components and added GOMAXPROCS to extraEnvs. Instead of the operator waiting for vmstorage, then vmselect, to finish their rollouts before updating vminsert, all three components were updated in parallel.
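Concretely, the change amounted to two additions per component spec (vminsert shown below as a sketch; vmselect and vmstorage received the same tag with GOMAXPROCS values of "4" and "8" respectively, as in the full CR):

```yaml
vminsert:
  image:
    tag: v1.131.0-cluster   # upgraded image tag
  extraEnvs:
    - name: GOMAXPROCS      # newly added env var
      value: "2"
```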
Example VMCluster CR:
```yaml
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMCluster
metadata:
  name: canary
  namespace: monitoring
spec:
  replicationFactor: 2
  retentionPeriod: "3"
  vminsert:
    extraEnvs:
      - name: GOMAXPROCS
        value: "2"
    hpa:
      maxReplicas: 12
      metrics:
        - pods:
            metric:
              name: vm_concurrent_insert_utilization
            target:
              averageValue: 800m
              type: AverageValue
          type: Pods
      minReplicas: 6
    image:
      tag: v1.131.0-cluster
    minReadySeconds: 180
    podDisruptionBudget:
      maxUnavailable: 1
    port: "8480"
    priorityClassName: monitoring-canary
    replicaCount: 6
    resources:
      limits:
        memory: 2Gi
      requests:
        cpu: 500m
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    serviceSpec:
      metadata:
        annotations:
          service.kubernetes.io/topology-mode: Auto
        name: vminsert-canary-az-aware
      spec: {}
    terminationGracePeriodSeconds: 300
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
  vmselect:
    cacheMountPath: /select-cache
    clusterNativeListenPort: "8401"
    extraArgs:
      dedup.minScrapeInterval: 60s
    extraEnvs:
      - name: GOMAXPROCS
        value: "4"
    hpa:
      maxReplicas: 6
      metrics:
        - pods:
            metric:
              name: vm_concurrent_select_utilization
            target:
              averageValue: 800m
              type: AverageValue
          type: Pods
      minReplicas: 3
    image:
      tag: v1.131.0-cluster
    minReadySeconds: 30
    podDisruptionBudget:
      maxUnavailable: 1
    port: "8481"
    priorityClassName: monitoring-canary
    replicaCount: 2
    resources:
      limits:
        memory: 3Gi
      requests:
        cpu: 1500m
    rollingUpdateStrategy: RollingUpdate
    serviceSpec:
      metadata:
        annotations:
          service.kubernetes.io/topology-mode: Auto
        name: vmselect-canary-az-aware
      spec: {}
    storage:
      emptyDir: {}
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 2Gi
    terminationGracePeriodSeconds: 60
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
  vmstorage:
    extraArgs:
      dedup.minScrapeInterval: 60s
    extraEnvs:
      - name: GOMAXPROCS
        value: "8"
    image:
      tag: v1.131.0-cluster
    minReadySeconds: 180
    podDisruptionBudget:
      maxUnavailable: 1
    priorityClassName: canary-monitoring
    replicaCount: 6
    resources:
      requests:
        cpu: '7'
      limits:
        memory: 27Gi
    rollingUpdateStrategy: RollingUpdate
    storage:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 500Gi
          storageClassName: hdd
    storageDataPath: /vm-data
    terminationGracePeriodSeconds: 900
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
```