diff --git a/modules/nw-openstack-sr-iov-testpmd-pod.adoc b/modules/nw-openstack-sr-iov-testpmd-pod.adoc index ae98dd0983d9..b7ac59fb9267 100644 --- a/modules/nw-openstack-sr-iov-testpmd-pod.adoc +++ b/modules/nw-openstack-sr-iov-testpmd-pod.adoc @@ -3,12 +3,14 @@ // * networking/hardware_networks/configuring-sriov-device.adoc :_mod-docs-content-type: REFERENCE -[id="nw-openstack-ovs-sr-iov-testpmd-pod_{context}"] +[id="nw-openstack-sr-iov-testpmd-pod_{context}"] = A test pod template for clusters that use SR-IOV on OpenStack -The following `testpmd` pod demonstrates container creation with huge pages, reserved CPUs, and the SR-IOV port. +[role="_abstract"] +The following `testpmd` pod example demonstrates container creation with huge pages, reserved CPUs, and the SR-IOV port. + +This example assumes that the name of the performance profile is `cnf-performanceprofile`. -.An example `testpmd` pod [source,yaml] ---- apiVersion: v1 @@ -45,10 +47,9 @@ spec: - mountPath: /dev/hugepages name: hugepage readOnly: False - runtimeClassName: performance-cnf-performanceprofile <1> + runtimeClassName: performance-cnf-performanceprofile volumes: - name: hugepage emptyDir: medium: HugePages ---- -<1> This example assumes that the name of the performance profile is `cnf-performance profile`. 
\ No newline at end of file diff --git a/modules/nw-sr-iov-network-node-configuration-examples.adoc b/modules/nw-sr-iov-network-node-configuration-examples.adoc index 6d83bf3a9a82..927c77ad49be 100644 --- a/modules/nw-sr-iov-network-node-configuration-examples.adoc +++ b/modules/nw-sr-iov-network-node-configuration-examples.adoc @@ -6,9 +6,9 @@ [id="nw-sr-iov-network-node-configuration-examples_{context}"] = SR-IOV network node configuration examples +[role="_abstract"] The following example describes the configuration for an InfiniBand device: -.Example configuration for an InfiniBand device [source,yaml] ---- apiVersion: sriovnetwork.openshift.io/v1 @@ -33,7 +33,6 @@ spec: The following example describes the configuration for an SR-IOV network device in a {rh-openstack} virtual machine: -.Example configuration for an SR-IOV device in a virtual machine [source,yaml] ---- apiVersion: sriovnetwork.openshift.io/v1 @@ -45,13 +44,12 @@ spec: resourceName: nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" - numVfs: 1 <1> + numVfs: 1 nicSelector: vendor: "" deviceID: "" - netFilter: "openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509" <2> + netFilter: "openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509" # ... ---- -<1> When configuring the node network policy for a virtual machine, the `numVfs` parameter is always set to `1`. -<2> When the virtual machine is deployed on {rh-openstack}, the `netFilter` parameter must refer to a network ID. Valid values for `netFilter` are available from an `SriovNetworkNodeState` object. - +* When configuring the node network policy for a virtual machine, the `numVfs` parameter is always set to `1`. +* When the virtual machine is deployed on {rh-openstack}, the `netFilter` parameter must refer to a network ID. Valid values for `netFilter` are available from an `SriovNetworkNodeState` object. 
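The valid `netFilter` values mentioned above come from the `status.interfaces` list of the node's `SriovNetworkNodeState` object. The following fragment is a sketch only, reusing the network ID from the example configuration; the node name and interface details are hypothetical, and the exact field layout should be verified against a live cluster:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodeState
metadata:
  name: worker-0   # assumed node name
  namespace: openshift-sriov-network-operator
status:
  interfaces:
  - name: ens3
    numVfs: 1
    # On {rh-openstack}, the discovered network filter appears here;
    # copy this value into spec.nicSelector.netFilter of your policy.
    netFilter: "openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509"
```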
diff --git a/modules/nw-sriov-configuring-device.adoc b/modules/nw-sriov-configuring-device.adoc index eb74a07373b4..393276922bdf 100644 --- a/modules/nw-sriov-configuring-device.adoc +++ b/modules/nw-sriov-configuring-device.adoc @@ -69,13 +69,13 @@ spec: deviceType: vfio-pci isRdma: false ---- -** `metadata.name` specifies a name for the `SriovNetworkNodePolicy` object. -** `metadata.namespace` specifies the namespace where the SR-IOV Network Operator is installed. -** `spec.resourceName` specifies the resource name of the SR-IOV device plugin. You can create multiple `SriovNetworkNodePolicy` objects for a resource name. -** `spec.nodeSelector.feature.node.kubernetes.io/network-sriov.capable` specifies the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes. -** `spec.priority` is an optional field that specifies an integer value between `0` and `99`. A smaller number gets higher priority, so a priority of `10` is higher than a priority of `99`. The default value is `99`. -** `spec.mtu` is an optional field that specifies a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models. -** `spec.numVfs` specifies the number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `127`. +** `metadata.name` defines a name for the `SriovNetworkNodePolicy` object. +** `metadata.namespace` defines the namespace where the SR-IOV Network Operator is installed. +** `spec.resourceName` defines the resource name of the SR-IOV device plugin. You can create multiple `SriovNetworkNodePolicy` objects for a resource name. 
+** `spec.nodeSelector.feature.node.kubernetes.io/network-sriov.capable` defines the node selector to select which nodes are configured. Only SR-IOV network devices on selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed only on selected nodes. +** `spec.priority` is an optional field that defines an integer value between `0` and `99`. A smaller number gets higher priority, so a priority of `10` is higher than a priority of `99`. The default value is `99`. +** `spec.mtu` is an optional field that defines a value for the maximum transmission unit (MTU) of the virtual function. The maximum MTU value can vary for different NIC models. +** `spec.numVfs` defines the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `127`. ** `spec.nicSelector` selects the Ethernet device for the Operator to configure. You do not need to specify values for all the parameters. + [NOTE] @@ -85,12 +85,12 @@ If you specify `rootDevices`, you must also specify a value for `vendor`, `devic ==== + If you specify both `pfNames` and `rootDevices` at the same time, ensure that they point to an identical device. -** `spec.nicSelector.vendor` is an optional field that specifies the vendor hex code of the SR-IOV network device. The only allowed values are either `8086` or `15b3`. -** `spec.nicSelector.deviceID` is an optional field that specifies the device hex code of SR-IOV network device. The only allowed values are `158b`, `1015`, `1017`. -** `spec.nicSelector.pfNames` is an optional field that specifies an array of one or more physical function (PF) names for the Ethernet device. 
-** `spec.nicSelector.rootDevices` is an optional field that specifies an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: `0000:02:00.1`. -** `spec.deviceType` specifies the driver type. The `vfio-pci` driver type is required for virtual functions in {VirtProductName}. -** `spec.isRdma` is an optional field that specifies whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set `isRdma` to `false`. The default value is `false`. +** `spec.nicSelector.vendor` is an optional field that defines the vendor hex code of the SR-IOV network device. The only allowed values are either `8086` or `15b3`. +** `spec.nicSelector.deviceID` is an optional field that defines the device hex code of the SR-IOV network device. The only allowed values are `158b`, `1015`, and `1017`. +** `spec.nicSelector.pfNames` is an optional field that defines an array of one or more physical function (PF) names for the Ethernet device. +** `spec.nicSelector.rootDevices` is an optional field that defines an array of one or more PCI bus addresses for the physical function of the Ethernet device. Provide the address in the following format: `0000:02:00.1`. +** `spec.deviceType` defines the driver type. The `vfio-pci` driver type is required for virtual functions in {VirtProductName}. +** `spec.isRdma` is an optional field that defines whether to enable remote direct memory access (RDMA) mode. For a Mellanox card, set `isRdma` to `false`. The default value is `false`. 
+ [NOTE] ==== diff --git a/modules/nw-sriov-device-discovery.adoc b/modules/nw-sriov-device-discovery.adoc index 5e5ad983406d..192640abe80a 100644 --- a/modules/nw-sriov-device-discovery.adoc +++ b/modules/nw-sriov-device-discovery.adoc @@ -4,11 +4,11 @@ // * virt/vm_networking/virt-connecting-vm-to-sriov.adoc :_mod-docs-content-type: REFERENCE -[id="discover-sr-iov-devices_{context}"] +[id="nw-sriov-device-discovery_{context}"] = Automated discovery of SR-IOV network devices The SR-IOV Network Operator searches your cluster for SR-IOV capable network devices on worker nodes. -The Operator creates and updates a SriovNetworkNodeState custom resource (CR) for each worker node that provides a compatible SR-IOV network device. +The Operator creates and updates a `SriovNetworkNodeState` custom resource (CR) for each worker node that provides a compatible SR-IOV network device. The CR is assigned the same name as the worker node. The `status.interfaces` list provides information about the network devices on a node. @@ -19,18 +19,14 @@ Do not modify a `SriovNetworkNodeState` object. The Operator creates and manages these resources automatically. ==== -[id="example-sriovnetworknodestate_{context}"] -== Example SriovNetworkNodeState object - The following YAML is an example of a `SriovNetworkNodeState` object created by the SR-IOV Network Operator: -.An SriovNetworkNodeState object [source,yaml] ---- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: - name: node-25 <1> + name: node-25 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 @@ -41,7 +37,7 @@ metadata: spec: dpConfigVersion: "39824" status: - interfaces: <2> + interfaces: - deviceID: "1017" driver: mlx5_core mtu: 1500 @@ -79,5 +75,5 @@ status: vendor: "8086" syncStatus: Succeeded ---- -<1> The value of the `name` field is the same as the name of the worker node. 
-<2> The `interfaces` stanza includes a list of all of the SR-IOV devices discovered by the Operator on the worker node. +* The value of the `name` field is the same as the name of the worker node. +* The `interfaces` stanza includes a list of all of the SR-IOV devices discovered by the Operator on the worker node. diff --git a/modules/nw-sriov-networknodepolicy-object.adoc b/modules/nw-sriov-networknodepolicy-object.adoc index f955a91f677e..81c41b3ed784 100644 --- a/modules/nw-sriov-networknodepolicy-object.adoc +++ b/modules/nw-sriov-networknodepolicy-object.adoc @@ -6,7 +6,8 @@ [id="nw-sriov-networknodepolicy-object_{context}"] = SR-IOV network node configuration object -You specify the SR-IOV network device configuration for a node by creating an SR-IOV network node policy. The API object for the policy is part of the `sriovnetwork.openshift.io` API group. +[role="_abstract"] +You can specify the SR-IOV network device configuration for a node by creating an SR-IOV network node policy. The API object for the policy is part of the `sriovnetwork.openshift.io` API group. The following YAML describes an SR-IOV network node policy: @@ -15,38 +16,35 @@ The following YAML describes an SR-IOV network node policy: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: - name: <1> - namespace: openshift-sriov-network-operator <2> + name: + namespace: openshift-sriov-network-operator spec: - resourceName: <3> + resourceName: nodeSelector: - feature.node.kubernetes.io/network-sriov.capable: "true" <4> - priority: <5> - mtu: <6> - needVhostNet: false <7> - numVfs: <8> - externallyManaged: false <9> - nicSelector: <10> - vendor: "" <11> - deviceID: "" <12> - pfNames: ["", ...] <13> - rootDevices: ["", ...] 
<14> - netFilter: "" <15> - deviceType: <16> - isRdma: false <17> - linkType: <18> - eSwitchMode: "switchdev" <19> - excludeTopology: false <20> + feature.node.kubernetes.io/network-sriov.capable: "true" + priority: + mtu: + needVhostNet: false + numVfs: + externallyManaged: false + nicSelector: + vendor: "" + deviceID: "" + pfNames: ["", ...] + rootDevices: ["", ...] + netFilter: "" + deviceType: + isRdma: false + linkType: + eSwitchMode: "switchdev" + excludeTopology: false ---- -<1> The name for the custom resource object. - -<2> The namespace where the SR-IOV Network Operator is installed. - -<3> The resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. +** `metadata.name` defines the name for the custom resource object. +** `metadata.namespace` defines the namespace where the SR-IOV Network Operator is installed. +** `spec.resourceName` defines the resource name of the SR-IOV network device plugin. You can create multiple SR-IOV network node policies for a resource name. + When specifying a name, be sure to use the accepted syntax expression `^[a-zA-Z0-9_]+$` in the `resourceName`. - -<4> The node selector specifies the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. +** `spec.nodeSelector` defines the nodes to configure. Only SR-IOV network devices on the selected nodes are configured. The SR-IOV Container Network Interface (CNI) plugin and device plugin are deployed on selected nodes only. + [IMPORTANT] ==== @@ -55,9 +53,8 @@ The SR-IOV Network Operator applies node network configuration policies to nodes To avoid a node in an unhealthy MCP from blocking the application of node network configuration policies to other nodes, including nodes in other MCPs, you must create a separate node network configuration policy for each MCP. 
==== -<5> Optional: The priority is an integer value between `0` and `99`. A smaller value receives higher priority. For example, a priority of `10` is a higher priority than `99`. The default value is `99`. - -<6> Optional: The maximum transmission unit (MTU) of the physical function and all its virtual functions. The maximum MTU value can vary for different network interface controller (NIC) models. +** `spec.priority` is an optional field that defines priority as an integer value between `0` and `99`. A smaller value receives higher priority. For example, a priority of `10` is a higher priority than `99`. The default value is `99`. +** `spec.mtu` is an optional field that defines the maximum transmission unit (MTU) of the physical function and all its virtual functions. The maximum MTU value can vary for different network interface controller (NIC) models. + [IMPORTANT] ==== @@ -67,11 +64,9 @@ If you want to modify the MTU of a single virtual function while the function is Otherwise, the SR-IOV Network Operator reverts the MTU of the virtual function to the MTU value defined in the SR-IOV network node policy, which might trigger a node drain. ==== -<7> Optional: Set `needVhostNet` to `true` to mount the `/dev/vhost-net` device in the pod. Use the mounted `/dev/vhost-net` device with Data Plane Development Kit (DPDK) to forward traffic to the kernel network stack. - -<8> The number of the virtual functions (VF) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `127`. - -<9> The `externallyManaged` field indicates whether the SR-IOV Network Operator manages all, or only a subset of virtual functions (VFs). With the value set to `false` the SR-IOV Network Operator manages and configures all VFs on the PF. 
+** `spec.needVhostNet` is an optional field that defines whether the `/dev/vhost-net` device is mounted in the pod. Set `needVhostNet` to `true` to mount the `/dev/vhost-net` device in the pod. Use the mounted `/dev/vhost-net` device with Data Plane Development Kit (DPDK) to forward traffic to the kernel network stack. +** `spec.numVfs` defines the number of virtual functions (VFs) to create for the SR-IOV physical network device. For an Intel network interface controller (NIC), the number of VFs cannot be larger than the total VFs supported by the device. For a Mellanox NIC, the number of VFs cannot be larger than `127`. +** `spec.externallyManaged` defines whether the SR-IOV Network Operator manages all, or only a subset of, virtual functions (VFs). With the value set to `false`, the SR-IOV Network Operator manages and configures all VFs on the PF. + [NOTE] ==== @@ -82,25 +77,20 @@ When `externallyManaged` is set to `false`, the SR-IOV Network Operator automati To use VFs on the host system, you must create them through NMState, and set `externallyManaged` to `true`. In this mode, the SR-IOV Network Operator does not modify the PF or the manually managed VFs, except for those explicitly defined in the `nicSelector` field of your policy. However, the SR-IOV Network Operator continues to manage VFs that are used as pod secondary interfaces. ==== -<10> The NIC selector identifies the device to which this resource applies. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. +** `spec.nicSelector` defines the device to which this resource applies. You do not have to specify values for all the parameters. It is recommended to identify the network device with enough precision to avoid selecting a device unintentionally. + If you specify `rootDevices`, you must also specify a value for `vendor`, `deviceID`, or `pfNames`. 
If you specify both `pfNames` and `rootDevices` at the same time, ensure that they refer to the same device. If you specify a value for `netFilter`, then you do not need to specify any other parameter because a network ID is unique. +*** `spec.nicSelector.vendor` is an optional field that defines the hexadecimal vendor identifier of the SR-IOV network device. The only allowed values are `8086` (Intel) and `15b3` (Mellanox). +*** `spec.nicSelector.deviceID` is an optional field that defines the hexadecimal device identifier of the SR-IOV network device. For example, `101b` is the device ID for a Mellanox ConnectX-6 device. +*** `spec.nicSelector.pfNames` is an optional field that defines an array of one or more physical function (PF) names that the resource must apply to. +*** `spec.nicSelector.rootDevices` is an optional field that defines an array of one or more PCI bus addresses that the resource must apply to. For example, `0000:02:00.1`. +*** `spec.nicSelector.netFilter` is an optional field that defines the platform-specific network filter. The only supported platform is {rh-openstack-first}. Acceptable values use the following format: `openstack/NetworkID:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. Replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` with the value from the `/var/config/openstack/latest/network_data.json` metadata file. This filter ensures that VFs are associated with a specific OpenStack network. The operator uses this filter to map the VFs to the appropriate network based on metadata provided by the OpenStack platform. -<11> Optional: The vendor hexadecimal vendor identifier of the SR-IOV network device. The only allowed values are `8086` (Intel) and `15b3` (Mellanox). - -<12> Optional: The device hexadecimal device identifier of the SR-IOV network device. For example, `101b` is the device ID for a Mellanox ConnectX-6 device. - -<13> Optional: An array of one or more physical function (PF) names the resource must apply to. 
- -<14> Optional: An array of one or more PCI bus addresses the resource must apply to. For example `0000:02:00.1`. - -<15> Optional: The platform-specific network filter. The only supported platform is {rh-openstack-first}. Acceptable values use the following format: `openstack/NetworkID:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. Replace `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` with the value from the `/var/config/openstack/latest/network_data.json` metadata file. This filter ensures that VFs are associated with a specific OpenStack network. The operator uses this filter to map the VFs to the appropriate network based on metadata provided by the OpenStack platform. - -<16> Optional: The driver to configure for the VFs created from this resource. The only allowed values are `netdevice` and `vfio-pci`. The default value is `netdevice`. +** `spec.deviceType` is an optional field that defines the driver to configure for the VFs created from this resource. The only allowed values are `netdevice` and `vfio-pci`. The default value is `netdevice`. + For a Mellanox NIC to work in DPDK mode on bare metal nodes, use the `netdevice` driver type and set `isRdma` to `true`. -<17> Optional: Configures whether to enable remote direct memory access (RDMA) mode. The default value is `false`. +** `spec.isRdma` is an optional field that defines whether remote direct memory access (RDMA) mode is enabled. The default value is `false`. + If the `isRdma` parameter is set to `true`, you can continue to use the RDMA-enabled VF as a normal network device. A device can be used in either mode. + @@ -111,12 +101,11 @@ Set `isRdma` to `true` and additionally set `needVhostNet` to `true` to configur You cannot set the `isRdma` parameter to `true` for intel NICs. ==== -<18> Optional: The link type for the VFs. The default value is `eth` for Ethernet. Change this value to 'ib' for InfiniBand. +** `spec.linkType` is an optional field that defines the link type for the VFs. 
The default value is `eth` for Ethernet. Change this value to `ib` for InfiniBand. + When `linkType` is set to `ib`, `isRdma` is automatically set to `true` by the SR-IOV Network Operator webhook. When `linkType` is set to `ib`, `deviceType` should not be set to `vfio-pci`. + -Do not set linkType to `eth` for SriovNetworkNodePolicy, because this can lead to an incorrect number of available devices reported by the device plugin. - -<19> Optional: To enable hardware offloading, you must set the `eSwitchMode` field to `"switchdev"`. For more information about hardware offloading, see "Configuring hardware offloading". +Do not set `linkType` to `eth` for `SriovNetworkNodePolicy`, because this can lead to an incorrect number of available devices reported by the device plugin. -<20> Optional: To exclude advertising an SR-IOV network resource's NUMA node to the Topology Manager, set the value to `true`. The default value is `false`. +** `spec.eSwitchMode` is an optional field that defines whether hardware offloading is enabled. To enable hardware offloading, you must set the `eSwitchMode` field to `"switchdev"`. For more information about hardware offloading, see "Configuring hardware offloading". +** `spec.excludeTopology` is an optional field that defines whether to exclude advertising an SR-IOV network resource's NUMA node to the Topology Manager. To exclude this advertising, set the value to `true`. The default value is `false`. 
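Taken together, the fields described above can be combined into a single worked policy. The following is a sketch only: the policy name, resource name, VF count, and PCI address are hypothetical, and the driver pairing follows the Mellanox guidance above (`netdevice` with `isRdma: true` for DPDK on bare metal):

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-mlx-dpdk               # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: mlxnics               # must match ^[a-zA-Z0-9_]+$
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  priority: 10                        # higher priority than the default 99
  numVfs: 4                           # must not exceed 127 for a Mellanox NIC
  nicSelector:
    vendor: "15b3"                    # Mellanox
    rootDevices: ["0000:02:00.1"]     # hypothetical PCI bus address
  deviceType: netdevice               # required with isRdma for Mellanox DPDK
  isRdma: true
```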
diff --git a/modules/nw-sriov-nic-mlx-secure-boot.adoc b/modules/nw-sriov-nic-mlx-secure-boot.adoc index 3f2fc7907bfb..57e58f89bb64 100644 --- a/modules/nw-sriov-nic-mlx-secure-boot.adoc +++ b/modules/nw-sriov-nic-mlx-secure-boot.adoc @@ -6,6 +6,7 @@ [id="nw-sriov-nic-mlx-secure-boot_{context}"] = Configuring the SR-IOV Network Operator on Mellanox cards when Secure Boot is enabled +[role="_abstract"] The SR-IOV Network Operator supports an option to skip the firmware configuration for Mellanox devices. This option allows you to create virtual functions by using the SR-IOV Network Operator when the system has secure boot enabled. You must manually configure and allocate the number of virtual functions in the firmware before switching the system to secure boot. [NOTE] @@ -19,10 +20,10 @@ The number of virtual functions in the firmware is the maximum number of virtual + [source,terminal] ---- -$ mstconfig -d -0001:b1:00.1 set SRIOV_EN=1 NUM_OF_VFS=16 <1> <2> +$ mstconfig -d 0000:b1:00.1 set SRIOV_EN=1 NUM_OF_VFS=16 ---- -<1> The `SRIOV_EN` environment variable enables the SR-IOV Network Operator support on the Mellanox card. -<2> The `NUM_OF_VFS` environment variable specifies the number of virtual functions to enable in the firmware. +** The `SRIOV_EN` firmware configuration parameter enables SR-IOV support on the Mellanox card. +** The `NUM_OF_VFS` firmware configuration parameter specifies the number of virtual functions to enable in the firmware. . Configure the SR-IOV Network Operator by disabling the Mellanox plugin. See the following `SriovOperatorConfig` example configuration: + @@ -53,7 +54,8 @@ spec: $ oc -n openshift-sriov-network-operator get sriovnetworknodestate.sriovnetwork.openshift.io worker-0 -oyaml ---- + -.Example output +Example output: ++ [source,yaml] ---- - deviceID: 101d @@ -64,13 +66,14 @@ $ oc -n openshift-sriov-network-operator get sriovnetworknodestate.sriovnetwork. 
mac: 08:c0:eb:96:31:25 mtu: 1500 name: ens3f1np1 - pciAddress: 0000:b1:00.1 <1> + pciAddress: 0000:b1:00.1 totalvfs: 16 vendor: 15b3 ---- -<1> The `totalvfs` value is the same number used in the `mstconfig` command earlier in the procedure. ++ +The `totalvfs` value is the same number used in the `mstconfig` command earlier in the procedure. -. Enable secure boot to prevent unauthorized operating systems and malicious software from loading during the device's boot process. +. Enable secure boot to prevent unauthorized operating systems and malicious software from loading during the device's boot process. + .. Enable secure boot by using the BIOS (Basic Input/Output System) to set values for the following parameters: + diff --git a/modules/nw-sriov-nic-partitioning.adoc b/modules/nw-sriov-nic-partitioning.adoc index 21a9750c9deb..f6abe5d9ce36 100644 --- a/modules/nw-sriov-nic-partitioning.adoc +++ b/modules/nw-sriov-nic-partitioning.adoc @@ -6,7 +6,7 @@ [id="nw-sriov-nic-partitioning_{context}"] = Virtual function (VF) partitioning for SR-IOV devices -In some cases, you might want to split virtual functions (VFs) from the same physical function (PF) into many resource pools. For example, you might want some of the VFs to load with the default driver and the remaining VFs load with the `vfio-pci` driver. +In some cases, you might want to split virtual functions (VFs) from the same physical function (PF) into many resource pools. For example, you might want some of the VFs to load with the default driver and the remaining VFs load with the `vfio-pci` driver. 
For example, the following YAML shows the selector for an interface named `netpf0` with VF `2` through `7`: @@ -70,17 +70,18 @@ spec: deviceType: vfio-pci ---- -.Verifying that the interface is successfully partitioned +// this should be a module but it's out of scope for the current PR - follow up +You can confirm that the interface is partitioned into virtual functions (VFs) for the SR-IOV device by running the following command: -* Confirm that the interface partitioned to virtual functions (VFs) for the SR-IOV device by running the following command. -+ [source,terminal] ---- -$ ip link show <1> +$ ip link show ---- -<1> Replace `` with the interface that you specified when partitioning to VFs for the SR-IOV device, for example, `ens3f1`. -+ -.Example output + +Replace `` with the interface that you specified when partitioning to VFs for the SR-IOV device, for example, `ens3f1`. + +Example output: + [source,terminal] ---- 5: ens3f1: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 diff --git a/modules/virt-about-instance-types.adoc b/modules/virt-about-instance-types.adoc index 1a1029dacd40..6ac6b933bedf 100644 --- a/modules/virt-about-instance-types.adoc +++ b/modules/virt-about-instance-types.adoc @@ -44,8 +44,8 @@ spec: memory: guest: 128Mi ---- -* `spec.cpu.guest` is a required field that specifies the number of vCPUs to allocate to the guest. -* `spec.memory.guest` is a required field that specifies an amount of memory to allocate to the guest. +* `spec.cpu.guest` is a required field that defines the number of vCPUs to allocate to the guest. +* `spec.memory.guest` is a required field that defines an amount of memory to allocate to the guest. You can create an instance type manifest by using the `virtctl` CLI utility. 
For example: diff --git a/modules/virt-cloning-pvc-to-dv-cli.adoc b/modules/virt-cloning-pvc-to-dv-cli.adoc index 2ad9822d6194..573a16033428 100644 --- a/modules/virt-cloning-pvc-to-dv-cli.adoc +++ b/modules/virt-cloning-pvc-to-dv-cli.adoc @@ -61,17 +61,20 @@ provisioner: openshift-storage.rbd.csi.ceph.com apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: - name: <1> + name: spec: source: pvc: - namespace: "" <2> - name: "" <3> + namespace: "" + name: "" storage: {} ---- -<1> Specify the name of the new data volume. -<2> Specify the namespace of the source PVC. -<3> Specify the name of the source PVC. ++ +where: ++ +``:: Specifies the name of the new data volume. +``:: Specifies the namespace of the source PVC. +``:: Specifies the name of the source PVC. . Create the data volume by running the following command: + diff --git a/modules/virt-configuring-cdiuploadproxy-routes.adoc b/modules/virt-configuring-cdiuploadproxy-routes.adoc index a0f5a533c8a6..ea745217cea4 100644 --- a/modules/virt-configuring-cdiuploadproxy-routes.adoc +++ b/modules/virt-configuring-cdiuploadproxy-routes.adoc @@ -27,9 +27,9 @@ $ oc create route reencrypt -n openshift-cnv \ ---- + where: - -:: Specifies the name to assign to this custom route. -:: Specifies the fully qualified domain name or IP address of the external host providing image upload access. ++ +``:: Specifies the name to assign to this custom route. +``:: Specifies the fully qualified domain name or IP address of the external host providing image upload access. . Run the following command to annotate the route. This ensures that the correct Containerized Data Importer (CDI) CA certificate is injected when certificates are rotated: + @@ -40,5 +40,5 @@ $ oc annotate route -n openshift-cnv \ ---- + where: - -:: The name of the route you created. ++ +``:: Specifies the name of the route you created. 
diff --git a/modules/virt-configuring-secondary-network-vm-live-migration.adoc b/modules/virt-configuring-secondary-network-vm-live-migration.adoc index 5a1a5c4a346c..a429b742fd47 100644 --- a/modules/virt-configuring-secondary-network-vm-live-migration.adoc +++ b/modules/virt-configuring-secondary-network-vm-live-migration.adoc @@ -41,10 +41,10 @@ spec: } }' ---- -** `metadata.name` specifies the name of the `NetworkAttachmentDefinition` object. -** `config.master` specifies the name of the NIC to be used for live migration. -** `config.type` specifies the name of the CNI plugin that provides the network for the NAD. -** `config.range` specifies an IP address range for the secondary network. This range must not overlap the IP addresses of the main network. +** `metadata.name` defines the name of the `NetworkAttachmentDefinition` object. +** `config.master` defines the name of the NIC to be used for live migration. +** `config.type` defines the name of the CNI plugin that provides the network for the NAD. +** `config.range` defines an IP address range for the secondary network. This range must not overlap the IP addresses of the main network. . Open the `HyperConverged` CR in your default editor by running the following command: + @@ -73,7 +73,7 @@ spec: progressTimeout: 150 # ... ---- -** `spec.liveMigrationConfig.network` specifies the name of the Multus `NetworkAttachmentDefinition` object to be used for live migrations. +** `spec.liveMigrationConfig.network` defines the name of the Multus `NetworkAttachmentDefinition` object to be used for live migrations. . Save your changes and exit the editor. The `virt-handler` pods restart and connect to the secondary network. 
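The `NetworkAttachmentDefinition` fields listed above (`metadata.name`, `config.master`, `config.type`, and `config.range`) fit together as in the following sketch. The object name, NIC name, CNI plugin (`macvlan`), IPAM plugin (`whereabouts`), and address range are illustrative assumptions, not values taken from this change:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  # This name is the value that spec.liveMigrationConfig.network refers to
  name: my-secondary-network
  namespace: openshift-cnv
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "migration-bridge",
    "type": "macvlan",
    "master": "eth1",
    "ipam": {
      "type": "whereabouts",
      "range": "10.200.5.0/24"
    }
  }'
```

As noted above, the `range` must not overlap the IP addresses of the main network, and `master` should name a NIC dedicated to live migration traffic.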
diff --git a/modules/virt-creating-storage-class-csi-driver.adoc b/modules/virt-creating-storage-class-csi-driver.adoc
index 90cd9bfad120..a13b95385e69 100644
--- a/modules/virt-creating-storage-class-csi-driver.adoc
+++ b/modules/virt-creating-storage-class-csi-driver.adoc
@@ -42,10 +42,9 @@ volumeBindingMode: WaitForFirstConsumer <2>
 parameters:
   storagePool: my-storage-pool <3>
 ----
-+
-* `reclaimPolicy` specifies whether the underlying storage is deleted or retained when a user deletes a PVC. The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you do not specify a value, the default value is `Delete`.
-* `volumeBindingMode` specifies the timing of PV creation. The `WaitForFirstConsumer` configuration in this example means that PV creation is delayed until a pod is scheduled to a specific node.
-* `parameters.storagePool` specifies the name of the storage pool defined in the HPP custom resource (CR).
+** `reclaimPolicy` defines whether the underlying storage is deleted or retained when a user deletes a PVC. The two possible `reclaimPolicy` values are `Delete` and `Retain`. If you do not specify a value, the default value is `Delete`.
+** `volumeBindingMode` defines the timing of PV creation. The `WaitForFirstConsumer` configuration in this example means that PV creation is delayed until a pod is scheduled to a specific node.
+** `parameters.storagePool` defines the name of the storage pool defined in the HPP custom resource (CR).
 
 . Save the file and exit.
diff --git a/modules/virt-creating-vm-cli.adoc b/modules/virt-creating-vm-cli.adoc
index 680eb83113f2..ab3d2372f8b5 100644
--- a/modules/virt-creating-vm-cli.adoc
+++ b/modules/virt-creating-vm-cli.adoc
@@ -30,7 +30,8 @@ $ virtctl create vm --name rhel-9-minimal --volume-import type:ds,src:openshift-
 This example manifest does not configure VM authentication.
 ====
 +
-.Example manifest for a {op-system-base} VM
+Example manifest for a {op-system-base} VM:
++
 [source,yaml]
 ----
 apiVersion: kubevirt.io/v1
@@ -70,14 +71,12 @@ spec:
           name: imported-volume-mk4lj
         name: imported-volume-mk4lj
 ----
-+
-
-* `name: rhel-9-minimal` specifies the name of the VM.
-* `name: rhel9` specifies the boot source for the guest operating system in the `sourceRef` section.
-* `namespace: openshift-virtualization-os-images` specifies the namespace for the boot source. Golden images are stored in the `openshift-virtualization-os-images` namespace.
-* `instancetype: inferFromVolume: imported-volume-mk4lj` specifies the instance type inferred from the selected `DataSource` object.
-* `preference: inferFromVolume: imported-volume-mk4lj` specifies that the preference is inferred from the selected `DataSource` object.
-* `type: virtio` specifies the use of a custom video device (a VirtIO device in this example) to enable hardware graphics acceleration. Enabling a custom video device is in Technology Preview for {VirtProductName} 4.21.
+** `metadata.name` defines the name of the VM.
+** `spec.dataVolumeTemplates.spec.sourceRef.name` defines the boot source for the guest operating system in the `sourceRef` section.
+** `spec.dataVolumeTemplates.spec.sourceRef.namespace` defines the namespace for the boot source. Golden images are stored in the `openshift-virtualization-os-images` namespace.
+** `spec.instancetype.inferFromVolume` defines the instance type inferred from the selected `DataSource` object.
+** `spec.preference.inferFromVolume` defines the preference that is inferred from the selected `DataSource` object.
+** `spec.template.spec.domain.devices.video.type` defines the use of a custom video device to enable hardware graphics acceleration. This example uses a VirtIO device. Enabling a custom video device is in Technology Preview for {VirtProductName} 4.21.
 
 . Create a virtual machine by using the manifest file:
 +
diff --git a/modules/virt-creating-vm-cloned-pvc-data-volume-template.adoc b/modules/virt-creating-vm-cloned-pvc-data-volume-template.adoc
index 59711fa413f1..70bff0d16ff0 100644
--- a/modules/virt-creating-vm-cloned-pvc-data-volume-template.adoc
+++ b/modules/virt-creating-vm-cloned-pvc-data-volume-template.adoc
@@ -6,6 +6,7 @@
 [id="virt-creating-vm-cloning-pvc-data-volume-template_{context}"]
 = Creating a VM from a cloned PVC by using a data volume template
 
+[role="_abstract"]
 You can create a virtual machine (VM) that clones the persistent volume claim (PVC) of an existing VM by using a data volume template. This method creates a data volume whose lifecycle is independent of the original VM.
 
 .Prerequisites
@@ -30,7 +31,7 @@ $ virtctl create vm --name rhel-9-clone --volume-import type:pvc,src:my-project/
 apiVersion: kubevirt.io/v1
 kind: VirtualMachine
 metadata:
-  name: rhel-9-clone # <1>
+  name: rhel-9-clone
 spec:
   dataVolumeTemplates:
   - metadata:
@@ -38,15 +39,15 @@ spec:
     spec:
       source:
         pvc:
-          name: imported-volume-q5pr9 # <2>
-          namespace: my-project # <3>
+          name: imported-volume-q5pr9
+          namespace: my-project
       storage:
         resources: {}
   instancetype:
-    inferFromVolume: imported-volume-h4qn8 # <4>
+    inferFromVolume: imported-volume-h4qn8
     inferFromVolumeFailurePolicy: Ignore
   preference:
-    inferFromVolume: imported-volume-h4qn8 # <5>
+    inferFromVolume: imported-volume-h4qn8
     inferFromVolumeFailurePolicy: Ignore
   runStrategy: Always
   template:
@@ -62,11 +63,11 @@ spec:
           name: imported-volume-h4qn8
         name: imported-volume-h4qn8
 ----
-<1> The VM name.
-<2> The name of the source PVC.
-<3> The namespace of the source PVC.
-<4> If the PVC source has appropriate labels, the instance type is inferred from the selected `DataSource` object.
-<5> If the PVC source has appropriate labels, the preference is inferred from the selected `DataSource` object.
+** `metadata.name` defines the VM name.
+** `spec.dataVolumeTemplates.spec.source.pvc.name` defines the name of the source PVC.
+** `spec.dataVolumeTemplates.spec.source.pvc.namespace` defines the namespace of the source PVC.
+** `spec.instancetype.inferFromVolume` defines the volume from which the instance type is inferred, if the PVC source has appropriate labels.
+** `spec.preference.inferFromVolume` defines the volume from which the preference is inferred, if the PVC source has appropriate labels.
 
 . Create the virtual machine with the PVC-cloned data volume:
 +
diff --git a/modules/virt-creating-vm-container-disk-cli.adoc b/modules/virt-creating-vm-container-disk-cli.adoc
index 93d2c3f4d9a2..94edeba2c755 100644
--- a/modules/virt-creating-vm-container-disk-cli.adoc
+++ b/modules/virt-creating-vm-container-disk-cli.adoc
@@ -51,10 +51,10 @@ spec:
         image: registry.redhat.io/rhel9/rhel-guest-image:9.5 # <4>
         name: vm-rhel-9-containerdisk-0
 ----
-<1> The VM name.
-<2> The instance type to use to control resource sizing of the VM.
-<3> The preference to use.
-<4> The URL of the container disk.
+** `metadata.name` defines the VM name.
+** `spec.instancetype.name` defines the instance type to use to control resource sizing of the VM.
+** `spec.preference.name` defines the preference to use.
+** `spec.template.spec.volumes.containerDisk.image` defines the URL of the container disk.
 
 . Create the VM by running the following command:
 +
diff --git a/modules/virt-creating-vm-web-page-cli.adoc b/modules/virt-creating-vm-web-page-cli.adoc
index 161b5d1ac6ab..17d69a72f093 100644
--- a/modules/virt-creating-vm-web-page-cli.adoc
+++ b/modules/virt-creating-vm-web-page-cli.adoc
@@ -3,7 +3,7 @@
 // * virt/creating_vms_advanced/creating_vms_advanced_web/virt-creating-vms-from-web-images.adoc
 
 :_mod-docs-content-type: PROCEDURE
-[id="virt-creating-vm-import-cli_{context}"]
+[id="virt-creating-vm-web-page-cli_{context}"]
 = Creating a VM from an image on a web page by using the CLI
 
 [role="_abstract"]
@@ -33,23 +33,23 @@ $ virtctl create vm --name vm-rhel-9 --instancetype u1.small --preference rhel.9
 apiVersion: kubevirt.io/v1
 kind: VirtualMachine
 metadata:
-  name: vm-rhel-9 # <1>
+  name: vm-rhel-9
 spec:
   dataVolumeTemplates:
   - metadata:
-      name: imported-volume-6dcpf # <2>
+      name: imported-volume-6dcpf
     spec:
       source:
         http:
-          url: https://example.com/rhel9.qcow2 # <3>
+          url: https://example.com/rhel9.qcow2
       storage:
         resources:
           requests:
-            storage: 10Gi # <4>
+            storage: 10Gi
   instancetype:
-    name: u1.small # <5>
+    name: u1.small
   preference:
-    name: rhel.9 # <6>
+    name: rhel.9
   runStrategy: Always
   template:
     spec:
@@ -62,12 +62,12 @@ spec:
           name: imported-volume-6dcpf
         name: imported-volume-6dcpf
 ----
-<1> The VM name.
-<2> The data volume name.
-<3> The URL of the image.
-<4> The size of the storage requested for the data volume.
-<5> The instance type to use to control resource sizing of the VM.
-<6> The preference to use.
+** `metadata.name` defines the VM name.
+** `spec.dataVolumeTemplates.metadata.name` defines the data volume name.
+** `spec.dataVolumeTemplates.spec.source.http.url` defines the URL of the image.
+** `spec.dataVolumeTemplates.spec.storage.resources.requests.storage` defines the size of the storage requested for the data volume.
+** `spec.instancetype.name` defines the instance type to use to control resource sizing of the VM.
+** `spec.preference.name` defines the preference to use.
 
 . Create the VM by running the following command:
 +
diff --git a/modules/virt-disabling-tls-for-registry.adoc b/modules/virt-disabling-tls-for-registry.adoc
index 08e0548600f0..b65be0faecd4 100644
--- a/modules/virt-disabling-tls-for-registry.adoc
+++ b/modules/virt-disabling-tls-for-registry.adoc
@@ -35,8 +35,9 @@ metadata:
   namespace: {CNVNamespace}
 spec:
   storageImport:
-    insecureRegistries: <1>
+    insecureRegistries:
     - "private-registry-example-1:5000"
     - "private-registry-example-2:5000"
 ----
-<1> Replace the examples in this list with valid registry hostnames.
++
+Replace the examples in the `spec.storageImport.insecureRegistries` list with valid registry hostnames.
diff --git a/modules/virt-install-ibm-cloud-cluster-network-access.adoc b/modules/virt-install-ibm-cloud-cluster-network-access.adoc
index 143988fecdd9..1a8aefe5c5f3 100644
--- a/modules/virt-install-ibm-cloud-cluster-network-access.adoc
+++ b/modules/virt-install-ibm-cloud-cluster-network-access.adoc
@@ -2,7 +2,7 @@
 //
 // * virt/install/virt-install-ibm-cloud-bm-nodes.adoc
 
-:_mod-docs-content-type: PROCEDURE 
+:_mod-docs-content-type: PROCEDURE
 [id="virt-install-ibm-cloud-cluster-network-access_{context}"]
 = Configuring cluster networking and access
@@ -78,10 +78,10 @@ $ sudo MotionPro --host $<vpn_endpoint> --user $<user_name> --passwd $<password>
-<vpn_endpoint>:: The appropriate SSL VPN endpoint.
-<user_name>:: The SSL VPN user name you configured.
-<password>:: The SSL VPN password you configured.
++
+`<vpn_endpoint>`:: Specifies the appropriate SSL VPN endpoint.
+`<user_name>`:: Specifies the SSL VPN user name you configured.
+`<password>`:: Specifies the SSL VPN password you configured.
 +
 [NOTE]
 ====
diff --git a/modules/virt-install-ibm-cloud-complete-cluster-config.adoc b/modules/virt-install-ibm-cloud-complete-cluster-config.adoc
index 8f412de19b7a..1bb8f2c1ab09 100644
--- a/modules/virt-install-ibm-cloud-complete-cluster-config.adoc
+++ b/modules/virt-install-ibm-cloud-complete-cluster-config.adoc
@@ -2,7 +2,7 @@
 //
 // * virt/install/virt-install-ibm-cloud-bm-nodes.adoc
 
-:_mod-docs-content-type: PROCEDURE 
+:_mod-docs-content-type: PROCEDURE
 [id="virt-install-ibm-cloud-complete-cluster-config_{context}"]
 = Completing the cluster configuration
 
@@ -34,7 +34,7 @@ The IP address and credentials for IPMI console access is available in the *Remo
 . Select the *Install {VirtProductName}* and *Install {rh-storage}* checkboxes in the *Assisted Installer* options.
 
-. Select a role for each host. 
+. Select a role for each host.
 +
 [NOTE]
 ====
@@ -69,7 +69,7 @@ The IP address and credentials for IPMI console access is available in the *Remo
 .. Download the `kubeconfig` file.
 .. Save the `kubeadmin` password.
 
-. Install `haproxy` on the Bastion virtual server instance. 
+. Install `haproxy` on the Bastion virtual server instance.
 
 . Configure `haproxy` for your environment. The following is an example configuration:
 +
@@ -179,15 +179,15 @@ backend insecure
 ----
 +
 where:
-
-<api_frontend_ip>:<port>::: The front end IP address and port used by the Kubernetes API server.
-<internal_frontend_ip>:<port>::: The front end IP address and port used for internal cluster management.
-<https_frontend_ip>:<port>::: The front end IP address and port used for HTTPS traffic for hosted applications.
-<http_frontend_ip>:<port>::: The front end IP address and port used for HTTP traffic for hosted applications.
-<api_backend_ip>:<port>::: The back end IP address and port used by the Kubernetes API server.
-<internal_backend_ip>:<port>::: The back end IP address and port used for internal cluster management.
-<https_backend_ip>:<port>::: The back end IP address and port used for HTTPS traffic for hosted applications.
-<http_backend_ip>:<port>::: The back end IP address and port used for HTTP traffic for hosted applications.
++
+`<api_frontend_ip>:<port>`:: Specifies the front end IP address and port used by the Kubernetes API server.
+`<internal_frontend_ip>:<port>`:: Specifies the front end IP address and port used for internal cluster management.
+`<https_frontend_ip>:<port>`:: Specifies the front end IP address and port used for HTTPS traffic for hosted applications.
+`<http_frontend_ip>:<port>`:: Specifies the front end IP address and port used for HTTP traffic for hosted applications.
+`<api_backend_ip>:<port>`:: Specifies the back end IP address and port used by the Kubernetes API server.
+`<internal_backend_ip>:<port>`:: Specifies the back end IP address and port used for internal cluster management.
+`<https_backend_ip>:<port>`:: Specifies the back end IP address and port used for HTTPS traffic for hosted applications.
+`<http_backend_ip>:<port>`:: Specifies the back end IP address and port used for HTTP traffic for hosted applications.
 +
 [NOTE]
 ====
@@ -205,10 +205,10 @@ Replace the example values with values applicable to your network configuration.
 ----
 +
 where:
-
-<bastion_public_ip>:: The externally available IP address of the Bastion virtual server instance.
-<cluster_name>:: The name assigned to the cluster.
-<cluster_domain>:: The domain assigned to the cluster.
++
+`<bastion_public_ip>`:: Specifies the externally available IP address of the Bastion virtual server instance.
+`<cluster_name>`:: Specifies the name assigned to the cluster.
+`<cluster_domain>`:: Specifies the domain assigned to the cluster.
 
 .Verification
 
@@ -221,8 +221,8 @@ $ export KUBECONFIG=<kubeconfig_path>
 ----
 +
 where:
-
-<kubeconfig_path>:: The path to the downloaded `kubeconfig` file.
++
+`<kubeconfig_path>`:: Specifies the path to the downloaded `kubeconfig` file.
 
 .. Check cluster node status:
 +
 [source,terminal]
diff --git a/modules/virt-install-ibm-cloud-config-new-cluster.adoc b/modules/virt-install-ibm-cloud-config-new-cluster.adoc
index 8c0e3b79cf1f..3659e3bbd701 100644
--- a/modules/virt-install-ibm-cloud-config-new-cluster.adoc
+++ b/modules/virt-install-ibm-cloud-config-new-cluster.adoc
@@ -2,7 +2,7 @@
 //
 // * virt/install/virt-install-ibm-cloud-bm-nodes.adoc
 
-:_mod-docs-content-type: PROCEDURE 
+:_mod-docs-content-type: PROCEDURE
 [id="virt-install-ibm-cloud-config-new-cluster_{context}"]
 = Configuring {ibm-cloud-title} for the new cluster
 
@@ -10,7 +10,7 @@ Configure and provision the {ibm-cloud-title} environment to establish the operational framework and nodes for your {VirtProductName} cluster.
 
 .Procedure
-. Create a new virtual server instance in {ibm-cloud-title} at link:https://cloud.ibm.com/gen1/infrastructure/provision/vs[Virtual Server for Classic] to be the Bastion server. This instance is used to run the installation and provide environment services. 
+. Create a new virtual server instance in {ibm-cloud-title} at link:https://cloud.ibm.com/gen1/infrastructure/provision/vs[Virtual Server for Classic] to be the Bastion server. This instance is used to run the installation and provide environment services.
 
 . Change the default properties of the new virtual server instance to the following values. Use the provided defaults for all other values.
 +
@@ -66,15 +66,15 @@ subnet <subnet> netmask <subnet_mask> {
 ----
 +
 where:
-
-<domain_name>:: The default domain name for DNS clients.
-<dns_servers>:: A comma-seperated list of DNS server IP addresses.
-<default_lease_time>:: The default number of seconds a client keeps an assigned address.
-<max_lease_time>:: The maximum number of seconds a client keeps an assigned address.
-<range_start>:: The start of the subnet IP address range.
-<subnet_mask>:: The subnet mask of the subnet IP address range.
-<broadcast_ip>:: The broadcast IP address to use when to use sending a message to every device on the subnet.
-<gateway_ip>:: The default gateway of the subnet.
++
+`<domain_name>`:: Specifies the default domain name for DNS clients.
+`<dns_servers>`:: Specifies a comma-separated list of DNS server IP addresses.
+`<default_lease_time>`:: Specifies the default number of seconds a client keeps an assigned address.
+`<max_lease_time>`:: Specifies the maximum number of seconds a client keeps an assigned address.
+`<range_start>`:: Specifies the start of the subnet IP address range.
+`<subnet_mask>`:: Specifies the subnet mask of the subnet IP address range.
+`<broadcast_ip>`:: Specifies the broadcast IP address to use when sending a message to every device on the subnet.
+`<gateway_ip>`:: Specifies the default gateway of the subnet.
 
 . Restart DHCP on the Bastion virtual server instance:
 +
@@ -138,4 +138,4 @@ $ firewall-cmd --add-masquerade --permanent
 [source,terminal]
 ----
 $ firewall-cmd --reload
------
\ No newline at end of file
+----
diff --git a/modules/virt-uploading-image-virtctl.adoc b/modules/virt-uploading-image-virtctl.adoc
index 396bb1771698..c7330f12f06c 100644
--- a/modules/virt-uploading-image-virtctl.adoc
+++ b/modules/virt-uploading-image-virtctl.adoc
@@ -29,9 +29,11 @@ $ virtctl image-upload dv <datavolume_name> \
   --image-path=<image_path>
 ----
 +
-`<datavolume_name>`:: The name of the data volume.
-`<datavolume_size>`:: The size of the data volume. For example: `--size=500Mi`, `--size=1G`
-`<image_path>`:: The file path of the image.
+where:
++
+`<datavolume_name>`:: Specifies the name of the data volume.
+`<datavolume_size>`:: Specifies the size of the data volume. For example: `--size=500Mi`, `--size=1G`
+`<image_path>`:: Specifies the file path of the image.
 +
 [NOTE]
 ====
diff --git a/networking/hardware_networks/configuring-sriov-device.adoc b/networking/hardware_networks/configuring-sriov-device.adoc
index 0cdd02b70018..fdb48b7a14bb 100644
--- a/networking/hardware_networks/configuring-sriov-device.adoc
+++ b/networking/hardware_networks/configuring-sriov-device.adoc
@@ -6,6 +6,7 @@ include::_attributes/common-attributes.adoc[]
 
 toc::[]
 
+[role="_abstract"]
 You can configure a Single Root I/O Virtualization (SR-IOV) device in your cluster.
Before you perform any tasks in the following documentation, ensure that you xref:../../networking/networking_operators/sr-iov-operator/installing-sriov-operator.adoc#installing-sriov-operator[installed the SR-IOV Network Operator].
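The SR-IOV node policy rules documented in this patch (always `numVfs: 1` for a virtual machine, and a `netFilter` that references an OpenStack network ID) can be sketched as a complete policy. The policy name `policy-vm` and the resource name `sriovnic` are hypothetical; the network ID is the example value used elsewhere in this patch:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-vm              # hypothetical policy name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: sriovnic
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 1                    # always 1 when the node is an RHOSP virtual machine
  nicSelector:
    vendor: ""
    deviceID: ""
    # netFilter must refer to a network ID; valid values are
    # available from an SriovNetworkNodeState object.
    netFilter: "openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509"
```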