Configure Intel Wireless FEC Accelerators using SR-IOV FEC operator
This section provides instructions for installing the SR-IOV FEC operator, which enables detailed configuration of Intel Wireless FEC Accelerators on StarlingX (AIO-SX).
About this task
The SR-IOV FEC Operator for Intel Wireless FEC Accelerators supports the following vRAN FEC accelerators:
Intel® vRAN Dedicated Accelerator ACC100.
Intel® vRAN Boost Accelerator 1.0 VRB1 (formerly ACC200).
Intel® vRAN Boost Accelerator 2.0 VRB2.
Intel® FPGA Programmable Acceleration Card N3000.
Prerequisites
The system has been provisioned and unlocked.
Procedure
Source the platform environment.
$ source /etc/platform/openrc
~(keystone_admin)$
Upload the SR-IOV FEC Operator.
~(keystone_admin)$ system application-upload /usr/local/share/applications/helm/sriov-fec-operator-<version>.tgz
+---------------+-------------------------------------+
| Property      | Value                               |
+---------------+-------------------------------------+
| active        | False                               |
| app_version   | 1.0-1                               |
| created_at    | 2022-09-29T19:47:29.427225+00:00    |
| manifest_file | fluxcd-manifests                    |
| manifest_name | sriov-fec-operator-fluxcd-manifests |
| name          | sriov-fec-operator                  |
| progress      | None                                |
| status        | uploading                           |
| updated_at    | None                                |
+---------------+-------------------------------------+
(Optional) Configure a different resource name for FEC devices.
To change the resource name for ACC100, use the following command:
~(keystone_admin)$ system helm-override-update sriov-fec-operator sriov-fec-operator sriov-fec-system --set env.SRIOV_FEC_ACC100_RESOURCE_NAME=intel_acc100_fec
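To confirm the override is in place, you can review the user overrides with system helm-override-show (the same command is used later in this procedure to retrieve the VFIO token); output is omitted here.
~(keystone_admin)$ system helm-override-show sriov-fec-operator sriov-fec-operator sriov-fec-system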
Apply the SR-IOV FEC Operator.
~(keystone_admin)$ system application-apply sriov-fec-operator
+---------------+-------------------------------------+
| Property      | Value                               |
+---------------+-------------------------------------+
| active        | False                               |
| app_version   | 1.0-1                               |
| created_at    | 2022-09-29T19:47:29.427225+00:00    |
| manifest_file | fluxcd-manifests                    |
| manifest_name | sriov-fec-operator-fluxcd-manifests |
| name          | sriov-fec-operator                  |
| progress      | None                                |
| status        | applying                            |
| updated_at    | 2022-09-29T19:47:33.599867+00:00    |
+---------------+-------------------------------------+
~(keystone_admin)$ system application-show sriov-fec-operator
+---------------+-------------------------------------+
| Property      | Value                               |
+---------------+-------------------------------------+
| active        | True                                |
| app_version   | 1.0-1                               |
| created_at    | 2022-09-29T19:47:29.427225+00:00    |
| manifest_file | fluxcd-manifests                    |
| manifest_name | sriov-fec-operator-fluxcd-manifests |
| name          | sriov-fec-operator                  |
| progress      | completed                           |
| status        | applied                             |
| updated_at    | 2022-09-29T19:50:27.543655+00:00    |
+---------------+-------------------------------------+
Verify that all the operator pods are up and running.
$ kubectl get pods -n sriov-fec-system
NAME                                            READY   STATUS    RESTARTS   AGE
accelerator-discovery-svh87                     1/1     Running   0          3m26s
sriov-device-plugin-j54hh                       1/1     Running   0          3m26s
sriov-fec-controller-manager-77bb5b778b-bjmr8   2/2     Running   0          3m28s
sriov-fec-daemonset-stnjh                       1/1     Running   0          3m26s
List all the nodes in the cluster with FEC accelerators installed.
ACC100 and N3000
$ kubectl get sriovfecnodeconfigs.sriovfec.intel.com -n sriov-fec-system
NAME           CONFIGURED
controller-0   NotRequested
VRB1 and VRB2
$ kubectl get sriovvrbnodeconfigs.sriovvrb.intel.com -n sriov-fec-system
NAME           CONFIGURED
controller-0   NotRequested
Find the PCI address of the PF of the SR-IOV FEC accelerator device to be configured.
ACC100
$ kubectl get sriovfecnodeconfigs.sriovfec.intel.com -n sriov-fec-system controller-0 -o yaml
apiVersion: sriovfec.intel.com/v2
kind: SriovFecNodeConfig
metadata:
  creationTimestamp: "2022-08-25T01:33:35Z"
  generation: 1
  name: controller-0
  namespace: sriov-fec-system
  resourceVersion: "8298897"
  selfLink: /apis/sriovfec.intel.com/v2/namespaces/sriov-fec-system/sriovfecnodeconfigs/controller-0
  uid: dcab90d9-2fe2-4769-81b0-fdd54e96e287
spec:
  physicalFunctions: []
status:
  conditions:
  - lastTransitionTime: "2022-08-25T01:33:35Z"
    message: ""
    observedGeneration: 1
    reason: NotRequested
    status: "False"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 0d5c
      driver: ""
      maxVirtualFunctions: 16
      pciAddress: "0000:8a:00.0"
      vendorID: "8086"
      virtualFunctions: []
VRB1
$ kubectl get sriovvrbnodeconfigs.sriovvrb.intel.com -n sriov-fec-system controller-0 -o yaml
apiVersion: sriovvrb.intel.com/v1
kind: SriovVrbNodeConfig
metadata:
  creationTimestamp: "2024-05-17T01:35:36Z"
  generation: 1
  name: controller-0
  namespace: sriov-fec-system
  resourceVersion: "1420543"
  uid: 4db81a14-2ddf-4fc3-9f09-939ece5fd33a
spec:
  physicalFunctions: []
status:
  conditions:
  - lastTransitionTime: "2024-05-17T01:35:36Z"
    message: ""
    observedGeneration: 1
    reason: NotRequested
    status: "False"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 57c0
      driver: vfio-pci
      maxVirtualFunctions: 16
      pciAddress: 0000:f7:00.0
      vendorID: "8086"
      virtualFunctions: []
  pfBbConfVersion: v24.03-0-g1bbb3ac
VRB2
$ kubectl get sriovvrbnodeconfigs.sriovvrb.intel.com -n sriov-fec-system controller-0 -o yaml
apiVersion: sriovvrb.intel.com/v1
kind: SriovVrbNodeConfig
metadata:
  creationTimestamp: "2024-06-26T20:32:51Z"
  generation: 1
  name: controller-0
  namespace: sriov-fec-system
  resourceVersion: "9384433"
  uid: 31a7325e-d943-400b-aa14-2449d2d019c3
spec:
  physicalFunctions: []
status:
  conditions:
  - lastTransitionTime: "2024-06-26T20:32:52Z"
    message: ""
    observedGeneration: 1
    reason: NotRequested
    status: "False"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 57c2
      driver: vfio-pci
      maxVirtualFunctions: 64
      pciAddress: "0000:07:00.0"
      vendorID: "8086"
      virtualFunctions: []
  pfBbConfVersion: v24.03-0-g1bbb3ac
N3000
$ kubectl get sriovfecnodeconfigs.sriovfec.intel.com -n sriov-fec-system controller-0 -o yaml
apiVersion: sriovfec.intel.com/v2
kind: SriovFecNodeConfig
metadata:
  creationTimestamp: "2022-10-21T18:17:55Z"
  generation: 1
  name: controller-0
  namespace: sriov-fec-system
  resourceVersion: "1996828"
  selfLink: /apis/sriovfec.intel.com/v2/namespaces/sriov-fec-system/sriovfecnodeconfigs/controller-0
  uid: 05db8606-8236-4efd-99bb-7b5ca20cd02e
spec:
  physicalFunctions: []
status:
  conditions:
  - lastTransitionTime: "2022-10-21T18:17:55Z"
    message: ""
    observedGeneration: 1
    reason: NotRequested
    status: "False"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 0d8f
      driver: ""
      maxVirtualFunctions: 8
      pciAddress: 0000:1c:00.0
      vendorID: "8086"
      virtualFunctions: []
Apply the FEC device configuration.
ACC100 device configuration.
The maximum number of VFs that can be configured for ACC100 is 16 VFs.
There are 8 queue groups available which can be allocated to any available operation (4GUL/4GDL/5GUL/5GDL) based on the numQueueGroups parameter.
The product of numQueueGroups × numAqsPerGroups × aqDepthLog2 × numVfBundles must be less than 32K.
The following example creates 1 VF and configures ACC100's 8 queue groups, allocating 4 queue groups for 5G uplink and another 4 queue groups for 5G downlink.
apiVersion: sriovfec.intel.com/v2
kind: SriovFecClusterConfig
metadata:
  name: config
  namespace: sriov-fec-system
spec:
  priority: 1
  nodeSelector:
    kubernetes.io/hostname: controller-0
  acceleratorSelector:
    pciAddress: 0000:8a:00.0
  physicalFunction:
    pfDriver: "vfio-pci"
    vfDriver: "vfio-pci"
    vfAmount: 1
    bbDevConfig:
      acc100:
        # pfMode: false = VF Programming, true = PF Programming
        pfMode: false
        numVfBundles: 1
        maxQueueSize: 1024
        uplink4G:
          numQueueGroups: 0
          numAqsPerGroups: 16
          aqDepthLog2: 4
        downlink4G:
          numQueueGroups: 0
          numAqsPerGroups: 16
          aqDepthLog2: 4
        uplink5G:
          numQueueGroups: 4
          numAqsPerGroups: 16
          aqDepthLog2: 4
        downlink5G:
          numQueueGroups: 4
          numAqsPerGroups: 16
          aqDepthLog2: 4
  drainSkip: true
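As a quick sanity check of the sizing rule above, taking the product directly over this example's listed values: (4 + 4) queue groups × 16 AQs per group × an aqDepthLog2 of 4 × 1 VF bundle = 512, which is well below 32K.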
The following example creates 2 VFs and configures ACC100's 8 queue groups, allocating 2 queue groups each for 4G uplink, 4G downlink, 5G uplink, and 5G downlink.
apiVersion: sriovfec.intel.com/v2
kind: SriovFecClusterConfig
metadata:
  name: config
  namespace: sriov-fec-system
spec:
  priority: 1
  nodeSelector:
    kubernetes.io/hostname: controller-0
  acceleratorSelector:
    pciAddress: 0000:8a:00.0
  physicalFunction:
    pfDriver: "vfio-pci"
    vfDriver: "vfio-pci"
    vfAmount: 2
    bbDevConfig:
      acc100:
        # pfMode: false = VF Programming, true = PF Programming
        pfMode: false
        numVfBundles: 2
        maxQueueSize: 1024
        uplink4G:
          numQueueGroups: 2
          numAqsPerGroups: 16
          aqDepthLog2: 4
        downlink4G:
          numQueueGroups: 2
          numAqsPerGroups: 16
          aqDepthLog2: 4
        uplink5G:
          numQueueGroups: 2
          numAqsPerGroups: 16
          aqDepthLog2: 4
        downlink5G:
          numQueueGroups: 2
          numAqsPerGroups: 16
          aqDepthLog2: 4
  drainSkip: true
VRB1 device configuration.
The maximum number of VFs that can be configured for VRB1 is 16 VFs.
There are 16 queue groups available which can be allocated to any available operation (4GUL/4GDL/5GUL/5GDL/FFT) based on the numQueueGroups parameter.
The product of numQueueGroups × numAqsPerGroups × aqDepthLog2 × numVfBundles must be less than 64K.
The following configuration creates 1 VF and configures VRB1's 12 queue groups, allocating 16 queues per VF for the 5G processing engine functions (5GUL/5GDL/FFT).
apiVersion: sriovvrb.intel.com/v1
kind: SriovVrbClusterConfig
metadata:
  name: config
  namespace: sriov-fec-system
spec:
  acceleratorSelector:
    pciAddress: 0000:f7:00.0
  nodeSelector:
    kubernetes.io/hostname: controller-0
  priority: 1
  drainSkip: true
  physicalFunction:
    pfDriver: vfio-pci
    vfDriver: vfio-pci
    vfAmount: 1
    bbDevConfig:
      vrb1:
        numVfBundles: 1
        pfMode: false
        maxQueueSize: 1024
        downlink4G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 0
        uplink4G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 0
        downlink5G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
        uplink5G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
        qfft:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
The following configuration creates 2 VFs and configures VRB1's 16 queue groups, allocating 16 queues per VF for the 4G and 5G processing engine functions (4GUL/4GDL/5GUL/5GDL/FFT).
apiVersion: sriovvrb.intel.com/v1
kind: SriovVrbClusterConfig
metadata:
  name: config
  namespace: sriov-fec-system
spec:
  acceleratorSelector:
    pciAddress: 0000:f7:00.0
  nodeSelector:
    kubernetes.io/hostname: controller-0
  priority: 1
  drainSkip: true
  physicalFunction:
    pfDriver: vfio-pci
    vfDriver: vfio-pci
    vfAmount: 2
    bbDevConfig:
      vrb1:
        numVfBundles: 2
        pfMode: false
        maxQueueSize: 1024
        downlink4G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
        uplink4G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
        downlink5G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
        uplink5G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
        qfft:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
VRB2 device configuration.
The maximum number of VFs that can be configured for VRB2 is 64 VFs.
There are 32 queue groups available which can be allocated to any available operation (4GUL/4GDL/5GUL/5GDL/FFT/MLD) based on the numQueueGroups parameter.
The product of numQueueGroups × numAqsPerGroups × aqDepthLog2 × numVfBundles must be less than 256K.
The following configuration creates 1 VF and configures VRB2's 32 queue groups, allocating 64 queues per VF for the 5G processing engine functions (5GUL/5GDL/FFT/MLD).
apiVersion: sriovvrb.intel.com/v1
kind: SriovVrbClusterConfig
metadata:
  name: config
  namespace: sriov-fec-system
spec:
  priority: 1
  nodeSelector:
    kubernetes.io/hostname: controller-0
  acceleratorSelector:
    pciAddress: 0000:07:00.0
  physicalFunction:
    pfDriver: vfio-pci
    vfDriver: vfio-pci
    vfAmount: 1
    bbDevConfig:
      vrb2:
        # Pf mode: false = VF Programming, true = PF Programming
        pfMode: false
        numVfBundles: 1
        maxQueueSize: 1024
        uplink4G:
          numQueueGroups: 0
          numAqsPerGroups: 64
          aqDepthLog2: 5
        downlink4G:
          numQueueGroups: 0
          numAqsPerGroups: 64
          aqDepthLog2: 5
        uplink5G:
          numQueueGroups: 8
          numAqsPerGroups: 64
          aqDepthLog2: 5
        downlink5G:
          numQueueGroups: 8
          numAqsPerGroups: 64
          aqDepthLog2: 5
        qfft:
          numQueueGroups: 8
          numAqsPerGroups: 64
          aqDepthLog2: 5
        qmld:
          numQueueGroups: 8
          numAqsPerGroups: 64
          aqDepthLog2: 5
  drainSkip: true
N3000 device configuration.
The maximum number of VFs that can be configured for N3000 is 8 VFs.
The maximum number of queues that can be mapped to each VF for uplink or downlink is 32.
The following configuration for N3000 creates 1 VF with 32 queues each for 5G uplink and 5G downlink.
apiVersion: sriovfec.intel.com/v2
kind: SriovFecClusterConfig
metadata:
  name: config
  namespace: sriov-fec-system
spec:
  priority: 1
  nodeSelector:
    kubernetes.io/hostname: controller-0
  acceleratorSelector:
    pciAddress: 0000:1c:00.0
  physicalFunction:
    pfDriver: pci-pf-stub
    vfDriver: vfio-pci
    vfAmount: 1
    bbDevConfig:
      n3000:
        # Network Type: either "FPGA_5GNR" or "FPGA_LTE"
        networkType: "FPGA_5GNR"
        # Pf mode: false = VF Programming, true = PF Programming
        pfMode: false
        flrTimeout: 610
        downlink:
          bandwidth: 3
          loadBalance: 128
          queues:
            vf0: 32
            vf1: 0
            vf2: 0
            vf3: 0
            vf4: 0
            vf5: 0
            vf6: 0
            vf7: 0
        uplink:
          bandwidth: 3
          loadBalance: 128
          queues:
            vf0: 32
            vf1: 0
            vf2: 0
            vf3: 0
            vf4: 0
            vf5: 0
            vf6: 0
            vf7: 0
  drainSkip: true
The following configuration for N3000 creates 2 VFs with 16 queues each, mapping a total of 32 queues across the 2 VFs for 5G uplink and another 32 queues across the 2 VFs for 5G downlink.
apiVersion: sriovfec.intel.com/v2
kind: SriovFecClusterConfig
metadata:
  name: config
  namespace: sriov-fec-system
spec:
  priority: 1
  nodeSelector:
    kubernetes.io/hostname: controller-0
  acceleratorSelector:
    pciAddress: 0000:1c:00.0
  physicalFunction:
    pfDriver: vfio-pci
    vfDriver: vfio-pci
    vfAmount: 2
    bbDevConfig:
      n3000:
        # Network Type: either "FPGA_5GNR" or "FPGA_LTE"
        networkType: "FPGA_5GNR"
        # Pf mode: false = VF Programming, true = PF Programming
        pfMode: false
        flrTimeout: 610
        downlink:
          bandwidth: 3
          loadBalance: 128
          queues:
            vf0: 16
            vf1: 16
            vf2: 0
            vf3: 0
            vf4: 0
            vf5: 0
            vf6: 0
            vf7: 0
        uplink:
          bandwidth: 3
          loadBalance: 128
          queues:
            vf0: 16
            vf1: 16
            vf2: 0
            vf3: 0
            vf4: 0
            vf5: 0
            vf6: 0
            vf7: 0
  drainSkip: true
The SriovFecClusterConfig or SriovVrbClusterConfig sets the default value spec.drainSkip: true to avoid node draining.
Create and apply a SriovFecClusterConfig or SriovVrbClusterConfig custom resource using the above examples as templates, setting the parameters nodeSelector: kubernetes.io/hostname and acceleratorSelector: pciAddress to select the desired device, and configuring vfAmount and numVfBundles as desired.
For ACC100 and N3000
$ kubectl apply -f <sriov-fec-config-file-name>.yaml
sriovfecclusterconfig.sriovfec.intel.com/config created
For VRB1 and VRB2
$ kubectl apply -f <sriov-vrb-config-file-name>.yaml
sriovvrbclusterconfig.sriovvrb.intel.com/config created
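You can confirm that the cluster config custom resource exists before checking node status; a hedged example is shown below (the plural resource names follow the usual CRD naming convention, and config is the metadata.name used in the examples above):
$ kubectl get sriovfecclusterconfigs.sriovfec.intel.com -n sriov-fec-system
$ kubectl get sriovvrbclusterconfigs.sriovvrb.intel.com -n sriov-fec-system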
Note
The vfAmount and numVfBundles in SriovFecClusterConfig or SriovVrbClusterConfig must always be equal for ACC100, VRB1, and VRB2.
Verify that the FEC or VRB configuration is applied.
Note
When using the FEC operator, there is no integration between the FEC operator and the system inventory, so the configuration applied by the FEC operator may not be reflected in the system inventory.
An example of ACC100 status after applying 1 VF configuration.
$ kubectl get sriovfecnodeconfigs.sriovfec.intel.com -n sriov-fec-system controller-0 -o yaml
apiVersion: sriovfec.intel.com/v2
kind: SriovFecNodeConfig
metadata:
  creationTimestamp: "2024-06-24T18:15:02Z"
  generation: 2
  name: controller-0
  namespace: sriov-fec-system
  resourceVersion: "204896"
  uid: bb5d5443-0ac3-4a5b-863f-1d81717979bf
spec:
  drainSkip: true
  physicalFunctions:
  - bbDevConfig:
      acc100:
        downlink4G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 0
        downlink5G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
        maxQueueSize: 1024
        numVfBundles: 1
        uplink4G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 0
        uplink5G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
    pciAddress: 0000:8a:00.0
    pfDriver: vfio-pci
    vfAmount: 1
    vfDriver: vfio-pci
status:
  conditions:
  - lastTransitionTime: "2024-06-24T18:21:06Z"
    message: Configured successfully
    observedGeneration: 2
    reason: Succeeded
    status: "True"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 0d5c
      driver: vfio-pci
      maxVirtualFunctions: 16
      pciAddress: 0000:8a:00.0
      vendorID: "8086"
      virtualFunctions:
      - deviceID: 0d5d
        driver: vfio-pci
        pciAddress: 0000:8b:00.0
  pfBbConfVersion: v24.03-0-g1bbb3ac
An example of VRB1 status after applying 1 VF configuration.
$ kubectl get sriovvrbnodeconfigs.sriovvrb.intel.com -n sriov-fec-system controller-0 -o yaml
apiVersion: sriovvrb.intel.com/v1
kind: SriovVrbNodeConfig
metadata:
  creationTimestamp: "2024-05-17T01:35:36Z"
  generation: 2
  name: controller-0
  namespace: sriov-fec-system
  resourceVersion: "9659405"
  uid: 4db81a14-2ddf-4fc3-9f09-939ece5fd33a
spec:
  drainSkip: true
  physicalFunctions:
  - bbDevConfig:
      vrb1:
        downlink4G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 0
        downlink5G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
        fftLut:
          fftChecksum: ""
          fftUrl: ""
        maxQueueSize: 1024
        numVfBundles: 1
        qfft:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
        uplink4G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 0
        uplink5G:
          aqDepthLog2: 4
          numAqsPerGroups: 16
          numQueueGroups: 4
    pciAddress: 0000:f7:00.0
    pfDriver: vfio-pci
    vfAmount: 1
    vfDriver: vfio-pci
status:
  conditions:
  - lastTransitionTime: "2024-06-27T22:35:50Z"
    message: Configured successfully
    observedGeneration: 2
    reason: Succeeded
    status: "True"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 57c0
      driver: vfio-pci
      maxVirtualFunctions: 16
      pciAddress: 0000:f7:00.0
      vendorID: "8086"
      virtualFunctions:
      - deviceID: 57c1
        driver: vfio-pci
        pciAddress: 0000:f7:00.1
  pfBbConfVersion: v24.03-0-g1bbb3ac
An example of VRB2 status after applying 1 VF configuration.
$ kubectl get sriovvrbnodeconfigs.sriovvrb.intel.com -n sriov-fec-system controller-0 -o yaml
apiVersion: sriovvrb.intel.com/v1
kind: SriovVrbNodeConfig
metadata:
  creationTimestamp: "2024-06-26T20:32:51Z"
  generation: 2
  name: controller-0
  namespace: sriov-fec-system
  resourceVersion: "9400270"
  uid: 31a7325e-d943-400b-aa14-2449d2d019c3
spec:
  drainSkip: true
  physicalFunctions:
  - bbDevConfig:
      vrb2:
        downlink4G:
          aqDepthLog2: 5
          numAqsPerGroups: 64
          numQueueGroups: 0
        downlink5G:
          aqDepthLog2: 5
          numAqsPerGroups: 64
          numQueueGroups: 8
        fftLut:
          fftChecksum: ""
          fftUrl: ""
        maxQueueSize: 1024
        numVfBundles: 1
        qfft:
          aqDepthLog2: 5
          numAqsPerGroups: 64
          numQueueGroups: 8
        qmld:
          aqDepthLog2: 5
          numAqsPerGroups: 64
          numQueueGroups: 8
        uplink4G:
          aqDepthLog2: 5
          numAqsPerGroups: 64
          numQueueGroups: 0
        uplink5G:
          aqDepthLog2: 5
          numAqsPerGroups: 64
          numQueueGroups: 8
    pciAddress: "0000:07:00.0"
    pfDriver: vfio-pci
    vfAmount: 1
    vfDriver: vfio-pci
status:
  conditions:
  - lastTransitionTime: "2024-06-26T22:27:05Z"
    message: Configured successfully
    observedGeneration: 2
    reason: Succeeded
    status: "True"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 57c2
      driver: vfio-pci
      maxVirtualFunctions: 64
      pciAddress: "0000:07:00.0"
      vendorID: "8086"
      virtualFunctions:
      - deviceID: 57c3
        driver: vfio-pci
        pciAddress: "0000:07:00.1"
  pfBbConfVersion: v24.03-0-g1bbb3ac
An example of N3000 status after applying the 2 VF configuration.
$ kubectl get sriovfecnodeconfigs.sriovfec.intel.com -n sriov-fec-system controller-0 -o yaml
apiVersion: sriovfec.intel.com/v2
kind: SriovFecNodeConfig
metadata:
  creationTimestamp: "2024-06-26T23:18:46Z"
  generation: 2
  name: controller-0
  namespace: sriov-fec-system
  resourceVersion: "1206023"
  uid: 2946a968-aa5e-4bec-8ad7-1a3fca678c1b
spec:
  drainSkip: true
  physicalFunctions:
  - bbDevConfig:
      n3000:
        downlink:
          bandwidth: 3
          loadBalance: 128
          queues:
            vf0: 16
            vf1: 16
            vf2: 0
            vf3: 0
            vf4: 0
            vf5: 0
            vf6: 0
            vf7: 0
        flrTimeout: 610
        networkType: FPGA_5GNR
        uplink:
          bandwidth: 3
          loadBalance: 128
          queues:
            vf0: 16
            vf1: 16
            vf2: 0
            vf3: 0
            vf4: 0
            vf5: 0
            vf6: 0
            vf7: 0
    pciAddress: 0000:1c:00.0
    pfDriver: vfio-pci
    vfAmount: 2
    vfDriver: vfio-pci
status:
  conditions:
  - lastTransitionTime: "2024-06-26T23:22:54Z"
    message: Configured successfully
    observedGeneration: 2
    reason: Succeeded
    status: "True"
    type: Configured
  inventory:
    sriovAccelerators:
    - deviceID: 0d8f
      driver: vfio-pci
      maxVirtualFunctions: 8
      pciAddress: 0000:1c:00.0
      vendorID: "8086"
      virtualFunctions:
      - deviceID: 0d90
        driver: vfio-pci
        pciAddress: 0000:1c:00.1
      - deviceID: 0d90
        driver: vfio-pci
        pciAddress: 0000:1c:00.2
  pfBbConfVersion: v24.03-0-g1bbb3ac
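If you only want the Configured condition rather than the full YAML, kubectl's JSONPath output can be used; a hedged example for the FEC node config is shown below (the VRB node config can be queried the same way, and the Succeeded reason matches the successful examples above):
$ kubectl get sriovfecnodeconfigs.sriovfec.intel.com -n sriov-fec-system controller-0 -o jsonpath='{.status.conditions[?(@.type=="Configured")].reason}'
Succeeded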
Modify FEC or VRB Cluster config.
To further modify the FEC or VRB device configuration, make the desired changes to the SriovFecClusterConfig or SriovVrbClusterConfig custom resource file and re-apply it.
ACC100 and N3000
$ kubectl apply -f <sriov-fec-config-file-name>.yaml
sriovfecclusterconfig.sriovfec.intel.com/config configured
VRB1 and VRB2
$ kubectl apply -f <sriov-vrb-config-file-name>.yaml
sriovvrbclusterconfig.sriovvrb.intel.com/config configured
Delete the SriovFecClusterConfig or SriovVrbClusterConfig.
ACC100 and N3000
$ kubectl delete -f <sriov-fec-config-file-name>.yaml
sriovfecclusterconfig.sriovfec.intel.com "config" deleted
VRB1 and VRB2
$ kubectl delete -f <sriov-vrb-config-file-name>.yaml
sriovvrbclusterconfig.sriovvrb.intel.com "config" deleted
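After deletion, the node config is expected to eventually report NotRequested again; you can confirm this by re-running the earlier listing command, for example:
$ kubectl get sriovfecnodeconfigs.sriovfec.intel.com -n sriov-fec-system
NAME           CONFIGURED
controller-0   NotRequested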
Configure VFIO for PF interface.
The SR-IOV FEC operator also supports the vfio-pci driver for the PF interface.
If the vfio-pci driver is used to bind the PF interface, then a UUID token must be configured as a VFIO_TOKEN for both the PF and VF interfaces.
For the PF interface, the VFIO_TOKEN is configured by the SR-IOV FEC operator and has the default value of 02bddbbf-bbb0-4d79-886b-91bad3fbb510.
It is highly recommended to change the default vfio-token when configuring the accelerator in vfio mode (i.e., with the vfio-pci driver for the PF interface).
The VFIO_TOKEN can be changed by setting SRIOV_FEC_VFIO_TOKEN with system helm-override-update before the application is applied.
This example sets the SRIOV_FEC_VFIO_TOKEN using uuidgen.
~(keystone_admin)$ system helm-override-update sriov-fec-operator sriov-fec-operator sriov-fec-system --set env.SRIOV_FEC_VFIO_TOKEN=`uuidgen`
Note
You must configure SRIOV_FEC_VFIO_TOKEN before installing the application. If SRIOV_FEC_VFIO_TOKEN needs to be updated after the application is installed, you must remove the application, update SRIOV_FEC_VFIO_TOKEN, and then reinstall the application.
For the VF interface, the same VFIO_TOKEN must be configured by the application. You can get the token using the command system helm-override-show sriov-fec-operator sriov-fec-operator sriov-fec-system.
To configure ACC100, N3000, VRB1 and VRB2 in vfio mode, provide the SriovFecClusterConfig or SriovVrbClusterConfig with spec.physicalFunction.pfDriver: vfio-pci.
Switch from Static method configuration to Operator method.
Delete the configuration applied using the static method.
~(keystone_admin)$ system host-device-modify controller-0 pci_0000_f7_00_0 --driver igb_uio --vf-driver none -N 0
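You can then confirm from the platform side that no static VF configuration remains on the device, for example with system host-device-list (output omitted here):
~(keystone_admin)$ system host-device-list controller-0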
Postrequisites
Resource Request: The resource name for FEC VFs configured with the SR-IOV FEC operator must be intel.com/intel_fec_acc100 for ACC100, intel.com/intel_fec_5g for N3000, intel.com/intel_fec_acc200 for VRB1, and intel.com/intel_vrb_vrb2 for VRB2 when requested in a pod spec, unless the resource name was modified using the system helm-override-update command.
Resource request for ACC100.
resources:
  requests:
    intel.com/intel_fec_acc100: '16'
  limits:
    intel.com/intel_fec_acc100: '16'
Resource request for VRB1.
resources:
  requests:
    intel.com/intel_fec_acc200: '16'
  limits:
    intel.com/intel_fec_acc200: '16'
Resource request for VRB2.
resources:
  requests:
    intel.com/intel_vrb_vrb2: '64'
  limits:
    intel.com/intel_vrb_vrb2: '64'
Resource request for N3000.
resources:
  requests:
    intel.com/intel_fec_5g: '2'
  limits:
    intel.com/intel_fec_5g: '2'
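For context, the resources stanzas above go in a pod's container spec. The following is a minimal, illustrative pod spec requesting a single ACC100 VF; the container name, image, and command are placeholders, and only the resources section comes from the examples above.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: fec-app                  # placeholder container name
    image: <application-image>     # placeholder image
    command: ["sleep", "infinity"] # placeholder command for illustration
    resources:
      requests:
        intel.com/intel_fec_acc100: '1'
      limits:
        intel.com/intel_fec_acc100: '1'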
Once the application pod is ready, run the following command to get the PCI address of the allocated FEC or VRB device, along with the VFIO token when applicable.
ACC100
sysadmin@controller-0:~$ kubectl exec -ti app-pod -- env | grep PCI
PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100=0000:8b:00.0
PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100_INFO={"0000:8b:00.0":{"extra":{"VFIO_TOKEN":"02bddbbf-bbb0-4d79-886b-91bad3fbb510"},"generic":{"deviceID":"0000:8b:00.0"},"vfio":{"mount":"/dev/vfio/vfio"}}}
VRB1
sysadmin@controller-0:~$ kubectl exec -ti app-pod -- env | grep PCI
PCIDEVICE_INTEL_COM_INTEL_FEC_ACC200_INFO={"0000:f7:00.1":{"extra":{"VFIO_TOKEN":"02bddbbf-bbb0-4d79-886b-91bad3fbb510"},"generic":{"deviceID":"0000:f7:00.1"},"vfio":{"mount":"/dev/vfio/vfio"}}}
PCIDEVICE_INTEL_COM_INTEL_FEC_ACC200=0000:f7:00.1
VRB2
sysadmin@controller-0:~$ kubectl exec -ti app-pod -- env | grep PCI
PCIDEVICE_INTEL_COM_INTEL_VRB_VRB2_INFO={"0000:07:00.1":{"extra":{"VFIO_TOKEN":"02bddbbf-bbb0-4d79-886b-91bad3fbb510"},"generic":{"deviceID":"0000:07:00.1"},"vfio":{"mount":"/dev/vfio/vfio"}}}
PCIDEVICE_INTEL_COM_INTEL_VRB_VRB2=0000:07:00.1
N3000
sysadmin@controller-0:~$ kubectl exec -ti app-pod -- env | grep PCI
PCIDEVICE_INTEL_COM_INTEL_FEC_5G_INFO={"0000:1c:00.1":{"extra":{"VFIO_TOKEN":"02bddbbf-bbb0-4d79-886b-91bad3fbb510"},"generic":{"deviceID":"0000:1c:00.1"},"vfio":{"mount":"/dev/vfio/vfio"}}}
PCIDEVICE_INTEL_COM_INTEL_FEC_5G=0000:1c:00.1
Applications using FEC VFs when the PF interface is bound with the vfio-pci driver should provide the vfio-token to the VF interface.
For example, a sample DPDK application can provide the vfio-vf-token via Environment Abstraction Layer (EAL) parameters.
./test-bbdev.py -e="--vfio-vf-token=02bddbbf-bbb0-4d79-886b-91bad3fbb510 -a$PCIDEVICE_INTEL_COM_INTEL_FEC_ACC200"
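If the application should not hardcode the token, it can instead read it at run time from the _INFO environment variable injected by the device plugin. The following is a hedged sketch (it assumes the container image provides sh and jq, and uses the ACC100 variable name from the example above); the token shown is the default value, for illustration only.
sysadmin@controller-0:~$ kubectl exec -ti app-pod -- sh -c 'echo "$PCIDEVICE_INTEL_COM_INTEL_FEC_ACC100_INFO" | jq -r ".[].extra.VFIO_TOKEN"'
02bddbbf-bbb0-4d79-886b-91bad3fbb510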