Configure Intel E810 NICs using Intel Ethernet Operator¶
About this task
This section provides instructions for installing and using the Intel Ethernet Operator to orchestrate and manage the configuration and capabilities provided by Intel E810 Series network interface cards (NICs).
Note
For more details, refer to the Intel Ethernet Operator repository.
The Intel Ethernet Operator supports the following NICs:
Intel® Ethernet Network Adapter E810-CQDA1/CQDA2
Intel® Ethernet Network Adapter E810-XXVDA4
Intel® Ethernet Network Adapter E810-XXVDA2
Prerequisites
The system has been provisioned and unlocked.
To use flow configuration, hugepages must be configured on the selected nodes. For more details, refer to Allocate Host Memory Using the CLI.
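Before proceeding, you can confirm the hugepage allocation that Kubernetes reports for a node. This quick check only assumes that controller-0 is the node selected for flow configuration.
$ kubectl describe node controller-0 | grep -i hugepages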
Install Intel Ethernet Operator¶
Procedure
Source the platform environment.
$ source /etc/platform/openrc
~(keystone_admin)$
Install the Node Feature Discovery app.
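If the Node Feature Discovery app is not yet applied, it follows the same system application workflow used below for the Intel Ethernet Operator. The bundle path and application name in this sketch (node-feature-discovery-<version>.tgz, node-feature-discovery) are assumptions and may differ on your system.
~(keystone_admin)$ system application-upload /usr/local/share/applications/helm/node-feature-discovery-<version>.tgz
~(keystone_admin)$ system application-apply node-feature-discovery
~(keystone_admin)$ system application-show node-feature-discovery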
Upload and apply the Intel Ethernet Operator.
Note
Intel Ethernet Operator installs SR-IOV Network Operator v1.2.0 in the intel-ethernet-operator namespace as a dependency.
~(keystone_admin)$ system application-upload /usr/local/share/applications/helm/intel-ethernet-operator-<version>.tgz
+---------------+------------------------------------------+
| Property      | Value                                    |
+---------------+------------------------------------------+
| active        | False                                    |
| app_version   | 1.0-1                                    |
| created_at    | 2023-08-03T09:31:43.338703+00:00         |
| manifest_file | fluxcd-manifests                         |
| manifest_name | intel-ethernet-operator-fluxcd-manifests |
| name          | intel-ethernet-operator                  |
| progress      | None                                     |
| status        | uploading                                |
| updated_at    | None                                     |
+---------------+------------------------------------------+
~(keystone_admin)$ system application-apply intel-ethernet-operator
+---------------+------------------------------------------+
| Property      | Value                                    |
+---------------+------------------------------------------+
| active        | False                                    |
| app_version   | 1.0-1                                    |
| created_at    | 2023-08-03T09:31:43.338703+00:00         |
| manifest_file | fluxcd-manifests                         |
| manifest_name | intel-ethernet-operator-fluxcd-manifests |
| name          | intel-ethernet-operator                  |
| progress      | None                                     |
| status        | applying                                 |
| updated_at    | 2023-08-03T09:31:46.561703+00:00         |
+---------------+------------------------------------------+
~(keystone_admin)$ system application-show intel-ethernet-operator
+---------------+------------------------------------------+
| Property      | Value                                    |
+---------------+------------------------------------------+
| active        | True                                     |
| app_version   | 1.0-1                                    |
| created_at    | 2023-08-03T09:31:43.338703+00:00         |
| manifest_file | fluxcd-manifests                         |
| manifest_name | intel-ethernet-operator-fluxcd-manifests |
| name          | intel-ethernet-operator                  |
| progress      | completed                                |
| status        | applied                                  |
| updated_at    | 2023-08-03T09:32:56.714130+00:00         |
+---------------+------------------------------------------+
Verify that all operator pods are up and running.
$ kubectl get pods -n intel-ethernet-operator
NAME                                                              READY   STATUS    RESTARTS   AGE
clv-discovery-qkc29                                               1/1     Running   0          10m
fwddp-daemon-qf7xh                                                1/1     Running   0          10m
intel-ethernet-operator-controller-manager-74fddd5bf5-8tb88       1/1     Running   0          10m
intel-ethernet-operator-controller-manager-74fddd5bf5-kbtbz       1/1     Running   0          10m
intel-ethernet-operator-sriov-network-operator-6986d6548c-96qpr   1/1     Running   0          10m
sriov-network-config-daemon-sxw5r                                 3/3     Running   0          10m
Update firmware and DDP of E810 NICs¶
Procedure
Create and deploy the webserver to store required files.
You must create a local cache (for example, a webserver) that will serve the required firmware and DDP files. Create the cache on a system with Internet access.
Create a dedicated folder for the webserver.
$ mkdir webserver
$ cd webserver
Create the NGINX Dockerfile.
$ echo "
FROM nginx
COPY files /usr/share/nginx/html
" >> Dockerfile
Create a files folder.
$ mkdir files
$ cd files
Download the required packages into the files directory.
$ curl -OjL https://downloadmirror.intel.com/769278/E810_NVMUpdatePackage_v4_20_Linux.tar.gz
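The EthernetClusterConfig CRs created later take a SHA-1 checksum of each file served from this cache. As an optional step (standard coreutils, not part of the operator itself), you can record it now:
$ sha1sum E810_NVMUpdatePackage_v4_20_Linux.tar.gz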
Build the image with packages.
$ cd ..
$ podman build -t webserver:1.0.0 .
Push the image to a registry that is available from the cluster.
$ podman push localhost/webserver:1.0.0 $IMAGE_REGISTRY/webserver:1.0.0
Create a deployment on the cluster that will serve the packages.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ice-cache
  namespace: default
spec:
  selector:
    matchLabels:
      run: ice-cache
  replicas: 1
  template:
    metadata:
      labels:
        run: ice-cache
    spec:
      containers:
      - name: ice-cache
        image: $IMAGE_REGISTRY/webserver:1.0.0
        ports:
        - containerPort: 80
Add a service to make it accessible within the cluster.
apiVersion: v1
kind: Service
metadata:
  name: ice-cache
  namespace: default
  labels:
    run: ice-cache
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: ice-cache
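Both manifests can then be applied with kubectl. The file names below (webserver-deployment.yaml, webserver-service.yaml) are only placeholders for wherever you saved the two snippets above.
$ kubectl apply -f webserver-deployment.yaml
$ kubectl apply -f webserver-service.yaml
$ kubectl get pods,svc -l run=ice-cache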
The package is available in the cluster using the following URL:
http://ice-cache.default.svc.cluster.local/E810_NVMUpdatePackage_v4_20_Linux.tar.gz
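As an optional sanity check, you can fetch the file headers from a temporary pod. This sketch assumes a curl-capable image (here curlimages/curl) is pullable from your registry.
$ kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -sI http://ice-cache.default.svc.cluster.local/E810_NVMUpdatePackage_v4_20_Linux.tar.gz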
List all the nodes in the cluster with the E810 NIC devices present.
$ kubectl get enc -n intel-ethernet-operator
NAME           UPDATE         MESSAGE
controller-0   NotRequested   Inventory up to date
Use the following command to find information about the E810 devices on the selected node.
$ kubectl get enc -n intel-ethernet-operator controller-0 -o jsonpath={.status} | jq
{
  "conditions": [
    {
      "lastTransitionTime": "2023-08-03T09:33:02Z",
      "message": "Inventory up to date",
      "observedGeneration": 1,
      "reason": "NotRequested",
      "status": "True",
      "type": "Updated"
    }
  ],
  "devices": [
    {
      "DDP": {
        "packageName": "ICE OS Default Package",
        "trackId": "0xc0000001",
        "version": "1.3.16.0"
      },
      "PCIAddress": "0000:18:00.0",
      "deviceID": "1592",
      "driver": "ice",
      "driverVersion": "1.11.17.1",
      "firmware": {
        "MAC": "40:a6:b7:67:22:70",
        "version": "4.00 0x800117e8 1.3236.0"
      },
      "name": "Ethernet Controller E810-C for QSFP",
      "vendorID": "8086"
    }
  ]
}
Firmware update
Note
The /lib/firmware directory, which by default is used for firmware-related operations, is read-only on StarlingX. Intel Ethernet Operator instead uses /var/lib/firmware, elevated to the firmware search path. This action is performed by init containers, and the customized path is enabled on nodes with manager and fwddp (firmware-ddp) pods present. For more information, see https://docs.kernel.org/driver-api/firmware/fw_search_path.html.
Create an EthernetClusterConfig and change the values according to your environment:
apiVersion: ethernet.intel.com/v1
kind: EthernetClusterConfig
metadata:
  name: <name>
  namespace: intel-ethernet-operator
spec:
  nodeSelectors:
    kubernetes.io/hostname: controller-0
  deviceSelector:
    pciAddress: "0000:18:00.0"
  deviceConfig:
    fwURL: "<URL_to_firmware>"
    fwChecksum: "<file_checksum_SHA-1_hash>"
The CR can be applied by running:
$ kubectl apply -f <filename>
Check the status of the update using the following command:
$ kubectl get enc controller-0 -o jsonpath={.status.conditions} -n intel-ethernet-operator | jq
Once the firmware update is complete, the following status is reported:
[
  {
    "lastTransitionTime": "2023-08-03T10:52:36Z",
    "message": "Updated successfully",
    "observedGeneration": 2,
    "reason": "Succeeded",
    "status": "True",
    "type": "Updated"
  }
]
See the output below for the card's NIC firmware:
[
  {
    "DDP": {
      "packageName": "ICE OS Default Package",
      "trackId": "0xc0000001",
      "version": "1.3.16.0"
    },
    "PCIAddress": "0000:18:00.0",
    "deviceID": "1592",
    "driver": "ice",
    "driverVersion": "1.11.17.1",
    "firmware": {
      "MAC": "40:a6:b7:67:22:70",
      "version": "4.30 0x80019da7 1.3415.0"
    },
    "name": "Ethernet Controller E810-C for QSFP",
    "vendorID": "8086"
  }
]
DDP update
Warning
For the DDP profile update to take effect, the ice driver needs to be reloaded. A reboot is performed by an operator after updating the DDP profile to the one requested in EthernetClusterConfig. Reloading the ice driver should be done by the user.
Note
The /lib/firmware directory (the directory from which the DDP profile is read) is read-only on StarlingX. As a result, the DDP profile is updated in the /var/lib/firmware directory and can be successfully read by the driver when the customized firmware search path is set to that directory (this happens after the manager and fwddp pods are created on the nodes).
For an example of the systemd service used to reload the ice driver, see the Intel Ethernet Operator repository.
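For reference only, a manual reload on a node could look like the sketch below; in practice this is usually wired into a systemd oneshot unit as in the repository example. Unloading ice temporarily takes down all E810 ports, so only do this when that is acceptable, and remove dependent modules (for example irdma) first if they are loaded.
$ sudo modprobe -r irdma ice      # hedged sketch; the module list depends on your system
$ sudo modprobe ice
$ dmesg | grep -i "ddp package"   # confirm which DDP package the driver loaded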
Create the EthernetClusterConfig and change the values according to your environment:
apiVersion: ethernet.intel.com/v1
kind: EthernetClusterConfig
metadata:
  name: <name>
  namespace: intel-ethernet-operator
spec:
  nodeSelectors:
    kubernetes.io/hostname: controller-0
  deviceSelector:
    pciAddress: "0000:18:00.0"
  deviceConfig:
    ddpURL: "<URL_to_DDP>"
    ddpChecksum: "<file_checksum_SHA-1_hash>"
The CR can be applied by running:
$ kubectl apply -f <filename>
To check the status of the update:
$ kubectl get enc controller-0 -o jsonpath={.status.conditions} -n intel-ethernet-operator | jq
Once the DDP profile update is complete, the following status is reported:
[
  {
    "lastTransitionTime": "2023-08-03T10:56:36Z",
    "message": "Updated successfully",
    "observedGeneration": 2,
    "reason": "Succeeded",
    "status": "True",
    "type": "Updated"
  }
]
See the output below for the card's NIC DDP profile:
[
  {
    "DDP": {
      "packageName": "ICE COMMS Package",
      "trackId": "0xc0000002",
      "version": "1.3.37.0"
    },
    "PCIAddress": "0000:18:00.0",
    "deviceID": "1592",
    "driver": "ice",
    "driverVersion": "1.11.17.1",
    "firmware": {
      "MAC": "40:a6:b7:67:22:70",
      "version": "4.30 0x80019da7 1.3415.0"
    },
    "name": "Ethernet Controller E810-C for QSFP",
    "vendorID": "8086"
  }
]
Note
The firmware and DDP can be described in one EthernetClusterConfig by adding the requested versions to deviceConfig in the CR.
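For example, a single combined CR could look like the following sketch, reusing the same placeholders as the separate firmware and DDP examples above.
cat <<EOF | kubectl apply -f -
apiVersion: ethernet.intel.com/v1
kind: EthernetClusterConfig
metadata:
  name: <name>
  namespace: intel-ethernet-operator
spec:
  nodeSelectors:
    kubernetes.io/hostname: controller-0
  deviceSelector:
    pciAddress: "0000:18:00.0"
  deviceConfig:
    fwURL: "<URL_to_firmware>"
    fwChecksum: "<file_checksum_SHA-1_hash>"
    ddpURL: "<URL_to_DDP>"
    ddpChecksum: "<file_checksum_SHA-1_hash>"
EOF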
Deploy Flow Configuration Agent¶
The Flow Configuration Agent pod runs UFT to configure Flow rules for a PF. UFT requires that trust mode is enabled for the first VF (VF0) of a PF so that it has the capability of creating/modifying flow rules for that PF. This VF also needs to be bound to the vfio-pci driver. The SR-IOV VFs pools are K8s extended resources that are exposed via the SR-IOV Network Operator.
Note
Make sure sufficient huge pages are configured on the nodes selected for flow configuration.
View available Intel E810 series NICs using SriovNetworkNodeStates.
$ kubectl get sriovnetworknodestates -n intel-ethernet-operator
NAME           AGE
controller-0   4d1h
$ kubectl describe sriovnetworknodestates controller-0 -n intel-ethernet-operator
Name:         controller-0
Namespace:    intel-ethernet-operator
Labels:       <none>
Annotations:  <none>
API Version:  sriovnetwork.openshift.io/v1
Kind:         SriovNetworkNodeState
Metadata:
  Creation Timestamp:  2023-08-03T09:32:46Z
  Generation:          1
  Managed Fields:
    API Version:  sriovnetwork.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:ownerReferences:
          .:
          k:{"uid":"74c54187-3895-4ccf-85be-aacde9eeca57"}:
      f:spec:
        .:
        f:dpConfigVersion:
    Manager:      sriov-network-operator
    Operation:    Update
    Time:         2023-08-03T09:32:46Z
    API Version:  sriovnetwork.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:interfaces:
        f:syncStatus:
    Manager:      sriov-network-config-daemon
    Operation:    Update
    Subresource:  status
    Time:         2023-08-03T09:33:10Z
  Owner References:
    API Version:           sriovnetwork.openshift.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  SriovNetworkNodePolicy
    Name:                  default
    UID:                   74c54187-3895-4ccf-85be-aacde9eeca57
  Resource Version:        6494992
  UID:                     e09c032a-61e9-4ece-affc-19dc5aa5bfdc
Spec:
  Dp Config Version:  6494584
Status:
  Interfaces:
    Device ID:      1592
    Driver:         ice
    E Switch Mode:  legacy
    Link Type:      ETH
    Mac:            40:a6:b7:67:22:70
    Mtu:            1500
    Name:           enp24s0
    Pci Address:    0000:18:00.0
    Totalvfs:       256
    Vendor:         8086
  Sync Status:      Succeeded
Events:             <none>
The SriovNetworkNodeStates status provides NIC information, such as the PCI address and interface names, needed to define the SriovNetworkNodePolicy CRs that create the required VFs pools.
For example, the following three SriovNetworkNodePolicy CRs create a trusted VFs pool with resourceName cvl_uft_admin, along with two additional VFs pools for the application.
Save the yaml contents shown below to a file named sriov-network-policy.yaml and then apply it to create the VFs pools.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: uft-admin-policy
  namespace: intel-ethernet-operator
spec:
  deviceType: vfio-pci
  nicSelector:
    pfNames:
    - ens1f0#0-0
    - ens1f1#0-0
    vendor: "8086"
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: 'true'
  numVfs: 8
  priority: 99
  resourceName: cvl_uft_admin
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: cvl-vfio-policy
  namespace: intel-ethernet-operator
spec:
  deviceType: vfio-pci
  nicSelector:
    pfNames:
    - ens1f0#1-3
    - ens1f1#1-3
    vendor: "8086"
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: 'true'
  numVfs: 8
  priority: 89
  resourceName: cvl_vfio
---
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: cvl-iavf-policy
  namespace: intel-ethernet-operator
spec:
  deviceType: netdevice
  nicSelector:
    pfNames:
    - ens1f0#4-7
    - ens1f1#4-7
    vendor: "8086"
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: 'true'
  numVfs: 8
  priority: 79
  resourceName: cvl_iavf
$ kubectl create -f sriov-network-policy.yaml
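The SR-IOV config daemon needs a short time to create and bind the VFs. You can watch progress with the commands below; syncStatus reports Succeeded when the node is reconfigured, as in the SriovNetworkNodeState output shown earlier.
$ kubectl get sriovnetworknodepolicies -n intel-ethernet-operator
$ kubectl get sriovnetworknodestates controller-0 -n intel-ethernet-operator -o jsonpath='{.status.syncStatus}'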
Check the node status to confirm that the cvl_uft_admin resource pool registered DCF capable VFs on the node.
$ kubectl describe node controller-0 -n intel-ethernet-operator | grep -i allocatable -A 20
Allocatable:
  cpu:                         94
  ephemeral-storage:           9417620260
  hugepages-1Gi:               0
  hugepages-2Mi:               12000Mi
  memory:                      170703432Ki
  openshift.io/cvl_iavf:       4
  openshift.io/cvl_uft_admin:  1
  openshift.io/cvl_vfio:       3
  pods:                        110
System Info:
  Machine ID:                 403149f2be594772baaa5edec199c0d0
  System UUID:                80010d95-824b-e911-906e-0017a4403562
  Boot ID:                    61770a41-ac9a-45ad-8b64-9b32cfa86fe1
  Kernel Version:             5.10.0-6-amd64
  OS Image:                   Debian GNU/Linux 11 (bullseye)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.12
  Kubelet Version:            v1.24.4
  Kube-Proxy Version:         v1.24.4
Create a DCF capable SR-IOV Network.
cat <<EOF | kubectl apply -f -
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-cvl-dcf
spec:
  trust: 'on'
  networkNamespace: intel-ethernet-operator
  resourceName: cvl_uft_admin
EOF
Create the FlowConfigNodeAgentDeployment CR.
Note
The Admin VFs pool prefix in DCFVfPoolName should match the pool shown under Allocatable in the previous step, as reported by the command kubectl describe node controller-0 -n intel-ethernet-operator | grep -i allocatable -A 20.
Apply the updates to the yaml file.
cat <<EOF | kubectl apply -f -
apiVersion: flowconfig.intel.com/v1
kind: FlowConfigNodeAgentDeployment
metadata:
  labels:
    control-plane: flowconfig-daemon
  name: flowconfig-daemon-deployment
  namespace: intel-ethernet-operator
spec:
  DCFVfPoolName: openshift.io/cvl_uft_admin
  NADAnnotation: sriov-cvl-dcf
EOF
Verify that FlowConfigNodeAgentDeployment is running using the following commands.
$ kubectl get pods -n intel-ethernet-operator
NAME                                                              READY   STATUS    RESTARTS   AGE
clv-discovery-xsvw7                                               1/1     Running   0          6m21s
flowconfig-daemon-controller-0                                    2/2     Running   0          29s
fwddp-daemon-6tqc5                                                1/1     Running   0          6m21s
intel-ethernet-operator-controller-manager-7975fd4b86-5b9x4       1/1     Running   0          6m27s
intel-ethernet-operator-controller-manager-7975fd4b86-js9bm       1/1     Running   0          6m27s
intel-ethernet-operator-sriov-network-operator-6986d6548c-28tq6   1/1     Running   0          6m27s
sriov-device-plugin-2jwq2                                         1/1     Running   0          3m12s
sriov-network-config-daemon-lsqs4                                 3/3     Running   0          6m22s
$ kubectl logs -n intel-ethernet-operator flowconfig-daemon-controller-0 -c uft
Generating server_conf.yaml file...
Done!
server :
    ld_lib : "/usr/local/lib64"
ports_info :
    - pci  : "0000:18:01.0"
      mode : dcf
server's pid=13
do eal init ...
[{'pci': '0000:18:01.0', 'mode': 'dcf'}]
[{'pci': '0000:18:01.0', 'mode': 'dcf'}]
the dcf cmd line is: a.out -v -c 0x30 -n 4 -a 0000:18:01.0,cap=dcf -d /usr/local/lib64 --file-prefix=dcf --
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: RTE Version: 'DPDK 22.07.0'
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dcf/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:18:01.0 (socket 0)
EAL: Releasing PCI mapped resource for 0000:18:01.0
EAL: Calling pci_unmap_resource for 0000:18:01.0 at 0x2101000000
EAL: Calling pci_unmap_resource for 0000:18:01.0 at 0x2101020000
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice_dcf (8086:1889) device: 0000:18:01.0 (socket 0)
ice_load_pkg_type(): Active package is: 1.3.37.0, ICE COMMS Package (double VLAN mode)
TELEMETRY: No legacy callbacks, legacy socket not created
grpc server start ...
now in server cycle
Create Flow Configuration rules.
ClusterFlowConfig
With trusted VF and application VFs ready to be configured, create a sample ClusterFlowConfig CR.
cat <<EOF | kubectl apply -f -
apiVersion: flowconfig.intel.com/v1
kind: ClusterFlowConfig
metadata:
  name: pppoes-sample
  namespace: intel-ethernet-operator
spec:
  rules:
    - pattern:
        - type: RTE_FLOW_ITEM_TYPE_ETH
        - type: RTE_FLOW_ITEM_TYPE_IPV4
          spec:
            hdr:
              src_addr: 10.56.217.9
          mask:
            hdr:
              src_addr: 255.255.255.255
        - type: RTE_FLOW_ITEM_TYPE_END
      action:
        - type: to-pod-interface
          conf:
            podInterface: net1
      attr:
        ingress: 1
        priority: 0
  podSelector:
    matchLabels:
      app: vagf
      role: controlplane
EOF
To verify that the flow rules have been applied, create a sample pod that meets the criteria.
Create a sample SR-IOV pod network.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: sriov-podnet
  namespace: intel-ethernet-operator
spec:
  networkNamespace: intel-ethernet-operator
  resourceName: cvl_iavf
  ipam: |-
    {
      "type": "host-local",
      "subnet": "10.56.217.0/24",
      "rangeStart": "10.56.217.171",
      "rangeEnd": "10.56.217.181",
      "routes": [
        {
          "dst": "0.0.0.0/0"
        }
      ],
      "gateway": "10.56.217.1"
    }
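Save the network definition to a file and apply it; the file name sriov-podnet.yaml is only a placeholder. The SR-IOV Network Operator then renders a matching network attachment definition in the namespace.
$ kubectl apply -f sriov-podnet.yaml
$ kubectl get network-attachment-definitions -n intel-ethernet-operator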
Create a sample pod attached to the network referenced above.
kind: Pod
apiVersion: v1
metadata:
  name: example-pod
  namespace: intel-ethernet-operator
  labels:
    app: vagf
    role: controlplane
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-podnet
spec:
  containers:
  - name: appcntr
    image: alpine
    command:
    - /bin/sh
    - '-c'
    - '--'
    args:
    - ' while true; do sleep 30; done '
    resources:
      limits:
        openshift.io/cvl_iavf: '1'
      requests:
        openshift.io/cvl_iavf: '1'
    imagePullPolicy: IfNotPresent
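Apply the pod manifest (example-pod.yaml is a placeholder name) and, once the pod is Running, confirm that the net1 interface was attached with an address from the 10.56.217.0/24 range defined above.
$ kubectl apply -f example-pod.yaml
$ kubectl exec -n intel-ethernet-operator example-pod -- ip addr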
Verify that the rules have been applied.
$ kubectl logs flowconfig-daemon-controller-0 -c uft -n intel-ethernet-operator
Generating server_conf.yaml file...
Done!
server :
    ld_lib : "/usr/local/lib64"
ports_info :
    - pci  : "0000:18:01.0"
      mode : dcf
server's pid=13
do eal init ...
[{'pci': '0000:18:01.0', 'mode': 'dcf'}]
[{'pci': '0000:18:01.0', 'mode': 'dcf'}]
the dcf cmd line is: a.out -v -c 0x30 -n 4 -a 0000:18:01.0,cap=dcf -d /usr/local/lib64 --file-prefix=dcf --
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: RTE Version: 'DPDK 22.07.0'
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/dcf/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:1889) device: 0000:18:01.0 (socket 0)
EAL: Releasing PCI mapped resource for 0000:18:01.0
EAL: Calling pci_unmap_resource for 0000:18:01.0 at 0x2101000000
EAL: Calling pci_unmap_resource for 0000:18:01.0 at 0x2101020000
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice_dcf (8086:1889) device: 0000:18:01.0 (socket 0)
ice_load_pkg_type(): Active package is: 1.3.37.0, ICE COMMS Package (double VLAN mode)
TELEMETRY: No legacy callbacks, legacy socket not created
grpc server start ...
now in server cycle
flow.rte_flow_attr
flow.rte_flow_item
flow.rte_flow_item
flow.rte_flow_item_ipv4
flow.rte_ipv4_hdr
flow.rte_flow_item_ipv4
flow.rte_ipv4_hdr
flow.rte_flow_item
flow.rte_flow_action
flow.rte_flow_action_vf
flow.rte_flow_action
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0)
[rte_flow_item(type_=9, spec=None, last=None, mask=None), rte_flow_item(type_=11, spec=rte_flow_item_ipv4(hdr=rte_ipv4_hdr(version_ihl=0, type_of_service=0, total_length=0, packet_id=0, fragment_offset=0, time_to_live=0, next_proto_id=0, hdr_checksum=0, src_addr=171497737, dst_addr=0)), last=None, mask=rte_flow_item_ipv4(hdr=rte_ipv4_hdr(version_ihl=0, type_of_service=0, total_length=0, packet_id=0, fragment_offset=0, time_to_live=0, next_proto_id=0, hdr_checksum=0, src_addr=4294967295, dst_addr=0))), rte_flow_item(type_=0, spec=None, last=None, mask=None)]
[rte_flow_action(type_=11, conf=rte_flow_action_vf(reserved=0, original=0, id=4)), rte_flow_action(type_=0, conf=None)]
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0)
1
Finish ipv4: {'hdr': {'version_ihl': 0, 'type_of_service': 0, 'total_length': 0, 'packet_id': 0, 'fragment_offset': 0, 'time_to_live': 0, 'next_proto_id': 0, 'hdr_checksum': 0, 'src_addr': 165230602, 'dst_addr': 0}}
Finish ipv4: {'hdr': {'version_ihl': 0, 'type_of_service': 0, 'total_length': 0, 'packet_id': 0, 'fragment_offset': 0, 'time_to_live': 0, 'next_proto_id': 0, 'hdr_checksum': 0, 'src_addr': 4294967295, 'dst_addr': 0}}
rte_flow_action(type_=11, conf=rte_flow_action_vf(reserved=0, original=0, id=4))
rte_flow_action_vf(reserved=0, original=0, id=4)
Action vf: {'reserved': 0, 'original': 0, 'id': 4}
rte_flow_action(type_=0, conf=None)
Validate ok...
flow.rte_flow_attr
flow.rte_flow_item
flow.rte_flow_item
flow.rte_flow_item_ipv4
flow.rte_ipv4_hdr
flow.rte_flow_item_ipv4
flow.rte_ipv4_hdr
flow.rte_flow_item
flow.rte_flow_action
flow.rte_flow_action_vf
flow.rte_flow_action
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0)
[rte_flow_item(type_=9, spec=None, last=None, mask=None), rte_flow_item(type_=11, spec=rte_flow_item_ipv4(hdr=rte_ipv4_hdr(version_ihl=0, type_of_service=0, total_length=0, packet_id=0, fragment_offset=0, time_to_live=0, next_proto_id=0, hdr_checksum=0, src_addr=171497737, dst_addr=0)), last=None, mask=rte_flow_item_ipv4(hdr=rte_ipv4_hdr(version_ihl=0, type_of_service=0, total_length=0, packet_id=0, fragment_offset=0, time_to_live=0, next_proto_id=0, hdr_checksum=0, src_addr=4294967295, dst_addr=0))), rte_flow_item(type_=0, spec=None, last=None, mask=None)]
[rte_flow_action(type_=11, conf=rte_flow_action_vf(reserved=0, original=0, id=4)), rte_flow_action(type_=0, conf=None)]
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0)
rte_flow_attr(group=0, priority=0, ingress=1, egress=0, transfer=0, reserved=0)
1
Finish ipv4: {'hdr': {'version_ihl': 0, 'type_of_service': 0, 'total_length': 0, 'packet_id': 0, 'fragment_offset': 0, 'time_to_live': 0, 'next_proto_id': 0, 'hdr_checksum': 0, 'src_addr': 165230602, 'dst_addr': 0}}
Finish ipv4: {'hdr': {'version_ihl': 0, 'type_of_service': 0, 'total_length': 0, 'packet_id': 0, 'fragment_offset': 0, 'time_to_live': 0, 'next_proto_id': 0, 'hdr_checksum': 0, 'src_addr': 4294967295, 'dst_addr': 0}}
rte_flow_action(type_=11, conf=rte_flow_action_vf(reserved=0, original=0, id=4))
rte_flow_action_vf(reserved=0, original=0, id=4)
Action vf: {'reserved': 0, 'original': 0, 'id': 4}
rte_flow_action(type_=0, conf=None)
free attr
free item ipv4
free item ipv4
free list item
free action vf conf
free list action
Flow rule #0 created on port 0
NodeFlowConfig
If ClusterFlowConfig does not satisfy your requirements, use NodeFlowConfig.
Create a sample node-specific NodeFlowConfig CR named the same as a target node, with an empty spec.
cat <<EOF | kubectl apply -f -
apiVersion: flowconfig.intel.com/v1
kind: NodeFlowConfig
metadata:
  name: controller-0
  namespace: intel-ethernet-operator
spec:
EOF
$ kubectl describe nodeflowconfig controller-0 -n intel-ethernet-operator
Name:         controller-0
Namespace:    intel-ethernet-operator
Labels:       <none>
Annotations:  <none>
API Version:  flowconfig.intel.com/v1
Kind:         NodeFlowConfig
Metadata:
  Creation Timestamp:  2023-08-07T12:11:53Z
  Generation:          2
  Managed Fields:
    API Version:  flowconfig.intel.com/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:portInfo:
    Manager:      flowconfig-daemon
    Operation:    Update
    Subresource:  status
    Time:         2023-08-07T12:11:53Z
    API Version:  flowconfig.intel.com/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2023-08-07T12:19:19Z
  Resource Version:  7663427
  UID:               61db7b0b-8776-4015-98de-c5cd319a9310
Status:
  Port Info:
    Port Id:    0
    Port Mode:  dcf
    Port Pci:   0000:18:01.0
Events:  <none>
You can see the DCF port information from the NodeFlowConfig CR status for a node. This port information can be used to identify which port on a node the Flow rules should be applied to.
You can update the Node Flow configuration with a sample rule for a target port as shown below.
cat <<EOF | kubectl apply -f -
apiVersion: flowconfig.intel.com/v1
kind: NodeFlowConfig
metadata:
  name: controller-0
  namespace: intel-ethernet-operator
spec:
  rules:
    - pattern:
        - type: RTE_FLOW_ITEM_TYPE_ETH
        - type: RTE_FLOW_ITEM_TYPE_IPV4
          spec:
            hdr:
              src_addr: 10.56.217.9
          mask:
            hdr:
              src_addr: 255.255.255.255
        - type: RTE_FLOW_ITEM_TYPE_END
      action:
        - type: RTE_FLOW_ACTION_TYPE_DROP
        - type: RTE_FLOW_ACTION_TYPE_END
      portId: 0
      attr:
        ingress: 1
EOF
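As with ClusterFlowConfig, you can confirm that the rule was programmed by re-reading the NodeFlowConfig status and the UFT container logs shown earlier.
$ kubectl describe nodeflowconfig controller-0 -n intel-ethernet-operator
$ kubectl logs flowconfig-daemon-controller-0 -c uft -n intel-ethernet-operator | tail -n 20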