Interfaces and Networks

Connecting a VM to a network consists of two parts. First, networks are specified in spec.networks. Then, interfaces backed by those networks are added to the VM by specifying them in spec.domain.devices.interfaces.

Each interface must have a corresponding network with the same name.

An interface defines a virtual network interface of a virtual machine (also called the frontend). A network specifies the backend of an interface and declares which logical or physical device it is connected to.
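As an illustration, a minimal VMI spec fragment (a sketch, not a complete manifest) pairing one interface with the pod network looks like this:

spec:
  domain:
    devices:
      interfaces:
      - name: default        # frontend: the virtual NIC seen by the guest
        masquerade: {}
  networks:
  - name: default            # backend: the pod network, matched by name
    pod: {}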

Multus

It is also possible to connect VMIs to secondary networks using Multus. This assumes that Multus is installed across your cluster and that a corresponding NetworkAttachmentDefinition CRD was created.
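To confirm these prerequisites, you can list the network attachment definitions registered in the cluster (net-attach-def is the short name registered by the Multus CRD; this assumes kubectl access):

kubectl get net-attach-def --all-namespaces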

Example:

Note

First create the respective network attachment definition (named sriovnetwork in the manifest below), for example if you want to use SR-IOV for a secondary interface.

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    special: vmi-host-network
  name: vmi-host-network-2
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: fedora
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
          - name: containerdisk
            disk:
              bus: virtio
          - name: cloudinitdisk
            disk:
              bus: virtio
          interfaces:
          - name: default
            masquerade: {}
          - name: hostnetwork
            sriov: {}
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      - name: hostnetwork
        multus:
          networkName: sriovnetwork
      volumes:
      - name: containerdisk
        containerDisk:
          image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
      - name: cloudinitdisk
        cloudInitNoCloud:
          userData: |-
            #!/bin/bash
            echo "fedora" |passwd fedora --stdin

In the example manifest above, the first interface connects the VM to the default pod network, and the second is an SR-IOV interface mapped to the sriovnetwork attachment through Multus.
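Assuming the manifest above is saved as vmi-host-network.yaml (a hypothetical file name), the VM can be created and its interfaces inspected once the VMI is running:

kubectl apply -f vmi-host-network.yaml
kubectl get vmi vmi-host-network-2 -o yaml   # status.interfaces lists both NICs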

SR-IOV

In SR-IOV mode, virtual machines are directly exposed to an SR-IOV PCI device, usually allocated by the Intel SR-IOV device plugin. The device is passed through into the guest operating system as a host device, using the VFIO userspace interface, to maintain high networking performance.

Note

In StarlingX, the SR-IOV device plugin is part of the default platform functionality.
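You can check that VF resources are advertised by inspecting a node's allocatable resources (the node name is an example; the resource name matches the annotation used later in this section):

kubectl describe node controller-0 | grep pci_sriov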

Note

KubeVirt relies on the VFIO userspace driver to pass PCI devices into the VMI guest. As a result, when configuring SR-IOV, define a pool of VF resources that uses driver: vfio.
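For illustration, a VF pool that selects VFs bound to the vfio-pci driver could be declared in the SR-IOV device plugin configuration roughly as follows. This is a sketch only; on StarlingX the configuration is managed by the platform, and the PF name is an assumption:

{
  "resourceList": [
    {
      "resourcePrefix": "intel.com",
      "resourceName": "pci_sriov_net_sriovnet0",
      "selectors": {
        "drivers": ["vfio-pci"],
        "pfNames": ["enp24s0f0"]
      }
    }
  ]
}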

Example:

Note

Make sure an SR-IOV interface is configured on a data network (sriovnet0 in the example below) on the StarlingX host. For more details, see Provision SR-IOV Interfaces using the CLI.
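As a rough outline of that host-side provisioning (the linked guide is authoritative; the host name, interface name, and VF count below are examples):

system host-lock controller-0
system datanetwork-add sriovnet0 vlan
system host-if-modify -c pci-sriov -N 4 --vf-driver=vfio controller-0 enp24s0f0
system interface-datanetwork-assign controller-0 enp24s0f0 sriovnet0
system host-unlock controller-0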

  1. Create the Network attachment.

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: sriov-net1
      annotations:
        k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_sriovnet0
    spec:
      config: '{
        "type": "sriov",
        "vlan": 5,
        "cniVersion": "0.3.1",
        "name": "sriov-net1"
      }'
    
  2. Launch the VM.

    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachine
    metadata:
      labels:
        special: vmi-sriov-network
      name: vmi-sriov-network
    spec:
      running: true
      template:
        metadata:
          labels:
            kubevirt.io/size: small
            kubevirt.io/domain: fedora
        spec:
          domain:
            cpu:
              cores: 1
            devices:
              disks:
              - name: containerdisk
                disk:
                  bus: virtio
              - name: cloudinitdisk
                disk:
                  bus: virtio
              interfaces:
              - masquerade: {}
                name: default
              - name: sriov-net1
                sriov: {}
            resources:
              requests:
                memory: 1024M
          networks:
          - name: default
            pod: {}
          - multus:
              networkName: sriov-net1
            name: sriov-net1
          volumes:
          - name: containerdisk
            containerDisk:
              image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
          - name: cloudinitdisk
            cloudInitNoCloud:
              userData: |-
                #!/bin/bash
                echo "fedora" |passwd fedora --stdin