It is possible to use Hyper-V as a compute node within an OpenStack deployment. The
nova-compute service runs as "openstack-compute," a 32-bit service, directly on the Windows
platform with the Hyper-V role enabled. The necessary Python components as well as the
nova-compute service are installed directly onto the Windows platform. Windows Clustering
Services are not needed for functionality within the OpenStack infrastructure. The use of
the Windows Server 2012 platform is recommended for the best experience; it is the platform
for active development. The following Windows platforms have been tested as compute nodes
(a quick check of the Hyper-V role is shown after this list):
Windows Server 2008 R2
Both Server and Server Core with the Hyper-V role enabled (Shared Nothing Live migration is not supported using 2008 R2)
Windows Server 2012 and Windows Server 2012 R2
Server and Core (with the Hyper-V role enabled), and Hyper-V Server
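Before continuing, it can be worth confirming that the Hyper-V role is actually enabled on the node. A minimal check, assuming Windows Server 2012 or later with the ServerManager PowerShell module available, might look like this:
# Show the install state of the Hyper-V role on this host.
PS C:\>Get-WindowsFeature -Name Hyper-V
# If the role is not yet installed, it can be added (the host restarts afterwards).
PS C:\>Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart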
The only OpenStack services required on a Hyper-V node are nova-compute and neutron-hyperv-agent.
Regarding the resources needed for this host, keep in mind that Hyper-V requires 16-20 GB of
disk space for the OS itself, including updates. Two NICs are required:
one connected to the management network and one to the guest data network.
The following sections discuss how to prepare the Windows Hyper-V node for operation as an OpenStack compute node. Unless stated otherwise, any configuration information should work for the Windows Server 2008 R2, 2012, and 2012 R2 platforms.
The Hyper-V compute node needs ample storage for the virtual machine images running on it. You may use a single volume for everything, or partition the storage into an OS volume and a VM volume; this choice is left to the deployer.
Network time services must be configured to ensure proper operation of the OpenStack nodes. To set network time on your Windows host you must run the following commands:
C:\>net stop w32time
C:\>w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
C:\>net start w32time
Keep in mind that the node must be time-synchronized with the other nodes of your OpenStack environment, so it is important to use the same NTP server. Note that in the case of an Active Directory environment, you may need to configure this only on the AD Domain Controller.
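To confirm that the host is actually synchronizing against the configured NTP source, the built-in w32tm utility can be queried; this is only an optional sanity check:
C:\>w32tm /query /status
C:\>w32tm /resync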
Information regarding the Hyper-V virtual switch can be found here: http://technet.microsoft.com/en-us/library/hh831823.aspx
To quickly enable an interface to be used as a virtual interface, the following PowerShell commands may be used:
PS C:\>$if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
PS C:\>New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false
Note
It is very important to make sure that when you are using a Hyper-V node with only one NIC, the -AllowManagementOS option is set to $true, otherwise you will lose connectivity to the Hyper-V node.
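To verify that the virtual switch was created as expected, you can list the switches defined on the host; a quick check using the Hyper-V PowerShell module:
PS C:\>Get-VMSwitch | Select-Object Name, SwitchType, AllowManagementOS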
To prepare the Hyper-V node to attach to volumes provided by cinder, you must first make sure the Windows iSCSI initiator service is running and set to start automatically.
PS C:\>Set-Service -Name MSiSCSI -StartupType Automatic
PS C:\>Start-Service MSiSCSI
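As an optional check, you can confirm the service state afterwards:
PS C:\>Get-Service -Name MSiSCSI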
Detailed information on the configuration of live migration can be found here: http://technet.microsoft.com/en-us/library/jj134199.aspx
The following outlines the steps of shared nothing live migration.
The target host ensures that live migration is enabled and properly configured in Hyper-V.
The target host checks if the image to be migrated requires a base VHD and pulls it from the Image service if it is not already available on the target host.
The source host ensures that live migration is enabled and properly configured in Hyper-V.
The source host initiates a Hyper-V live migration.
The source host communicates to the manager the outcome of the operation.
The following configuration options/flags are needed in order to support Hyper-V live migration and must be added to your nova.conf on the Hyper-V compute node (see the combined excerpt after this list):
instances_shared_storage = False
This is needed to support "shared nothing" Hyper-V live migrations. It is used in nova/compute/manager.py.
limit_cpu_features = True
This flag is needed to support live migration to hosts with different CPU features. This flag is checked during instance creation in order to limit the CPU features used by the VM.
instances_path = DRIVELETTER:\PATH\TO\YOUR\INSTANCES
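Putting these options together, the relevant excerpt of nova.conf on the Hyper-V compute node might look like the following sketch; the instances path is a placeholder that you must adapt to your environment:
[DEFAULT]
instances_shared_storage = False
limit_cpu_features = True
instances_path = DRIVELETTER:\PATH\TO\YOUR\INSTANCES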
Additional Requirements:
Hyper-V 2012 R2 or Windows Server 2012 R2 with Hyper-V role enabled
A Windows domain controller with the Hyper-V compute nodes as domain members
The instances_path command-line option/flag needs to be the same on all hosts.
The openstack-compute service deployed with the setup must run with domain credentials. You can set the service credentials with:
C:\>sc config openstack-compute obj="DOMAIN\username" password="password"
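To double-check which account the service is configured to run under, you can query the service configuration; this is only an optional verification step:
C:\>sc qc openstack-compute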
How to set up live migration on Hyper-V
To enable 'shared nothing' live migration, run the three PowerShell commands below on each Hyper-V host:
PS C:\>Enable-VMMigration
PS C:\>Set-VMMigrationNetwork IP_ADDRESS
PS C:\>Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
Note
Replace IP_ADDRESS with the IP address of the interface that will be used for the live migration traffic.
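Afterwards, you can confirm that the live migration settings took effect; a small verification sketch using the Hyper-V PowerShell module:
PS C:\>Get-VMHost | Select-Object VirtualMachineMigrationEnabled, VirtualMachineMigrationAuthenticationType
PS C:\>Get-VMMigrationNetwork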
Additional Reading
Here's an article that clarifies the various live migration options in Hyper-V:
http://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html
In case you want to avoid all the manual setup, you can use Cloudbase Solutions' installer. You can find it here:
https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi
It installs an independent Python environment, in order to avoid conflicts with existing applications, and dynamically generates a nova.conf file based on the parameters you provide.
The installer can also be used in an automated and unattended mode for deployments on a large number of servers. More details about how to use the installer and its features can be found here:
Python 2.7 (32-bit) must be installed, as most of the required libraries do not work properly with the 64-bit version.
Procedure 3.2. Setting up Python prerequisites
Download Python 2.7 and install it using the MSI installer from here:
http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi
PS C:\>$src = "http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi"
PS C:\>$dest = "$env:temp\python-2.7.3.msi"
PS C:\>Invoke-WebRequest -Uri $src -OutFile $dest
PS C:\>Unblock-File $dest
PS C:\>Start-Process $dest
Make sure that the Python and Python\Scripts paths are set up in the PATH environment variable.
PS C:\>$oldPath = [System.Environment]::GetEnvironmentVariable("Path")
PS C:\>$newPath = $oldPath + ";C:\python27\;C:\python27\Scripts\"
PS C:\>[System.Environment]::SetEnvironmentVariable("Path", $newPath, [System.EnvironmentVariableTarget]::User)
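To check that the interpreter is reachable from a new shell, an optional quick verification is:
PS C:\>python --version
PS C:\>python -c "import sys; print(sys.version)"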
The following packages need to be downloaded and manually installed:
- setuptools
http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe
- pip
- MySQL-python
- PyWin32
http://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe
- Greenlet
- PyCrypto
http://www.voidspace.org.uk/downloads/pycrypto26/pycrypto-2.6.win32-py2.7.exe
The following packages must be installed with pip:
ecdsa
amqp
wmi
PS C:\>pip install ecdsa
PS C:\>pip install amqp
PS C:\>pip install wmi
qemu-img is required for some of the image-related operations. You can get it from here: http://qemu.weilnetz.de/. You must make sure that the qemu-img path is set in the PATH environment variable.
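For example, assuming qemu-img was extracted to C:\qemu-img (an arbitrary location used here only for illustration), you could add it to the PATH and verify it like this:
PS C:\>$qemuPath = "C:\qemu-img"   # hypothetical extraction directory
PS C:\>$env:Path += ";$qemuPath"   # make it available in the current session
PS C:\>[System.Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User)
PS C:\>qemu-img --version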
Some Python packages need to be compiled, so you may use MinGW or Visual Studio. You can get MinGW from here: http://sourceforge.net/projects/mingw/. You must configure which compiler is to be used for this purpose via the distutils.cfg file in $Python27\Lib\distutils, which can contain:
[build]
compiler = mingw32
As a last step for setting up MinGW, make sure that the MinGW binaries' directories are set up in PATH.
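As a convenience, the distutils.cfg file can also be created from PowerShell; this sketch assumes Python was installed to the default C:\Python27 location:
PS C:\>$cfg = "C:\Python27\Lib\distutils\distutils.cfg"   # assumes the default install path
PS C:\>Set-Content -Path $cfg -Value "[build]`r`ncompiler = mingw32"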
Use Git to download the necessary source code. The installer to run Git on Windows can be downloaded here:
Download the installer. Once the download is complete, run the installer and follow the prompts in the installation wizard. The defaults should be acceptable for the needs of this document.
PS C:\>$src = "https://github.com/msysgit/msysgit/releases/download/Git-1.9.2-preview20140411/Git-1.9.2-preview20140411.exe"
PS C:\>$dest = "$env:temp\Git-1.9.2-preview20140411.exe"
PS C:\>Invoke-WebRequest -Uri $src -OutFile $dest
PS C:\>Unblock-File $dest
PS C:\>Start-Process $dest
Run the following to clone the Nova code.
PS C:\>git.exe clone https://github.com/openstack/nova.git
To install nova-compute, run:
PS C:\>cd c:\Nova
PS C:\>python setup.py install
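As an optional check that the installation succeeded, you can import the package from the interpreter:
PS C:\>python -c "import nova; print(nova.__file__)"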
The nova.conf file must be placed in C:\etc\nova for running OpenStack on Hyper-V. Below is a sample nova.conf for Windows (a sketch for placing the file follows the sample):
[DEFAULT]
auth_strategy = keystone
image_service = nova.image.glance.GlanceImageService
compute_driver = nova.virt.hyperv.driver.HyperVDriver
volume_api_class = nova.volume.cinder.API
fake_network = true
instances_path = C:\Program Files (x86)\OpenStack\Instances
glance_api_servers = IP_ADDRESS:9292
use_cow_images = true
force_config_drive = false
injected_network_template = C:\Program Files (x86)\OpenStack\Nova\etc\interfaces.template
policy_file = C:\Program Files (x86)\OpenStack\Nova\etc\policy.json
mkisofs_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\mkisofs.exe
verbose = false
allow_resize_to_same_host = true
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 120
resize_confirm_window = 5
resume_guests_state_on_host_boot = true
rpc_response_timeout = 1800
lock_path = C:\Program Files (x86)\OpenStack\Log\
rpc_backend = nova.openstack.common.rpc.impl_kombu
rabbit_host = IP_ADDRESS
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = Passw0rd
logdir = C:\Program Files (x86)\OpenStack\Log\
logfile = nova-compute.log
instance_usage_audit = true
instance_usage_audit_period = hour
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://IP_ADDRESS:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = Passw0rd
neutron_admin_auth_url = http://IP_ADDRESS:35357/v2.0
[hyperv]
vswitch_name = newVSwitch0
limit_cpu_features = false
config_drive_inject_password = false
qemu_img_cmd = C:\Program Files (x86)\OpenStack\Nova\bin\qemu-img.exe
config_drive_cdrom = true
dynamic_memory_ratio = 1
enable_instance_metrics_collection = true
[rdp]
enabled = true
html5_proxy_base_url = https://IP_ADDRESS:4430
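A minimal sketch for creating the expected configuration directory and copying a prepared nova.conf into it (assuming the file has been saved in the current directory):
PS C:\>New-Item -ItemType Directory -Path C:\etc\nova -Force
PS C:\>Copy-Item .\nova.conf -Destination C:\etc\nova\nova.conf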
Table 3.31, “Description of HyperV configuration options” contains a reference of all options for Hyper-V.
Hyper-V currently supports only the VHD and VHDX file formats for virtual machine instances. Detailed instructions for installing virtual machines on Hyper-V can be found here:
http://technet.microsoft.com/en-us/library/cc772480.aspx
Once you have successfully created a virtual machine, you can upload the image to glance using the native glance client:
PS C:\>glance image-create --name "VM_IMAGE_NAME" --is-public False --container-format bare --disk-format vhd
Note
VHD and VHDX file sizes can be bigger than their maximum internal size. As such, you need to boot instances using a flavor with a slightly bigger disk size than the internal size of the disk file. To create VHDs, use the New-VHD PowerShell cmdlet, as shown below.
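For illustration, the following creates a dynamically expanding VHDX; the path and size are placeholder values to adapt to your environment:
PS C:\>New-VHD -Path C:\images\DISK_NAME.vhdx -SizeBytes 20GB -Dynamic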
I ran the nova-manage service list command from my controller; however, I'm not seeing smiley faces for the Hyper-V compute nodes. What do I do?
Verify that you are synchronized with a network time source. For instructions about how to configure NTP on your Hyper-V compute node, see the section called “Configure NTP”.
How do I restart the compute service?
PS C:\>net stop nova-compute; net start nova-compute
How do I restart the iSCSI initiator service?
PS C:\>net stop msiscsi; net start msiscsi